Workshops
The first day of the conference, Tuesday 16 December 2025, comprises a range of workshops. Delegates will find these events to be especially valuable where there is a current need to consider the introduction of new AI technologies into their own organisations. There will be four half-day workshops. Delegates are free to choose any combination of sessions to attend. The programme of workshops is shown below. Note that the first session starts at 11 a.m. to reduce the need for delegates to stay in Cambridge on the previous night. There is a lunch break from 12.30-13.15 and there are refreshment breaks from 14.45-15.15 and from 16.45-17.00. Please register for either the workshops day (16 December) or a suitable combination of days (16-18 December). Workshops organiser: Professor Adrian Hopgood, University of Portsmouth, UK
Sessions 1 and 2 - Stream 1 (11.00-12.30 and 13.15-14.45, Peterhouse Lecture Theatre)
Generative AI
Chair: Dr Carla Di Cairano-Gilfedder, BT Labs, UK
11.00 Keynote: Generative AI in Robotics and Automation, Dr Erfu Yang, University of Strathclyde, UK
11.50 Overcoming Instabilities in LLM Training: Harnessing Optimization Strategies for Enhanced Performance, Dr Tianjin Huang, University of Exeter, UK
12.30 Lunch
13.15 Keynote: Safety and Trustworthiness of GenAI, Dr Mark Post, University of York, UK
13.55 GenAI Safety in Physical Human–Robot Interaction, Dr Maria Elena Giannaccini, Aberdeen University, UK
14.20 Evaluating Human–LLM Alignment Requires Transparent and Adaptable Statistical Guarantees, Dr Jay Jin, University of Exeter, UK
14.45 End
Sessions 1 and 2 - Stream 2 (11.00-12.30 and 13.15-14.45, Upper Hall)
Ethical and Legal Aspects of AI
Chairs: Assoc. Prof. Dr Carlisle George and Prof. Dr Juan Carlos Augusto, Middlesex University London, UK
Artificial Intelligence (AI) is a field in continuous development that has recently become more prominent in the news. The pros and cons of its use in various contexts, and its possible impact on society, are now more openly discussed. Significant components of AI cause concern to experts and to the public in general, especially the use of unregulated AI in safety-critical applications. Potential problems also arise from AI being embedded in many other systems without sufficient transparency about how, and to what end, it is used. These concerns call for proper regulation of the development and use of AI in order to ensure its safety and conformity with human-centric values. This workshop brings together colleagues interested in, or actively trying to develop, solutions to protect society from the negative uses of AI, with a particular focus on legal and ethical aspects (including regulatory compliance, trustworthy AI, responsible AI, and AI governance). The workshop will combine discussion and debate opportunities with presentations of technical work through papers. One objective is to agree on community initiatives that may help with the regulation of AI developments at a national or international level. Further details and call for papers.
Sessions 3 and 4 - Stream 1 (15.15-16.45 and 17.00-18.30, Peterhouse Lecture Theatre)
Promoting Safer AI for Health and Care
Chairs: Prof. Jeremy Wyatt, University of Southampton, UK and Prof. Philip Scott, University of Wales Trinity Saint David, UK
Like all service industries, health and care are highly data- and decision-intensive, but they are more safety-critical than most other sectors. In this workshop we will therefore explore how to promote safer innovation in health and care AI. Topics to be covered in this workshop will include:
Sessions 3 and 4 - Stream 2 (15.15-16.45 and 17.00-18.30, Upper Hall)
AI in Business and Finance
Chair: Professor Carl Adams, Cosmopolitan University Abuja, Nigeria / Mobi Publishing, Chichester, UK
Outline programme:
15.15-16.45: AI for project management
16.45-17.00: Tea break
17.00-18.30: AI for financial management and advice
Sessions 3 and 4 - Stream 3 (15.15-16.45 and 17.00-18.30, Davidson Room)
Applied XAI in the Field
Chair: Dr Christoph Manss, DFKI (Deutsches Forschungszentrum für Künstliche Intelligenz; German Research Centre for AI)
In the HAI-x (Hybrid AI Explainer) project, we are working on interactive explanations of a system of AIs. As a use case, we consider route optimisation for a weed-harvesting scenario on the Maschsee lake in Hanover, Germany. Around this use case, we present different AI and ML algorithms together with ways to generate their explanations, how those explanations can be passed to subsequent processes, algorithms, and users, and how to present the algorithmic results to the end user. The outline programme is below.
15.15-16.45:
17.00-18.30: