AI-2025 Forty-fifth SGAI International Conference on Artificial Intelligence
CAMBRIDGE, ENGLAND 16-18 DECEMBER 2025


Workshops

The first day of the conference, Tuesday 16 December 2025, comprises a range of workshops. These events will be especially valuable to delegates who are considering introducing new AI technologies into their own organisations.

There will be four half-day workshops, and delegates are free to choose any combination of sessions to attend. The programme of workshops is shown below. Note that the first session starts at 11 a.m. to reduce the need for delegates to stay in Cambridge the previous night. There is a lunch break from 12.30 to 13.15, and there are refreshment breaks from 14.45 to 15.15 and from 16.45 to 17.00.

Please register for either the workshops day (16 December) or a suitable combination of days (16-18 December).

Workshops organiser: Professor Adrian Hopgood, University of Portsmouth, UK


Sessions 1 and 2 - Stream 1 (11.00-12.30 and 13.15-14.45, Peterhouse Lecture Theatre)

Generative AI

Chair: Dr Carla Di Cairano-Gilfedder, BT Labs, UK
Organising Committee: Prof. Liu Lu (Exeter University), Prof. Huiyu Zhou (University of Leicester), Dr Erfu Yang (University of Strathclyde), Dr Simon Hadfield (University of Surrey), and Dr James Haworth (University College London)

11:00 Keynote: Generative AI in Robotics and Automation by Dr Erfu Yang, University of Strathclyde, UK

11:50 Overcoming Instabilities in LLM Training: Harnessing optimization strategies for enhanced performance by Dr Tianjin Huang, University of Exeter, UK

12:30 Lunch

13:15 Keynote: Safety and Trustworthiness of GenAI, Dr Mark Post, University of York, UK

13:55 GenAI Safety in Physical Human–Robot Interaction, Dr Maria Elena Giannaccini, Aberdeen University, UK

14:20 Evaluating Human–LLM Alignment Requires Transparent and Adaptable Statistical Guarantees, Dr Jay Jin, University of Exeter, UK

14:45 End

Sessions 1 and 2 - Stream 2 (11.00-12.30 and 13.15-14.45, Upper Hall)

Ethical and Legal Aspects of AI

Chairs: Assoc. Prof. Dr Carlisle George and Prof. Dr Juan Carlos Augusto, Middlesex University London, UK

Artificial Intelligence (AI) is a field in continuous development and has recently become more prominent in the news. The pros and cons of its use in various contexts, and its possible impact on society, are now more openly discussed. There are significant components of AI that cause concern to experts and to the general public. This is especially true of the use of unregulated AI in safety-critical applications. There are also potential problems emerging from AI being used in many other systems without sufficient transparency regarding how, and to what end, it is used. These concerns and problems call for proper regulation of the development and use of AI in order to ensure its safety and conformity with human-centric values. This workshop brings together colleagues interested in, or actively trying to develop, solutions to protect society from the negative uses of AI, especially in the context of exploring legal and ethical aspects (including regulatory compliance, trustworthy AI, responsible AI, and AI governance).

The workshop will combine discussion and debate opportunities with presentations of technical work through papers. One objective is to agree on community initiatives that may help with the regulation of AI developments at a national or international level.

Further details and call for papers.

Contact Details:
Assoc. Prof. Dr Carlisle George, C.George@mdx.ac.uk, Research Group on Aspects of Law & Ethics Related to Technology, Middlesex University London, UK
Prof. Dr Juan Carlos Augusto, J.Augusto@mdx.ac.uk, Research Group on Development of Intelligent Environments, Middlesex University London, UK


Sessions 3 and 4 - Stream 1 (15.15-16.45 and 17.00-18.30, Peterhouse Lecture Theatre)

Promoting Safer AI for Health and Care

Chairs: Prof. Jeremy Wyatt, University of Southampton, UK, and Prof. Philip Scott, University of Wales Trinity Saint David, UK

Like all service industries, health and care are highly data- and decision-intensive, but they are more safety-critical than most other sectors. In this workshop we will therefore explore how to promote safer innovation in health and care AI. Topics to be covered will include:

  1. What restrictions need to be placed on AI applications in health and care to promote safety? For example, how should the pros and cons of continuous learning be balanced?
  2. Is validation enough, or should we ask for more evidence before widely disseminating an algorithm for health and care?
  3. How can health and care systems safely and efficiently mobilise the huge number of facts that practitioners need at the point of decision making, and are computable guidelines the answer?
  4. There is much interest in Learning Health Systems (LHS). How can we use this model to respond safely when an algorithm or clinical decision support system gives incorrect advice, misses exceptions, or does not correctly synthesise previous knowledge with new findings?
We are looking forward to a highly interactive multi-disciplinary discussion, stimulated by some excellent speakers.

Sessions 3 and 4 - Stream 2 (15.15-16.45 and 17.00-18.30, Upper Hall)

AI in Business and Finance

Chair: Professor Carl Adams, Cosmopolitan University Abuja, Nigeria / Mobi Publishing, Chichester, UK

Outline programme:

15.15-16.45: AI for project management

16.45-17.00: Tea break

17.00-18.30: AI for financial management and advice

Sessions 3 and 4 - Stream 3 (15.15-16.45 and 17.00-18.30, Davidson Room)

Applied XAI in the Field
Exemplified with an Aquatic Weed-Harvesting Use Case

Chair: Dr Christoph Manss, DFKI (Deutsches Forschungszentrum für Künstliche Intelligenz; German Research Centre for AI)

In the HAI-x (Hybrid AI explainer) project, we are working on interactive explanations of a system of AIs. As a use case, we consider route optimisation for a weed-harvesting scenario on the Maschsee in Hanover, Germany. Using this use case, we present different AI and ML algorithms together with ways to generate their explanations, how those explanations can be passed on to subsequent processes, algorithms, and users, and how the algorithmic results can be presented to the end user. The outline programme is below.

15.15-16.45:

  1. Workshop and use case overview
  2. Explaining the detection of weeding areas with remote sensing
  3. AI for explanation of detected objects in SONAR data
    • Prototype and counterfactual explanations
    • Attention as a means of explanation for object detection in SONAR data
    • Saliency maps for object detection in SONAR data
  4. Discussion
16.45-17.00: Tea break

17.00-18.30:

  1. Data fusion and how to provide levels of explainability (layers of explainability)
  2. Development of a user interface to present the explanations for this routing use case
  3. Counterfactual explanations for path planning
  4. Discussion


Peterhouse College, Cambridge, the venue for AI-2025