AI-2022 Forty-second SGAI International Conference on Artificial Intelligence
CAMBRIDGE, ENGLAND 13-15 DECEMBER 2022



Workshops

The first day of the conference, Tuesday 13th December, comprises a range of workshops. These events will be especially valuable for delegates who are considering the introduction of new AI technologies into their own organisations.

There will be four half-day workshops, and delegates are free to choose any combination of sessions to attend. The programme of workshops is shown below. Note that the first session starts at 11.00 to reduce the need for delegates to stay in Cambridge on the previous night. There is a lunch break from 12.30-13.15 and there are refreshment breaks from 14.45-15.15 and from 16.45-17.00.

Workshops organiser: Professor Adrian Hopgood, University of Portsmouth, UK


Sessions 1 and 2 - Stream 1 (11.00-12.30 and 13.15-14.45 Lubbock Room)

Sustainability & AI

Chair:
Mathias Kern, BT Technology

Details to follow.

Sessions 1 and 2 - Stream 2 (11.00-12.30 and 13.15-14.45 Peterhouse Lecture Theatre)

User Preferences in Intelligent Systems

Chairs:
Prof. Juan Augusto and Dr Mark Springett, Middlesex University

Details to follow.


Sessions 3 and 4 - Stream 1 (15.15-16.45 and 17.00-18.30 Lubbock Room)

To be announced

Sessions 3 and 4 - Stream 2 (15.15-16.45 and 17.00-18.30 Peterhouse Lecture Theatre)

Explainable AI

Chairs:
Dr Mercedes Arguello Casteleiro, University of Southampton, Dr Anne Liret, BT, and Dr Christoph Tholen, DFKI: German Research Center for Artificial Intelligence

Explainable AI (XAI) aims to enhance machine learning (ML) techniques so as to produce more explainable ML models, enabling human users to understand and appropriately trust them.

Part 1: Explainability hands-on in deep learning - Dr Mercedes Arguello Casteleiro, University of Southampton
Deep learning algorithms are often considered black boxes: close examination by humans does not reveal which features were used to generate a prediction. This part of the workshop will focus on explainable AI for deep learning in domains with abundant unlabelled text, such as biomedicine. It will exemplify how to provide predictions (outcome) with accompanying justifications (outcome explanation). The approach presented belongs to the new field of explainable active learning (XAL), which combines active learning (AL) with local explanations.
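As a rough illustration of the local-explanation side of this approach (not the workshop's own materials), the sketch below uses the open-source LIME library to justify a single text classification. The toy corpus, labels, and pipeline are placeholders standing in for the biomedical models discussed in the session.

```python
# A minimal sketch of a local explanation for a text classifier,
# in the spirit of outcome + outcome explanation described above.
# Assumes scikit-learn and the open-source `lime` package; the
# corpus, labels, and pipeline are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Tiny toy corpus standing in for abundant domain text.
texts = [
    "patient reports severe chest pain and shortness of breath",
    "routine check-up, no symptoms reported",
    "persistent cough and fever for three days",
    "annual screening, results within normal range",
]
labels = [1, 0, 1, 0]  # 1 = follow-up needed, 0 = routine (toy labels)

# Train a simple, fully inspectable baseline classifier.
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

# Explain one prediction: which words pushed it towards each class?
explainer = LimeTextExplainer(class_names=["routine", "follow-up"])
explanation = explainer.explain_instance(
    "patient has fever and chest pain",
    pipeline.predict_proba,   # LIME perturbs the text and queries this
    num_features=4,
)
print(explanation.as_list())  # (word, weight) pairs justifying the outcome
```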

Part 2: Reusing Explanation experience - Dr Anne Liret, BT
Even with the growing list of published explanation libraries, real-world decision-makers still face the challenge of designing the right questions and measuring instruments to show that an evaluation is fit for purpose and benefits the end user. iSee (isee4xai.com) is an interactive toolbox for XAI, with case-based reasoning at its heart, that focuses on evaluating and reusing explanation experiences across different use cases. This part of the workshop will look at why it is important to model the end-to-end user experience, showcase explanation methods, illustrate the importance of validating explanations against user intent and human perception, and present real examples. A generic sketch of the retrieve-and-reuse idea follows the speaker list below.

Speakers:

  • Anne Liret, BT Applied Research: “Evaluating and reusing explanation experience across use cases”
  • David Corsar, Robert Gordon University: “Modelling explanation strategies and experiences with iSeeOnto”
  • iSee team: “Demo of the iSee cockpit: Reusing explanation strategy in action”
  • Matthew Wallwork, BT Technology: “Connected Care - supporting people in their homes”
  • Mahsa Abazari Kia and Aygul Garifullina, Essex University and BT: “Using NLP to understand complex technical notes - a telecoms case study”
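The sketch below illustrates the case-based reasoning idea behind reusing explanation experience: retrieve the past use case most similar to a new one and reuse its explanation strategy. All attributes, cases, and the similarity measure are hypothetical; this is not the iSee API.

```python
# A generic case-based reasoning sketch of "reuse explanation
# experience": retrieve the most similar past use case and reuse
# its explanation strategy. Everything here is hypothetical and
# does not reflect iSee's actual data model or interfaces.
from dataclasses import dataclass

@dataclass
class ExplanationCase:
    user_role: str        # e.g. "clinician", "engineer"
    model_type: str       # e.g. "cnn", "random_forest"
    data_modality: str    # e.g. "image", "text"
    strategy: str         # the explanation method that worked before

CASE_BASE = [
    ExplanationCase("clinician", "cnn", "image", "saliency map + counterfactual"),
    ExplanationCase("engineer", "random_forest", "tabular", "feature importance"),
    ExplanationCase("call-agent", "transformer", "text", "highlighted rationale"),
]

def similarity(case: ExplanationCase, query: dict) -> int:
    """Count matching attributes; a stand-in for a weighted measure."""
    return sum(getattr(case, key) == value for key, value in query.items())

def retrieve_strategy(query: dict) -> str:
    """Reuse the strategy from the most similar past explanation experience."""
    best = max(CASE_BASE, key=lambda case: similarity(case, query))
    return best.strategy

# New use case: explaining an image CNN to a clinician.
print(retrieve_strategy(
    {"user_role": "clinician", "model_type": "cnn", "data_modality": "image"}
))  # -> "saliency map + counterfactual"
```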

Part 3: Application cases - Dr Christoph Tholen, Mattis Wolf, and Dr Frederic Stahl, DFKI: German Research Center for Artificial Intelligence
This part of the workshop will focus on XAI applications in the maritime domain. On the one hand, safety concerns currently prevent the use of deep learning techniques in many maritime applications; XAI techniques have the potential to enable, for instance, control systems for autonomous ships. Another example is the use of convolutional neural networks (CNNs) for plastic waste identification and classification, where acceptance depends on the confidence that human stakeholders place in the AI systems used. Here, XAI methods such as result explanations can help to increase end-user acceptance. Both use cases, and other possible applications of XAI in the maritime domain, will be discussed.
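As a rough illustration of one common kind of "result explanation" for a CNN, the sketch below computes an input-gradient saliency map showing which pixels most influenced the predicted class. PyTorch is assumed; the tiny network and random input stand in for a real plastic-waste classifier and image.

```python
# Input-gradient saliency for a CNN classifier: backpropagate the
# predicted class score to the input and inspect gradient magnitudes.
# The network and input are placeholders, not the workshop's models.
import torch
import torch.nn as nn

# Placeholder CNN: 2 classes, e.g. "plastic waste" vs "background".
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),
)
model.eval()

image = torch.rand(1, 3, 64, 64, requires_grad=True)  # stand-in image

# Forward pass, then backpropagate the score of the predicted class.
scores = model(image)
predicted = scores.argmax(dim=1).item()
scores[0, predicted].backward()

# Saliency: per-pixel gradient magnitude, max over colour channels.
saliency = image.grad.abs().max(dim=1).values  # shape (1, 64, 64)
print("predicted class:", predicted)
print("most influential pixel (row, col):",
      divmod(saliency.argmax().item(), 64))
```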

