Chair: Andrew Lea
Panel Members
Dr Frederic Stahl (DFKI: German Research Center for Artificial Intelligence)
Professor Michael Fisher (University of Manchester)
Dr Mercedes Arguello Casteleiro (University of Southampton)
Dr Anne Liret (BT)
Unlike other complex or powerful software, AI is often non-deterministic, or learns from experience, and therefore may (rightly) respond differently even to the same input. We also hope that AI will work usefully - “correct” often being hard to define - even in scenarios for which it was not trained.
Explainability is the ability of an AI system to explain why it took a certain decision, and in our discussion we will consider at least two facets:
Firstly, there is the technical ability to provide explanations at all. For some techniques, such as decision trees, explanation is a natural by-product. Other techniques, such as artificial neural networks, are intrinsically black boxes. How best can explainability be implemented? Can it be retrofitted?
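As a minimal sketch of that first facet (assuming Python with scikit-learn; the dataset and tree depth are arbitrary stand-ins, not part of the panel material), a fitted decision tree can be printed as explicit if/else rules, so every prediction can be traced to a decision path - something a neural network trained on the same data does not directly offer.

```python
# Sketch: explanation as a natural by-product of a decision tree.
# The iris dataset and max_depth=3 are illustrative choices only.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(iris.data, iris.target)

# export_text renders the fitted tree as human-readable if/else rules,
# so each classification can be traced back to an explicit decision path.
print(export_text(clf, feature_names=list(iris.feature_names)))
```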
And then there are the ethical considerations. When is explainability needed - for example, to help in development, testing, and proving utility - and when can it simply be dropped? Where AI is used to make decisions - say, in granting a loan - do we need to understand those decisions to be confident they are just? Are there other reasons to require explanations? Should explainability be reflected in regulations, and to what extent? And what should we do when both explainability and accuracy are desirable - for example, with self-driving cars - but one technique is explainable while another is more accurate?
AI-2022: Forty-second SGAI International Conference on Artificial Intelligence