
SGAI Virtual Seminar Series 2026

Wednesday May 6th 2026 from 6 p.m. to 7.30 p.m.



Chair: Dr. Carlisle George (Middlesex University)

Arrival and introduction

Dr Carlisle George is a lawyer and Associate Professor in the Department of Computer Science at Middlesex University. Among other roles, he serves as the Research Convenor of the ALERT (Aspects of Law and Ethics Related to Technology) Research Group (https://www.eis.mdx.ac.uk/research/groups/Alert/) and Chair of the Faculty Ethics Committee of the Faculty of Science and Technology. He has contributed to several UK and EU-funded projects and studies as a senior legal consultant, expert, and researcher. His research focuses on legal and ethical challenges arising from emerging technologies, including issues such as privacy, data protection, liability, intellectual property rights and governance in the context of Artificial Intelligence.

Prof. Bernd Stahl (University of Nottingham)

Artificial Intelligence for a Better Future - An Ecosystem Perspective on the Ethics of AI

Drawing on the findings of the SHERPA project (www.project-sherpa.eu), the presentation will suggest that one way to better understand the social and ethical consequences of AI is to describe them using the metaphor of an ecosystem, a metaphor already widely used in the policy discourse on AI. The talk will analyse what the use of the ecosystem metaphor means for the evaluation of the ethical issues of AI, what conclusions can be drawn from it, and how these can inform recommendations for policymakers and other stakeholders.

The presentation is based on material developed in a book, which is freely available at https://link.springer.com/book/10.1007%2F978-3-030-69978-9. A set of case studies based on these ideas was developed and published open access at https://link.springer.com/book/10.1007/978-3-031-17040-9.

Prof Bernd Carsten Stahl is Professor of Critical Research in Technology at the School of Computer Science of the University of Nottingham where he leads the Responsible Digital Futures group (https://www.responsible-digital-futures.org/). His interests cover philosophical issues arising from the intersections of business, technology, and information. This includes ethical questions of current and emerging ICTs, critical approaches to information systems and issues related to responsible research and innovation.

Prof. Dr. Griet Verhenneman (Ghent University, Belgium)

Patch Day in Brussels: How the EU AI Act ran into the unusual scenario of reform prior to application

The EU AI Act’s enforcement status is an ongoing story. While the obligations concerning prohibited AI practices and general-purpose AI are already applicable and have been the subject of further clarification, the requirements for high-risk AI and for AI systems subject to transparency obligations have been delayed. Exploring the impact of the ‘Digital Omnibus on AI’, we dissect the unprecedented case of a proposal for 59 amendments prior to full application. This presentation maps the current enforcement status, the revised implementation timelines, and the unique challenges of a legal framework being reformed before its day one.

Prof Dr Griet Verhenneman is an Assistant Professor and legal expert in data protection, privacy, artificial intelligence and law, at the Department of Criminology, Criminal Law, and Social Law, Ghent University, Belgium. Over many years she has built extensive expertise in data privacy and regulatory frameworks as: an (affiliated) legal researcher and/or lecturer at the Center for IT & IP Law (KU Leuven, Faculty of Law and Criminology, Belgium); the Data Protection Officer at the University Hospital Leuven; and an external expert for the Belgian Data Protection Authority and legal expert for the Belgian Information Security Committee (IVC).

Dr Uchenna Nnawuchi (Sheffield University)

The “Why” behind Algorithmic Decision Making: Explainability as the Backbone of AI Governance

As artificial intelligence migrates from experimental laboratories into domains of real-world consequence, the once-tolerated opacity of black box algorithms becomes increasingly untenable. This talk argues that explainability is emerging as the normative backbone of AI governance, transforming opaque computational systems into accountable socio-technical actors. Drawing on developments in AI regulation, including the EU AI Act, it explores how explanation mediates between algorithmic decision-making, legal responsibility, and societal trust. For technologists, the challenge is no longer merely to optimise models, but to design systems whose reasoning can be meaningfully understood.

Dr Uchenna Nnawuchi is a lawyer and Lecturer in the School of Law at Sheffield University. He is also an associate member of the ALERT (Aspects of Law and Ethics Related to Technology) Research Group at Middlesex University (https://www.eis.mdx.ac.uk/research/groups/Alert/). His research focuses on the intersection of law and technology, specifically AI governance, Generative AI, copyright, data protection, digital rights, and accountability in automated decision-making systems.

Dr. Carlisle George (Middlesex University)

Wrapping up and close

Organised by BCS SGAI
The Specialist Group on Artificial Intelligence
http://www.bcs-sgai.org
