Panel Session
How should developments in AI be regulated over the next 25 years?
Chair: Andrew Lea, University of Brighton
Panel Members
Dr Penny Duquenoy, Chair of the BCS Ethics Specialist Group
Dr Kevin Maynard, Institute for Ethical AI, Oxford Brookes University
Dr Detlef Nauck, BT Research
Chris Rees, Immediate Past President, British Computer Society
The term “artificial intelligence” is receiving prominence it never had before. Whilst much of what is labelled “AI” is simply conventional software, a substantial proportion uses genuine AI techniques or their derivatives. With prominence and widespread use comes the discussion - or even the threat - of regulation.
This discussion may include such elements as:
- What is the nature of “AI” that makes it different from “ordinary” software, and therefore requires particular regulation?
- What are the ethical implications of AI that mean regulation should even be considered?
- What alternatives are there to regulation, e.g. a code of conduct, or no regulation at all?
- Why is existing regulation, e.g. the GDPR, not sufficient?
- How could AI be regulated, given that, by its very nature, it is frequently non-deterministic and learns in flight, and can therefore (correctly) exhibit unexpected behaviours? (A sketch of this point follows the list.)
- What sort of regulation should be applied specifically to AI?
- Who should we trust to determine any regulations? Who should that regulatory body be?
- Could an over-strong or simplistic regulatory framework simply hinder development of AI in the UK?
- Would regulations need to be global to be effective? If so, how are global regulations to be agreed?
- Assuming AI develops rapidly over the next 25 years (will it?), how can regulations be future-proofed?
- Should the development, rather than the deployment, of AI be regulated?
- Will regulations ever need to consider “machine rights”?
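To make the in-flight-learning point above concrete, here is a minimal sketch in Python (assuming scikit-learn and NumPy are available; the data, the model choice, and the probe input are all hypothetical illustrations, not anything proposed by the panel). An online learner updates its parameters during deployment, so the same query can legitimately receive different answers before and after an update - correct behaviour for an adaptive system, but a moving target for any regulation written against a fixed specification.

```python
# Sketch: an online learner whose answer to the same input can change
# after an in-flight update. Assumes scikit-learn and NumPy.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# A classifier trained incrementally via partial_fit (online learning).
model = SGDClassifier(random_state=0)

# Initial training batch (hypothetical two-feature data): label is 1
# when the first feature is positive.
X0 = rng.normal(size=(20, 2))
y0 = (X0[:, 0] > 0).astype(int)
model.partial_fit(X0, y0, classes=[0, 1])

# Probe the model with a fixed input.
probe = np.array([[0.1, 0.1]])
print("before update:", model.predict(probe))

# New data arrives in deployment and the model keeps learning
# "in flight": a batch drawn from a shifted region, all labelled 1.
X1 = rng.normal(loc=(-1.0, 0.0), size=(20, 2))
y1 = np.ones(20, dtype=int)
model.partial_fit(X1, y1)

# The very same probe input may now be classified differently.
print("after update: ", model.predict(probe))
```

The behaviour shown is not a bug: the system is doing exactly what an adaptive learner should do, which is why certifying a fixed input-output specification, as conventional software regulation tends to assume, sits so awkwardly with this class of system.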
AI-2019
Thirty-ninth SGAI International Conference on Artificial Intelligence
Cambridge, England, 17-19 December 2019