Aysenur Bilgin (VIQTOR DAVIS)

Responsible AI in Practice: Concepts, Challenges and Lessons

In recent years, we have witnessed a rapid spread in the use of Artificial Intelligence (AI), which has led to a proliferation of automated systems and data-driven tools that affect us not only as practitioners but also as citizens. Besides bringing valuable operational efficiency by mimicking human decision making in certain contexts, such systems have also raised concerns that they may be biased and unintentionally discriminatory. In response to these concerns, there is a growing need for, and interest in, the ethical and responsible use of AI from academic, industrial and public perspectives. This talk will present a working vocabulary of concepts such as transparency, fairness and accountability, and will highlight the practical challenges of developing and deploying AI responsibly in various industries, noting lessons to be aware of under different circumstances.

Aysenur Bilgin is a data scientist (VIQTOR DAVIS) and a researcher (Centrum Wiskunde & Informatica) with a broad interdisciplinary interest in artificial intelligence, with particular attention to the design, development and deployment of automated data-driven systems. She has a background in computer engineering and adaptive intelligent systems, and holds a PhD from the University of Essex. Since 2017, she has focused on responsible data science practices such as enabling trust, transparency and quality in automated decision-making systems driven by data.

Rob Claxton (BT)

Managing AI at Scale

This talk will consider some of the issues around management and governance of AI deployed at scale. How does AI differ from traditional IT? What steps do organisations need to take to ensure they can enjoy the benefits of AI whilst ensuring that it remains safe, effective and reliable? Using hypothetical and real-world examples of 'AI going wrong', Rob will argue the need for some specific measures and controls that will help organisations to operate AI systems whilst maintaining visibility of and accountability for their actions.

Rob Claxton is a Senior Manager at BT Applied Research. He graduated from the University of York in 1993 with an MEng in Electronic Systems Engineering and has spent his career with BT in a variety of technology roles, including work on signalling systems and software development for BT’s Intelligent Network platform. In 2005 Rob joined BT Research, where his work focussed on network science, with a particular interest in the applications of community-finding algorithms and the analysis of networks in a spatial context. Rob currently leads the Big Data, Insight and Analytics team and played a key role in BT’s adoption of big data technology. His team are actively applying machine learning and AI to real-world problems. Rob also leads the AI Management Standards work stream at the TM Forum, where he is helping to develop a framework for the governance of AI deployed at scale.

Nello Cristianini (University of Bristol)

The Interface between Artificial Intelligence and the Social Sciences - and Why it Matters

The 'secret sauce' that made AI successful contains an important ingredient: vast samples of human behavior. From those, machine learning algorithms can extract the statistical rules that guide their own behavior: rules for recommendations, translations, image analysis, and more. Recently there have been concerns about subtle biases that might be found in AI agents; some of these can be traced back to the data that was used to train the agents, and others to the fact that the agents are 'unreadable' to humans. Understanding the biases found in media content is important, as this content is often what is used to teach machines to understand language. More generally, we need to understand the interface between AI and society if we want to live safely with intelligent machines.
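
As a toy illustration of how an agent's bias can be traced back to its training data (the scenario and numbers below are invented for illustration, not taken from the talk), consider a classifier trained on a skewed sample of past decisions:

```python
# Toy illustration (not from the talk): a model trained on a skewed
# sample reproduces the skew as a learned "statistical rule". Here the
# historical labels favour group A independently of the relevant
# feature, and the trained model inherits that association.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, n)    # the genuinely relevant feature

# Biased historical outcomes: driven by skill, plus an unearned bonus
# that past decision-makers gave to group A.
y = (skill + 1.5 * (group == 0) + rng.normal(0.0, 0.5, n) > 0.75).astype(int)

model = LogisticRegression().fit(np.column_stack([group, skill]), y)

# Identical skill, different group -> different prediction: the bias
# in the training data has become a rule guiding the agent's behaviour.
for g, name in [(0, "group A"), (1, "group B")]:
    p = model.predict_proba(np.array([[g, 0.0]]))[0, 1]
    print(f"{name}, same skill: P(positive outcome) = {p:.2f}")
```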

Nello Cristianini is Professor of Artificial Intelligence at the University of Bristol. His research covers machine learning methods, applications of AI to the analysis of media content, and the social and ethical implications of AI, the latter being the focus of his current work. Cristianini is the co-author of two widely known books on machine learning, as well as a book on bioinformatics. He is a recipient of the Royal Society Wolfson Research Merit Award and of a European Research Council Advanced Grant. Before joining the University of Bristol, he was a professor of statistics at the University of California, Davis. His animated videos dealing with the social aspects of AI can be found here: https://www.youtube.com/seeapattern.

Peter Garraghan (University of Lancaster)

The Efficiency Death-March: The Unintended Consequences of AI Research upon Climate Change

ICT now consumes approximately 10% of global electricity, with large-scale computing systems operating as a core foundation for digital demand, most notably AI services. Computer science systems researchers have predominantly tackled this problem by enhancing the energy efficiency of individual components – from software to hardware to cooling – reducing system energy consumption via improved scheduling, fault tolerance, security, and hardware design. However, enhancing component efficiency has still resulted in a rapidly growing global ICT footprint – more data, greater compute capability, and more devices. This is due to Jevons paradox, whereby technological progress that enhances system efficiency also increases the rate of consumption and end-use demand. This is a growing concern for the community: our best efforts to achieve sustainable and energy-efficient systems have the unintended consequence of making the problem worse, and it will only intensify as the rising prominence of AI within society drives further computing and data demands. This presentation discusses how AI is both the cure for and a cause of an ever-growing global ICT footprint. We will discuss the enablers of this problem and the various approaches by which we have both used AI to improve systems and designed better AI systems. Moreover, we will discuss and provide insights into the complex operation of some of the world’s largest ICT systems, operated by Google and Alibaba.
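
To make the rebound effect concrete, here is a minimal numerical sketch of Jevons paradox; all figures are hypothetical and chosen only to illustrate the mechanism:

```python
# Hypothetical sketch of Jevons paradox: an efficiency gain lowers the
# energy cost per unit of compute, demand responds elastically, and
# total consumption rises despite the more efficient hardware.

def total_energy(demand_units: float, energy_per_unit: float) -> float:
    """Total energy consumed = units of compute served * energy per unit."""
    return demand_units * energy_per_unit

baseline_demand = 100.0   # arbitrary units of compute served
baseline_energy = 1.0     # kWh per unit of compute

# A 50% efficiency improvement halves the energy cost per unit...
improved_energy = baseline_energy * 0.5

# ...but cheaper compute stimulates demand. If end-use demand responds
# strongly (here: halving the cost triples the demand), consumption
# grows faster than efficiency improves.
induced_demand = baseline_demand * 3.0

print(f"Before: {total_energy(baseline_demand, baseline_energy):.0f} kWh")  # 100 kWh
print(f"After:  {total_energy(induced_demand, improved_energy):.0f} kWh")   # 150 kWh, a net increase
```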

Dr. Peter Garraghan is a Lecturer (Assistant Professor) in Computer Science at Lancaster University, UK. His research interests lie in enhancing the performance, sustainability, security, and resiliency of the massive-scale computing infrastructure underpinning the nation (cloud datacenters, AI clusters, IoT). Moreover, Peter explores both radical and feasible solutions towards meaningfully controlling and shrinking the global ICT footprint in the face of growing AI usage. Peter has industrial experience studying and building large-scale production distributed systems with Google and Alibaba, and his collaborators also include Microsoft, BT, STFC, CONACYT, and the UK datacenter and IoT industry.

Simon Llewellyn, Gareth Eley & Chris Simons (VQ Communications/ University of the West of England)

Enhancing the Videoconference Experience

This case study outlines the practical application of artificial intelligence, in the form of machine learning techniques, to enhance the quality of experience for videoconference users. Although videoconferencing software has become ubiquitous, a number of factors can adversely influence the quality of the videoconference experience, e.g. network availability, information loss, and the sheer complexity of managing videoconference calls with thousands of participants. It can be difficult for engineers to understand videoconference call performance despite the huge quantities of call data generated. In partnership with the University of the West of England, Bristol, VQ Communications have recently applied a variety of machine learning algorithms to classify the quality of video calls, and exploited these classifications to develop an enhanced user experience. This case study presentation outlines some of the benefits AI has brought to VQ Communications, together with some of the wider issues its application raises. The partnership between VQ Communications and the University of the West of England is sponsored by the UK Government via Innovate UK, part of UK Research and Innovation.
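
The abstract does not name the algorithms or features used; as a flavour of the general approach, a minimal sketch that classifies call quality from network-level metrics might look like the following (the feature set, labelling rule and synthetic data are all hypothetical, not VQ Communications' actual pipeline):

```python
# Hypothetical sketch: classifying videoconference call quality from
# per-call network metrics using an off-the-shelf classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_calls = 5000

# Simulated per-call metrics: packet loss (%), jitter (ms), RTT (ms).
X = np.column_stack([
    rng.exponential(1.0, n_calls),     # packet loss
    rng.exponential(10.0, n_calls),    # jitter
    rng.normal(80.0, 30.0, n_calls),   # round-trip time
])

# Hypothetical ground truth: a call is "poor" when loss or jitter is
# high; in practice, labels would come from measured user experience.
y = ((X[:, 0] > 2.0) | (X[:, 1] > 25.0)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test),
                            target_names=["good", "poor"]))
```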

Simon Llewellyn is Knowledge Transfer Partnership Associate for the partnership between the University of the West of England, Bristol and VQ Communications. Simon received his BSc (Hons) in Computer Science from the University of the West of England in 2018, where he specialised in artificial intelligence, and machine learning techniques in particular. Simon’s current research interests focus on the application of machine learning to the quality of user experience in videoconferencing.

Dr Gareth Eley is lead software engineer at VQ Communications Ltd (https://www.vqcomms.com/). He has over 20 years’ experience developing software applications, mainly in the Unified Communications space with occasional forays into other areas. Gareth has accumulated a wealth of experience of the product life cycle from product design through to implementation, testing and product maintenance while working for large multinational corporations as well as small UK companies, usually involving scalable server-based projects written in C++ and, in recent years, C#. Throughout his career Gareth has been involved in solving the complex, difficult problems that are inevitable when releasing large, complex software products.

Dr Chris Simons is a senior lecturer in Computer Science at the University of the West of England (UWE), Bristol, UK. After many years as a practising software designer, architect and developer, Chris joined UWE, Bristol, in 2002. Chris now lectures in areas such as artificial intelligence and software development. His research interests include interactive machine learning and metaheuristic search, and particularly the practical applications of artificial intelligence in which people and machines learn from each other. https://fetstudy.uwe.ac.uk/~clsimons/.

Dr Torgyn Shaikhina (QuantumBlack)

Explainable Machine Learning in Deployment Post COVID-19

As countries around the world begin to slowly emerge from the global pandemic, many hope that the intense disruption facing organisations will diminish. Yet for Data Science practitioners working on advanced analytics and Machine Learning, this ‘next normal’ is rife with challenges. The last few months have brought enormous disruption to how people, organisations and technology behave. How can Machine Learning stay resilient and effective when model assumptions are based on behaviours that are no longer prevalent?

This talk will explore actions that data practitioners can take to mitigate the impact of COVID-19 on models in development and deployment. It will offer an in-depth perspective on Explainable AI (XAI) techniques as a foundation for more resilient Machine Learning.
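
While the talk covers the techniques in depth, here is a minimal sketch of one such idea: using feature attributions to flag when a model's behaviour rests on inputs that have drifted. The choice of the SHAP library and the drift scenario are assumptions for illustration, not a description of QuantumBlack's approach.

```python
# Illustrative sketch: comparing SHAP attributions before and after a
# distribution shift (e.g. pre- vs post-COVID data) to spot when a
# model's predictions rest on out-of-distribution behaviour.
# Requires the third-party 'shap' package.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 2000

# Pre-shift training data: the target depends on both features.
X_train = rng.normal(0.0, 1.0, size=(n, 2))
y_train = 2.0 * X_train[:, 0] + X_train[:, 1] + rng.normal(0.0, 0.1, n)
model = GradientBoostingRegressor().fit(X_train, y_train)

# Post-shift data: feature 0 drifts well outside its training range.
X_shifted = X_train.copy()
X_shifted[:, 0] += 3.0

explainer = shap.TreeExplainer(model)
attr_before = np.abs(explainer.shap_values(X_train)).mean(axis=0)
attr_after = np.abs(explainer.shap_values(X_shifted)).mean(axis=0)

# A large jump in a feature's mean |SHAP| value flags where the model
# now leans on behaviour it never saw during training.
for i, (b, a) in enumerate(zip(attr_before, attr_after)):
    print(f"feature {i}: mean |SHAP| {b:.2f} -> {a:.2f}")
```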

Related link: https://medium.com/@QuantumBlack/how-to-continue-trusting-in-machine-learning-models-post-covid-19-5c5fd53fb83b

Dr Torgyn Shaikhina is a Senior Consultant in Data Science at QuantumBlack, where she is also co-leading R&D on Explainable AI. Torgyn’s areas of expertise are algorithmic transparency, sparse-data modelling, survival analysis, and data-efficient modelling; she has authored multiple peer-reviewed publications on these subjects since 2014.

Torgyn has over 7 years of applied Machine Learning experience in both industry and academia. Torgyn was an Honorary Researcher at the Nuffield Department of Primary Care Health Sciences at the University of Oxford and is a founder of the Next Generation Programmers outreach initiative for rural developing countries.

Torgyn holds a PhD in Engineering and a BEng in Computer and Information Engineering, both from the University of Warwick, UK.