Dr Jonathan Francis Roscoe (BT Applied Research)

Automated Cyber Threat Detection and Response

Cyber defence is traditionally managed in direct response to vulnerabilities and threats within an IT system. However, security can never be an exhaustive exercise, and in the face of an ever-changing threat landscape with highly motivated threat actors there is a strong need to automate our cyber capabilities. In this talk we’ll explore how artificial intelligence and machine learning techniques are being practically deployed for the protection of enterprise networks and critical infrastructure. Firstly, we’ll look at how we can better utilise threat intelligence and network telemetry to detect threat activity on our networks. AI/ML technologies are powerful tools to enhance the work of security professionals faced with increasingly complex and broad challenges, and we’ll discuss practical measures for augmenting human expertise with new automated technology. We’ll go on to explore the use of reinforcement learning to drive the simulation of threat events on our network in order to automate the response to developing events. Such simulation can support red/blue-team testing in a much more exhaustive manner.

Jonathan Francis Roscoe leads the Automated Detection and Response research group at BT, where he is responsible for managing teams of researchers delivering innovation in cyber defence for BT and its customers. His research interests include anomaly detection, neural networks, malware, red-teaming, disinformation, privacy and open-source intelligence. His work investigating dark web marketplaces and malware vendors received the TEISS Information Security award and the ITP Innovator of the Year award. Outside of BT, Jonathan is chair of the IEEE UK&I Cyber Security group and an expert fellow of the SPRITE+ network on security, privacy and identity.

Dr Giovanna Martinez-Arellano (University of Nottingham)

AI in Manufacturing: Applications and Challenges

AI is one of the digital technologies expected to deliver new levels of productivity, efficiency and sustainability in the manufacturing sector. Although Machine Learning and other AI techniques have seen some success in areas such as process monitoring and automated inspection, the technology remains at the pilot stage in most industrial case studies. In this talk we will introduce some of the most successful applications of AI in manufacturing and discuss the limitations and lines of research that need further development for AI to scale up on the factory floor.

Giovanna Martinez-Arellano has a PhD in Computer Science from Nottingham Trent University and is currently an Anne McLaren Research Fellow in Digital and Smart Manufacturing at the Institute for Advanced Manufacturing at the University of Nottingham. Her area of research particularly focuses on the development of robust Machine Learning models for complex and reconfigurable manufacturing systems.

Dr Shyam Krishna (Alan Turing Institute)

Data justice across the AI lifecycle

How can a call for 'Data Justice' be applied across AI design, development, and deployment? What new ethical hurdles arise when generative AI enters the picture? This talk introduces organisational and societal factors in a bid to reconcile the aims of technologists, policymakers, and community stakeholders in the pursuit of equitable AI, looking beyond the prominently technocentric perspective of the AI lifecycle.

Dr Shyam Krishna is a researcher in AI Ethics and Public Policy and currently stewards the research output and partner engagement of the Advancing Data Justice project. As an engineer-turned-researcher he has an interdisciplinary background in developing an ethical and social justice-oriented view of emergent digital innovations and the technopolitical ecosystems they inhabit. His research areas include digital identity, gig economy, fintech and generative AI. He has also provided advisory services for government projects and engagements in the UK, India, Vietnam, and Kenya.

Mark Welsh (Accenture - Data & AI Practice)

GenAI in Business

Generative AI is a disruptive technology impacting organisations in many areas of their business. The rapid rise of this capability is leading to challenges in safely delivering Gen AI-based solutions that are able to scale whilst meeting an organisation's constraints in areas such as security, governance, and compliance. This talk will highlight why Gen AI has scaled so rapidly, the challenges organisations are facing in delivering use-cases, and where they are focusing their efforts.

Mark Welsh is Accenture UK’s Chief Architect and Engineer for their Data Science Group. He specialises in helping organisations to define and build their data science capabilities, and in designing and building AI and Gen AI-based use-cases, overcoming the challenges of releasing AI-based systems into production.

Dr Genovefa Kefalidou (University Of Leicester)

Human-in-the-Loop in AI-Driven Innovations: Is it still Relevant?

Increased automation sits at the heart of new Artificial Intelligence (AI)-oriented innovations. However, one of the challenges increased automation introduces is a lack of transparency in operations and data processing, both of which impact technology acceptance and the User Experience (UX) of people interacting with these technologies. One way to mediate this impact is by revisiting, and designing ways to sustain and further, a ‘human-in-the-loop’ approach in AI interactions. This talk will provide a potpourri of different projects where a ‘human-in-the-loop’ approach is demonstrated, and discuss the implications of full automation in Autonomous Systems (AS) and Data Analytics. The talk will also identify how 'human-in-the-loop' approaches can enhance Inclusive Design for AS.

Dr. Genovefa Kefalidou is a Lecturer in Human-Computer Interaction and Director of EDI within the School of Computing and Mathematical Sciences (CMS) at the University of Leicester. Her research focuses on User Experience (UX), the Design and Evaluation of Cognitive Systems, Intelligent Service Design and Human-Data Interaction. Her research is applied in Transport and Healthcare, with a particular focus on designing novel intelligent personalised decision-support systems utilising Ambient Intelligence (AmI) and Mixed-Reality technologies that explore novel 'human-in-the-loop' approaches for optimisation, enhanced UX and the performance of AI-based systems and services. She is a Co-I and member of the Management Board of the Trustworthy Autonomous Systems (TAS) Verifiability Node (https://verifiability.org/), where she looks at how trust enhances the acceptance of verifiable Autonomous Systems (AS) and services. She is also the Athena SWAN Lead and Ethics Officer at CMS.

Shakeel Khan (Validate AI)

Developing more trusted AI systems

Advances in artificial intelligence (AI) are increasingly applied to every aspect of our lives, whether as guides to human decision makers or in systems that replace human workers. Unsurprisingly, the scalability of AI has raised concerns about safety and about protecting humanity against the potential harms of such technology. We observe AI being used in decision-making across the banking, health, taxation, environmental, agricultural and defence sectors, to name but a few. More recently, generative AI's creativity in the form of text and images has attracted unprecedented coverage in the media. We must now focus on the characterisation and management of the harms that could come with entrusting human work to AI systems at scale. This presentation poses the question: what evidence do developers and commissioners of AI need to produce for assurance purposes to build trust?

Shakeel Khan is a co-founder and CEO of Validate AI Community Interest Company. He has been a strong advocate of Artificial Intelligence, supporting capability building in HMRC as well as sharing his expertise across government departments and tax administrations globally. Prior to this he worked in Financial Services, leading various machine learning projects. He is also a proponent of Operational Research methods to better embed AI into organisational processes.