Causal XAI Workshop

Are you interested in generating explanations in AI?

What do you think is the role that causality should play in explainable AI (XAI)?

The Causal XAI workshop is a forum for discussing recent research, highlighting and documenting promising approaches, and encouraging further work on causal XAI. The main topics of discussion include (but are not limited to):

  1. The definition, attributes, and need for XAI.
  2. Foundational issues in the relationship between causality and XAI.
  3. Methodologies for generating causal XAI.
  4. Understanding and evaluation of causal XAI.

This workshop is aimed at PhD students, researchers, and academics. The audience will have the chance to network and to hear invited speakers who are experts on XAI. There is also the opportunity to display a poster (abstract submission required – see below). This will be a free, in-person meeting, with refreshments and lunch included.

Workshop Details

Date & Time: Thursday 26 October 2023, from 9:30 to 17:30

Location: LG01, People's Palace, Queen Mary University of London, Mile End Rd, London E1 4NS.

Organising committee:

Dr Evangelia Kyrimi – QMUL

Dr William Marsh – QMUL

Prof David Lagnado – UCL

Dr David Glass – Ulster University

How to apply: Please complete the Application form

There is a limit on the number of attendees.

Posters: If you would like to present a poster, please submit an abstract (max 300 words) using the above application form.

Programme:

  • 09:30 – 09:45: Registration and arrival refreshments
  • 09:45 – 10:00: Opening and introductions
  • 10:00 – 11:30: Session 1: General discussion on XAI and causality – Chair: Dr David Glass

Keynote talks:

Speakers:

Prof Ruth Byrne – Trinity College Dublin

Ruth Byrne is the Professor of Cognitive Science at Trinity College Dublin, University of Dublin, in the School of Psychology and the Institute of Neuroscience. Her research expertise is in the cognitive science of human thinking, including experimental and computational investigations of reasoning and imaginative thought. Her books include ‘The Rational Imagination: How People Create Alternatives to Reality’ (MIT Press, 2005). Her current research focuses on experimental investigations of human explanatory reasoning and the use of counterfactual explanations in eXplainable Artificial Intelligence.

Dr Hana Chockler – King’s College London

Hana Chockler is a Reader in the Department of Informatics at King’s College London. She is Head of the Software Systems group and coordinator of the Year in Industry programme in the department. Prior to joining King’s College in 2013, Hana was a Research Staff Member at IBM Research from 2005 to 2013, a Postdoctoral Associate at Worcester Polytechnic Institute (WPI) and at Northeastern University, and a visiting scientist at the Computer Science and Artificial Intelligence Laboratory (CSAIL) of the Massachusetts Institute of Technology (MIT) from 2003 to 2005. Her research interests lie broadly in investigating reasons, causes, and explanations in software engineering and machine learning. Her current research on causes and explanations focuses mostly on explaining the decisions of deep neural networks. She also has an ongoing research project on explanations of reinforcement learning policies.

Dr Evangelia Kyrimi – Queen Mary University of London

Evangelia Kyrimi is a Lecturer in AI and Data Science in the School of Electronic Engineering and Computer Science at QMUL. Since 2023 she has been a Royal Academy of Engineering Research Fellow. Her research interests lie in Bayesian modelling and decision support under uncertainty in healthcare. She focuses on methodologies for eliciting expert knowledge and developing causal graphical models. Her research also covers translating causal AI models into explainable and responsible AI systems that users can trust and adopt. This includes (1) investigating the fundamentals of explanation, (2) developing explanation algorithms that incorporate causality, (3) creating user-specific dynamic explanation outputs, and (4) generating an XAI evaluation protocol.

Prof Timothy Miller – University of Queensland

Timothy Miller is a Professor in Artificial Intelligence in the School of Electrical Engineering and Computer Science at the University of Queensland. Tim’s primary interest lies in artificial intelligence, in particular explainable AI (XAI) and human-AI interaction. His work is at the intersection of artificial intelligence, interaction design, and cognitive science/psychology. His areas of education expertise are artificial intelligence, software engineering, and technology innovation. He has extensive experience developing novel and innovative solutions with industry and defence collaborators.

Dr Marko Tešić – University of Cambridge

Dr Tešić is a Postdoctoral Researcher at the Leverhulme Centre for the Future of Intelligence, University of Cambridge. He currently explores the capabilities of AI systems and how these capabilities translate to specific demands in the human workforce. This research is carried out in collaboration with the OECD and experts in occupational psychology. Previously, he was a Royal Academy of Engineering UK IC postdoctoral research fellow investigating the impact of explanations of AI predictions on people’s beliefs. He has also studied people’s causal and probabilistic reasoning and has a strong interest in data analysis, causal modelling, and Bayesian network analysis. Dr Tešić received a Ph.D. in Psychology from Birkbeck’s Department of Psychological Sciences, an M.A. in Logic and Philosophy of Science from the Munich Center for Mathematical Philosophy, LMU, and a B.A. in Philosophy from the University of Belgrade, Serbia.

Prof Mihaela van der Schaar – University of Cambridge

Mihaela van der Schaar is the John Humphrey Plummer Professor of Machine Learning, Artificial Intelligence and Medicine at the University of Cambridge, where she leads the van der Schaar Lab. Mihaela was elected an IEEE Fellow in 2009. She has received numerous awards, including the Oon Prize on Preventative Medicine from the University of Cambridge (2018), a National Science Foundation CAREER Award (2004), three IBM Faculty Awards, the IBM Exploratory Stream Analytics Innovation Award, the Philips Make a Difference Award, and several best paper awards, including the IEEE Darlington Award. In 2019, she was identified by the National Endowment for Science, Technology and the Arts as the most-cited female AI researcher in the UK. She was also elected a 2019 “Star in Computer Networking and Communications” by N²Women. Mihaela is personally credited as an inventor on 35 US patents, many of which are still frequently cited and adopted in standards. She has made over 45 contributions to international standards, for which she received three ISO Awards. Mihaela’s research focuses on machine learning, AI, and operations research for healthcare and medicine.

Prof Jon Williamson – University of Kent

Jon Williamson works in the philosophy of science and medicine. He is co-director of the Centre for Reasoning and a member of the Theoretical Reasoning research cluster. His research covers the philosophy of causality, the foundations of probability, formal epistemology, inductive logic, and the use of causal, probabilistic, and inferential methods in science and medicine. His books ‘Bayesian Nets and Causality’ and ‘In Defence of Objective Bayesianism’ develop the view that causality and probability are features of the way we reason about the world, not a part of the world itself. His books ‘Probabilistic Logics and Probabilistic Networks’ and ‘Lectures on Inductive Logic’ apply recent developments in Bayesianism to motivate a new approach to inductive logic. His latest book, ‘Evidential Pluralism in the Social Sciences’, provides a new account of causal enquiry in the social sciences. His previous book, ‘Evaluating Evidence of Mechanisms in Medicine’, seeks to broaden the range of evidence considered by evidence-based medicine.