{"id":395,"date":"2023-08-04T16:12:38","date_gmt":"2023-08-04T15:12:38","guid":{"rendered":"https:\/\/minds.qmul.ac.uk\/?page_id=395"},"modified":"2023-11-04T23:54:52","modified_gmt":"2023-11-04T23:54:52","slug":"causal-xai-workshop","status":"publish","type":"page","link":"https:\/\/minds.qmul.ac.uk\/index.php\/causal-xai-workshop\/","title":{"rendered":"Causal XAI Workshop"},"content":{"rendered":"\n<p>Are you interested in generating explanations in AI?<\/p>\n\n\n\n<p>What do you think is the role that causality should play in explainable AI (XAI)?<\/p>\n\n\n\n<p>The Causal XAI workshop is a forum for discussing recent research, highlighting, and documenting promising approaches, and encouraging further work on causal XAI. The main topics of discussion include (but are not limited to):<\/p>\n\n\n\n<ol class=\"wp-block-list\" type=\"1\">\n<li>XAI definition, attributes, and need.<\/li>\n\n\n\n<li>Foundational issues in the relationship between causality and XAI.<\/li>\n\n\n\n<li>Methodologies for generating causal XAI.<\/li>\n\n\n\n<li>Understanding and evaluation of causal XAI.<\/li>\n<\/ol>\n\n\n\n<p>This workshop is aimed at PhD students, researchers, and academics. The audience will have the chance to network and hear invited speakers who are experts on XAI. There is also the opportunity to display a poster (abstract submission required \u2013 see below). 
This will be a free in-person meeting, including refreshments and lunch.<\/p>\n\n\n\n<p><strong>Conference Details<\/strong><\/p>\n\n\n\n<p><strong>Date &amp; Time: <\/strong>Thursday 26 October 2023, from 9:30 to 17:30<\/p>\n\n\n\n<p><strong>Location:<\/strong> LG01, People\u2019s Palace, Queen Mary University of London,&nbsp;Mile End Rd, London E1 4NS.<\/p>\n\n\n\n<p><strong>Organising committee:<\/strong><\/p>\n\n\n\n<p>Dr Evangelia Kyrimi \u2013 QMUL<\/p>\n\n\n\n<p>Dr William Marsh \u2013 QMUL<\/p>\n\n\n\n<p>Prof David Lagnado \u2013 UCL<\/p>\n\n\n\n<p>Dr David Glass \u2013 Ulster University<\/p>\n\n\n\n<p><strong>How to apply:<\/strong> Please complete the <a href=\"https:\/\/forms.office.com\/Pages\/ResponsePage.aspx?id=kfCdVhOw40CG7r2cueJYFJR2_Febb81LjUyABkHx3NZUNEowRUYyMlUyM1lCRU4wUjZJWFJZMlZQRy4u\">Application form<\/a>.<\/p>\n\n\n\n<p>There is a limit on the number of attendees.<\/p>\n\n\n\n<p><strong>Posters:<\/strong> If you would like to present a poster, please submit an abstract (max 300 words) using the above application form.<\/p>\n\n\n\n<p><strong>Programme:<\/strong> <\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>09:30 &#8211; 09:45:<\/strong>&nbsp; Registration and arrival refreshments<\/li>\n\n\n\n<li><strong>09:45 &#8211; 10:00:<\/strong>&nbsp; Opening and Introductions<\/li>\n\n\n\n<li><strong>10:00 \u2013 11:30:<\/strong> Session 1: Generic discussion on XAI and causality \u2013 Chair: Dr David Glass<\/li>\n<\/ul>\n\n\n\n<p>    Keynote talks:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Prof Timothy Miller \u2013 <a href=\"http:\/\/constantinou.info\/downloads\/slides\/xai_2023\/1.%20TimMiller_XAI_is_dead_causal_XAI.pdf\">Explainable AI is dead! 
Long live explainable AI<\/a><\/li>\n\n\n\n<li>Prof Jon Williamson \u2013 <a href=\"http:\/\/constantinou.info\/downloads\/slides\/xai_2023\/2.%20JonWilliamson_EP-XAI_slides.pdf\" data-type=\"link\" data-id=\"http:\/\/constantinou.info\/downloads\/slides\/xai_2023\/2.%20JonWilliamson_EP-XAI_slides.pdf\">XAI from the perspective of Evidential Pluralism<\/a><\/li>\n\n\n\n<li>Dr Evangelia Kyrimi \u2013 <a href=\"http:\/\/constantinou.info\/downloads\/slides\/xai_2023\/6.%20LinaKyrimiFundametalsChallenge.pdf\" data-type=\"link\" data-id=\"http:\/\/constantinou.info\/downloads\/slides\/xai_2023\/6.%20LinaKyrimiFundametalsChallenge.pdf\">XAI fundamentals challenge<\/a><\/li>\n\n\n\n<li><strong>11:30 \u2013 12:00:<\/strong> Session 1 Panel Discussion<\/li>\n\n\n\n<li><strong>12:00 &#8211; 13:30:<\/strong> Lunch + Poster Session<\/li>\n\n\n\n<li><strong>13:30 \u2013 14:30:<\/strong> Session 2: Methodologies for generating causal XAI \u2013 Chair: Dr William Marsh<\/li>\n<\/ul>\n\n\n\n<p>    Keynote talks:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Prof Mihaela van der Schaar \u2013 <a href=\"http:\/\/constantinou.info\/downloads\/slides\/xai_2023\/4.%20Vanderschaar_Causal-XAI.pdf\">Benchmarking Heterogeneous Treatment Effect Models through the Lens of XAI<\/a><\/li>\n\n\n\n<li>Dr Hana Chockler \u2013 <a href=\"http:\/\/constantinou.info\/downloads\/slides\/xai_2023\/5.%20HanaChockler_Causal%20XAI%20workshop%20102023.pdf\" data-type=\"link\" data-id=\"http:\/\/constantinou.info\/downloads\/slides\/xai_2023\/5.%20HanaChockler_Causal%20XAI%20workshop%20102023.pdf\">ReX: Explanations for Image Classifiers: Algorithms, Insights, and Challenges<\/a><\/li>\n\n\n\n<li><strong>14:30 \u2013 15:00:<\/strong> Session 2 Panel Discussion<\/li>\n\n\n\n<li><strong>15:00 &#8211; 15:15:<\/strong> Break with refreshments<\/li>\n\n\n\n<li><strong>15:15 \u2013 16:15:<\/strong> Session 3: Understanding and evaluation of causal XAI \u2013 Chair: Prof David Lagnado<\/li>\n<\/ul>\n\n\n\n<p>    
Keynote talks:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Prof Ruth Byrne \u2013 How people think about counterfactual explanations in XAI<\/li>\n\n\n\n<li>Dr Marko Tesic \u2013 <a href=\"http:\/\/constantinou.info\/downloads\/slides\/xai_2023\/7.%20MarkoTesic_CF_explanations_Causal_XAI_workshop_2023.pdf\" data-type=\"link\" data-id=\"http:\/\/constantinou.info\/downloads\/slides\/xai_2023\/7.%20MarkoTesic_CF_explanations_Causal_XAI_workshop_2023.pdf\">Can AI explanations skew our causal intuitions about the world? If so, can we correct for that?<\/a><\/li>\n\n\n\n<li><strong>16:15 \u2013 16:45:<\/strong> Session 3 Panel Discussion<\/li>\n\n\n\n<li><strong>16:45 \u2013 17:00:<\/strong> Wrap up and thank you<\/li>\n<\/ul>\n\n\n\n<p><strong>Speakers:<\/strong><\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:25%\"><div class=\"wp-block-image\">\n<figure class=\"alignright size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"140\" height=\"210\" src=\"https:\/\/minds.qmul.ac.uk\/wp-content\/uploads\/2023\/08\/workshop1-edited-3.png\" alt=\"\" class=\"wp-image-413\" style=\"width:148px;height:222px\"\/><figcaption class=\"wp-element-caption\"><strong>Prof Ruth Byrne \u2013 Trinity College Dublin<\/strong><\/figcaption><\/figure>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:50%\">\n<p>Ruth Byrne is the Professor of Cognitive Science at Trinity College Dublin, University of Dublin, in the School of Psychology and the Institute of Neuroscience. Her research expertise is in the cognitive science of human thinking, including experimental and computational investigations of reasoning and imaginative thought. 
Her books include \u2018The rational imagination: how people create alternatives to reality\u2019 (2005, MIT Press). Her current research focuses on experimental investigations of human explanatory reasoning and the use of counterfactual explanations in eXplainable Artificial Intelligence.<\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:25%\"><\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:25%\"><div class=\"wp-block-image\">\n<figure class=\"alignright size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"155\" height=\"232\" src=\"https:\/\/minds.qmul.ac.uk\/wp-content\/uploads\/2023\/08\/workshop2-edited.png\" alt=\"\" class=\"wp-image-415\" style=\"width:146px;height:220px\"\/><figcaption class=\"wp-element-caption\"><strong>Dr Hana Chockler \u2013 King\u2019s College London<\/strong><\/figcaption><\/figure>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:50%\">\n<p>Hana Chockler is a Reader in the Department of Informatics, King\u2019s College London. She is the Head of the Software Systems group and the coordinator of the Year in Industry programme in the department. Prior to joining King\u2019s College in 2013, Hana was a Research Staff Member at IBM Research from 2005 to 2013, a Postdoctoral Associate at Worcester Polytechnic Institute (WPI) and at Northeastern University, and a visiting scientist at the Computer Science and Artificial Intelligence Laboratory (CSAIL) of the Massachusetts Institute of Technology (MIT) from 2003 to 2005. Her research interests are broadly in investigating reasons, causes, and explanations of software engineering and machine learning procedures. 
Her current research on causes and explanations focuses mostly on explaining the decisions of deep neural networks. She also has an ongoing research project on explanations of reinforcement learning policies.<\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:25%\"><\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:25%\"><div class=\"wp-block-image\">\n<figure class=\"alignright size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"113\" height=\"200\" src=\"https:\/\/minds.qmul.ac.uk\/wp-content\/uploads\/2023\/08\/lina-edited-1.jpg\" alt=\"\" class=\"wp-image-434\" style=\"width:148px\"\/><figcaption class=\"wp-element-caption\"><strong>Dr Evangelia Kyrimi \u2013 Queen Mary University of London<\/strong><\/figcaption><\/figure>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:50%\">\n<p>Evangelia Kyrimi is a lecturer in AI and Data Science in the School of Electronic Engineering and Computer Science at QMUL. Since 2023 she has been a Royal Academy of Engineering Research Fellow. Her research interests lie in Bayesian modelling and decision support under uncertainty in healthcare. She focuses on methodologies for eliciting expert knowledge and developing causal graphical models. Her research is also about translating causal AI models into explainable and responsible AI systems that users can trust and adopt. 
This includes (1) investigating the fundamentals of explanation, (2) developing explanation algorithms that incorporate causality, (3) creating user-specific dynamic explanation outputs, and (4) generating an XAI evaluation protocol.<\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:25%\"><\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:25%\"><div class=\"wp-block-image\">\n<figure class=\"alignright size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"147\" height=\"221\" src=\"https:\/\/minds.qmul.ac.uk\/wp-content\/uploads\/2023\/08\/workshop4-e1691232039133.png\" alt=\"\" class=\"wp-image-401\"\/><figcaption class=\"wp-element-caption\"><strong>Prof Timothy Miller \u2013 University of Queensland<\/strong><\/figcaption><\/figure>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:50%\">\n<p>Timothy Miller is a Professor in Artificial Intelligence in the School of Electrical Engineering and Computer Science at the University of Queensland. Tim&#8217;s primary interest lies in the area of artificial intelligence, in particular Explainable AI (XAI) and human-AI interaction. His work is at the intersection of artificial intelligence, interaction design, and cognitive science\/psychology. His areas of education expertise are in artificial intelligence, software engineering, and technology innovation. 
He has extensive experience developing novel and innovative solutions with industry and defence collaborators.<\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:25%\"><\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:25%\"><div class=\"wp-block-image\">\n<figure class=\"alignright size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"142\" height=\"213\" src=\"https:\/\/minds.qmul.ac.uk\/wp-content\/uploads\/2023\/08\/workshop5-edited.png\" alt=\"\" class=\"wp-image-417\" style=\"width:149px;height:227px\"\/><figcaption class=\"wp-element-caption\"><strong>Dr Marko Te\u0161i\u0107 \u2013 University of Cambridge<\/strong><\/figcaption><\/figure>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:50%\">\n<p>Dr Tesic is a Postdoctoral Researcher at the Leverhulme Centre for the Future of Intelligence, University of Cambridge. He currently explores the capabilities of AI systems and how these capabilities map onto the specific demands of the human workforce. This research is carried out in collaboration with the OECD and experts in occupational psychology. Previously, he was a Royal Academy of Engineering UK IC postdoctoral research fellow investigating the impact of explanations of AI predictions on our beliefs. He has also studied people\u2019s causal and probabilistic reasoning and has a strong interest in data analysis, causal modeling and Bayesian network analysis. Dr Tesic received a Ph.D. in Psychology from Birkbeck\u2019s Psychological Sciences department, an M.A. in Logic and Philosophy of Science from the Munich Center for Mathematical Philosophy, LMU, and a B.A. 
in Philosophy from the University of Belgrade, Serbia.<\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:25%\"><\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:25%\"><div class=\"wp-block-image\">\n<figure class=\"alignright size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"140\" height=\"210\" src=\"https:\/\/minds.qmul.ac.uk\/wp-content\/uploads\/2023\/08\/workshop6-edited.png\" alt=\"\" class=\"wp-image-418\" style=\"width:152px;height:231px\"\/><figcaption class=\"wp-element-caption\"><strong>Prof Mihaela van der Schaar \u2013 University of Cambridge<\/strong><\/figcaption><\/figure>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:50%\">\n<p>Mihaela van der Schaar is the John Humphrey Plummer Professor of Machine Learning, Artificial Intelligence and Medicine at the University of Cambridge, where she leads the van der Schaar Lab. Mihaela was elected IEEE Fellow in 2009. She has received numerous awards, including the Oon Prize on Preventative Medicine from the University of Cambridge (2018), a National Science Foundation CAREER Award (2004), three IBM Faculty Awards, the IBM Exploratory Stream Analytics Innovation Award, the Philips Make a Difference Award, and several best paper awards, including the IEEE Darlington Award. In 2019, she was identified by the National Endowment for Science, Technology and the Arts as the most-cited female AI researcher in the UK. She was also elected as a 2019 \u201cStar in Computer Networking and Communications\u201d by N\u00b2Women. Mihaela is personally credited as inventor on 35 US patents, many of which are still frequently cited and adopted in standards. 
She has made over 45 contributions to international standards, for which she received three ISO Awards. Mihaela\u2019s research focus is on machine learning, AI and operations research for healthcare and medicine.<\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:25%\"><\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:25%\"><div class=\"wp-block-image\">\n<figure class=\"alignright size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"159\" height=\"239\" src=\"https:\/\/minds.qmul.ac.uk\/wp-content\/uploads\/2023\/08\/workshop7-edited.png\" alt=\"\" class=\"wp-image-419\" style=\"width:148px;height:223px\"\/><figcaption class=\"wp-element-caption\"><strong>Prof Jon Williamson \u2013 University of Kent<\/strong><\/figcaption><\/figure>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:50%\">\n<p>Jon Williamson works in the area of philosophy of science and medicine. He is co-director of the Centre for Reasoning and is a member of the Theoretical Reasoning research cluster. He works on the philosophy of causality, the foundations of probability, formal epistemology, inductive logic, and the use of causality, probability and inference methods in science and medicine. His books Bayesian Nets and Causality and In Defence of Objective Bayesianism develop the view that causality and probability are features of the way we reason about the world, not a part of the world itself. His books Probabilistic Logics and Probabilistic Networks and Lectures on Inductive Logic apply recent developments in Bayesianism to motivate a new approach to inductive logic. 
Jon&#8217;s latest book, Evidential Pluralism in the Social Sciences, provides a new account of causal enquiry in the social sciences. His previous book, Evaluating Evidence of Mechanisms in Medicine, seeks to broaden the range of evidence considered by evidence-based medicine.<\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:25%\"><\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Are you interested in generating explanations in AI? What do you think is the role that causality should play in explainable AI (XAI)? The Causal XAI workshop is a forum for discussing recent research, highlighting, and documenting promising approaches, and encouraging further work on causal XAI. The main topics of discussion include (but are not &hellip;<br \/><a href=\"https:\/\/minds.qmul.ac.uk\/index.php\/causal-xai-workshop\/\" class=\"more-link pen_button pen_element_default pen_icon_arrow_double\">Continue reading <span class=\"screen-reader-text\">Causal XAI 
Workshop<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-395","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/minds.qmul.ac.uk\/index.php\/wp-json\/wp\/v2\/pages\/395","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/minds.qmul.ac.uk\/index.php\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/minds.qmul.ac.uk\/index.php\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/minds.qmul.ac.uk\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/minds.qmul.ac.uk\/index.php\/wp-json\/wp\/v2\/comments?post=395"}],"version-history":[{"count":24,"href":"https:\/\/minds.qmul.ac.uk\/index.php\/wp-json\/wp\/v2\/pages\/395\/revisions"}],"predecessor-version":[{"id":478,"href":"https:\/\/minds.qmul.ac.uk\/index.php\/wp-json\/wp\/v2\/pages\/395\/revisions\/478"}],"wp:attachment":[{"href":"https:\/\/minds.qmul.ac.uk\/index.php\/wp-json\/wp\/v2\/media?parent=395"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}