ArgXAI-23: 2nd International Workshop on Argumentation for eXplainable AI
Imperial College London, London, UK, July 10, 2023

Conference website: https://people.cs.umu.se/tkampik/argxai/2023.html
Submission link: https://easychair.org/conferences/?conf=argxai23
Submission deadline: May 15, 2023
ICLP 2023 Workshop: Argumentation for XAI (ArgXAI)
We kindly invite contributions to the 2nd Workshop on Argumentation for eXplainable Artificial Intelligence (ArgXAI-23, https://people.cs.umu.se/tkampik/argxai/2023.html), to be held on 9 or 10 July 2023 at the International Conference on Logic Programming (ICLP 2023, https://iclp2023.imperial.ac.uk/, Imperial College London, UK).
In recent years, research on intelligent systems that can explain their inferences and decisions to (human and machine) users has emerged as an important subfield of Artificial Intelligence (AI). In this context, the interest in symbolic and hybrid approaches to AI and their ability to facilitate explainable and trustworthy reasoning and decision-making -- often in combination with machine learning algorithms -- is increasing. Computational argumentation is considered a particularly promising paradigm for facilitating explainable AI (XAI), and one that has deep roots in logic programming (LP). This trend is reflected by the fact that many researchers who study argumentation have started to:
- apply argumentation as a method of explainable reasoning;
- combine argumentation with other subfields of AI, such as knowledge representation and reasoning (KR), particularly forms of LP, as well as machine learning (ML), to facilitate the latter's explainability;
- study explainability properties of argumentation and other defeasible reasoning approaches.
Given the substantial interest in these different facets of XAI in the context of argumentation and LP, this workshop aims to provide a forum for focused discussion of recent developments on the topic. The workshop will feature works and discussions covering diverse perspectives on argumentative and LP explainability. Specifically, we will cover the formal foundations of explaining argumentative inferences and the argumentative properties of explanations, as well as applications of argumentation to facilitate explainability, with an extended focus on LP-based inference and explanations. The event will appeal to a wide range of researchers, including: the growing part of the core argumentation community that works on explainable argumentation; those working on explainability in the context of other approaches to defeasible reasoning, particularly LP; and applied researchers who intend to use computational argumentation or LP for explainability purposes.
Submission Guidelines
We welcome:
- Original contributions in the form of mature papers or work in progress;
- Incremental developments (of at least 30% new material) of already published work.
Submissions should be 7-14 pages in PDF format, including abstract, figures and references, and should follow the CEUR-WS template (single column): https://www.overleaf.com/read/gwhxnqcghhdt. Reviewing will be single-blind.
All submissions must be made electronically through the EasyChair conference system at https://easychair.org/conferences/?conf=argxai23.
After a careful review process, accepted papers will be included in CEUR-WS proceedings, likely before the workshop date. For a paper to be included in the workshop proceedings, at least one of its authors must register for and attend the ICLP conference to present it.
Important Dates:
- Paper submission deadline: 15th May 2023 AoE
- Notification of acceptance: 15th June
- Camera-ready version: 30th June
- Workshop date: 9th or 10th July (exact day TBA)
List of Topics
Topics include, but are not limited to:
- Symbolic explainability:
  - Formal definitions of explanations
  - Defeasible reasoning- and LP-based inference and explanations
  - Computational properties of explanations
  - Neuro-symbolic explainable argumentation
  - Explanation as a form of argumentation or defeasible reasoning
  - Human intelligibility of formal argumentation
  - Interpretability of LP-based explanations
  - Dialectical, dialogical and conversational explanations
  - AI methods to support argumentative explainability
  - Analogous topics for other defeasible reasoning approaches
- Argumentation for XAI:
  - Applications of argumentation for explainability in the fields of AI (e.g. machine learning, machine reasoning, multi-agent systems, natural language processing) and overlapping fields of research (e.g. optimisation, human-computer interaction, philosophy and social sciences)
  - User acceptance and evaluation of argumentation-based explanations
  - Software systems that provide argumentation-based explanations
  - Applications of other defeasible reasoning approaches to XAI
Organisation
Program Committee
- Nick Bassiliades, Aristotle University of Thessaloniki
- Francesca Mosca, King's College London
- Isabel Sassoon, Brunel University London
- Tjitze Rienstra, Maastricht University
- Roberta Calegari, Alma Mater Studiorum–Università di Bologna
- Mariela Morveli Espinoza, Federal University of Technology of Parana
- Wijnand van Woerkom, Universiteit Utrecht
- Simon Parsons, University of Lincoln
- Fernando Tohme, Universidad Nacional del Sur
- Jérôme Delobelle, Université de Paris, LIPADE
- Xiuyi Fan, Nanyang Technological University
- Pietro Baroni, University of Brescia
- Zeynep G. Saribatur, TU Wien
- Johannes P. Wallner, TU Graz
- Alexandros Vassiliades, Aristotle University of Thessaloniki
- Alison R. Panisson, UFSC
- Christian Strasser, Ruhr-University Bochum
- Federico Castagna, University of Lincoln
- Jieting Luo, Zhejiang University
- Alessandro Dal Palù, Università degli Studi di Parma
- Giuseppe Contissa, University of Bologna
- Thomas Eiter, Vienna University of Technology
- Kees van Berkel, Ruhr University Bochum
- Alejandro Garcia, Universidad Nacional del Sur
- Antonis Kakas, University of Cyprus
- Anthony Hunter, University College London
- Beishui Liao, Zhejiang University
- Andreas Xydis, University of Lincoln
- Alireza Tamaddoni-Nezhad, Imperial College, London
Organisers
- Antonio Rago, Imperial College London
- Timotheus Kampik, Umeå University and SAP
- Kristijonas Čyras, Ericsson Research
- Oana Cocarascu, King’s College London