XAI:ACA 2026

Special Session on

Explainable Artificial Intelligence: Advances, Challenges and Applications

at the 18th Asian Conference on Intelligent Information and Database Systems (ACIIDS 2026)
13-15 April 2026, Kaohsiung, Taiwan

Special Session Organizers

Jerzy Stefanowski
Poznan University of Technology Institute of Computing Science
Polish Academy of Sciences
E-mail: jerzy.stefanowski@cs.put.poznan.pl
WWW: http://www.cs.put.poznan.pl/jstefanowski/

Marek Śmieja
Jagiellonian University in Kraków
E-mail: marek.smieja@uj.edu.pl
WWW: https://mareksmieja.github.io/

Maciej Zięba
Wrocław University of Science and Technology
E-mail: maciej.zieba@pwr.edu.pl
WWW: https://genwro.ai.pwr.edu.pl/

Oleksii Furman (Technical Committee Member)
Wrocław University of Science and Technology
E-mail: oleksii.furman@pwr.edu.pl

Objectives and topics

Despite the rapid development of Explainable AI (XAI) in recent years, many issues still require further research. These include: handling difficult data (especially data in diverse formats), explaining the reasoning of particularly complex neural networks, new methods for evaluating the usefulness of generated explanations, conducting evaluation studies with human experts and incorporating their preferences into the interactive process of building credible explanations, as well as better handling of causality and background domain knowledge. Ensuring interpretable ML systems requires careful consideration of the trade-off between comprehensibility and predictive performance. Other open problems concern the safety of solutions, societal fairness, various biases, supporting accountability and other postulates of Trustworthy AI, and compliance with ethical and legal regulations concerning high-risk AI systems. Equally important is the impact of XAI in practical applications: transparent and reliable explanations can increase trust in AI-driven medical diagnoses, financial decision-making, legal assessments, and autonomous systems, where the cost of errors is high.

This special session aims to explore a wide spectrum of methods and advances in Explainable Artificial Intelligence (XAI), with particular emphasis on emerging challenges that drive new research directions. We welcome contributions addressing both methodological innovations and practical issues, as well as studies that highlight limitations and inspire future developments. In addition, we strongly encourage submissions that share experiences, insights, and results from real-world applications of XAI across diverse domains.

The scope of this special session includes, but is not limited to, the following topics:

    METHODS AND APPROACHES
  • XAI methods for different types of data (tabular, images, texts, graphs, time series, NLP, and other unstructured data)
  • Multi-modal XAI
  • Interpretable ML approaches
  • Explainable methods for evolving data (with concept drift) and time series
  • Counterfactual and contrastive explanations
  • Integrating causality and knowledge graphs with XAI
  • Explanations for hybrid neuro-symbolic approaches
  • XAI for foundation models / LLMs and generative AI
  • Visualization techniques for explanations

    EVALUATION AND ROBUSTNESS
  • Explainability measures and evaluation procedures
  • Handling different properties of explanations
  • Benchmarking datasets, standards, and frameworks for XAI

    HUMAN FACTORS
  • Interactive XAI
  • Human studies and evaluation of XAI
  • Personalized and user-adaptive explanations (for experts, end-users, laypeople)
  • Cognitive and psychological aspects of explanation acceptance
  • Human-in-the-loop explainability (supporting collaborative decision-making)

    APPLICATIONS
  • Domain-specific XAI (healthcare, biomedicine, finance, legal, education, cybersecurity, autonomous systems, etc.)
  • XAI for uncertainty quantification and risk assessment

    SOCIETAL, ETHICAL, AND REGULATORY ASPECTS
  • Addressing Trustworthy AI postulates
  • Compliance with ethical guidelines
  • Fairness, accountability, transparency, and explainability (FATE)
  • Regulatory, legal, and societal aspects of XAI (e.g., GDPR, EU AI Act, right to explanation)

ACIIDS 2026 important dates

Paper submission: December 15, 2025 (hard deadline)
Notification of acceptance: January 2026
Camera-ready papers: February 2026
Registration & payment: February 2026
Conference date: April 13-15, 2026