Explainable AI Planning: Literature Review

General information

Year of publication

2025

Type

Conference

Description

Automated planning systems have become indispensable tools in a wide range of applications, from robotics and healthcare to logistics and autonomous systems. However, as these systems grow in complexity, their decision-making processes often become opaque.

Abstract

Explainable AI Planning (XAIP) is a pivotal research area focused on enhancing the transparency, interpretability, and trustworthiness of automated planning systems. This paper provides a comprehensive review of XAIP, emphasizing key techniques for plan explanation, such as contrastive explanations, hierarchical decomposition, and argumentative reasoning frameworks. We explore the critical role of argumentation in justifying planning decisions and address the challenges of replanning in dynamic and uncertain environments, particularly in high-stakes domains like healthcare, autonomous systems, and logistics. Additionally, we discuss the ethical and practical implications of deploying XAIP, highlighting the importance of human-AI collaboration, regulatory compliance, and uncertainty handling. By examining these aspects, this paper aims to provide a detailed understanding of how XAIP can improve the transparency, interpretability, and usability of AI planning systems across various domains.
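
To make the abstract's central technique concrete: a contrastive explanation answers "why this plan rather than that one?" by replaying the user's suggested alternative (the "foil") against the planner's model and reporting the first precondition failure, unmet goal, or cost gap. The Python sketch below is illustrative only and is not taken from any of the reviewed papers; the STRIPS-style Action class and the simulate and contrastive_explanation functions are hypothetical names for this general pattern.

```python
from dataclasses import dataclass

# Illustrative sketch of contrastive plan explanation over a toy
# STRIPS-style model. All names are hypothetical, not from the
# reviewed literature.

@dataclass(frozen=True)
class Action:
    name: str
    pre: frozenset      # preconditions that must hold before applying
    add: frozenset      # facts the action makes true
    delete: frozenset   # facts the action makes false
    cost: float = 1.0

def simulate(state, plan):
    """Replay a plan; return (final_state, total_cost, failure).

    failure is None on success, else (step_index, missing_preconditions)."""
    cost = 0.0
    for i, act in enumerate(plan):
        missing = act.pre - state
        if missing:
            return state, cost, (i, missing)
        state = (state - act.delete) | act.add
        cost += act.cost
    return state, cost, None

def contrastive_explanation(init, goal, plan, foil):
    """Contrast the chosen plan with a user-proposed alternative (foil)."""
    _, plan_cost, _ = simulate(init, plan)           # chosen plan assumed valid
    final, foil_cost, failure = simulate(init, foil)
    if failure is not None:
        step, missing = failure
        return (f"Your alternative fails at step {step + 1} ({foil[step].name}): "
                f"preconditions {sorted(missing)} do not hold there.")
    if not goal <= final:
        return f"Your alternative leaves goals {sorted(goal - final)} unachieved."
    if foil_cost > plan_cost:
        return (f"Both plans reach the goal, but yours costs {foil_cost} "
                f"versus {plan_cost}; the cheaper plan was preferred.")
    return "Your alternative is at least as good; the planner's model may differ from yours."

# Toy logistics example: deliver a package from A to B.
load   = Action("load",      frozenset({"truck-at-A", "pkg-at-A"}),
                frozenset({"pkg-in-truck"}), frozenset({"pkg-at-A"}))
drive  = Action("drive-A-B", frozenset({"truck-at-A"}),
                frozenset({"truck-at-B"}), frozenset({"truck-at-A"}), cost=2.0)
unload = Action("unload",    frozenset({"truck-at-B", "pkg-in-truck"}),
                frozenset({"pkg-at-B"}), frozenset({"pkg-in-truck"}))

init = frozenset({"truck-at-A", "pkg-at-A"})
goal = frozenset({"pkg-at-B"})
print(contrastive_explanation(init, goal,
                              plan=[load, drive, unload],
                              foil=[drive, load, unload]))  # "why not drive first?"
```

Run on the foil "drive first, then load", the sketch reports that loading fails once the truck has left A, which is exactly the contrastive, foil-directed style of answer the reviewed literature advocates; the final fallback branch hints at the related model-reconciliation setting, where the mismatch lies between the planner's model and the user's.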


Research directions