MODELING EXPLANATIONS IN INTELLIGENT SYSTEMS BASED ON THE INTEGRATION OF TEMPORAL AND CAUSAL DEPENDENCIES
DOI: https://doi.org/10.26906/SUNZ.2024.4.163

Keywords: intelligent system, artificial intelligence system, explanation, decision-making process, temporal dependencies, causal dependencies, causality, possibility

Abstract
The article’s subject matter is the processes of constructing explanations in intelligent systems using temporal and causal dependencies. The aim is to develop an approach to constructing explanations based on the integration of temporal and causal dependencies in the decision-making process, so that explanations can be formed for both external and internal users of intelligent information systems. Tasks: to determine the differences in access to information in intelligent information systems (IIS) for external and internal users; to develop a hierarchical model of explanation based on temporal and causal dependencies; to develop a method for constructing explanations using temporal and causal dependencies. The approaches used are methods of constructing explanations and methods of constructing causal dependencies. The following results were obtained. The differences in users’ access to information in intelligent systems were structured to justify the need for multilevel detailing of explanations. A hierarchical model of explanation based on temporal and causal dependencies is proposed. A method for constructing explanations using temporal and causal dependencies between states or actions of the decision-making process in an intelligent system is proposed. Conclusions. The scientific novelty of the obtained results is as follows. A hierarchical model of explanation is proposed that contains local, process, and global levels of explanation according to the user’s level of access to information about the decision-making process; this makes it possible to take into account the incompleteness of information about the state of the intelligent system when explaining its decisions to external and internal users. A method for constructing explanations has been developed that comprises a phase of constructing explanations using temporal and causal dependencies between the input data, the states of the decision-making process, and the obtained result, and a phase of presenting explanations; this makes it possible to form causal dependencies at several levels of detail and to present them to the user as a set of alternative explanations ordered by weight.
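A minimal sketch may help make the described method concrete. It is an illustration only, not the authors’ implementation: all identifiers (Level, State, CausalLink, chains, explain) are hypothetical, and the sketch assumes the decision-making process is logged as timestamped states connected by weighted causal links, with alternative explanations ranked by the product of link weights along each causal chain.

from dataclasses import dataclass
from enum import Enum

# All names below are hypothetical, chosen for illustration only.

class Level(Enum):
    """Explanation detail levels from the hierarchical model."""
    LOCAL = "local"      # external user: only inputs and the result are visible
    PROCESS = "process"  # internal user: intermediate states of one decision
    GLOBAL = "global"    # internal user: the full decision-making process

@dataclass(frozen=True)
class State:
    name: str
    time: int  # timestamp fixing the temporal order of states

@dataclass(frozen=True)
class CausalLink:
    cause: State
    effect: State
    weight: float  # assumed strength of the dependency, in (0, 1]

def chains(links, start, goal, path=()):
    """Enumerate causal chains from start to goal that respect temporal order."""
    path = path + (start,)
    if start == goal:
        yield path
        return
    for link in links:
        if link.cause == start and link.effect.time > start.time:
            yield from chains(links, link.effect, goal, path)

def explain(links, inputs, result, level):
    """Build alternative explanations at the requested level of detail,
    ordered by weight (product of link weights along each causal chain)."""
    w = {(l.cause, l.effect): l.weight for l in links}
    alternatives = []
    for src in inputs:
        for chain in chains(links, src, result):
            weight = 1.0
            for a, b in zip(chain, chain[1:]):
                weight *= w[(a, b)]
            # LOCAL hides intermediate states; PROCESS and GLOBAL keep the chain.
            shown = (chain[0], chain[-1]) if level is Level.LOCAL else chain
            alternatives.append((round(weight, 4), [s.name for s in shown]))
    return sorted(alternatives, key=lambda alt: -alt[0])

# Demo: two alternative causal chains from the same input to the same decision.
s0, s1, s2 = State("input", 0), State("rule fired", 1), State("decision", 2)
links = [CausalLink(s0, s1, 0.9), CausalLink(s1, s2, 0.8), CausalLink(s0, s2, 0.5)]
print(explain(links, [s0], s2, Level.PROCESS))
# -> [(0.72, ['input', 'rule fired', 'decision']), (0.5, ['input', 'decision'])]

At the LOCAL level the same call would return only the input and the decision, illustrating how the user’s level of access controls the detail of the explanation.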