A POSSIBILITY-BASED MODEL OF CAUSAL RELATION FOR AN INPUT VARIABLE IN EXPLANATION CONSTRUCTION WITHIN AN INTELLIGENT SYSTEM
DOI: https://doi.org/10.26906/SUNZ.2023.3.138
Keywords: intelligent information system, explanation, causal relationship, causal dependence, cognitive activity
Abstract
The article’s subject matter is the processes of constructing explanations for the decisions made by an intelligent information system. The goal is to build a model of causal relationships for explanation construction under conditions of uncertainty regarding the states of the intelligent information system, especially when it is considered as a black box. The tasks are: structuring explanations with regard to the specifics of human cognitive activity; establishing necessary and sufficient conditions for causal dependence as a component of explanations using possibility theory; developing a possibility model of causal dependence for a single input variable that accounts for the uncertainty regarding the states of the intelligent system. The approaches used are approaches to explanation construction in human cognitive activity and in explainable artificial intelligence. The following results were obtained. Explanations have been structured as an element of human cognitive activity. It has been demonstrated that explanations can be represented in two aspects: conceptual, through comparing input information with the existing human knowledge system, and interpretive, through comparing the properties of input objects. Possibility-based necessary and sufficient conditions for causal dependence on a single input variable, which forms the basis of an explanation, have been proposed. A possibility model of causal dependence for explanation construction in an intelligent system has been suggested. Conclusions. The scientific novelty of the obtained results is as follows: a possibility model of causal dependence between an input variable and the outcome of an intelligent system's operation has been proposed; it combines the necessary condition of causality, expressed as the level of confidence in the impact of the input variable on the outcome, with the sufficient condition of causality, expressed as the maximum possible influence of the input variable's value on the outcome of the intelligent system. The model enables the formation of causally oriented explanations based on the connection between the input variable and the obtained result under incomplete knowledge of the state of the intelligent system.
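As an illustration only: the abstract does not give the model's formulas, but the paired conditions it describes can be sketched in standard possibility theory. Assume a conditional possibility distribution $\pi(y \mid x = v)$ over outcomes $y$ of the system for each value $v \in V$ of the input variable $x$, and hypothetical acceptance thresholds $\alpha, \beta \in [0, 1]$; all notation here is assumed rather than taken from the paper. The sufficient condition (the maximum possible influence of the variable's value on the outcome) can then be read as

\[ \Pi(y \mid x) \;=\; \max_{v \in V} \pi(y \mid x = v) \;\ge\; \beta, \]

and the necessary condition (the level of confidence in the impact of the input variable) as the dual necessity degree

\[ N(y \mid x) \;=\; 1 - \max_{v \in V}\,\max_{y' \ne y} \pi(y' \mid x = v) \;\ge\; \alpha. \]

Causal dependence of the outcome $y$ on the variable $x$ would be asserted when both inequalities hold; for normalized possibility distributions $N(y \mid x) \le \Pi(y \mid x)$, so $\alpha \le \beta$. This is a minimal sketch under the stated assumptions, not the model proposed in the article itself.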