INFORMATION TECHNOLOGY FOR EVALUATING EXPLANATIONS IN AN INTELLIGENT INFORMATION SYSTEM
DOI: https://doi.org/10.26906/SUNZ.2023.4.120

Keywords: intelligent system, explanation, decision-making process, causal relationship, evaluation of explanations

Abstract
The article's subject matter is the process of constructing explanations of the decision-making process and of the results obtained in an intelligent information system. The goal is to develop a technology for evaluating explanations that takes into account both the sensitivity of these explanations to differences in the input data and the user's ability to apply the explanations in line with how the intelligent system's decision is to be used. Tasks: structuring the tasks of constructing explanations with respect to the evaluation of the resulting interpretations; structuring the explanation evaluation indicators with account for the dependencies between them; developing the sequence of stages of an information technology for the comprehensive evaluation of explanations in an intelligent system. The approaches used are methods for constructing explanations and methods and approaches for evaluating explanations in artificial intelligence systems. The following results were obtained. The tasks of constructing explanations were structured with respect to the evaluation of the resulting interpretations. The explanation evaluation indicators were structured, taking into account the limited access to the decision-making process in the intelligent system. It is shown that, depending on the availability of data about the decision-making process in the intelligent system, it is advisable to use either the accuracy or the correctness indicator. The sensitivity indicator makes it possible to evaluate an explanation when categorizing knowledge about the properties of objects or the input data. The simplicity indicator determines the effect of the number of input variables on the explanation. Conclusions. The scientific novelty of the obtained results is as follows. An information technology for evaluating explanations in an intelligent information system is proposed. The technology includes a sequence of stages for calculating the sensitivity, correctness and simplicity indicators of an explanation and for selecting a subset of explanations based on these indicators, using the interdependencies between them and allowing for possible constraints on the correctness indicator. In practical terms, the proposed technology creates the conditions for selecting explanations according to their sensitivity and simplicity for the user, taking into account the specifics of the input data and of the process of using the decision.
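To make the evaluation pipeline concrete, the sketch below shows one possible reading of it in Python. It is an illustration only, not the implementation described in the article: the representation of an explanation as a set of weighted input variables, the perturbation-based sensitivity estimate, the overlap-based correctness measure, the correctness threshold and the ranking rule are all assumptions introduced for the example.

```python
# Illustrative sketch only: the article gives no formulas or code, so the
# explanation representation, the sensitivity estimate, the correctness
# measure and the selection rule below are assumptions for demonstration.
from dataclasses import dataclass
from typing import Callable, Dict, List, Sequence


@dataclass
class Explanation:
    """An explanation as a set of input variables with weights (assumed form)."""
    weights: Dict[str, float]


def simplicity(expl: Explanation) -> float:
    """Simplicity indicator: fewer input variables in the explanation -> higher score."""
    return 1.0 / (1.0 + len(expl.weights))


def sensitivity(explain: Callable[[Dict[str, float]], Explanation],
                x: Dict[str, float],
                variants: Sequence[Dict[str, float]]) -> float:
    """Sensitivity indicator: average change of the explanation over variants of the input."""
    base = explain(x)
    diffs = []
    for xv in variants:
        e = explain(xv)
        keys = set(base.weights) | set(e.weights)
        diffs.append(sum(abs(base.weights.get(k, 0.0) - e.weights.get(k, 0.0)) for k in keys))
    return sum(diffs) / len(diffs) if diffs else 0.0


def correctness(expl: Explanation, influential: Sequence[str]) -> float:
    """Correctness indicator: overlap with the variables actually used in the decision
    (computable only when the decision-making process can be inspected)."""
    if not influential:
        return 0.0
    return len(set(expl.weights) & set(influential)) / len(set(influential))


def select_explanations(scored: List[Dict[str, float]],
                        min_correctness: float = 0.5) -> List[Dict[str, float]]:
    """Keep explanations that satisfy the correctness constraint, then rank them
    by low sensitivity and high simplicity."""
    admissible = [s for s in scored if s["correctness"] >= min_correctness]
    return sorted(admissible, key=lambda s: (s["sensitivity"], -s["simplicity"]))
```

Under these assumptions, correctness acts as a constraint that can only be checked when the decision-making process is observable, while sensitivity and simplicity are used to rank the admissible explanations for the user.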