EVALUATION OF THE SENSITIVITY OF EXPLANATIONS IN THE INTELLIGENT INFORMATION SYSTEM

Authors

  • S. Chalyi
  • V. Leshchynskyi

DOI:

https://doi.org/10.26906/SUNZ.2023.2.165

Keywords:

intelligent system, explanation, decision-making process, causal relationship, evaluation of explanations

Abstract

The article's subject matter is the process of constructing explanations for decisions obtained in an intelligent information system. The goal is to evaluate the sensitivity of explanations based on an analysis of the properties of the input data and the corresponding decisions of the intelligent information system, in order to support the selection of the explanation that best satisfies the user's interests. Tasks: structuring the criteria for the quantitative assessment of explanations when the intelligent system is represented as a black box; developing a method for assessing the sensitivity of explanations in an intelligent information system. The approaches used are approaches to constructing explanations and approaches to evaluating explanations in intelligent information systems. The following results were obtained. Criteria for evaluating explanations for intelligent systems represented according to the black-box principle are structured. These criteria take into account the influence of the input and output data of the intelligent system on the explanation, the correspondence of the explanation to the decision-making process in the intelligent system, and the correspondence of the explanation to the user's understanding of the results of the intelligent system. On the basis of this structuring, a method for assessing the sensitivity of explanations for an intelligent system represented according to the black-box principle is proposed. Conclusions. The scientific novelty of the obtained results is as follows. A method is proposed for assessing the sensitivity of explanations for an intelligent system represented according to the black-box principle. The method includes steps for testing and determining the similarity of the input data and results of alternative models of the intelligent system by quantitative and qualitative indicators, as well as for quantifying the input data and determining the sensitivity of the explanation. The proposed method makes it possible to compare and select explanations taking into account the properties and importance of the input data, in order to determine the applicability of alternative approaches to constructing explanations for the results of an intelligent information system. Further development of the proposed approach focuses on defining and implementing metrics for assessing the accuracy and transparency of explanations.
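
For illustration only, the sketch below shows one way the sensitivity idea described in the abstract could be quantified for a black-box model: the explanation is recomputed after small random changes to the input data, and sensitivity is the average change of the explanation relative to the change of the input. The model, the finite-difference explanation, the perturbation scheme, and all names (black_box_model, explanation, explanation_sensitivity, delta, trials) are assumptions made for this sketch and are not taken from the article.

    # Illustrative sketch (not the authors' method): sensitivity of an explanation
    # for a black-box model, estimated by perturbing the input data and measuring
    # how much the explanation changes relative to the change of the input.
    import random

    def black_box_model(x):
        # Hypothetical black-box decision function (nonlinear, so the explanation
        # actually varies with the input).
        return 0.6 * x[0] + 0.3 * x[1] * x[2] + 0.1 * x[2] ** 2

    def explanation(model, x, eps=1e-3):
        # Simple local explanation: finite-difference importance of each input feature.
        base = model(x)
        scores = []
        for i in range(len(x)):
            shifted = list(x)
            shifted[i] += eps
            scores.append((model(shifted) - base) / eps)
        return scores

    def explanation_sensitivity(model, x, delta=0.05, trials=50):
        # Average change of the explanation per unit change of the input under small
        # random perturbations; larger values indicate a less stable explanation.
        base_expl = explanation(model, x)
        ratios = []
        for _ in range(trials):
            x_pert = [v + random.uniform(-delta, delta) for v in x]
            expl_diff = sum(abs(a - b) for a, b in zip(base_expl, explanation(model, x_pert)))
            input_diff = sum(abs(a - b) for a, b in zip(x, x_pert))
            if input_diff > 0:
                ratios.append(expl_diff / input_diff)
        return sum(ratios) / len(ratios)

    if __name__ == "__main__":
        print("explanation sensitivity:", explanation_sensitivity(black_box_model, [1.0, 2.0, 3.0]))

In this reading, the comparison of alternative models mentioned in the abstract corresponds to computing such a sensitivity value for each candidate model on the same test inputs and preferring the explanation with the lower, more stable value.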

Published

2023-06-09
