COORDINATING THE EXPLANATION IN THE INTELLECTUAL INFORMATION SYSTEM WITH THE BACKGROUND KNOWLEDGE

Authors

  • S. Chalyi
  • V. Leshchynskyi
  • I. Leshchynska

DOI:

https://doi.org/10.26906/SUNZ.2021.1.115

Keywords:

background knowledge, consistency of knowledge, intelligent information system, explanations, explanation patterns

Abstract

The subject matter of the article is the process of constructing explanations for the solutions proposed by an intelligent information system. The goal is to develop a method for coordinating explanations in an intelligent information system that takes into account the constraints imposed by background knowledge about objects and processes in the subject domain. Tasks: to structure the process of constructing explanations subject to constraints in the form of subject-domain knowledge; to identify the aspects of explanation coordination; to develop a method for coordinating the knowledge contained in the explanation with the knowledge of the subject domain. The approaches used are approaches to constructing explanations and approaches to coordinating knowledge. The following results were obtained. The process of constructing explanations was structured to include a knowledge-coordination stage. Three aspects of coordination were identified: coordination of the explanation's knowledge with the input data, in the sense of using those data for interpretation; with the solution obtained by the intelligent information system, in the sense of matching the user's tasks; and with the knowledge of the subject domain, in the sense of constraints on the use of the explanation. A method for coordinating the explanation with subject-domain knowledge is proposed. Conclusions. The scientific novelty of the results is as follows. A method is proposed for coordinating the explanation with background knowledge about objects and processes in the subject domain.
The method provides for the iterative execution of a sequence of steps: coordinating data on the decision-making process in the intelligent system with the knowledge describing the subject domain; checking the consistency of the constructed explanation with the set of subject-domain knowledge; and coordinating the explanation with the resulting solution of the intelligent information system. In practical terms, the method is aimed at forming a subset of explanations that do not contradict the background knowledge of the subject domain. The explanations in this subset can then be ordered by an efficiency criterion that takes into account the specifics of the tasks for which the intelligent system forms its solutions.
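The filtering-and-ordering loop described above can be sketched in code. The following is a minimal, hypothetical illustration, not the authors' implementation: background knowledge is modeled as constraint predicates, an explanation is kept only if it satisfies all of them, and the consistent subset is then ordered by an efficiency criterion. All names, fields, and data here are invented for the sketch.

```python
# Hypothetical sketch of coordinating candidate explanations with
# background (subject-domain) knowledge, then ranking the survivors.
from typing import Callable, Dict, List

Explanation = Dict[str, object]
Constraint = Callable[[Explanation], bool]  # True -> explanation is consistent


def coordinate(candidates: List[Explanation],
               background: List[Constraint],
               efficiency: Callable[[Explanation], float]) -> List[Explanation]:
    """Keep only explanations consistent with every background constraint,
    then order the consistent subset by the efficiency criterion."""
    consistent = [e for e in candidates
                  if all(check(e) for check in background)]
    return sorted(consistent, key=efficiency, reverse=True)


# Toy domain: candidate explanations for a recommendation.
background = [
    lambda e: e["uses_input_data"],   # must interpret the input data
    lambda e: e["matches_solution"],  # must agree with the system's solution
]
candidates = [
    {"id": "A", "uses_input_data": True,  "matches_solution": True,  "confidence": 0.6},
    {"id": "B", "uses_input_data": False, "matches_solution": True,  "confidence": 0.9},
    {"id": "C", "uses_input_data": True,  "matches_solution": True,  "confidence": 0.8},
]

ranked = coordinate(candidates, background, efficiency=lambda e: e["confidence"])
print([e["id"] for e in ranked])  # -> ['C', 'A']
```

Explanation B is discarded because it violates the first background constraint; the remaining explanations are ordered by the (here, trivial) efficiency score.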

References

Miller T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, pp. 1-38. DOI: https://doi.org/10.1016/j.artint.2018.07.007.

Tsai C., Brusilovsky P. (2019). Explaining recommendations in an interactive hybrid social recommender. Proceedings of the 24th International Conference on Intelligent User Interfaces, pp. 391-396.

Dominguez V., Messina P., Donoso-Guzmán I., Parra D. (2019). The effect of explanations and algorithmic accuracy on visual recommender systems of artistic images. Proceedings of the 24th International Conference on Intelligent User Interfaces (IUI '19), pp. 408-416.

Wooley, B. A. (1998). Explanation component of software system. Crossroads, 5(1), pp. 24–28.

Cunningham P., Doyle D., Loughrey J. (2003). An evaluation of the usefulness of case-based reasoning explanation. Proceedings of the International Conference on Case-Based Reasoning, Trondheim, Springer, pp. 122–130.

Cleger S., Fernández-Luna J., Huete J. F. (2014). Learning from explanations in recommender systems. Information Sciences, 287, pp. 90–108.

Chalyi S., Leshchynskyi V., Leshchynska I. (2019). Method of forming recommendations using temporal constraints in a situation of cyclic cold start of the recommender system. EUREKA: Physics and Engineering, 4, pp. 34-40. DOI: 10.21303/2461-4262.2019.00952.

Chalyi S. F., Leshchynskyi V. O., Leshchynska I. O. (2019). Modeling explanations for a recommended list of objects taking into account the temporal aspect of user choice. Control, Navigation and Communication Systems, 6 (58), pp. 97-101. (in Ukrainian)

Chalyi, S., Leshchynskyi, V. (2020). Method of constructing explanations for recommender systems based on the temporal dynamics of user preferences. EUREKA: Physics and Engineering, 3, 43-50. DOI: 10.21303/2461-4262.2020.001228.

Daher J., Brun A., Boyer A. (2017). A Review on Explanations in Recommender Systems. Technical Report, LORIA – Université de Lorraine, 26 p.

Thagard P. (2007). Coherence, truth, and the development of scientific knowledge. Philosophy of Science, 74, pp. 28-47.

Thagard P. (2004). Causal inference in legal decision making: Explanatory coherence vs. Bayesian networks. Applied Artificial Intelligence, 18(3-4), pp. 231-249. DOI: https://doi.org/10.1080/08839510490279861.

Thagard P., Verbeurgt K. (1998). Coherence as constraint satisfaction. Cognitive Science, 22, pp. 1-24.

Fayyad U., Piatetsky-Shapiro G., Smyth P. (1996). From Data Mining to Knowledge Discovery in Databases. AI Magazine, 17(3), pp. 37-54.

Chalyi S. F., Leshchynskyi V. O., Leshchynska I. O. (2020). An explanation model in an intelligent information system based on the concept of knowledge consistency. Bulletin of the National Technical University "KhPI". Series: System Analysis, Control and Information Technologies, 1 (3), pp. 19-23. (in Ukrainian)

Published

2021-02-26
