Searching for Explainable Solutions in Sudoku
Explainable AI is an emerging field that studies how to explain the rationale behind the decisions of intelligent computer-based systems in human-understandable terms. So far, however, the research focus has been almost exclusively on model interpretability, in particular on explaining the learned concepts of (deep) neural networks. Yet for many tasks, constraint- or heuristic-based search is also an integral part of the decision-making process of intelligent systems, for example in planning and game-playing agents. This paper explores how to alter the search-based reasoning process used in such agents to generate solutions that are more easily explainable to humans, using the domain of Sudoku puzzles as our test-bed. We model the perceived human mental effort of applying different familiar Sudoku solving techniques. Based on this model, we show how to find an explanation understandable to human players of varying levels of expertise, and we evaluate the algorithm empirically on a wide range of puzzles of varying difficulty.