International Journal On Advances in Intelligent Systems, volume 18, numbers 1 and 2, 2025
Combinatory Logic as a Model for Intelligent Systems Based on Explainable AI
Authors:
Thomas Fehlmann
Eberhard Kranich
Keywords: Intelligent Systems; Chain-of-Thought (CoT); Explainable AI (XAI); Artificial Neural Networks (ANN); Deep Neural Network (DNN); Combinatory Logic; Quality Function Deployment (QFD).
Abstract:
Directed graphs such as neural networks can be described by Arrow Terms that link a finite set of incoming nodes to a response node. Scott and Engeler have shown that the powerset of the set of Arrow Terms is a model for Combinatory Logic; this algebra is called the Graph Model of Combinatory Logic. Since Combinatory Logic is Turing-complete, the model covers both traditional programming logic and neural networks, whether biological, like the brain, or Artificial Neural Networks as used in Large Language Models. The underlying graph model is a general model for all kinds of knowledge. Used as a blueprint for implementing AI, it would yield a powerful tool: a Chain of Thought would come for free, and explainability with it. However, its poor performance would make such a tool impractical. The paper therefore proposes a combined approach for adding explainability to AI and creating Intelligent Systems, following the strategy humans use when explaining their ideas: first, the generative power of a neural network produces an idea or solution; next, a chain of thought is constructed that explains the idea to others and provides supporting evidence. AI can follow the same strategy. The architecture of such intelligent systems consists of two distinct elements: a well-trained artificial neural network for observing and generating solution approaches, and a controlling engine for fact checking and reliability assessment.
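The Arrow Terms mentioned in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: an Arrow Term is either an atomic observation or a pair linking a finite set of premise terms to one response term, and application in the graph model collects every response whose premises are all present in the argument set. All names (`Atom`, `Arrow`, `apply_model`) are illustrative choices, not taken from the paper.

```python
from dataclasses import dataclass

# Illustrative encoding of Arrow Terms (names are this sketch's own).
@dataclass(frozen=True)
class Atom:
    """An atomic observation (a node with no premises)."""
    name: str

@dataclass(frozen=True)
class Arrow:
    """A finite set of premise terms linked to one response term."""
    premises: frozenset
    response: object

def apply_model(M: set, N: set) -> set:
    """Application in the graph model: M applied to N contains every
    response b such that some (alpha -> b) in M has premises alpha
    entirely contained in N."""
    return {t.response for t in M
            if isinstance(t, Arrow) and t.premises <= N}

# Tiny example: a rule that responds with c once both a and b are observed.
a, b, c = Atom("a"), Atom("b"), Atom("c")
rule = Arrow(frozenset({a, b}), c)
print(apply_model({rule}, {a, b}))  # {Atom(name='c')}
print(apply_model({rule}, {a}))     # set() -- premise b is missing
```

Sets of such terms act as the elements of the algebra, which is what makes the powerset construction a model of Combinatory Logic.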
Pages: 68 to 78
Copyright: Copyright (c) to authors, 2025. Used with permission.
Publication date: June 30, 2025
Published in: journal
ISSN: 1942-2679