DATA ANALYTICS 2018, The Seventh International Conference on Data Analytics
Author:
Gerald Fahner
Keywords: explainable artificial intelligence; algorithmic transparency; machine learning; gradient boosting; neural nets; credit risk scoring; scorecard; segmentation; constraining models
Abstract:
Complex Machine Learning (ML) models can be effective at analyzing large amounts of data and driving business value. However, these models can be nonintuitive, their parameters meaningless, their potential biases difficult to detect and even harder to mitigate, and their predictions and decisions difficult to explain. Lenders, regulators, and customers need explainable models for automating credit decisions. Lack of algorithmic transparency is a broad concern beyond lending, and has led to much interest in “explainable artificial intelligence” [1]. This paper discusses a model family that guarantees explainability and transparency by design: the Transparent Generalized Additive Model Tree (TGAMT). Many credit risk models used in the US and internationally belong to this family. Today, these credit scores are developed painstakingly by teams of data scientists and credit risk experts in a tedious interplay of “art and science” in order to simultaneously achieve high predictive performance and intuitive explanations of how the scores are arrived at. The main contribution of this paper is to automate the learning of TGAMT models. We also report benchmark results indicating that TGAMTs achieve predictive performance similar to complex ML models while being more explanation-friendly.
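The explainability the abstract attributes to scorecard-style models comes from their additive structure: each feature contributes a separate, inspectable number of "points" to the total score. The following is a minimal illustrative sketch of that idea only; the bin boundaries, point values, and function names are hypothetical and this is not the paper's TGAMT learning algorithm.

```python
# Illustrative sketch of an additive scorecard (NOT the paper's TGAMT learner).
# Each feature is mapped through a binned points function; the score is the
# sum of contributions, so every decision decomposes into readable reasons.

# Hypothetical points tables: (exclusive upper bound, points) per bin.
POINTS = {
    "age": [(25, 10), (40, 25), (float("inf"), 40)],
    "utilization": [(0.3, 35), (0.7, 20), (float("inf"), 5)],
}
BASE_SCORE = 500  # hypothetical intercept

def feature_points(feature, value):
    """Return the points the binned score function assigns to this value."""
    for upper, pts in POINTS[feature]:
        if value < upper:
            return pts
    raise ValueError(f"no bin covers {feature}={value}")

def score(applicant):
    """Additive score plus the per-feature breakdown that explains it."""
    contributions = {f: feature_points(f, v) for f, v in applicant.items()}
    return BASE_SCORE + sum(contributions.values()), contributions

total, reasons = score({"age": 33, "utilization": 0.5})
# total = 500 + 25 + 20 = 545; `reasons` shows each feature's contribution
```

A tree-structured variant of this idea segments the population first and fits a separate additive scorecard per segment, which is the "model tree" aspect of the TGAMT family.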
Pages: 7 to 14
Copyright: Copyright (c) IARIA, 2018
Publication date: November 18, 2018
Published in: conference
ISSN: 2308-4464
ISBN: 978-1-61208-681-1
Location: Athens, Greece
Dates: from November 18, 2018 to November 22, 2018