

Fostering Trust on Machine Learning Inferences

Authors:
Dalmo Cirne

Keywords: artificial intelligence; game theory; machine learning; trust.

Abstract:
Artificial Intelligence (AI) and Machine Learning (ML) providers have a tremendous responsibility to develop valid and reliable systems. Much is discussed about trusting AI and ML inferences, but little has been done to define what that means. Those who work in the space of ML-based products are familiar with topics such as transparency, explainability, safety, and bias, yet there are no frameworks to quantify and measure them. Producing ever more trustworthy machine learning inferences is a path to increasing the value of products (i.e., increased trust in the results) and to engaging users in conversations to gather feedback that further improves those products. In this paper, we begin by examining the dynamic of trust between a provider (Trustor) and users (Trustees). Trustors are required to be trusting and trustworthy, whereas trustees need be neither trusting nor trustworthy. The challenge for trustors is to provide results that are good enough to raise a trustee's level of trust above a minimum threshold for: 1) doing business together; 2) continuation of service. We then conclude by proposing a framework to capture quantitative metrics that can be used to objectively assess how trustworthy an AI and ML system can claim to be, and how that trustworthiness trends over time.
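As an illustration of the threshold dynamic described in the abstract, a minimal sketch follows. It is not the paper's own formalism: the trust score, update rule, and threshold values are assumptions chosen only to show how a trustee might keep or drop an engagement based on inference quality.

    # Hypothetical illustration of the trust-threshold dynamic; variable names,
    # update rule, and constants are assumptions, not the paper's framework.
    def update_trust(trust: float, inference_ok: bool,
                     gain: float = 0.05, penalty: float = 0.20) -> float:
        """Raise trust slightly on a good inference; drop it sharply on a bad one."""
        trust = trust + gain if inference_ok else trust - penalty
        return min(1.0, max(0.0, trust))

    def keeps_engagement(trust: float, threshold: float = 0.6) -> bool:
        """Trustee continues the service only while trust stays at or above the minimum threshold."""
        return trust >= threshold

    # Example: trust erodes faster than it is earned, so repeated failures end the engagement.
    trust = 0.7
    for ok in [True, False, False]:
        trust = update_trust(trust, ok)
    print(keeps_engagement(trust))  # False after two failed inferences

The asymmetry between gain and penalty reflects the abstract's point that the burden falls on the trustor: good results accumulate trust slowly, while poor results can quickly push a trustee below the threshold for continuing the service.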

Pages: 1 to 4

Copyright: Copyright (c) IARIA, 2023

Publication date: November 13, 2023

Published in: conference

ISBN: 978-1-68558-089-6

Location: Valencia, Spain

Dates: from November 13, 2023 to November 17, 2023