Authors:
Bassam Alsanousi
Stephanie Ludi
Hyunsook Do
Keywords: Human-Computer Interaction (HCI); Artificial Intelligence (AI); Usability; Large Language Models (LLMs); Local Interpretable Model-Agnostic Explanations (LIME).
Abstract:
Mobile applications have become central to today's digital landscape, so optimizing their User eXperience (UX) is essential. Our study explored the application of Large Language Models (LLMs), including architectures from the Bidirectional Encoder Representations from Transformers (BERT) family as well as advanced pre-trained models such as OpenAI's Generative Pre-trained Transformer (GPT) models GPT-3.5 and GPT-4, and Meta's Llama 2 (in zero- and few-shot settings), for detecting usability issues in user reviews. The methodology encompassed data preprocessing, sentiment analysis, fine-tuning LLMs, and interpretability techniques, notably Local Interpretable Model-Agnostic Explanations (LIME). The findings indicated that the fine-tuned LLMs, particularly the Robustly Optimized BERT Approach (RoBERTa), XLNet, and DistilBERT, were relatively successful in identifying usability issues, achieving an accuracy of 96%. The study also assessed the advanced pre-trained models Llama 2, GPT-3.5, and GPT-4, which generally fell short of the performance achieved by the fine-tuned models. Finally, we found that applying LIME helped in understanding the decision-making processes of the fine-tuned models.
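Illustrative example (not from the paper): the sketch below shows one common way to pair LIME with a fine-tuned BERT-family classifier for usability-issue detection, as the abstract describes. The checkpoint name, class labels, and sample review are assumptions for demonstration only; the paper's own fine-tuned models and data are not reproduced here.

```python
# Minimal sketch, assuming a binary usability-issue classifier.
# MODEL_NAME, class names, and the example review are hypothetical.
import torch
from lime.lime_text import LimeTextExplainer
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "distilbert-base-uncased"  # placeholder; the study fine-tunes its own models
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def predict_proba(texts):
    """Return class probabilities for a batch of review texts (the interface LIME expects)."""
    enc = tokenizer(list(texts), padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    return torch.softmax(logits, dim=-1).numpy()

explainer = LimeTextExplainer(class_names=["no_issue", "usability_issue"])
review = "The app keeps freezing whenever I try to open the settings menu."
explanation = explainer.explain_instance(review, predict_proba, num_features=6)
print(explanation.as_list())  # words weighted by their contribution to the prediction
```

LIME perturbs the review text, queries the classifier on the perturbed variants, and fits a local linear model, so the printed word weights indicate which terms drove the model's decision for this particular review.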
Pages: 200 to 209
Copyright: Copyright (c) IARIA, 2024
Publication date: May 26, 2024
Published in: ACHI 2024, The Seventeenth International Conference on Advances in Computer-Human Interactions
ISSN: 2308-4138
ISBN: 978-1-68558-163-3
Location: Barcelona, Spain
Dates: from May 26, 2024 to May 30, 2024