

Assessing the Capabilities of Large Language Models in Translating American Sign Language Gloss to English

Authors:
Jalal Al-Afandi
Péter Pócsi
Gábor Borbély
Helga M. Szabó
Ádám Rák
Zsolt Robotka
András Horváth

Keywords: ASL-GLOSS translation; generative pre-trained transformers; large language models

Abstract:
In this paper, we investigate the ability of large language models (LLMs) to translate GLOSS-annotated American Sign Language into English without fine-tuning or architectural modifications. Our findings show that pretrained transformers achieve translation quality comparable to human experts. While prompt engineering enhances accuracy for simpler models, it has minimal impact on more advanced ones. Additionally, when generating multiple translation variants, the first response is typically the most accurate, with subsequent outputs declining in quality. These results underscore the strong zero-shot translation capabilities of LLMs and highlight their potential for scalable ASL-GLOSS translation applications.
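
To illustrate the kind of zero-shot setup the abstract describes, the sketch below shows one plausible way to prompt a pretrained LLM for GLOSS-to-English translation and to request several translation variants. It assumes an OpenAI-style chat completions API; the model name, prompt wording, and example gloss are illustrative assumptions, not the authors' exact experimental protocol.

    # Illustrative sketch only: zero-shot ASL-GLOSS-to-English translation via a
    # chat completions API. Model, prompt, and example gloss are assumptions,
    # not the paper's exact setup.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def translate_gloss(gloss: str, n_variants: int = 3) -> list[str]:
        """Ask a pretrained LLM to translate an ASL GLOSS sequence into English,
        returning several candidate translations (the paper reports that the
        first candidate is typically the most accurate)."""
        response = client.chat.completions.create(
            model="gpt-4o",  # assumed model; the paper evaluates several LLMs
            messages=[
                {"role": "system",
                 "content": "You translate American Sign Language GLOSS into fluent English."},
                {"role": "user",
                 "content": f"Translate this ASL GLOSS into English: {gloss}"},
            ],
            n=n_variants,     # request multiple translation variants
            temperature=0.7,
        )
        return [choice.message.content for choice in response.choices]

    # Hypothetical gloss roughly meaning "I went to the store yesterday."
    print(translate_gloss("STORE IX-1p GO-TO YESTERDAY"))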

Pages: 9 to 14

Copyright: Copyright (c) IARIA, 2025

Publication date: July 6, 2025

Published in: GPTMB 2025, The Second International Conference on Generative Pre-trained Transformer Models and Beyond

ISBN: 978-1-68558-287-6

Location: Venice, Italy

Dates: from July 6, 2025 to July 10, 2025