
AI System Achieves 98% Accuracy in ASL Gesture Detection

Author: Florida Atlantic University
Published: 2024/12/16 - Updated: 2026/01/21
Publication Details: Peer-Reviewed, Simulation, Modelling
Category Topic: AI - Related Publications


Synopsis: This peer-reviewed research from Florida Atlantic University presents a novel artificial intelligence system that recognizes American Sign Language alphabet gestures with exceptional precision. The study holds particular value because it addresses a critical communication gap affecting deaf and hard-of-hearing individuals through practical technology application. By training the YOLOv8 deep learning model on a custom dataset of nearly 30,000 annotated images and combining it with MediaPipe hand-tracking technology, researchers created a system that achieved 98% accuracy in real-time gesture detection. The methodology is authoritative as it underwent peer review and was published in the Elsevier journal Franklin Open, with findings validated through rigorous testing across varying hand positions and movements. This technology offers tangible benefits for deaf and hard-of-hearing people by potentially facilitating smoother interactions in educational settings, medical appointments, and everyday social situations where sign language interpretation isn't readily available - Disabled World (DW).

Introduction

Sign language serves as a sophisticated means of communication vital to individuals who are deaf or hard-of-hearing, relying on hand movements, facial expressions, and body language to convey nuanced meaning. American Sign Language exemplifies this linguistic complexity with its distinct grammar and syntax.

Main Content

Sign language is not universal; many distinct sign languages are used around the world, each with its own grammar, syntax, and vocabulary.

Various methods are being explored to convert sign language hand gestures into text or spoken language in real time. To improve communication accessibility for people who are deaf or hard-of-hearing, there is a need for a dependable, real-time system that can accurately detect and track American Sign Language gestures. This system could play a key role in breaking down communication barriers and ensuring more inclusive interactions.

To address these communication barriers, researchers from the College of Engineering and Computer Science at Florida Atlantic University conducted a first-of-its-kind study focused on recognizing American Sign Language alphabet gestures using computer vision. They developed a custom dataset of 29,820 static images of American Sign Language hand gestures. Using MediaPipe, each image was annotated with 21 key landmarks on the hand, providing detailed spatial information about the hand's structure and position.
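
For readers curious how such annotation works in practice, the sketch below uses MediaPipe's hand-tracking solution to extract the 21 landmarks from a single still image. This is a minimal illustration of the general technique, not the study's actual pipeline; the file name and script structure are assumptions.

```python
# Minimal sketch: extracting 21 hand landmarks with MediaPipe.
# Illustrative only -- not the study's actual annotation pipeline.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

# static_image_mode=True suits a dataset of still photographs.
with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
    image = cv2.imread("asl_gesture.jpg")  # hypothetical file name
    # MediaPipe expects RGB input; OpenCV loads images as BGR.
    results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        # 21 landmarks per hand, each with normalized x, y (and relative z).
        for i, lm in enumerate(results.multi_hand_landmarks[0].landmark):
            print(f"landmark {i}: x={lm.x:.3f}, y={lm.y:.3f}")
```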

These annotations played a critical role in enhancing the precision of YOLOv8, the deep learning model the researchers trained, by allowing it to better detect subtle differences in hand gestures.
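
The paper's annotation code is not reproduced here, but one common way landmark data feeds a YOLO-style detector is by deriving each image's bounding box from the extent of the landmarks. The sketch below shows that conversion to YOLO's normalized label format; the padding value is an arbitrary illustrative choice.

```python
# Sketch: deriving a YOLO-format label line ("class x_center y_center
# width height", all normalized to [0, 1]) from 21 hand landmarks.
# The 5% padding is an illustrative choice, not the study's setting.
def landmarks_to_yolo_label(landmarks, class_id: int, pad: float = 0.05) -> str:
    xs = [lm.x for lm in landmarks]
    ys = [lm.y for lm in landmarks]
    x_min, x_max = max(min(xs) - pad, 0.0), min(max(xs) + pad, 1.0)
    y_min, y_max = max(min(ys) - pad, 0.0), min(max(ys) + pad, 1.0)
    x_c, y_c = (x_min + x_max) / 2, (y_min + y_max) / 2
    w, h = x_max - x_min, y_max - y_min
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"
```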

Results of the study, published in the Elsevier journal Franklin Open, reveal that by leveraging this detailed hand pose information, the model achieved a more refined detection process, accurately capturing the complex structure of American Sign Language gestures. Combining MediaPipe for hand movement tracking with a trained YOLOv8 detector resulted in a powerful system for recognizing American Sign Language alphabet gestures with high accuracy.
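
To illustrate what such a combined real-time system can look like, the sketch below runs a trained YOLOv8 model over live webcam frames with the Ultralytics API. The weights file name and confidence threshold are assumptions; this is not the authors' released code.

```python
# Sketch: real-time ASL gesture detection with a trained YOLOv8 model.
# "asl_best.pt" is a hypothetical weights file; conf=0.5 is illustrative.
import cv2
from ultralytics import YOLO

model = YOLO("asl_best.pt")
cap = cv2.VideoCapture(0)  # default webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model.predict(frame, conf=0.5, verbose=False)
    # Draw predicted boxes and letter labels onto the frame.
    cv2.imshow("ASL recognition", results[0].plot())
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```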

"Combining MediaPipe and YOLOv8, along with fine-tuning hyper-parameters for the best accuracy, represents a groundbreaking and innovative approach," said Bader Alsharif, first author and a Ph.D. candidate in the FAU Department of Electrical Engineering and Computer Science. "This method hasn't been explored in previous research, making it a new and promising direction for future advancements."

Findings show that the model achieved 98% accuracy, 98% recall (the ability to correctly identify gestures), and an overall F1 score of 99%. It also achieved a mean Average Precision (mAP) of 98% and a more detailed mAP50-95 score of 93%, highlighting its strong reliability and precision in recognizing American Sign Language gestures.
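
For readers unfamiliar with these metrics, F1 is conventionally the harmonic mean of precision and recall, as the short snippet below shows. The values used are illustrative, not the study's raw (unrounded) figures.

```python
# F1 is the harmonic mean of precision and recall.
# Example values are illustrative, not the study's unrounded figures.
def f1_score(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

print(f"{f1_score(0.985, 0.985):.3f}")  # 0.985, rounding toward 98-99%
```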

"Results from our research demonstrate our model's ability to accurately detect and classify American Sign Language gestures with very few errors," said Alsharif. "Importantly, findings from this study emphasize not only the robustness of the system but also its potential to be used in practical, real-time applications to enable more intuitive human-computer interaction."

The successful integration of landmark annotations from MediaPipe into the YOLOv8 training process significantly improved both bounding box accuracy and gesture classification, allowing the model to capture subtle variations in hand poses. This two-step approach of landmark tracking and object detection proved essential in ensuring the system's high accuracy and efficiency in real-world scenarios. The model's ability to maintain high recognition rates even under varying hand positions and gestures highlights its strength and adaptability in diverse operational settings.
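
The exact training configuration is not reproduced in the press release, but fine-tuning a pretrained YOLOv8 checkpoint on a custom dataset typically follows the pattern below with the Ultralytics library. The dataset YAML name and every hyperparameter here are placeholders, not the authors' tuned settings.

```python
# Sketch: fine-tuning a pretrained YOLOv8 checkpoint (transfer learning)
# on a custom ASL dataset. "asl.yaml" and all hyperparameters are
# placeholders, not the study's actual configuration.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # pretrained weights as the starting point
model.train(
    data="asl.yaml",  # dataset config: image paths + letter-class names
    epochs=100,
    imgsz=640,
    batch=16,
)
metrics = model.val()  # reports precision, recall, mAP50, mAP50-95
```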

"Our research demonstrates the potential of combining advanced object detection algorithms with landmark tracking for real-time gesture recognition, offering a reliable solution for American Sign Language interpretation," said Mohammad Ilyas, Ph.D., co-author and a professor in the FAU Department of Electrical Engineering and Computer Science. "The success of this model is largely due to the careful integration of transfer learning, meticulous dataset creation, and precise tuning of hyper-parameters. This combination has led to the development of a highly accurate and reliable system for recognizing American Sign Language gestures, representing a major milestone in the field of assistive technology."

Future efforts will focus on expanding the dataset to include a wider range of hand shapes and gestures to improve the model's ability to differentiate between gestures that may appear visually similar, thus further enhancing recognition accuracy. Additionally, optimizing the model for deployment on edge devices will be a priority, ensuring that it retains its real-time performance in resource-constrained environments.
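
As one concrete example of what edge-focused optimization can involve, Ultralytics models can be exported to lighter-weight runtimes, as sketched below. The export formats shown are common choices, not necessarily the ones the team will adopt.

```python
# Sketch: exporting a trained model for resource-constrained devices.
# Format choices are illustrative; the team's actual targets are unstated.
from ultralytics import YOLO

model = YOLO("asl_best.pt")  # hypothetical trained weights
model.export(format="onnx")  # portable ONNX graph for CPU/GPU runtimes
model.export(format="tflite", int8=True)  # int8-quantized for mobile/edge
```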

"By improving American Sign Language recognition, this work contributes to creating tools that can enhance communication for the deaf and hard-of-hearing community," said Stella Batalama, Ph.D., dean, FAU College of Engineering and Computer Science. "The model's ability to reliably interpret gestures opens the door to more inclusive solutions that support accessibility, making daily interactions - whether in education, health care, or social settings - more seamless and effective for individuals who rely on sign language. This progress holds great promise for fostering a more inclusive society where communication barriers are reduced."

Study co-author Easa Alalwany, Ph.D., is a recent graduate of the FAU College of Engineering and Computer Science and an assistant professor at Taibah University in Saudi Arabia.


Insights, Analysis, and Developments

Editorial Note: What makes this development particularly significant is the research team's dual-approach methodology - combining landmark tracking with object detection - which solved a persistent challenge in sign language recognition technology. While previous systems struggled with subtle gesture variations, this 98% accuracy rate moves ASL recognition from theoretical possibility into practical viability. The researchers' decision to focus on edge device optimization suggests they're thinking beyond laboratory success toward real-world deployment, which matters because accessibility tools only help when they're actually accessible. As the technology expands to include more complex gestures and full sentence structures, it could fundamentally change how communication barriers are addressed, making spontaneous conversations between signing and non-signing individuals possible without human interpreters. The real test will be whether this system can maintain its accuracy outside controlled research conditions and whether it can be made affordable enough for widespread adoption - Disabled World (DW).

Attribution/Source(s): This peer reviewed publication was selected for publishing by the editors of Disabled World (DW) due to its relevance to the disability community. Originally authored by Florida Atlantic University and published on 2024/12/16, this content may have been edited for style, clarity, or brevity.


APA: Florida Atlantic University. (2024, December 16 - Last revised: 2026, January 21). AI System Achieves 98% Accuracy in ASL Gesture Detection. Disabled World (DW). Retrieved February 19, 2026 from www.disabled-world.com/assistivedevices/ai/asl-ai.php
MLA: Florida Atlantic University. "AI System Achieves 98% Accuracy in ASL Gesture Detection." Disabled World (DW), 16 Dec. 2024, revised 21 Jan. 2026. Web. 19 Feb. 2026. <www.disabled-world.com/assistivedevices/ai/asl-ai.php>.
Chicago: Florida Atlantic University. "AI System Achieves 98% Accuracy in ASL Gesture Detection." Disabled World (DW). Last modified January 21, 2026. www.disabled-world.com/assistivedevices/ai/asl-ai.php.

While we strive to provide accurate, up-to-date information, our content is for general informational purposes only. Please consult qualified professionals for advice specific to your situation.