Smart Devices Recognize and Interpret Sign Language

Author: Texas A&M University
Published: 2015/12/01 - Updated: 2021/07/19
Topic: Deaf Communication


Synopsis: New technology at Texas A&M enables smart devices to recognize and interpret sign language. The technology, developed in collaboration with Texas Instruments, represents a growing interest in the development of high-tech sign language recognition (SLR) systems.

Introduction

A smart device that translates sign language while being worn on the wrist could bridge the communications gap between the deaf and those who don't know sign language, says a Texas A&M University biomedical engineering researcher who is developing the technology.

Main Item

The wearable technology combines motion sensors and the measurement of electrical activity generated by muscles to interpret hand gestures, says Roozbeh Jafari, associate professor in the university's Department of Biomedical Engineering and researcher at the Center for Remote Health Technologies and Systems.

Image: A prototype of Jafari's sign language recognition technology that he aims to scale down to the size of a watch.

Although the device is still in its prototype stage, it can already recognize 40 American Sign Language words with nearly 96 percent accuracy, notes Jafari, who presented his research at the Institute of Electrical and Electronics Engineers (IEEE) 12th Annual Body Sensor Networks Conference this past June. The technology was among the top award winners in the Texas Instruments Innovation Challenge this past summer.

The technology, developed in collaboration with Texas Instruments, represents a growing interest in the development of high-tech sign language recognition (SLR) systems, but unlike other recent initiatives, Jafari's system forgoes the use of a camera to capture gestures. Video-based recognition, he says, can suffer performance issues in poor lighting conditions, and the videos or images captured may be considered invasive to the user's privacy. What's more, because these systems require a user to gesture in front of a camera, they offer limited wearability - and wearability, for Jafari, is key.

"Wearables provide a very interesting opportunity in the sense of their tight coupling with the human body," Jafari says. "Because they are attached to our body, they know quite a bit about us throughout the day, and they can provide us with valuable feedback at the right times. With this in mind, we wanted to develop a technology in the form factor of a watch."

In order to capture the intricacies of American Sign Language, Jafari's system makes use of two distinct sensors. The first is an inertial sensor that responds to motion. Consisting of an accelerometer and gyroscope, the sensor measures the accelerations and angular velocities of the hand and arm, Jafari notes. This sensor plays a major role in discriminating different signs by capturing the user's hand orientations and hand and arm movements during a gesture.
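To make this concrete, the short Python sketch below shows one simple way a window of accelerometer and gyroscope samples could be turned into a feature vector for discriminating gestures. The sampling rate, window length and choice of per-axis statistics are illustrative assumptions, not the features reported by Jafari's team.

```python
# Hypothetical sketch: simple motion features from one gesture window of
# inertial data (accelerometer + gyroscope). All choices here are
# illustrative assumptions, not the actual design of Jafari's system.
import numpy as np

def imu_features(accel: np.ndarray, gyro: np.ndarray) -> np.ndarray:
    """accel, gyro: arrays of shape (n_samples, 3) for one gesture window.
    Returns a flat vector of per-axis statistics."""
    feats = []
    for signal in (accel, gyro):
        feats.append(signal.mean(axis=0))          # average orientation / rotation
        feats.append(signal.std(axis=0))           # movement variability
        feats.append(np.abs(signal).max(axis=0))   # peak magnitude per axis
    return np.concatenate(feats)

# Example: a 2-second window sampled at an assumed 100 Hz (synthetic data)
rng = np.random.default_rng(0)
accel = rng.normal(size=(200, 3))   # stand-in for real accelerometer samples
gyro = rng.normal(size=(200, 3))    # stand-in for real gyroscope samples
print(imu_features(accel, gyro).shape)  # (18,)
```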

However, a motion sensor alone wasn't enough, Jafari explains. Certain signs in American Sign Language are similar in terms of the gestures required to convey the word. With these gestures the overall movement of the hand may be the same for two different signs, but the movement of individual fingers may be different. For example, the respective gestures for "please" and "sorry" and for "name" and "work" are similar in hand motion. To discriminate between these types of hand gestures, Jafari's system makes use of another type of sensor that measures muscle activity.

Known as a surface electromyographic (sEMG) sensor, it non-invasively measures the electrical potential generated by muscle activity, Jafari explains. It is used to distinguish various hand and finger movements based on the different muscle activities they produce. Essentially, it is good at capturing finger movements and the muscle activity patterns of the hand and arm, working in tandem with the motion sensor to provide a more accurate interpretation of the gesture being signed, he says.

"These two technologies are complementary to each other, and the fusion of these two systems will enhance the recognition accuracy for different signs, making it easier to recognize a large vocabulary of signs," Jafari says.

In Jafari's system, both the inertial and electromyographic sensors are placed on the user's right wrist, where they detect gestures and send the data via Bluetooth to an external laptop, which runs the recognition algorithms that interpret the sign and display the correct English word for the gesture. As Jafari continues to develop the technology, he says his team will look to incorporate all of these functions into one wearable device by combining the hardware and reducing the overall size of the required electronics. He envisions the device collecting the data produced by a gesture, interpreting it and then sending the corresponding English word to another person's smart device so that he or she can understand what is being signed simply by reading the screen of their own device. In addition, he is working to increase the number of signs recognized by the system and to expand the system to both hands.
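The end-to-end flow described above - a gesture window arriving from the wrist sensors, a recognized English word being displayed, and that word being forwarded as text - might be sketched roughly as follows. The vocabulary, the classify() stub and the send_text callback are hypothetical placeholders; the real system's Bluetooth transport and recognition algorithms are not shown.

```python
# Illustrative pipeline sketch only: placeholder classifier and a simple
# callback stand in for the real recognition algorithms and wireless links.
from dataclasses import dataclass

WORDS = ["please", "sorry", "name", "work"]  # small stand-in vocabulary

@dataclass
class GestureWindow:
    imu: list    # raw inertial samples received from the wrist device
    semg: list   # raw muscle-activity samples

def classify(window: GestureWindow) -> str:
    # Placeholder for the real recognition step; picks a word so the
    # pipeline runs end to end.
    return WORDS[len(window.imu) % len(WORDS)]

def handle_gesture(window: GestureWindow, send_text) -> None:
    word = classify(window)           # interpret the sign
    print(f"Recognized: {word}")      # local display of the English word
    send_text(word)                   # forward to the other person's device

handle_gesture(GestureWindow(imu=[0.1] * 201, semg=[0.2] * 201),
               send_text=lambda w: print(f"Sent to peer: {w}"))
```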

"The combination of muscle activation detection with motion sensors is a new and exciting way of understanding human intent with other applications in addition to enhanced SLR systems, such as home device activations using context-aware wearables," Jafari says.

Jafari is associate professor in Texas A&M's Department of Biomedical Engineering, associate professor in the Department of Computer Science and Engineering and the Department of Electrical and Computer Engineering, and researcher at Texas A&M Engineering Experiment Station's Center for Remote Health Technologies and Systems. His research focuses on wearable computer design and signal processing. He is director of the Embedded Signal Processing Laboratory (jafari.tamu.edu).

The Center for Remote Health Technologies and Systems is designing and developing advanced health technologies and systems to enable healthy living through health monitoring and disease diagnosis, management and prevention. The center's mission is to identify and overcome the unmet needs of patients and health care providers through the development of breakthrough remote health care devices, bio-signal mapping algorithms, remote health analytics and information systems that will improve access, enhance quality, and reduce the cost of health care.

As an engineering research agency of Texas, TEES performs quality research driven by world problems; strengthens and expands the state's workforce through educational partnerships and training; and develops and transfers technology to industry. TEES partners with academic institutions, governmental agencies, industries and communities to solve problems that help improve the quality of life, promote economic development and enhance educational systems. TEES, a member of the Texas A&M University System, is in its 100th year of engineering solutions.

Attribution/Source(s): This quality-reviewed publication was selected for publishing by the editors of Disabled World (DW) due to its relevance to the disability community. Originally authored by Texas A&M University and published on 2015/12/01, this content may have been edited for style, clarity, or brevity. For further details or clarifications, Texas A&M University can be contacted at engineering.tamu.edu. NOTE: Disabled World does not provide any warranties or endorsements related to this article.


Citing and References

Founded in 2004, Disabled World (DW) is a leading resource on disabilities, assistive technologies, and accessibility, supporting the disability community. Learn more on our About Us page.

Cite This Page: Texas A&M University. (2015, December 1 - Last revised: 2021, July 19). Smart Devices Recognize and Interpret Sign Language. Disabled World (DW). Retrieved March 20, 2025 from www.disabled-world.com/disability/types/hearing/communication/smart-sign.php

Permalink: Smart Devices Recognize and Interpret Sign Language (https://www.disabled-world.com/disability/types/hearing/communication/smart-sign.php): New technology at Texas A&M enables smart devices to recognize and interpret sign language.

While we strive to provide accurate and up-to-date information, it's important to note that our content is for general informational purposes only. We always recommend consulting qualified healthcare professionals for personalized medical advice. Any 3rd party offering or advertising does not constitute an endorsement.