
IIr Associates, Inc.
Publisher of The Virginia Engineer


New Technology May Enable Smart Devices To Recognize, Interpret Sign Language
October 2, 2015

A smart device worn on the wrist that translates sign language could help bridge the communications gap between people who are deaf or hearing impaired and those who do not know how to interpret sign language, according to a Texas A&M University biomedical engineering researcher who is developing the technology.

In Prof. Jafari’s system, both inertial sensors and electromyographic sensors are placed on the user’s right wrist, where they detect gestures and send data via Bluetooth to an external laptop that runs algorithms to interpret the sign and display the corresponding English word. Photo courtesy of Texas A&M University.

The wearable technology combines motion sensors with measurements of the electrical activity generated by muscles to interpret hand gestures, according to Roozbeh Jafari, director of the Embedded Signal Processing Laboratory, associate professor in the Departments of Biomedical Engineering, Computer Science and Engineering, and Electrical and Computer Engineering, and a researcher at the Texas A&M Engineering Experiment Station’s Center for Remote Health Technologies and Systems.

Although the device is still in its prototype stage, it can already recognize 40 American Sign Language words with nearly 96 percent accuracy, notes Prof. Jafari, who presented his research at the Institute of Electrical and Electronics Engineers (IEEE) 12th Annual Body Sensor Networks Conference this past June. The technology was among the top award winners in the Texas Instruments Innovation Challenge this past summer.
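The recognition task described here — fuse features from inertial and muscle-activity sensors, then match each gesture against a fixed vocabulary of signs — can be sketched with a toy nearest-centroid classifier. Everything below (the 10-value feature layout, the sign names, and the classifier itself) is an illustrative assumption, not the actual algorithm used in Prof. Jafari’s system.

```python
import numpy as np

# Toy feature vector: 6 inertial values (e.g. mean accelerometer/gyroscope
# readings) plus 4 EMG channel amplitudes. Dimensions and sign vocabulary
# are assumptions for illustration only.
SIGNS = ["hello", "thanks", "please", "yes"]
rng = np.random.default_rng(0)

def make_samples(center, n=20, noise=0.1):
    """Generate synthetic 10-D gesture feature vectors clustered around `center`."""
    return center + noise * rng.standard_normal((n, 10))

# One synthetic cluster of training gestures per sign.
centers = {s: rng.uniform(-1.0, 1.0, size=10) for s in SIGNS}
train = {s: make_samples(c) for s, c in centers.items()}

# "Training" a nearest-centroid model: store the mean feature vector per sign.
centroids = {s: x.mean(axis=0) for s, x in train.items()}

def classify(features):
    """Return the sign whose centroid is closest to the given feature vector."""
    return min(centroids, key=lambda s: np.linalg.norm(features - centroids[s]))

# A fresh noisy reading near the "thanks" cluster should map to that sign.
sample = centers["thanks"] + 0.05 * rng.standard_normal(10)
print(classify(sample))
```

In a real system the feature extraction (segmenting the sensor streams into gestures, normalizing across users) dominates the difficulty; the classifier on top can then be comparatively simple.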

The technology, developed in collaboration with Texas Instruments, reflects a growing interest in high-tech sign language recognition systems (SLRs), but unlike other recent initiatives, Prof. Jafari’s system forgoes the use of a camera to capture gestures. Video-based recognition, he says, can suffer performance issues in poor lighting conditions, and, in today’s socially litigious environment, the videos or images captured may be considered invasive to the user’s privacy. What’s more, because these systems require a user to gesture in front of a camera, they have limited wearability – and wearability, for Prof. Jafari, is key.

“Wearables provide a very interesting opportunity in the sense of their tight coupling with the human body,” Prof. Jafari goes on to explain. “Because they are attached to our body, they know quite a bit about us throughout the day, and they can provide us with valuable feedback at the right times. With this in mind, we wanted to develop a technology in the form factor of a watch.”

This article reprinted from materials provided by Texas A&M University.
