Press "Enter" to skip to content

New AI Technology Developed By Google Can Read Sign Language

Google’s AI labs have engineered a new technology that can track hand gestures and translate them into speech. This discovery might prove to be a breakthrough for millions of people who communicate through sign language.

The new technology combines a few clever shortcuts with the general efficiency of modern machine learning systems to produce a real-time, highly accurate map of the hand and its fingers, using only a smartphone and its camera.

“Whereas current state-of-the-art approaches rely primarily on powerful desktop environments for inference, our method achieves real-time performance on a mobile phone, and even scales to multiple hands,” write Google researchers Valentin Bazarevsky and Fan Zhang in a blog post. “Robust real-time hand perception is a decidedly challenging computer vision task, as hands often occlude themselves or each other (e.g. finger/palm occlusions and handshakes) and lack high contrast patterns.”

Computers struggle to track quick, subtle hand movements, and even multi-camera, depth-sensing rigs have trouble following them reliably and fast enough. The researchers therefore aimed to reduce the amount of data and computation the system needs to process each gesture; less data means faster output.

How does it work?

Rather than searching for the whole hand, the system first detects only the palm, which is the most distinctive and reliably shaped part of the hand; because the palm is roughly square, the detector does not have to handle the tall rectangular images produced by extended fingers. A separate algorithm then analyzes the region around the detected palm and assigns 21 coordinates to the knuckles, fingertips and other landmarks. Once the pose is established, it is compared to a set of known gestures, such as sign-language symbols for numbers and gestures like “peace” and “metal”.
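The post does not spell out how a detected pose is matched to known gestures. As a purely hypothetical illustration, one naive approach is nearest-neighbour matching of the 21 landmark coordinates against previously recorded template poses; all function names and the threshold below are illustrative assumptions, not Google's method.

```python
# Hypothetical sketch: match a 21-landmark hand pose against recorded
# gesture templates by nearest-neighbour distance. Not Google's pipeline.
import math


def pose_distance(pose_a, pose_b):
    """Sum of Euclidean distances between corresponding landmarks.

    Each pose is a list of 21 (x, y) tuples, assumed to be normalized
    (e.g. wrist at the origin, hand scaled to unit size).
    """
    return sum(math.dist(a, b) for a, b in zip(pose_a, pose_b))


def classify_gesture(pose, templates, threshold=1.5):
    """Return the name of the closest template gesture, or None.

    `templates` maps gesture names (e.g. "peace", "metal", digit signs)
    to previously recorded 21-point poses. The threshold is an assumed
    cutoff below which a match is accepted.
    """
    best_name = None
    best_score = float("inf")
    for name, template in templates.items():
        score = pose_distance(pose, template)
        if score < best_score:
            best_name, best_score = name, score
    return best_name if best_score < threshold else None
```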

The hand-tracking algorithm is fast and accurate and runs on a smartphone, eliminating the need for a desktop. It runs within MediaPipe, Google’s existing open-source framework for building machine learning pipelines.
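For reference, a minimal sketch of reading the 21 hand landmarks is shown below. It assumes MediaPipe’s later, publicly released Python “Hands” solution and a webcam via OpenCV, which is not necessarily the exact mobile pipeline described in the post.

```python
# Minimal sketch: extract 21 hand landmarks per detected hand using
# MediaPipe's Python Hands solution (assumed available) and OpenCV.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

capture = cv2.VideoCapture(0)  # default webcam
with mp_hands.Hands(max_num_hands=2, min_detection_confidence=0.5) as hands:
    while capture.isOpened():
        ok, frame = capture.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures frames in BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand in results.multi_hand_landmarks:
                # Each hand carries 21 landmarks (wrist, knuckles, fingertips),
                # given as x/y/z coordinates normalized to the image size.
                print([(lm.x, lm.y, lm.z) for lm in hand.landmark])
capture.release()
```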

Other researchers will also be able to improve on existing systems that require dedicated hardware to recognize hand gestures. Google has not yet applied the technology in any of its own devices, but the source code is available for other companies to build software on.

“We hope that providing this hand perception functionality to the wider research and development community will result in an emergence of creative use cases, stimulating new applications and new research avenues,” says the company.
