Authors: Arashta Hussain, Nimakhi Saikia, Chandana Dev
Date of Publication: 17th February 2024
Abstract: Sign language is a major means of communication for people with hearing and speech impairments, enabling them to interact with others and among themselves. Hence, the need for a human-computer interface for sign language recognition has gained immense attention in recent times. Numerous sign languages are used throughout the world, the most common being American Sign Language (ASL). In the field of deep learning, neural network systems can be applied to a wide range of such problems. This research work aims to design a real-time American Sign Language recognition system using computer vision and deep learning techniques with a user-built dataset. The system applies a Gaussian blur filter for preprocessing and a Convolutional Neural Network (CNN) classifier, trained using Keras. The dataset contains about 600 images for each of the 26 alphabet letters. The proposed system converts hand gestures of the ASL fingerspelling alphabet into English text: letters are combined into words, and words into complete sentences. The model achieves an accuracy of approximately 99.4%. These results suggest that such a system could help improve the quality of life of deaf and speech-impaired people.
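The preprocessing step named in the abstract is a Gaussian blur applied to the captured hand images before classification. As a minimal sketch of what that filter does, the function below implements a Gaussian blur in plain NumPy; the paper itself most likely uses a library routine such as OpenCV's `cv2.GaussianBlur`, and the kernel size and sigma chosen here are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel of shape (size, size)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()  # normalize so brightness is preserved

def gaussian_blur(img, size=5, sigma=1.0):
    """Blur a 2-D grayscale image by convolving it with a Gaussian kernel.

    Edge padding avoids dark borders; sigma controls smoothing strength.
    """
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            # weighted average of the (size x size) neighborhood
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out
```

Smoothing like this suppresses sensor noise and small background texture, which makes the hand contour easier for the downstream CNN classifier to learn from.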