YOLOv5-Based System for American Sign Language Interpretation

Rajni P, Jahnavi P, V.Krishnamraju Ch.S., Uma Mahesh S, Arun Kumar B.

Abstract


Sign language is a vital form of communication for individuals with hearing and speech impairments. This project presents a real-time sign language recognition system built on the YOLOv5 deep learning model. The proposed system identifies hand gestures from video input and converts them into readable text and speech output. It uses a CNN-based architecture that processes grayscale images to reduce computational cost without compromising accuracy. The system was trained and validated on robust datasets, including the Massey University Dataset and the ASL Alphabet Dataset, achieving over 99% accuracy. By providing a fast, scalable, and user-friendly gesture recognition platform, this solution enables inclusive communication and supports human-computer interaction, assistive technologies, and real-world applications in education, healthcare, and social settings.
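The pipeline described above (video frame → grayscale preprocessing → detection → text output) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the label set is a stand-in subset, and the class indices are mocked where the real system would obtain them from YOLOv5 inference on each frame.

```python
import numpy as np

# Illustrative subset of ASL alphabet labels; the full set covers 26+ classes.
ASL_LABELS = ["A", "B", "C", "D", "E"]

def to_grayscale(frame: np.ndarray) -> np.ndarray:
    """Convert an RGB frame of shape (H, W, 3) to grayscale using
    ITU-R BT.601 luminance weights, mirroring the grayscale
    preprocessing the paper uses to lower computational cost."""
    weights = np.array([0.299, 0.587, 0.114])
    return (frame @ weights).astype(np.uint8)

def detections_to_text(class_ids, labels=ASL_LABELS):
    """Map detector class indices to letters and join them into text."""
    return "".join(labels[i] for i in class_ids)

# Hypothetical usage: in the real system, class_ids would come from
# YOLOv5 inference (e.g. a model loaded via torch.hub from
# 'ultralytics/yolov5') run on each captured video frame.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
gray = to_grayscale(frame)          # (480, 640) single-channel image
text = detections_to_text([0, 1, 2])  # -> "ABC"
```

The text output could then be passed to any text-to-speech engine to produce the spoken form described in the abstract.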





Copyright © 2013, all rights reserved. ijseat.com

International Journal of Science Engineering and Advance Technology is licensed under a Creative Commons Attribution 3.0 Unported License, based on a work at IJSEAT. Permissions beyond the scope of this license may be available at http://creativecommons.org/licenses/by/3.0/deed.en_GB.