Sign Bridge
Date
2025
Publisher
UMT, Lahore
Abstract
Sign Bridge is an AI-powered system developed to bridge the critical communication gap between individuals with hearing impairments and the general public in Pakistan. The system recognizes gestures from Urdu Sign Language (USL) and converts them into readable Urdu text. By leveraging techniques from computer vision, deep learning, and gesture recognition, the project contributes to inclusive technology and accessible communication. At its core, the system combines Convolutional Neural Networks (CNNs) for image-based feature extraction with MediaPipe for real-time hand landmark detection.
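The abstract does not specify how landmark detection and the CNN are connected, but a common pattern in gesture-recognition pipelines of this kind is to flatten MediaPipe's 21 per-hand (x, y, z) landmarks into a fixed-length feature vector for a downstream classifier. The sketch below illustrates that step only; the names (`Landmark`, `landmarks_to_features`) and the wrist-relative normalization are illustrative assumptions, not details from the project.

```python
# Hypothetical sketch: MediaPipe's Hands solution reports 21 (x, y, z)
# landmarks per detected hand. One common preprocessing step is to
# flatten them into a 63-dimensional vector, translated so the wrist
# (landmark 0) is the origin, before feeding a classifier.
from dataclasses import dataclass
from typing import List

@dataclass
class Landmark:
    x: float  # normalized [0, 1] image coordinate
    y: float
    z: float  # depth relative to the wrist

def landmarks_to_features(landmarks: List[Landmark]) -> List[float]:
    """Flatten 21 hand landmarks into a wrist-relative feature vector."""
    if len(landmarks) != 21:
        raise ValueError("expected 21 hand landmarks")
    wrist = landmarks[0]
    feats: List[float] = []
    for lm in landmarks:
        feats.extend([lm.x - wrist.x, lm.y - wrist.y, lm.z - wrist.z])
    return feats

# Example: 21 dummy landmarks produce a 63-dimensional vector.
dummy = [Landmark(0.5, 0.5, 0.0) for _ in range(21)]
vec = landmarks_to_features(dummy)
print(len(vec))  # 63
```

Normalizing relative to the wrist makes the features invariant to where the hand appears in the frame, which typically helps a small model generalize.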
The model is trained and tested on a custom-built dataset comprising 54 frequently used Urdu Sign Language words, each recorded in 20 short videos. Despite the dataset's modest size, the model achieved 55% accuracy, demonstrating the feasibility of real-time Urdu sign language interpretation with a lightweight, scalable approach. The application provides a user-friendly interface that captures hand movements via webcam and displays the translated text on screen in real time. The system is suited to educational institutions, accessibility tools, and healthcare settings, promoting digital inclusion and independence for the deaf and hard-of-hearing community. Overall, the project demonstrates how AI and machine learning can be used to build localized, meaningful assistive technologies that support both accessibility and linguistic diversity in emerging markets such as Pakistan.
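The dataset described above (54 words, 20 clips each, used for both learning and testing) implies 1,080 clips in total. A per-class split like the following keeps every word represented in both partitions; the 80/20 ratio, file names, and labels here are assumptions for illustration, not taken from the project.

```python
# Illustrative sketch of splitting the described dataset
# (54 USL words x 20 clips = 1,080 clips) per class.
# The 80/20 train/test ratio is an assumption.
import random

WORDS = [f"word_{i:02d}" for i in range(54)]  # placeholder labels
CLIPS_PER_WORD = 20
TRAIN_RATIO = 0.8

def split_dataset(seed: int = 42):
    """Shuffle each word's clips and split them 16 train / 4 test."""
    rng = random.Random(seed)
    train, test = [], []
    for word in WORDS:
        clips = [f"{word}_clip{j:02d}.mp4" for j in range(CLIPS_PER_WORD)]
        rng.shuffle(clips)
        cut = int(CLIPS_PER_WORD * TRAIN_RATIO)
        train += [(clip, word) for clip in clips[:cut]]
        test += [(clip, word) for clip in clips[cut:]]
    return train, test

train, test = split_dataset()
print(len(train), len(test))  # 864 216
```

Splitting within each class (rather than across the pooled 1,080 clips) avoids accidentally leaving a word out of the test set, which matters when there are only 20 examples per class.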