
Browsing by Author "Shawana Tabassum"

1 result
    Percepti
    (UMT, Lahore, 2019) Shawana Tabassum
Percepti is intended to be a visually-impaired-friendly app that helps people who are blind or have weak eyesight manage their daily activities and perform tasks with relative independence. The idea is to integrate object detection with text-to-speech: a user holds up their phone, the camera recognizes the object in front of them, and the app converts the result to speech so the user can hear what the object is. Because it is a mobile application for blind users built around the camera, voice commands, and spoken feedback, compatibility and accessibility were major factors in the design. Since users may be on either iOS or Android, the app is built with Flutter and the Dart language. The application is based on image recognition, which is accessed through ML Kit (compatible with Flutter); a Flutter text-to-speech package is then used to speak the information about the detected object. The objective is to help people who are visually impaired or blind, informing them so clearly that they can picture exactly what is in their surroundings without relying on other people or touching objects.
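The detect-then-speak pipeline described above could be sketched in Dart roughly as follows. This is a minimal sketch, not the project's actual code: the package names (`google_mlkit_object_detection`, `flutter_tts`) and all identifiers are assumptions, since the abstract only mentions "ML Kit" and "a Flutter text-to-speech package".

```dart
// Hypothetical sketch of the Percepti pipeline: camera frame -> ML Kit
// object detection -> text-to-speech. Package choices are assumptions.
import 'package:google_mlkit_object_detection/google_mlkit_object_detection.dart';
import 'package:flutter_tts/flutter_tts.dart';

final FlutterTts _tts = FlutterTts();

final ObjectDetector _detector = ObjectDetector(
  options: ObjectDetectorOptions(
    mode: DetectionMode.single,  // one-shot detection on a still frame
    classifyObjects: true,       // we need labels, not just bounding boxes
    multipleObjects: false,      // announce only the most prominent object
  ),
);

/// Detects the object in a captured camera frame and speaks its label.
Future<void> announceObject(String imagePath) async {
  final image = InputImage.fromFilePath(imagePath);
  final objects = await _detector.processImage(image);

  if (objects.isEmpty || objects.first.labels.isEmpty) {
    await _tts.speak('No object recognized');
    return;
  }

  // Pick the highest-confidence label for the detected object.
  final label = objects.first.labels
      .reduce((a, b) => a.confidence > b.confidence ? a : b);
  await _tts.speak('There is a ${label.text} in front of you');
}
```

In a real Flutter app this function would be wired to a camera capture callback, and the speech feedback replaces any visual UI, which is the accessibility point the abstract emphasizes.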
