The “Seeing for the Blind” invention uses an echolocation system to locate objects in the vicinity of the blind user and convey their distance through audio and tactile feedback. The device also has an object identification system that uses a deep learning neural network to recognize an object or text in view and speak its name back to the user. Together, the two systems let the user understand what objects are around them and where those objects are in relation to them, giving them a much greater sense of orientation. A demonstration is linked below. To date, no other system gives a visually impaired person such a comprehensive understanding of their surroundings. The invention is aimed at both legally and fully blind users, and is especially directed at people who lost their sight after birth: because they already understand sight-based reactions, the device acts as a natural extension of those reactions. By giving the user a better sense of navigation, the invention can greatly improve their quality of life.
A demonstration of how Seeing for the Blind works
The Mark I prototype (version 1), with just the ultrasonic sensor and the belt pouch (left), and Sidharth using it (right)
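As a rough illustration of how the Mark I echolocation stage could work, here is a minimal Python sketch for a Raspberry Pi. It assumes an HC-SR04-style ultrasonic sensor and a buzzer or vibration motor; the pin numbers, distance threshold, and pulse timing are illustrative assumptions, not the actual prototype's wiring or firmware.

```python
# Hypothetical sketch of the echolocation stage on a Raspberry Pi.
# Pin assignments below are assumptions, not the real prototype's wiring.
import time
import RPi.GPIO as GPIO

TRIG_PIN = 23    # assumed trigger pin of the ultrasonic sensor
ECHO_PIN = 24    # assumed echo pin of the ultrasonic sensor
FEEDBACK_PIN = 18  # assumed pin driving a buzzer or vibration motor

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG_PIN, GPIO.OUT)
GPIO.setup(ECHO_PIN, GPIO.IN)
GPIO.setup(FEEDBACK_PIN, GPIO.OUT)

def measure_distance_cm():
    """Send a 10 microsecond pulse and time the echo to estimate distance."""
    GPIO.output(TRIG_PIN, True)
    time.sleep(0.00001)
    GPIO.output(TRIG_PIN, False)

    start = time.time()
    while GPIO.input(ECHO_PIN) == 0:
        start = time.time()
    end = time.time()
    while GPIO.input(ECHO_PIN) == 1:
        end = time.time()

    # Sound travels about 343 m/s; halve the round-trip time.
    return (end - start) * 34300 / 2

try:
    while True:
        distance = measure_distance_cm()
        if distance < 100:  # assumed 1 m alert range
            # Pulse the feedback output faster as the obstacle gets closer.
            GPIO.output(FEEDBACK_PIN, True)
            time.sleep(0.05)
            GPIO.output(FEEDBACK_PIN, False)
            time.sleep(max(0.05, distance / 200))
        else:
            time.sleep(0.2)
finally:
    GPIO.cleanup()
```

In this sketch the feedback pulses arrive faster as an obstacle gets closer, which mirrors the distance-to-feedback mapping the device relies on.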
Marks II through VII combine the echolocation and object detection systems, giving the blind person an awareness of obstacles in their surroundings as well as identification of objects in their line of sight. A single speaker is connected to the Raspberry Pi / Arduino system. These prototypes are being built in India.
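To make the object-identification side concrete, here is a minimal sketch of how the Raspberry Pi could classify a camera frame and announce the result through the speaker. It uses an off-the-shelf pretrained ImageNet classifier (torchvision's MobileNetV2) and the pyttsx3 text-to-speech library as stand-ins for the project's own neural network; the camera capture step and the file name frame.jpg are assumptions for illustration.

```python
# Hypothetical sketch of the object-identification stage.
# A pretrained ImageNet classifier stands in for the project's own network.
import torch
import pyttsx3
from PIL import Image
from torchvision import models

weights = models.MobileNet_V2_Weights.DEFAULT
model = models.mobilenet_v2(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]

def identify(image_path: str) -> str:
    """Classify a captured frame and return the most likely object name."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        scores = model(batch).softmax(dim=1)
    return labels[int(scores.argmax())]

def speak(text: str) -> None:
    """Announce the object name through the speaker on the Pi."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

if __name__ == "__main__":
    # "frame.jpg" is a placeholder for an image captured by the camera.
    name = identify("frame.jpg")
    speak(f"I see a {name}")
```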
The end goal is a product that users will feel more comfortable wearing. The glasses will be compact, with the camera and sonar on the bridge piece, as shown, and the computers and battery in the side pieces. This will give the device a simpler, cleaner design.