India has a population of more than 18 million people who are speech and hearing impaired. Though disparate aids and devices exist, this segment remains largely underserved. Developed economies have established products and solutions that empower the physically challenged; India, however, lags sorely behind, and a significant proportion of those affected belong to the middle class. There is a huge unmet need for a holistic product/solution that improves the quality of life of the hearing and speech impaired and reduces their dependence on others.



SEEAR is an AI-powered consumer device that will be the “EAR to the hEARing impaired”. It is an intelligent, user-friendly device that, when installed on a premises, detects and converts common sounds such as doorbells, kitchen appliances and baby sounds into vibration or LED visual alerts on a wearable or desktop console. SEEAR combines best-in-class hardware with artificial intelligence/machine learning (AI/ML) to ensure continuous upgrades of the device.

SEEAR is positioned to be priced affordably for the mass market. Kues Innovations has tied up with for the distribution and marketing of this innovative device. is an organization that has been working to support and strengthen the hearing-impaired community in India, and it offers subscription-based learning initiatives for the hearing impaired. Its subscribers will be leveraged for concept testing of SEEAR and represent a large, ready target base for the product.


  • Installed across the area to be covered and connected to each other over a mesh network.
  • These nodes collect sound samples above a certain decibel level and relay them to the gateway device.
  • Messages from these nodes trigger a notification on wearables or wall-mount/table-top LED indicators in the form of a vibration or visual cue.
  • The mobile application displays the location of the captured sound sample and the confidence (as a percentage) in the source of the sound (e.g. baby cry, fire alarm, pressure cooker, water overflow, a heavy object falling, etc.).
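The node behaviour above can be sketched as a simple capture-and-forward loop. This is an illustrative sketch only: the decibel threshold, reference amplitude, and the `maybe_forward`/gateway interface are assumptions, not SEEAR firmware APIs.

```python
import math

# Illustrative values; the real node's threshold and calibration would differ.
DB_THRESHOLD = 50.0   # only frames louder than this are forwarded
REF_AMPLITUDE = 1e-5  # reference amplitude for the dB calculation

def frame_db(samples):
    """Return the RMS level of an audio frame in decibels (re. REF_AMPLITUDE)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(max(rms, 1e-12) / REF_AMPLITUDE)

def maybe_forward(samples, node_id, gateway):
    """Send the frame to the gateway only if it exceeds the decibel threshold."""
    level = frame_db(samples)
    if level >= DB_THRESHOLD:
        gateway.append({"node": node_id, "db": level, "samples": samples})
        return True
    return False
```

A loud frame (e.g. a doorbell picked up by a hallway node) clears the threshold and is queued for the gateway; ambient room noise is dropped at the node, keeping mesh traffic low.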


  • These devices notify the user of the sound cues captured or received by the mobile edge device from the sensors.
  • Notifications can take the form of vibrations or light flashes when transmitted to wearables such as watches, anklets, belt buckles and rings.
  • They can also take the form of light flashes on table-top or wall-mount devices, if wearables are inconvenient for the user.
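The cue selection described above can be sketched as a small dispatch table. The `Alert` class, the pattern table and the class names here are hypothetical stand-ins, not part of any SEEAR firmware.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    device: str                 # "wearable" or "tabletop"
    cue: str                    # "vibration" or "light"
    pattern: list = field(default_factory=list)  # on/off durations in ms

# Distinct patterns per sound class so cues are distinguishable by feel or sight.
PATTERNS = {
    "doorbell":   [200, 100, 200],
    "baby_cry":   [500, 200, 500, 200, 500],
    "fire_alarm": [100, 50] * 5,
}

def make_alert(sound_class, prefers_wearable=True):
    """Vibration on wearables; light flashes on table-top/wall-mount units."""
    pattern = PATTERNS.get(sound_class, [300])  # fallback pattern for unknown sounds
    if prefers_wearable:
        return Alert("wearable", "vibration", pattern)
    return Alert("tabletop", "light", pattern)
```

The same classified sound thus drives either output device, so a user who finds wearables inconvenient still receives an equivalent visual cue.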


  • This edge device hosts the UI/UX, which on receiving a sound sample displays the location of the sound and the confidence (as a percentage) in the source of the sound.
  • The edge device runs a pre-trained network with a softmax output, so that cloud latency is avoided.
  • Sound samples with low confidence levels can be sent to the cloud server along with user-supplied labels for further training and network performance improvement; the retrained network can then be deployed back to the edge devices.
  • The human ear is the target performance benchmark.
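The edge-side decision logic above can be sketched as follows: take the softmax of the network's output logits, report the top class and its confidence, and queue low-confidence samples for cloud labelling. The class list and the 0.7 threshold are illustrative assumptions.

```python
import math

CLASSES = ["baby_cry", "fire_alarm", "pressure_cooker", "water_overflow"]
CONFIDENCE_THRESHOLD = 0.7  # below this, the sample is queued for the cloud

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits, cloud_queue):
    """Return (label, confidence); enqueue low-confidence samples for labelling."""
    probs = softmax(logits)
    conf = max(probs)
    label = CLASSES[probs.index(conf)]
    if conf < CONFIDENCE_THRESHOLD:
        cloud_queue.append((label, conf))  # later uploaded with a user label
    return label, conf
```

Running inference locally this way keeps alert latency independent of connectivity, while the queue feeds the cloud retraining loop described in the next section.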


  • Sound samples are stored and used to train/test the AI/ML network before deployment.
  • The trained network, with tuned hyperparameters, is deployed and updated to edge devices such as mobile phones for sound classification.
  • Newly labeled sound samples are stored and used to train the network in the cloud infrastructure, after which it is deployed to the edge mobile devices.
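The cloud-side cycle above can be summarised as: fold the newly labeled samples into the stored training set, retrain, and push the new model version to every registered edge device. The storage, training and deployment steps below are stand-ins, not a real SEEAR pipeline.

```python
def retrain_and_deploy(stored_samples, new_labeled_samples, edge_devices):
    """Merge new labeled samples, 'retrain', and push the model to edge devices.

    stored_samples / new_labeled_samples: lists of (features, label) pairs.
    edge_devices: dicts holding the model currently deployed on each device.
    """
    stored_samples.extend(new_labeled_samples)  # persist the new labeled data
    model = {
        "version": len(stored_samples),         # stand-in for a real version id
        "classes": sorted({label for _, label in stored_samples}),
    }                                           # stand-in for actual training
    for device in edge_devices:
        device["model"] = model                 # stand-in for OTA deployment
    return model
```

Each retraining pass therefore grows the label set with user-corrected samples, which is how the device "continuously upgrades" toward the human-ear benchmark.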

SEEAR – Timeline to launch