This AI Startup Wants to Make Calls Inclusive With Sign Language Translation
Analytics India Magazine (Smruthi Nadig)

When Jayasudan Munsamy walked away from a stable career at Amazon in 2018, he was seeking to build something meaningful. “I always wanted to do something on my own, something that would help the community,” he told AIM.
In 2019, he founded DeepVisionTech.ai, a deep tech startup using artificial intelligence to bridge one of India’s most persistent accessibility gaps: communication for the deaf and hard-of-hearing community.
The company began as an enterprise AI startup focused on computer-vision solutions, but its direction changed dramatically after Munsamy conducted a market study in early 2020. What he discovered was not a niche problem but a systemic failure.
“The common misunderstanding was that if transcripts and captions are there, deaf people will understand. That might be true in developed countries where they are taught English in the sign language, but in India, that itself is not taught properly,” founder and CEO Munsamy explained.
That insight led DeepVisionTech.ai to focus squarely on disability inclusion, specifically deafness, guided by a deeply personal motivation. Munsamy has lived with hearing impairment in one ear since childhood, giving him first-hand insight into the invisible barriers faced by the deaf community.
Official national data on the employment of deaf individuals in India is limited, but various studies highlight persistent challenges. The 2011 Census showed that among those aged 15 to 59 with hearing disabilities, about 74% were marginal workers or not employed, illustrating significant barriers to employment.
According to analysis by Unspoken ASL, deaf individuals often have lower employment rates and incomes than hearing individuals, with limited access to stable jobs, owing to educational inequalities, insufficient sign-language training, and workplace communication barriers.
Beyond Communication
Initially, the company believed communication was the primary challenge. However, deeper engagement with NGOs, schools, and deaf individuals revealed larger issues.
“Deaf children don’t get an education in sign language. Digital accessibility is a problem. Workplace inclusion is a problem. Government services don’t have interpreters,” Munsamy specified.
According to the startup’s own research, 90–95% of deaf children drop out of school between grades 7 and 10, largely because teaching is not conducted in sign language. Even in schools designated for the deaf, lessons are often delivered through written text or spoken instruction, leaving students disengaged and excluded.

This educational exclusion cascades into employment. Most job opportunities available to deaf individuals are limited to low-skill roles in logistics or manufacturing. Even when they are hired by large IT companies, retention is low.
“They quit within one or two months, not because of the work, but because they feel lonely. They can’t socialise. They can’t communicate with their managers,” the entrepreneur highlighted.
From Computer Vision to Generative AI
To address these layered challenges, DeepVisionTech.ai developed Let’s Talk Sign, an automatic sign language interpretation platform. At its core, the solution enables bi-directional translation: sign language to speech/text, and speech/text to sign language.
Rather than offering a single generic tool, the company built six tailored solution variants, each addressing a specific context: education, workplace inclusion, digital accessibility, customer support, government services, and everyday communication.
The Bengaluru-based startup’s tech journey spans both the pre-LLM and post-LLM eras. Early systems relied on custom translation models for Indian languages, computer-vision models for sign recognition, and rule-based simplification engines that mimicked how human interpreters shorten and contextualise speech.
“Sign language has its own grammar. You can’t just translate word-for-word,” Munsamy explained.
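Munsamy's point can be illustrated with a toy example. The sketch below is a deliberately simplified, hypothetical rule set, not DeepVisionTech.ai's actual engine: sign-language glossing typically drops English function words and keeps content signs, so a naive word-for-word mapping of English would produce ungrammatical signing.

```python
# Hypothetical, heavily simplified English -> sign-gloss rule
# (illustration only; the startup's real system is far richer).

STOPWORDS = {"is", "are", "am", "the", "a", "an", "to", "of"}

def english_to_gloss(sentence: str) -> list[str]:
    """Drop function words and uppercase the remaining tokens
    as sign glosses, mimicking how interpreters compress speech."""
    tokens = [t.strip(".,?!").lower() for t in sentence.split()]
    content = [t for t in tokens if t and t not in STOPWORDS]
    return [t.upper() for t in content]

print(english_to_gloss("The meeting is moved to Monday"))
# ['MEETING', 'MOVED', 'MONDAY']
```

Real interpretation also reorders constituents and adds spatial and facial grammar, which is exactly why word-for-word translation fails.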
Under the direction of its technical lead, Arul Praveen T, who was elevated to the role of co-founder in 2022, the company built its own datasets by working directly with deaf signers and augmenting data using synthetic video generation and avatar-based signing. It used GAN-based models to generate synthetic frames, helping overcome the scarcity of annotated sign language datasets in India.
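To give a sense of how scarce annotated sign data can be stretched, here is a minimal, hypothetical augmentation sketch. The keypoint representation and jitter parameters below are assumptions for illustration only; the company's actual pipeline uses GAN-based frame synthesis and avatar signing, which this does not reproduce.

```python
import random

def jitter_keypoints(keypoints, scale=0.05, rng=None):
    """Illustrative data augmentation (not the startup's GAN pipeline):
    perturb normalised hand-landmark (x, y) coordinates slightly to
    synthesise extra training variants of one recorded sign."""
    rng = rng or random.Random(0)
    return [(x + rng.uniform(-scale, scale),
             y + rng.uniform(-scale, scale)) for x, y in keypoints]

# One recorded sign becomes several plausible training samples.
sign = [(0.5, 0.5), (0.52, 0.48)]
variants = [jitter_keypoints(sign, rng=random.Random(i)) for i in range(3)]
print(len(variants))
# 3
```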
With the rise of large language models, DeepVisionTech.ai adapted quickly by integrating open-source Indic language models from IIT Madras and Bhashini’s multilingual AI platform, significantly improving translation accuracy across Indian languages.
“Our core IP is translation from any language to Indian sign language grammar, and recognising signs from video and converting them to speech,” the founder said.
Funding the Mission
DeepVisionTech.ai’s journey has been shaped by government-backed incubation programmes, assistive-technology accelerators, and cloud-technology partners that enabled the startup to build deep AI capabilities at an early stage.
The startup was incubated at IIIT-Bangalore Innovation Centre and received support under the MeitY TIDE 2.0 Grant, a flagship initiative of the Ministry of Electronics and Information Technology. This early backing enabled the team to focus on research, prototyping, and model development without immediate commercial pressure.
To strengthen its business readiness and market approach, DeepVisionTech.ai also participated in acceleration and launchpad programmes, including Gujarat University Startup and Entrepreneurship Council and NSRCEL at IIM Bangalore, gaining exposure to entrepreneurship training, mentor networks, and early customer conversations.
Equally significant has been the startup’s association with the AssisTechnology Foundation (ATF), which provided domain-specific incubation and mentorship on disability-focused innovation, enabling it to engage with real-world accessibility needs.
Prateek Madhav, founder and CEO of ATF, told AIM that the organisation is helping the company develop a bot that translates virtual calls into signs in real-time, incorporating both hand and shoulder gestures. Although the company faced data-collection challenges three years ago, it “used [synthetic] data and partnered with nonprofits to gain access to datasets from around the world. Today, we believe we have built one of the best algorithms available,” he added.
DeepVisionTech.ai has received ₹64 lakh in grants and prize money, including winning a challenge at IKP Knowledge Park, Hyderabad. It has also been backed by Karnataka Startup Cell and Pontaq, a UK-India VC firm, according to Crunchbase.
On the technology front, it benefited from global startup programmes including NVIDIA Inception, Oracle for Startups, Microsoft, and Google, which provided access to cloud infrastructure, compute credits, and technical mentorship, helping the company cut AI training costs during model development and iteration.
Business Model
DeepVisionTech.ai operates across B2B, B2C, and B2G segments. Consumer apps are currently free, ensuring accessibility for individuals. Revenue comes primarily from tie-ups with organisations.
NGOs and government departments typically pay a one-time fee. Corporations follow a hybrid model: an upfront customisation fee plus a per-user subscription, usually ranging from ₹800 to ₹2,000 per user per month, depending on features and languages supported.
The company’s first major corporate client was Godrej Properties, with over 50 active users across multiple locations. DeepVisionTech.ai is also in discussions with Wipro, Accenture, and TVS Motors.
Additionally, the Maharashtra state government issued a work order this year as part of a startup challenge organised by the state's innovation society, enabling the startup to connect directly with the education department.
Designing for Accessibility, and Learning Along the Way
Initially designed exclusively for deaf users, the platform has evolved to accommodate other disabilities. “A visually impaired founder once asked me, ‘How is your app accessible to me?’ That question changed our thinking,” Munsamy recalled.
Today, the platform aligns with the WCAG 2.0 AA standard for web accessibility, and the company is working to meet Indian accessibility guidelines as well.
Looking ahead, DeepVisionTech.ai has three clear priorities.
First is improving facial expressions, body language, and emotional context in its signing avatars, critical for education and long-form engagement. Second is transforming Let’s Talk Sign into a platform-as-a-service, allowing developers and organisations to embed sign-language interpretation via APIs. Third is international expansion.
The app is being used across 11 countries, with the company building partnerships in Africa, Southeast Asia, and the Middle East.
“Sign language is recognised as an official language in many countries, yet they don’t have solutions like this,” Munsamy said. “The challenges are global.”
DeepVisionTech.ai is quietly showing what inclusive AI looks like in India. It operates alongside a small but growing set of startups addressing overlapping needs. Companies such as Oswald Labs focus on broader digital accessibility.
While startups including D.I.T.U and Glovatrix work on Indian Sign Language recognition, speech-to-text, and language AI, DeepVisionTech differentiates itself by combining automatic sign-language interpretation, workplace inclusion tools, and education-focused solutions into an integrated, end-to-end platform.
As awareness around assistive technology grows, Munsamy remains cautiously optimistic. “Five years ago, even IIT professors didn’t understand disability needs. Today, that awareness is changing, and that gives me hope,” he noted.
For DeepVisionTech, the goal is simple yet profound: empower people with disabilities to live independently, one sign, one conversation, and one inclusive system at a time.