Digital technologies are reshaping how we live, work and participate in society, and for many people they have also made the world more accessible. From everyday apps to intelligent systems, digital innovation continues to open up powerful new ways to remove barriers that many people have long faced, addressing unmet access needs and often creating experiences that work better for everyone.
While some innovations are designed to meet specific disability needs, they often hold the potential for much wider adoption in future. As Open Inclusion’s Christine Hemphill remarked in a recent interview, “when we look at the world around us, a huge amount of breakthroughs happened on the back of understanding and meeting disability-informed needs.” Touchscreens, subtitles, text-to-speech and the computer keyboard were all invented to solve particular unmet needs but have since been adopted by the masses.
Today, AI tools are being used in ways that cause harm – compromising data ethics, perpetuating bias and damaging the environment. But when designed and deployed with care and a clear purpose, AI tools have the power to positively transform the way people experience the world around them.
From facial and voice recognition to cognitive assistants, here are some of the ways that AI is being used to build a more inclusive world for all.
Generative voice recreation
Tools like ChatGPT have seen generative AI rise up the global news agenda in recent years. But beyond the viral trends and the very real threat of deepfakes, the technology is being used to do some game-changing things. Earlier this year, Bristol-based assistive technology company Smartbox used AI to recreate the voice of a woman with motor neurone disease. In 2000 the disease took her voice and the use of her hands, and since then she had been communicating through eye-gaze technology and a robotic, synthetic voice like the one used by Stephen Hawking. Her children barely knew their mum’s own voice.
With just eight seconds of scratchy audio from an old VHS tape, Smartbox created an authentic replica of her voice, complete with natural dynamics and the appropriate accent. A key component of the process was an app by AI voice company ElevenLabs, whose model has been trained to fill in the gaps and predict a voice’s intonation.
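For readers curious about the mechanics, the sketch below shows the general shape of a few-shot voice-cloning workflow: upload a short reference recording, receive a voice ID, then synthesise new speech in that voice. It is not Smartbox’s pipeline; the endpoint paths and fields follow the public ElevenLabs REST API as we understand it and may have changed, so treat them as assumptions and check the current documentation.

```python
# Minimal sketch of a few-shot voice-cloning workflow, for illustration only.
# Endpoint paths and fields are based on the public ElevenLabs REST API as we
# recall it; treat them as assumptions and verify against current docs.
import requests

API_KEY = "your-api-key"  # placeholder
BASE = "https://api.elevenlabs.io/v1"
HEADERS = {"xi-api-key": API_KEY}

# 1. Register a new voice from a short reference recording (e.g. digitised VHS audio).
with open("reference_clip.wav", "rb") as clip:
    resp = requests.post(
        f"{BASE}/voices/add",
        headers=HEADERS,
        data={"name": "Restored voice"},
        files={"files": clip},
    )
resp.raise_for_status()
voice_id = resp.json()["voice_id"]

# 2. Synthesise new speech in the cloned voice; the model fills in intonation
#    that the short sample never contained.
tts = requests.post(
    f"{BASE}/text-to-speech/{voice_id}",
    headers={**HEADERS, "Content-Type": "application/json"},
    json={"text": "It's lovely to talk to you in my own voice again."},
)
tts.raise_for_status()
with open("output.mp3", "wb") as out:
    out.write(tts.content)
```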
It is an incredible feat not only of research and technology, but also of human connection: the woman’s son noted how it has brought the family closer together, as it means she can now express her emotions.

Image: Microsoft Seeing AI
AI-powered visual assistants
Alt text and audio description have long been used by people with sight loss to understand key visual elements in images and video content. AI is now adding another layer to that experience and helping to make it more personal. Microsoft’s Seeing AI app, for example, can narrate a person’s surroundings, read text aloud, answer questions and even identify emotions on people’s faces.
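To make the idea concrete, here is a minimal sketch of the core of such a visual assistant: generating a text description of a scene that could then be read aloud. It is not Seeing AI’s implementation; it simply uses an openly available image-captioning model (BLIP) through the Hugging Face transformers pipeline.

```python
# A minimal sketch of the idea behind AI visual assistants: generate a short
# description of an image. Not Seeing AI's implementation; this uses an
# open-source captioning model via the Hugging Face transformers pipeline.
from transformers import pipeline

# BLIP is a publicly available image-captioning model.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

def describe(image_path: str) -> str:
    """Return a one-sentence description of the scene in the image."""
    result = captioner(image_path)
    return result[0]["generated_text"]

if __name__ == "__main__":
    # "street_scene.jpg" is a placeholder path for any local photo.
    print(describe("street_scene.jpg"))
```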
Other apps are being developed to help people navigate urban environments, such as OKO, which provides real-time information about traffic signals to help people with sight loss cross streets safely. By interpreting pedestrian signals through the phone’s camera, the app provides audio feedback, vibrations and screen-tint changes to indicate when to cross. Tools like this, where the safety stakes are high, underline the importance of innovating carefully rather than ‘moving fast and breaking things’.
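The safety point is easiest to see in code. The sketch below is not OKO’s implementation – the classifier and feedback calls are hypothetical stand-ins – but it shows the kind of decision loop such an app needs, where anything the model is not highly confident about is treated as unknown rather than guessed.

```python
# Illustrative decision loop for a pedestrian-signal assistant.
# Not OKO's implementation: classify_signal() is a hypothetical model call and
# give_feedback() stands in for the platform's audio/haptic/screen-tint APIs.
import time

CONFIDENCE_THRESHOLD = 0.9  # assumed value; safety-critical apps err on caution

def classify_signal(frame):
    """Hypothetical model: returns (label, confidence) where label is
    'walk', 'dont_walk' or 'unknown'. A trained signal detector goes here."""
    return "unknown", 0.0

def give_feedback(state):
    """Stand-in for audio, vibration and screen-tint cues on the device."""
    cues = {
        "walk": "rapid beeps, strong vibration, green tint",
        "dont_walk": "slow clicks, red tint",
        "unknown": "no cue - never guess when unsure",
    }
    print(cues[state])

def run(camera, fps=10):
    while True:
        label, confidence = classify_signal(camera.read())
        # Only relay a definite state when the model is highly confident;
        # anything ambiguous is treated as unknown.
        state = label if confidence >= CONFIDENCE_THRESHOLD else "unknown"
        give_feedback(state)
        time.sleep(1 / fps)
```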

Photo: OKO
Facial recognition technology
AI-driven facial recognition is becoming increasingly sophisticated, and disability-inclusive innovation is advancing with it. Eye-tracking technology such as Smart Eye, for example, uses specialised cameras to enable people with physical disabilities and limited mobility to control a digital device with their eye movements. Like Seeing AI, Smart Eye has developed algorithms that can identify not only basic human emotions but also more complex cognitive states, such as fatigue, confusion and distraction.
So these capabilities are beneficial not only for people with sight loss, but also for neurodivergent individuals who have difficulty processing and understanding emotions.
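As an illustration of how eye movements become input, the sketch below implements one widely used interaction pattern, ‘dwell to click’: the pointer follows the gaze, and a click fires when the gaze holds roughly still for a set time. It is not Smart Eye’s algorithm; read_gaze() and click_at() are hypothetical stand-ins for the camera-based gaze estimator and the operating system’s input API.

```python
# Sketch of a common eye-tracking interaction pattern ("dwell to click").
# Not Smart Eye's algorithm: read_gaze() is a hypothetical stand-in for a
# camera-based gaze estimator, and click_at() for the OS input API.
import math
import time

DWELL_SECONDS = 1.0    # how long the gaze must hold still to trigger a click
DWELL_RADIUS_PX = 40   # how far the gaze may wander and still count as "held"

def read_gaze():
    """Hypothetical: returns the current (x, y) gaze point on screen."""
    return (0, 0)

def click_at(x, y):
    """Hypothetical: injects a click at screen coordinates (x, y)."""
    print(f"click at ({x}, {y})")

def dwell_loop():
    anchor = read_gaze()
    dwell_start = time.monotonic()
    while True:
        x, y = read_gaze()
        if math.dist((x, y), anchor) > DWELL_RADIUS_PX:
            # Gaze moved away: restart the dwell timer from the new point.
            anchor, dwell_start = (x, y), time.monotonic()
        elif time.monotonic() - dwell_start >= DWELL_SECONDS:
            click_at(*anchor)
            anchor, dwell_start = (x, y), time.monotonic()
        time.sleep(1 / 60)  # poll at roughly the display refresh rate
```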
Elsewhere, tools such as Lip Reader Pro are harnessing advanced AI to make video content accessible to people with hearing loss. Using ‘visual speech recognition’, it is able to process lip movements and convert them into accurate speech and text, generating subtitles for videos where audio is unavailable. By studying communication patterns and analysing speech in different contexts, it can do this across multiple languages and accents – a prime example of disability-inclusive innovation that can benefit a much wider audience, just as the telephone, voice activation and automatic doors have.
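The output end of such a pipeline is straightforward to show. Treating the lip-reading model itself as a black box, the sketch below turns timed transcript segments into a standard .srt subtitle file that any video player can display; the example segments are made up for illustration.

```python
# Sketch of the subtitle-generation step of a visual-speech-recognition pipeline.
# The lip-reading model is treated as a black box; this only shows how its timed
# output (start, end, text) becomes a standard .srt file.

def fmt(seconds: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 3.5 -> '00:00:03,500'."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def write_srt(segments, path="captions.srt"):
    """segments: iterable of (start_seconds, end_seconds, text) tuples."""
    with open(path, "w", encoding="utf-8") as f:
        for i, (start, end, text) in enumerate(segments, start=1):
            f.write(f"{i}\n{fmt(start)} --> {fmt(end)}\n{text}\n\n")

# Example with made-up output from a hypothetical lip-reading model:
write_srt([
    (0.0, 2.4, "Good morning, everyone."),
    (2.6, 5.1, "Let's get started with today's agenda."),
])
```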

There are AI tools that can translate from lip reading, as well as between and into different sign languages, and into audio. Photo: Vitaly Gariev
Meta Ray-Ban smart glasses
Smart glasses have come a long way since the early 2010s, when Google, Microsoft and Apple began experimenting with the technology. Today, Meta’s Ray-Ban smart glasses are widely considered the most advanced in the world, and they include a range of accessibility features that help disabled people understand the world around them.
One such feature allows users to customise Meta AI to provide detailed responses based on their surroundings. In a recent interview with Calvium, Léonie Watson, founder of accessibility consultancy TetraLogical, highlighted the everyday power of this: “I can even ask what’s around me while walking down the street. For example, in an airport lounge I asked about a billboard across the room. It was trivial information, but without AI, I’d never have known it was there, because I wouldn’t have asked. That sense of incidental awareness is powerful after losing my sight 25 years ago.”
Another feature, ‘Call a Volunteer’, connects people with sight loss to a network of sighted volunteers in real time to help with everyday tasks. It shows how technology can be more than an end in itself, serving as a medium for meaningful community connection.

Photo: Voiceitt – Greater independence through voice
Inclusive voice AI
For people with non-standard speech and intellectual disabilities, communication can be a major challenge. Tools like Voiceitt are seeking to break down those barriers by translating atypical speech in real time. Designed by and for people with non-standard speech, the Google Chrome extension has been heralded as a game changer, not only for in-person communication but also for helping healthcare professionals with the ongoing care of their patients.
Notably, it is being used by Tennessee’s Department of Intellectual & Developmental Disabilities, whose commissioner, Brad Turner, said it is already yielding impactful results for people with intellectual disabilities supported within home- and community-based services. “It’s our hope to expand use of this innovative product in order to provide more opportunities for people to use this technology as a bridge for communication with their loved ones, friends, colleagues and community members.”
AI-powered hearing aids
In addition to subtitles and lipreading, AI is transforming hearing aid technology for people with hearing loss. Phonak has developed an AI-trained automatic operating system that continuously analyses the sound environment – more than 700 times per second – so that the hearing aids adapt to what users most need to hear.
Because the hearing aids need fewer manual adjustments throughout the day, users reported feeling less fatigued by processing sound and having more energy for social connection and participation.
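As a back-of-the-envelope illustration – not Phonak’s operating system, since the classifier here is a hypothetical stand-in and the sample rate is an assumption – around 700 analyses per second means a new decision roughly every 1.4 milliseconds of audio, which is why decisions need smoothing before the device switches listening programs.

```python
# Illustrative sketch of a continuously adapting sound-environment classifier.
# Not Phonak's system: classify_frame() is hypothetical, and the sample rate is
# an assumed value used only to show what ~700 analyses/second implies.
from collections import deque, Counter

SAMPLE_RATE = 16_000                      # assumed microphone sample rate (Hz)
ANALYSES_PER_SECOND = 700                 # figure quoted for the Phonak system
HOP = SAMPLE_RATE // ANALYSES_PER_SECOND  # ~22 samples (~1.4 ms) between analyses

def classify_frame(frame):
    """Hypothetical model: returns a label such as 'speech_in_noise' or 'quiet'."""
    return "quiet"

def adapt(audio_stream):
    # Smooth over the last ~0.5 s of decisions so the hearing aid does not
    # flip listening programs on every tiny fluctuation.
    recent = deque(maxlen=ANALYSES_PER_SECOND // 2)
    current_program = None
    for frame in audio_stream.frames(hop=HOP):  # hypothetical frame iterator
        recent.append(classify_frame(frame))
        majority = Counter(recent).most_common(1)[0][0]
        if majority != current_program:
            current_program = majority
            print(f"switching to program: {current_program}")
```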

Photo: Phonak
Cognitive assistants
The global population is neurodiverse; we all process information, learn new things and communicate in different ways. For neurodivergent individuals, managing time, tasks and information can be more challenging than it is for neurotypical individuals. Cognitive assistants are proving to be a particularly powerful tool for accessibility, supporting a range of neurodevelopmental needs.
Goblin Tools, for example, offers a range of AI-powered features designed to help neurodivergent people with tasks they find overwhelming or difficult, such as breaking large tasks into smaller steps, rephrasing text, analysing the tone of a message and summarising notes. Cogs, meanwhile, has developed an app specifically for managing neurodivergent burnout. Combining AI with community co-creation, it helps users – whether they are autistic or have ADHD – spot and track signs of burnout, and creates personalised self-care plans, including stimming tools for focus and reflection exercises that help manage everyday anxiety and promote greater social participation.
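The task-breakdown idea is simple to sketch with any chat-capable language model. The example below is not Goblin Tools’ implementation – the prompt wording and model name are assumptions, and the ‘spiciness’ parameter simply mirrors their notion of adjustable granularity – but it shows the core pattern using the OpenAI Python client.

```python
# Sketch of LLM-based task breakdown, in the spirit of tools like Goblin Tools.
# Not their implementation: the prompt wording and model name are assumptions,
# and any chat-capable model or provider could be swapped in.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def break_down(task: str, spiciness: int = 2) -> str:
    """Ask the model for small, concrete steps; higher 'spiciness' = finer steps."""
    prompt = (
        f"Break the task below into a numbered list of small, concrete steps. "
        f"Use a granularity level of {spiciness} on a 1-3 scale (3 = very fine). "
        f"Keep each step to one short sentence.\n\nTask: {task}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; use whatever is available
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(break_down("Clean the kitchen"))
```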
Final thoughts
AI is a powerful technology that can enhance everyone’s experience of the world around them – whether they are interacting with content, connecting with other people or managing their mental load more effectively. Many of the innovations listed here have been designed by and for disabled people to address specific needs, and it is easy to imagine how they could positively impact many more people’s lives.
Cognitive assistants, for instance, can be useful for parents and carers juggling work and caring responsibilities, while Meta’s ‘Call a Volunteer’ feature could connect elderly and vulnerable people with volunteers, combating social isolation. As the population ages, AI-powered technologies that assist with sight and hearing loss will have a growing role to play, enabling better communication, connection and community cohesion.
At Calvium, we are supporting disabled audiences to more easily book and attend cultural events through a digital membership system for All In. Whether experimenting with cutting-edge technologies or scaling to nationwide services, it starts and ends with responsible innovation. That means eschewing the ‘move fast and break things’ mindset in favour of a more careful, people-centred approach. As a B Corp and Disability Confident employer, Calvium is committed to developing responsible, trustworthy and sustainable AI solutions for all.
