
Advanced digital technologies such as AI, cloud computing, big data and IoT are being rapidly adopted by businesses to increase efficiency, with 90% of UK firms now using at least one. Nearly half have already integrated AI into their operations, according to a recent report from the University of Manchester. But the true potential of digital innovation lies not just in operational efficiency or commercial gains. If designed well, AI and emerging tech can radically improve accessibility, independence, employment and participation for people with disabilities.

To explore the possibilities, we interviewed Léonie Watson, co-founder and Director of accessibility consultancy TetraLogical, who shared how digital innovations are reshaping accessibility, where the risks lie, and what it will take to ensure innovation is both inclusive and sustainable.

An active member of the digital accessibility community, Léonie is Chair of the W3C Board of Directors, and co-Chair of the W3C Web Applications Working Group. She’s also co-organiser of the Inclusive Design 24 (#id24) conference; and co-author of the Inclusive Design Principles and the Do No Harm Guide: Centering accessibility in data visualization.

Photos of Léonie Watson and Jo Morrison, with job titles.

What are the opportunities that AI brings to accessibility, and what barriers can it help overcome?

As someone with a disability, one of the biggest advantages is access to information that was previously out of reach. For me, as a blind person, this centres around images. Until recently, if I encountered a picture without a human-written description, I had to ask someone or use a separate service to get details. Now, my screen reader lets me query AI directly from a website with a shortcut, and I instantly get a description.

This change is enormous. Many blind people I know are revisiting decades of family photos for the first time, uncovering details they never had before. With AI-enabled devices like Meta’s Ray-Ban glasses, I can even ask what’s around me while walking down the street. For example, in an airport lounge I asked about a billboard across the room. It was trivial information, but without AI, I’d never have known it was there, because I wouldn’t have asked. That sense of incidental awareness is powerful after losing my sight 25 years ago.

AI also helps people with cognitive or neurodivergent conditions to process large amounts of information. You can get summaries, drill deeper into sections and check context. Of course, hallucinations remain a problem and you must judge when to trust results. For casual things, errors may not matter; for life-changing decisions, they do. But that’s no different from how we treat blogs or other online information.

Is this latest era of AI likely to dramatically impact accessibility technology, and if so, how?

We’re already seeing AI products moving beyond mainstream adoption and tapping directly into accessibility use cases, often in creative and transformative ways. 

For example, products like Meta’s Ray-Ban smart glasses and Oakley’s devices are mainstream; they weren’t designed specifically for accessibility, yet they’ve proven incredibly useful for people with disabilities. The AI uses familiar tools like ChatGPT, Claude, and Meta’s systems, but they are all trained and optimised for people who are blind or have low vision.

Beyond glasses, I recently discovered there are small devices designed to capture facial expressions in real time and convert them into haptic vibration feedback – such as Hapware’s ALEYE and Valance’s Vibes. They could help people who struggle with recognising visual cues, such as those who are neurodivergent, better understand emotions in social interactions. A vibration might indicate someone is smiling, while another might signal a frown. This provides a new way for people to interpret and respond to visual information that would otherwise remain inaccessible.

I saw another brilliant story about a woman with motor neurone disease, who has just had her voice recreated with AI. It’s incredible that we’ve got that ability to quite literally give people a voice back.

Person on beach wearing large, sleek reflective sunglasses.

Oakley’s Meta glasses provide a range of functions that support enhanced understanding and navigation of environments for people with sight loss. Photo: Oakley

There are significant challenges to AI adoption, including the environmental impact of LLMs and data centres. What is needed to make AI an environmentally sustainable system for the future?

AI raises ethical and sustainability issues. Large models consume vast natural resources, making their current form unsustainable. Worse, AI is misused in accessibility through “overlays” – quick-fix widgets marketed as AI solutions. These don’t work, mislead buyers and can even endanger users. While AI offers tremendous benefits, its risks and misuse must be acknowledged.

Ultimately, the companies that produce these things have got to solve that problem and find ways to make them more efficient – more energy efficient, smaller models that are more tightly scoped to a specific task.

Whether they do that of their own accord – which quite frankly, seems unlikely – or whether governments and regulators hold them to account and make them do it, doesn’t matter too much in the scheme of things. These companies are full of really smart people who need to turn more of their efforts to finding solutions that aren’t going to burn up all our resources.

As individuals, we don’t currently have the ability to push back hard enough. Even if you decided not to use any of these tools, they’re so omnipresent… literally every operating system I use, personally and professionally, has AI woven into it. It’s nearly impossible to switch off. 

Do you think lots of accessibility innovation has fallen into the trap of ‘move fast and break things’?

It’s very much a Silicon Valley “tech bro” mentality, and it has been far too pervasive in recent years. The problem is that when they said “move fast and break things,” what they really meant was breaking things for people who actually rely on these systems to live their lives and get work done. 

It was always a cavalier attitude that sounded great, but I think we’re starting to come out of it. There’s a growing recognition that “move fast and break things” is not a good approach. In my own work – both in my company TetraLogical and with the W3C, which develops web standards – one of the core tenets is that things have got to be backwards compatible: don’t break things. If we change a web standard in a way that breaks chunks of the web, that’s not acceptable.

The best path is probably somewhere in the middle: we need to keep moving forward and iterating, but in ways that don’t leave users behind or create a scorched earth for the people who depend on the products we build.

Pylons and electrical wires leading to substation and on to large concrete building in background, under blue sky with scorched grasses.

Google’s data centre in The Dalles, Oregon, consumed 302.4 million gallons of water in 2023. Photo: Visitor7

How is AI being discussed from the perspective of W3C, WCAG standards and similar frameworks? 

W3C has this idea of horizontals – disciplines or topics that run across all of the standards we produce. Accessibility, security, privacy and internationalisation are the four that have been in existence for a long time, but recently they’ve started working on sustainability, which will become a fifth horizontal. 

Every new standard or specification gets reviewed by a group that specialises in that area, and any problems will be addressed by the group that’s working on the standard. There is currently a very capable core group growing within W3C looking at sustainability of our standards and the impact on the web and technology. There’s also a community group, which is not a fully fledged working group yet, but it’s looking at AI more specifically – what it is as a technology and how it’s going to impact the web.

With the fast moving development of AI technologies, how would you recommend policies, regulations and frameworks ‘keep up’ or respond? What approach is best?

I suspect what we’ll see is that, in organisations that can, people will start to specialise in policy for this area; it’s moving that quickly. We’re fewer than 20 people in my company, and we’ve had an AI policy for about 18 months now; it’s hard keeping it up to date because every day something new comes along.

When we first introduced it, we were very cautious and more inclined not to use AI – and if we did, to share good use cases. That was before it got woven into almost everything that we used, whether we liked it or not, so we had to adapt. Now we’ve got the rise of agentic technology, so it is really hard.

You just have to do your best to try and keep up.

Hand making a pinching gesture, wearing a chunky black wristband.

Meta’s neuromotor wristband offers a wearable method of computer communication for differently-abled people. Image: Meta.

What advice would you give to businesses and developers looking to enhance their understanding of accessibility?

Don’t feel you need to learn everything at once. I’ve worked in accessibility for nearly 25 years and have never known it all; in fact, it’s only become more complicated. So it’s okay not to know, and it’s okay not to know how to start.

The key is to try one thing. Research it, test it, and if it works, keep doing it before moving on to the next. If you think you need to make everything perfect immediately, it becomes overwhelming – and fear of failure can stop people from trying at all. But one simple step can make a huge difference. For example, if a designer or developer just tests with a keyboard – tabbing through a page and checking if all elements work with Enter or Space – they’ve instantly improved accessibility for many people. Screen readers, magnifiers, and even speech recognition systems rely on keyboard accessibility. One small fix can help a huge group.
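The keyboard test described above works because native HTML buttons and links already respond to Tab, Enter and Space; problems usually arise when designers build custom controls out of generic elements. As a minimal sketch (the helper name and structure here are illustrative, not from the interview), this is roughly what restoring that behaviour to a custom control looks like:

```javascript
// Native buttons activate on both Enter and Space; custom controls
// built from generic elements must replicate that behaviour so
// keyboard, screen reader and speech recognition users can operate them.
function isActivationKey(key) {
  return key === 'Enter' || key === ' ';
}

// Hypothetical helper: make a generic element (e.g. a styled <div>)
// reachable and operable from the keyboard. Assumes a browser DOM.
function makeAccessibleButton(element, onActivate) {
  element.setAttribute('role', 'button'); // announced as a button
  element.tabIndex = 0;                   // reachable via the Tab key
  element.addEventListener('click', onActivate);
  element.addEventListener('keydown', (event) => {
    if (isActivationKey(event.key)) {
      event.preventDefault();             // stop Space from scrolling the page
      onActivate(event);
    }
  });
}
```

Tabbing through a page and pressing Enter or Space on each control, as suggested above, quickly reveals which elements are missing this wiring. Where possible, using a real `<button>` element avoids the need for it entirely.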

Over time, these practices become habits. You won’t need to look them up; you’ll just build accessibly by default. That’s when you know you’ve hit your stride.

There’s always been tension over whether accessibility should be the work of specialists or everyone’s responsibility. The answer is both. We need experts, but accessibility must also be built in from the ground up by everyone involved in design, development, and management.

Four people working on flow diagrams, writing on flip charts on a wall in an office.

Photo: This is Engineering

Finally, are you feeling optimistic for the future for accessibility?

Sustainable accessibility is something many organisations struggle with. For years, the common mindset has been “one and done”: put policies in place, run some training, and assume the job is finished. On paper, that looks fantastic; in practice, it isn’t sustainable.

People come and go. When the individuals who drive accessibility forward leave, the momentum often disappears. What looked like lasting progress turns out to have been short-term enthusiasm.

The encouraging shift I see now is that more organisations are beginning to understand accessibility isn’t a quick fix. It takes time to build systems, processes and culture so that accessibility becomes embedded and resilient. When that happens, the work survives staff turnover and continues long-term.

That trend makes me optimistic. As more organisations focus on making accessibility durable rather than superficial, we prevent new problems from constantly being created. And hopefully companies like mine go out of business and we can all go sit on the beach.

 

Thank you for sharing these exciting developments and the positive potential for change!

 
