Verbal maps and human intelligence
It has been great to start work on the UCAN GO project. The problem we are trying to solve is raising the confidence of visually impaired people who want to visit arts venues without special assistance. Our goal is to create an iPhone app that leads people around a venue through verbal instructions, incorporating verbal maps and human intelligence. We are aiming to test and pilot our solution at the Wales Millennium Centre in Cardiff and the Torch Theatre in Milford Haven.
UCAN have described some of the activities we have been up to here. Below is a photo from our recce at the Wales Millennium Centre where the wonderful and helpful Peter shared his insider knowledge about the way the seats are laid out, numbered and lit.
In this blog post I want to capture some of the early ideas, principles and “aha” moments along the way, from a tech point of view.
The idea is to develop a lexicon and schema for creating a ‘verbal map’ of a building.
The lexicon will be a new language of buildings: what will be the equivalent of junctions and postcodes in our verbal map? How will concepts of turns, direction and distance best be conveyed? What are the best landmarks and signs to use?
The schema will define how we model the verbal map and store its data, and we will develop tools for populating that data to construct the map. The tools will need to be usable by the UCAN team, as it will be their job to make the verbal maps.
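As a very rough sketch of what such a schema might look like (every name here is an illustrative assumption, not the project's actual design): a verbal map could be a set of named waypoints joined by verbal instructions, with a route being the sequence of instructions that links two of them.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Waypoint:
    """A named point in the venue, e.g. 'main entrance' or 'stalls door 3'."""
    name: str
    landmarks: list[str] = field(default_factory=list)  # cues a person can sense

@dataclass
class Instruction:
    """A verbal step leading from one waypoint to the next."""
    start: str  # name of the starting waypoint
    end: str    # name of the destination waypoint
    text: str   # the spoken direction

@dataclass
class VerbalMap:
    """A verbal map is just waypoints plus the instructions that connect them."""
    waypoints: dict[str, Waypoint]
    instructions: list[Instruction]

    def route(self, start: str, end: str) -> list[str]:
        """Breadth-first search for a sequence of spoken instructions
        from start to end (a simple illustration, not production code)."""
        queue = deque([(start, [])])
        seen = {start}
        while queue:
            here, steps = queue.popleft()
            if here == end:
                return steps
            for ins in self.instructions:
                if ins.start == here and ins.end not in seen:
                    seen.add(ins.end)
                    queue.append((ins.end, steps + [ins.text]))
        return []  # no route found
```

Modelling the map as plain named waypoints and free-text instructions keeps the data simple enough for the UCAN team to author directly, which is the point of Principle 3 below.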
Unlike Sat Nav, our verbal map will not be able to rely on GPS to report a user’s position. Instead we intend to see if we can develop an intuitive user-reporting interface for each step of the journey, relying more on human than artificial intelligence.
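One way to picture that user-reporting loop (purely a sketch; the function names are my assumptions, not a real API): the app speaks one instruction at a time and waits for the user to confirm they have reached the next waypoint before continuing, so the human is the position sensor.

```python
def guide(instructions, speak, confirmed):
    """Walk the user through a route one verbal step at a time.

    speak(text)     -- read an instruction aloud (e.g. via text-to-speech);
    confirmed(step) -- ask the user whether they reached the waypoint,
                       returning True or False. Both are placeholders here.
    """
    for step in instructions:
        speak(step)
        while not confirmed(step):
            # No GPS indoors: the user, not a sensor, reports position,
            # so we repeat the step until they confirm it.
            speak("Let's try that again: " + step)
    speak("You have arrived.")
```

The design choice is that nothing in the loop needs the phone to know where it is; any future sensing technology (Principle 1) would only replace or assist the `confirmed` step, not the structure.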
Principle 1. Don’t depend on external instrumentation
One key principle is to see whether we can come up with a solution that does not rely on any external instrumentation in the environment, but just uses the considerable power of an average smartphone. Whilst there are many promising indoor-location solutions that might break into the mainstream quite soon, it may still be some time before arts venues can afford to introduce them. My philosophy is to treat any developments in sensing technologies as opportunities to enhance our solution, but not to depend on them. For example, iBeacons might be a great way to confirm a location within a building, but what if the batteries run out or crowds of people interfere with the signal? If the solution can still work without their data it will be much more robust.
This interesting article on a new indoor-location platform called Inside resonated with the approach we want to take with UCAN GO. They are “looking at how humans navigate, and how the natural human navigation works”.
Their system uses vision algorithms and gyroscopes to understand the spatial data and position users accordingly. One to follow up on as we progress our research.
Principle 2. User-led research
My second principle is to make sure that our approach is user led rather than tech led. This is a principle already embraced by UCAN, who champion the capabilities of blind and partially sighted young people and the idea that visually impaired young people are the best individuals to lead on activities that are for visually impaired young people. They have appointed Mared Jarman and Megan John, both of whom are registered blind, to lead the project. I want to make sure that we don’t come up with a solution for Megan and Mared; we want to come up with a solution with Megan and Mared.
So far we are off to a great start. The UCAN team are brilliant at sharing their ideas and demonstrating the problems they have with some app designs or technologies. They are gathering together examples of apps that work well so that we can apply best design principles as we develop our interface. They have also embraced the suggestion for capturing user journeys by recording their own journeys through our target venues. The idea of this first data gathering exercise is to record how one person would lead another to the key destination points such as their seat, the toilet or the bar. All essential for a good night out at the theatre! In the first exercise they were also encouraged to make observations and “think aloud” as they went.
After listening to these first recordings we were able to refine the method slightly so that one person acted as if they were “the app” and the second followed the instructions given.
The activity has helped to highlight some useful concepts that we can take forward into a first test prototype.
Principle 3. Human intelligence
My final principle is about using human intelligence intelligently! It’s a bit like the old joke about NASA spending millions inventing a pen that could write upside down, whereas the Russians just used a pencil. I want to stay alert to whether we are starting to come up with over-elaborate solutions when a much more pragmatic approach would be fine. As an example, I caught myself falling into the trap of thinking about modelling a route in a similar way to a Sat Nav, with every turn and distance modelled between “obstacles” in the room. However, as soon as I discussed this approach with the team they pointed out the obvious: partially sighted people would not need micro-directions; they can easily work out a path to a door as long as they can see and identify the right door across the room. That was a great “Aha!” moment!
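To make that contrast concrete (both routes below are invented examples, not real venue data): the over-modelled, Sat Nav-style version breaks one room into many micro-steps, while the pragmatic version is a single landmark-level instruction that trusts the user to cross the room themselves.

```python
# Over-elaborate, Sat Nav-style micro-directions for crossing one room:
micro_route = [
    "Walk forward four metres.",
    "Turn slightly left to avoid the pillar.",
    "Walk forward six metres.",
    "Turn right; the door is two metres ahead.",
]

# A single landmark-level instruction, which is all many
# partially sighted users actually need:
landmark_route = [
    "Cross the foyer to the door beneath the illuminated 'Stalls' sign.",
]

# The pragmatic map has far less to author, store and keep up to date.
assert len(landmark_route) < len(micro_route)
```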
We will be using the first test session to find out just how well some of our early ideas for directing users and identifying natural waypoints are received by more of the visually impaired UCAN members. I will post more on our approach to the user tests, and the results, in next month’s blog.
The Digital Research and Development Fund for the Arts in Wales is a partnership between the Arts Council of Wales, Arts & Humanities Research Council (AHRC) and Nesta to support arts projects across Wales that work with digital technologies to expand audience reach and engagement and/or explore new business models for the arts sector within Wales.