A UK charity, with support from technology companies and universities, has developed an artificial intelligence (AI)-powered digital twin that enables people with communication disabilities to speak naturally.
The technology, known as VoxAI, is a step up from the computer voice used by the late physicist Stephen Hawking, one of the first famous public figures with motor neurone disease (MND).
The Scott-Morgan Foundation was founded by roboticist Peter Scott-Morgan, who set out to apply engineering principles to disability after being diagnosed with MND.
A five-year project led by the foundation has developed an AI-powered platform that helps people with MND, also known as amyotrophic lateral sclerosis (ALS), communicate naturally despite their disability.
The platform was developed by the foundation's chief technologist, Bernard Muller, who is paralysed by MND and learned to write code using eye-tracking technology.
The platform combines AI technologies to create photorealistic avatars that move naturally, show lifelike facial expressions and reproduce the voice of the person using them. It can listen to a conversation and offer the user a choice of three suggested responses, generated from what the AI has learned about that person.
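The foundation has not published how this works internally, but the flow it describes lends itself to a short sketch. The Python below is a hypothetical illustration only: UserProfile, generate_reply and the three example tones are invented stand-ins, with generate_reply acting as a placeholder for whatever personalised language model the platform actually uses.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    # Hypothetical stand-in for a model personalised on the user's
    # own writing and interviews, as described in the article.
    name: str
    style_notes: str

def generate_reply(profile: UserProfile, transcript: list[str], tone: str) -> str:
    """Placeholder for a call to a language model tuned to the user's voice."""
    return f"[{tone} reply in {profile.name}'s style to: {transcript[-1]!r}]"

def suggest_responses(profile: UserProfile, transcript: list[str]) -> list[str]:
    # Offer three differently toned candidates; the user would pick
    # one, for example by eye gaze.
    tones = ["agreeing", "questioning", "humorous"]
    return [generate_reply(profile, transcript, tone) for tone in tones]

if __name__ == "__main__":
    user = UserProfile(name="Leah", style_notes="book plus 30 interviews")
    for i, option in enumerate(suggest_responses(user, ["How was your weekend?"]), 1):
        print(i, option)
```

In a real system, the selected option would then be spoken by the avatar in the user's cloned voice; here the stub simply prints three labelled candidates.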
For example, one of the people testing this technology, Leah Stavenhagen, worked as a consultant at McKinsey before developing MND. The artificial intelligence she uses was trained on a book she wrote, as well as 30 interviews in English and French.
LaVonne Roberts, CEO of the Scott-Morgan Foundation, told Computer Weekly that while people didn't mind waiting to hear what Stephen Hawking had to say, delays in communication tend to create problems for both the speaker and the listener.
“When someone has to painstakingly spell out something, their eyes become tired, which leads to further progression of MND, so we try to protect against that,” she said.
“The other thing that happens is people start giving much shorter answers because they don’t have time to carry on a conversation,” Roberts added. “And frankly, you get some awkward pauses.”
The Scott-Morgan Foundation, which demonstrated the technology today at the AI Summit in New York, plans to make the software available for free so that as many people as possible can use it. It will also offer a subscription version with more advanced features.
Roberts said many commercially available computers and tablets now have workable eye tracking, and eye-tracking devices provided by the NHS can also run the software.
“The idea was to democratize the technology by putting it online and providing license keys so people could have a voice again,” she said.
The technology could benefit more than 100 million people around the world living with conditions that severely limit speech, including people recovering from a stroke or living with cerebral palsy, traumatic brain injury or nonverbal autism.
The foundation plans to launch a two-year trial of the platform, led by Mexico's Tecnológico de Monterrey, that will track about 20 participants using the technology and evaluate its impact.
A simplified version of the platform that can be used without a Wi-Fi connection is also being developed.
Gil Perry, CEO of D-ID, which creates digital avatars for businesses, contributed to the project after the company helped several people with MND/ALS in ways they say changed their lives.
His company joined the Scott-Morgan Foundation project about two years ago, after Perry met Roberts. “I saw that LaVonne has a vision and she can connect all the dots, because she has a group of people who are just sleeping and dreaming about this vision day and night,” Perry said.
The company has since improved its technology and can now create avatars with facial expressions even for people whose condition has left them almost completely immobile.
Roberts said one breakthrough moment came after a mother told the foundation that although the technology was good, “you just couldn't capture my daughter's smile.” This prompted work to make avatars more realistic. “I remember Erin’s mother crying when she saw Erin in the video and saying, ‘That’s her smile,’” she said. “And I knew we were onto something.”
Muller, who designed the platform, said his avatar not only makes him visible, but also “present”. “When someone sees my avatar smile or show concern, they see me, not the disability,” he added. “It changes everything about how I interact with the world.”