If it weren't for Amazon, it's entirely possible that instead of calling out to Alexa to change the music on our speakers, we might have been calling out to Evi instead. That's because the tech we know today as Amazon's smart assistant started out life as Evi (pronounced ee-vee), the name given to it by its original developer, William Tunstall-Pedoe.
The British entrepreneur and computer scientist was experimenting with artificial intelligence before most of us had even heard of it. Inspired by sci-fi, he "arrogantly" set out to create a way for humans to talk to computers way back in 2008, he said at SXSW London this week.
Arrogant or not, Tunstall-Pedoe's efforts were so successful that Evi, which launched in 2012 around the same time as Apple's Siri, was acquired by Amazon and he joined a team working on a top-secret voice assistant project. What resulted from that project was the tech we all know today as Alexa.
That original mission accomplished, Tunstall-Pedoe now has a new challenge in his sights: to kill off AI hallucinations, which he says make the technology highly risky for all of us to use. Hallucinations are the inaccurate pieces of information and content that AI generates out of thin air. They are, said Tunstall-Pedoe, "an intrinsic problem" of the technology.
Through the experience he had with Alexa, he learned that people personify the technology and assume that when it's speaking back to them it's thinking the way we think. "What it's doing is truly remarkable, but it's doing something different from thinking," said Tunstall-Pedoe. "That sets expectations… that what it's telling you is true."
Innumerable examples of AI generating nonsense show us that truth and accuracy are never guaranteed. Concerned that the industry isn't doing enough to tackle hallucinations, Tunstall-Pedoe formed his own company, Unlikely AI, to take on what he views as a high-stakes problem.
Anytime we speak to an AI, there's a chance that what it's telling us is false, he said. "You can take that away into your life, take decisions on it, or you put it on the internet and it gets spread by others, [or] used to train future AIs to make the world a worse place."
Some AI hallucinations have little impact, but in industries where the cost of getting things wrong is high -- medicine, law, finance and insurance, for example -- inaccurately generated content can have severe consequences. These are the industries that Unlikely AI is targeting for now, said Tunstall-Pedoe.
Unlikely AI uses a mix of deep tech and proprietary technology to ground outputs in logic, minimizing the risk of hallucinations, and to log the decision-making process of its algorithms. This makes it possible for companies to understand where things have gone wrong, when they inevitably do.
Right now, AI can never be 100% accurate due to the underlying tech, said Tunstall-Pedoe. But advances currently happening in his own company and others like it mean that we're moving towards a point where accuracy can be achieved.
For now, Unlikely AI is mainly being used by business customers, but eventually Tunstall-Pedoe believes it will be built into services and software all of us use. The change being brought about by AI, like any change, presents us with risks, he said. But overall he remains "biased towards optimism" that AI will be a net positive for society.