What Do Smart Assistants Have To Do For Us To Learn To Love Them?

Voice assistants are still pretty dumb at the moment. That could change if they evolve into real artificial intelligence. Can they become more than just hype?

“There’s a paradigm shift every ten years,” says Siri inventor Adam Cheyer. In the 1980s, the desktop computer changed the relationship between humans and machines. Then in the 90s came the Internet and the browser, which turned everything upside down again. Finally, the iPhone appeared: with touch controls, an app store and mobile Internet, it made the phone the most important interface between the analog and virtual worlds. “The iPhone is celebrating its tenth birthday this year,” says Cheyer. “So we’re due for the next shift.”

But what comes after the smartphone? Cheyer is sure he already knows the answer: voice assistants. In other words, artificial intelligence that talks to people and automatically fulfils their wishes. No more tedious typing into search boxes, no more scrolling through apps and menus. Artificial intelligence (AI) takes care of every detail.

The first generation of these helpers has been available for a while. In October 2016, Amazon launched its assistant Alexa on the German market in the form of a speaker. At the beginning of August, Google followed with Home, a similar device. And Apple’s Siri speaker, the HomePod, is expected at the beginning of 2018. These devices are still far from being AI all-rounders: they start Spotify playlists, switch on the lights, read out the weather and the news, or make themselves useful as expensive kitchen timers. For now, that is, because they will soon have more to do.

Although Apple had the first assistant with Siri – at that time still on the iPhone 4S – the company now lags behind in this development. Cheyer left Apple shortly after Siri was released, and in 2012 he founded Six Five Labs. With VIV, he wanted to build an assistant that could do more – a true artificial intelligence. “Smart assistants haven’t made much progress yet, and that’s a challenge,” says Cheyer. For them to become really smart, they need to learn to teach themselves new skills. They have to become complex AI systems that master tasks on their own – like Google’s AlphaGo, software that can teach itself to play Go better than anyone in the world. The problem: everyday life is much harder for a computer to understand than Go.

So far, every new trick, every new command has to be written by hand, and that devours vast amounts of resources. Because Amazon and Google enlist outside developers to help, their systems have also overtaken Apple’s. Only in the middle of last year did Apple finally open up Siri to third parties. But no matter how much of this work is outsourced, there will always be limits – there will always be the moment when a voice assistant says: “Unfortunately, I can’t help.”
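To get a sense of what “written by hand” means in practice, here is a minimal sketch of how a single voice command is typically wired up today. The intent names, slots and handler below are hypothetical and not tied to any real assistant platform or SDK; the point is only that each capability is one more hand-coded handler, and anything outside the lookup table ends in “I can’t help.”

```python
# Minimal, hypothetical sketch of a hand-written voice-assistant "skill".
# Every new capability needs another handler like this one -- nothing is learned automatically.

from dataclasses import dataclass


@dataclass
class IntentRequest:
    intent: str   # e.g. "SetKitchenTimer", as produced by the assistant's language parser
    slots: dict   # e.g. {"minutes": "10"}


def handle_set_kitchen_timer(request: IntentRequest) -> str:
    """Turns one specific, pre-defined command into a response."""
    minutes = int(request.slots.get("minutes", 0))
    if minutes <= 0:
        return "For how many minutes should I set the timer?"
    # In a real skill this would call a timer service; here we only answer.
    return f"Okay, kitchen timer set for {minutes} minutes."


# A fixed lookup table of everything this assistant can do.
HANDLERS = {
    "SetKitchenTimer": handle_set_kitchen_timer,
}


def dispatch(request: IntentRequest) -> str:
    handler = HANDLERS.get(request.intent)
    if handler is None:
        # The moment described in the article: the assistant hits its limits.
        return "Unfortunately, I can't help."
    return handler(request)


if __name__ == "__main__":
    print(dispatch(IntentRequest("SetKitchenTimer", {"minutes": "10"})))
    print(dispatch(IntentRequest("PlanAHoliday", {})))
```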

Cheyer’s former Siri colleague Babak Hodjat takes a similar view. He too believes that assistants have to become true AIs. Hodjat developed Siri’s speech recognition software; today he is CEO of the AI company Sentient Technologies. When a computer understands natural language, people’s expectations soar, says Hodjat: “Beyond human intelligence. AI should accomplish the impossible.”

It is still absurd, for example, to expect such a voice assistant to plan an entire holiday or an evening at the cinema, including reservations and travel. Only a real artificial intelligence could do that. It would not have to be taught the structure of each website or ticket app individually, but would work out the process for itself. Once assistants manage that, the paradigm shift Cheyer predicts would begin: assistants would learn most everyday interactions between humans and computers within a very short time and become smarter on their own.

Because this future is still some way off – Cheyer and Hodjat both admit as much – the assistants will have to make themselves useful elsewhere for now. What they can learn quickly: controlling the smart home. Many manufacturers of such gadgets have a problem: customers are overwhelmed by the many competing systems and apps.

Companies like Nest, Bosch or Nespresso have an interest in their household appliances being able to talk to the popular assistants, says Simon Bryant, a home electronics expert at the market analysis firm Futuresource: “It’s much easier for companies if they can say they’re compatible with an assistant. That’s what consumers understand.” Google Home, Amazon Alexa or Siri could therefore soon sit at the heart of the smart home. Coffee would no longer be brewed via the Nespresso app, but on voice command.
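As a rough illustration of what such “compatibility” looks like on the developer side, here is a hedged sketch of a voice intent being bridged to a coffee machine. The device address, endpoint and payload fields are invented for illustration and are not Nespresso’s or any assistant’s actual interface.

```python
# Hypothetical sketch: a "brew coffee" voice intent forwarded to a smart coffee
# machine on the local network. The URL and payload are assumptions, not a real API.

import json
import urllib.request

COFFEE_MACHINE_URL = "http://192.168.0.42/api/brew"  # assumed local device address


def handle_brew_coffee(slots: dict) -> str:
    """Translate the parsed voice command into a single HTTP call to the appliance."""
    payload = {
        "drink": slots.get("drink", "espresso"),
        "size": slots.get("size", "small"),
    }
    request = urllib.request.Request(
        COFFEE_MACHINE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(request, timeout=5) as response:
            if response.status == 200:
                return f"Your {payload['drink']} is on its way."
    except OSError:
        pass  # device unreachable or timed out
    return "I couldn't reach the coffee machine."


if __name__ == "__main__":
    # e.g. "make me a large lungo", already parsed into slots by the assistant
    print(handle_brew_coffee({"drink": "lungo", "size": "large"}))
```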

The market for voice assistants is large. According to Futuresource’s analyses, speakers with built-in assistants already account for 30 percent of the audio market; 3.4 million such gadgets were sold in the first quarter of 2017. “Enthusiasm in such a new market is always high,” says Bryant. “It’s driven by the appeal of the new.”

Amazon’s assistant is installed on 44 percent of the devices shipped to date. Most of them are currently sold in the USA and Great Britain. The reason: the big manufacturers develop their software in English first, and an assistant cannot simply be translated into a new language. In each country, it needs its own team of editors to write its texts.

Assistants could be particularly successful in emerging markets. It is already clear that many of the “next billion people” who will soon come online, as the tech companies call them, cannot read and write. Instead of sending text messages, these users send voice messages. A smart assistant would probably be their preferred interface for booking a flight or downloading an app.

If voice assistants become as successful as the experts believe, hotel bookings and orders will in future be processed first via Google Assistant, Alexa or Siri. The big companies behind them can thus continue to expand their ecosystems. What determines the success of an offering will no longer be whether it appears on the first page of Google’s search results, but whether the assistant knows about it. “Voice assistants could give a single company the opportunity to influence the entire market,” Futuresource writes in its analysis.

In order not to be left behind, more and more companies want to establish their own assistants. Microsoft has integrated Cortana into Windows 10 and the Xbox, and in 2016 Samsung bought the company of Siri inventor Cheyer, presumably to make up for its earlier failures with the self-developed assistant S-Voice. So far there is no finished AI from Cheyer, and according to current rumors a smart Samsung speaker will not be available in the near future either. But the list of companies with their own ambitions is long: Baidu and JD.com from China, Orange from France and South Korean Telecom, to name only the biggest. Even Facebook’s chatbot system could be seen as a first step towards a smart assistant.

So far, most users use only a few of their devices’ functions, says analyst Bryant: “Consumers use voice assistants to listen to music or radio and to ask about the weather. They usually only try out new functions right after buying the device – and never again after that.”

For voice assistants to really succeed, four conditions must be met, says Cheyer: “Users want a single assistant on every device that can handle every service and personalize everything for the user.” On top of that, assistants must always be available and easy to use. Only then will their triumphal march begin. “Assistants will be used in education, healthcare, construction and business,” says Cheyer.

But there could be a fifth condition that such a super-assistant has to fulfil first. In Germany in particular, many people are concerned about their privacy, and a gadget from Google, Amazon or Apple that is always listening in is especially problematic. AI expert Hodjat has a solution: voice assistants must become more human. “If my Echo has eyes and looks like a little android, then it’s easier for me to accept it.” Suddenly it’s not so bad if the little robot buddy has a microphone that is always on, because people could build up a relationship of trust with it, just as they do with a pet. So maybe the Furby is about to make its big comeback. Instead of “Hihi, Gabo Gabo”, it will crow: “I’m afraid I didn’t understand that.”
