Experts cast doubt on whether China's news anchor is really A.I.

CNBC | 11/16/2018

Over the past week, the "world's first" artificial intelligence (AI) news anchor has gone viral online, a robot version of a presenter at China's state Xinhua News Agency.

Lauded for "his" ability to broadcast 24 hours a day, the presenter said he would "work tirelessly to keep you informed." The anchor was developed by Xinhua and Chinese search engine Sogou.com and launched at the World Internet Conference last week.

But is this actually a true example of AI? Will Knight, a senior editor for AI at MIT Technology Review, is somewhat skeptical.

"The use of the term AI is a little bit tricky in this context, because the anchor itself is not intelligent, it doesn't have any intelligence ... But they are using some quite clever kind of machine learning which is a sub-field of AI to capture the likeness of a real anchor and the voice of that anchor," Knight told CNBC by phone.

When Knight first saw the anchor, he thought it was an impressive piece of mimicry. "The underlying technology for learning how to reproduce faces and voices is quite a sort of fundamental idea, and a quite powerful one potentially."

If the anchor wrote its own script, that might be a different story. "If it were to get a bunch of reports, take some phone calls and then reproduce it, that would be incredible, but that's way beyond what machines can do," he added.

As with any new technology, terms can enter people's vernacular before they are fully understood. "We should also always be really careful I think about the use of the term AI, and in this context you don't want to suggest that this anchor is actually exhibiting any intelligence, because it's not, it's just like a kind of very sophisticated digital puppet," Knight said.

AI may have created the Xinhua anchor and its voice, but the anchor itself cannot think, Ali Shafti, a research associate in robotics and AI at Imperial College London, told CNBC by phone. "What actually creates those images and the movement of the lips and the voice of this anchor is using algorithms that are related to artificial intelligence. But to call this an AI anchor is slightly overselling it."

Defining AI isn't straightforward, Shafti said. "The term itself is usually defined as a non-human device or algorithm being able to do behaviors and actions that are possible only for a person of human intelligence, or maybe not even possible for humans, so above human intelligence," he said.

"People will probably misunderstand this as (meaning) the anchor itself is intelligent, (that) it's like a human and can react to situations with intelligent behavior, which is not the case. It is basically a puppet running a script. It can read a script. It can do (that) very convincingly, and the fact that it looks so convincing, that's the AI, but not what it says and does," Shafti added.

The abilities and dangers of AI can be overstated. "As a person who does research in AI and robotics, I think we need to be very careful with how we explain AI to the general public. There is already the fear and the negative thoughts on the subject in the general public. And it is based on what is being said by people like Elon Musk and by movies and films and series that people see. It is not realistic. It is being oversold and it is necessary for people to understand what it is that we are researching and what it is that we are trying to do."

Shafti cited Imperial College's work in predicting the best treatments for patients with sepsis, or blood poisoning, as an example of AI in action. There is debate among clinicians about how much fluid to give sepsis patients and when to start medication, and Imperial's AI system analyzed 100,000 medical records to work out the best treatments for future patients. The system made more reliable decisions than human doctors, Imperial reported last month.

Others object to the hype around AI. Musk has said the race for the new tech will be the most likely cause of World War III, but Apple's Chief of Machine Learning and AI Strategy John Giannandrea has stated that predictions about the threat of AI are unhelpful. "I just object to the hype and the sort of sound bites that some people have been making," he said in September 2017, when he was in a similar role at Google.

At that time, Giannandrea added he was excited about the potential for computers to analyze written language. "Today, computers can't 'read' in the sense of read and understand and summarize a document, and so I think progress in that area is one that I am really excited about."

Knight said this is an area that is currently prone to mistakes. "Doing something like writing a report using language is one of the most difficult things you could have a computer do. Language is so powerful and flexible so it's easy for machines to fall into making lots of errors which you see with chat bots and the like," he told CNBC.

He cited Google's AlphaGo, the computer that beat a human at the ancient Chinese game of Go, as an impressive example of AI. But even that is limited. "It's come a long way, but it's still limited. As experts will point out, it can't play another game … The thing that humans are really good at is you could learn one game and you can transfer what you've learnt in one domain to another very effectively," he told CNBC.

"We should be a bit judicious about how we use the term (AI) generally and I think often it's more useful to tell people what the machine is actually doing … As the interested public, we should be cautious of that and really sort of question what is going on under the hood, because that's what matters. It's not magic, even if it's pretty impressive."

A spokesperson from Sogou told CNBC via email: "'AI Anchor' technology incorporates the latest advances in image detection and prediction capabilities, as well as speech synthesis, to produce a realistic virtual anchor that both appears and sounds like an actual person."

"This requires our algorithms to generate appropriate and life-like images and sounds for any textual input, which draws on cutting-edge predictive capabilities and has resulted in intelligent technology that can produce a seemingly live broadcast. This is just one potential application of virtual character technology which could revolutionize the way we think about how humans interact with machines and facilitate more natural human-machine interactions in a wide range of everyday scenarios."
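The "digital puppet" idea Knight and Shafti describe — text goes in, synchronized mouth shapes and audio come out, with no comprehension anywhere in the loop — can be caricatured in a few lines of code. This is a toy sketch, not Sogou's actual system: real pipelines learn the text-to-face mapping from hours of video with deep neural networks, and the names here (`viseme_for`, `render_script`) are purely illustrative.

```python
# Toy "digital puppet": map a script, character by character, to mouth
# shapes (visemes) used in lip sync. The puppet can "read" any text
# convincingly, but nothing in this pipeline understands the words.

# A crude hand-written mapping from letters to mouth shapes.
VISEMES = {
    "a": "open", "e": "wide", "i": "wide", "o": "round", "u": "round",
    "m": "closed", "b": "closed", "p": "closed",
}

def viseme_for(char: str) -> str:
    """Pick a mouth shape for one character; default to neutral."""
    return VISEMES.get(char.lower(), "neutral")

def render_script(text: str) -> list:
    """Turn a news script into a frame-by-frame list of mouth shapes."""
    return [viseme_for(c) for c in text if c.isalpha()]

frames = render_script("Hello")
print(frames)  # ['neutral', 'wide', 'neutral', 'neutral', 'round']
```

The contrast with the real system is the point: Sogou's anchor replaces the hand-written table above with mappings learned from footage of a human presenter, which is why it looks and sounds lifelike. But structurally it is the same one-way transformation of text into audiovisual output, which is why the researchers quoted here call it rendering, not intelligence.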

  • CNBC's Catherine Clifford contributed to this report