Google insiders say the final version of Duplex, the stunning AI bot that sounded so real it fooled humans, may be purposefully made less scary (GOOG, GOOGL)
- Google set many developers' tongues wagging, and spooked many others, with Google Duplex, the AI software that talks eerily like a human.
- Indications are that the version of Duplex released to the public may differ from the demo version. Google is not married to the version demonstrated on Tuesday at the Google I/O annual developer conference.
- The digital assistants on the market now are a long way from holding a conversation. Some of the shortcomings of the current tech could be observed at I/O.
Google insiders knew they had a hit on their hands ahead of the Google Assistant phone call demo at the I/O conference this week. They also knew it would raise as many concerns as it did applause.
In fact, the AI-based software's ability to mimic a human voice is so uncanny that some in-the-know Googlers say it may not actually go into the wild in its current incarnation.
During my conversations with various Google executives, managers and engineers, as well as other attendees at the Google developers conference this week, the buzz has been all about the Google Assistant demo.
Google CEO Sundar Pichai took the stage on Tuesday and played a recorded conversation of Google Assistant — the company's AI-based chat bot — making an appointment at a hair salon. Assistant’s speech possessed human-like inflections, pauses, and verbal crutches like “um” and “ah.” The technology responded quickly and logically. The voice sounded uncannily human. What’s more, the woman on the other end of the line sounded completely unaware she was chatting with software. Pichai said the technology is called Google Duplex.
The Verge called the demo “stunning.” Buzzfeed said the conversation sounded “creepy.” Wired predicted that the technology would eventually prove to be overhyped. As stunning as the demo was, many were unnerved by an AI product so sophisticated that it can fool humans.
Some of the Google insiders I've spoken to say the company expected the response to be mixed. And they said the version we heard at I/O may not be the one released to the public.
The managers I spoke to said that a final product could, for example, require Google Assistant to notify a person that they weren’t talking to a human before starting a conversation. Or the "ums" and "ahs" could be cut if that’s what is needed to make people comfortable. So, work continues on Duplex.
An acceptable price to pay
And given all the current backlash about technology's impact on society, from the spread of fake news to smartphone addiction, it would seem in Google's interest to be careful about how it rolls out something like Duplex.
The thing to remember is that by waltzing out a version of Duplex, Google played to the crowd. Google's goal is to convince developers to build apps for Assistant and the company’s other AI projects instead of building them for Apple’s HomePod or Amazon’s Alexa. All three companies are betting AI is the future and all three are vying to establish turf in consumers’ homes. The competition is intense.
By showcasing software that can converse, Google managers likely anticipated some negative press. But a few knocks in the media would probably be an acceptable price to pay for the chance at winning over the right developers.
The reality may not live up to the science-fiction dream
Of course, we can’t dismiss the possibility that Duplex won’t live up to the expectations that Tuesday’s demo created. James Temperton at Wired noted that Google thrilled the audience at last year’s I/O conference with Google’s Pixel Buds, a product that performed near-instant translation of foreign languages. The product that actually shipped to consumers, however, didn’t live up to that promise.
He also cited Google Glass and the famous skydiving stunt at I/O in 2012 as another example of a Google product failing to live up to a demo. Google discontinued production of Glass for consumers in 2015.
Another reason to be skeptical is that none of the technology available to the public today suggests digital assistants are capable of much beyond responding to a limited number of questions phrased in highly restricted ways.
Some of the shortcomings with current AI tech showed up at the developer conference on Tuesday. During a demonstration of a Lenovo smart display, which uses Google’s AI to enable users to start and stop videos via voice command -- think Google Home with a screen -- the employee handling the demo struggled to get the machine to respond correctly to his commands.
The many different conversations occurring in the room also unintentionally triggered the device to perform functions randomly. The employee took to talking to the device from only a couple of inches away.
The employee said the acoustics and the number of people in the room were playing havoc with the device. That made sense, but what about the countless rooms in the millions of different homes out there? Many of them may also present challenging acoustics. Sometimes, multiple people will be talking in those homes. Then what?
Google Assistant and Amazon Alexa represent a huge step forward. Whether or not the tech is actually ready to talk to us remains to be seen.