The best way to avoid killer robots and other dystopian uses for AI is to focus on all the good it can do for us, says tech guru Phil Libin

Evernote founder Phil Libin sitting on a couch on August 1, 2018 in the San Francisco headquarters of All Turtles, a startup incubator where he serves as CEO.

  • Artificial intelligence (AI) is developing rapidly and starting to be adopted widely.
  • The technology has the potential to transform society.
  • But it could also lead to serious negative outcomes, such as mass unemployment, and be put to harmful uses, such as large-scale violations of privacy.
  • The best way to avoid those harms would be to focus on creating products that use the technology in socially beneficial ways, said Phil Libin, best known as the founder of Evernote, whose new startup, All Turtles, incubates AI projects.

When it comes to how artificial-intelligence technology might affect society, there are a host of things to worry about, including the massive loss of jobs and killer robots.

But the best way to avoid such negative outcomes may be to ignore them, more or less.

That's the advice of Phil Libin, CEO of All Turtles, a startup that focuses on turning AI-related ideas into commercial products and companies. In a recent conversation with Business Insider, Libin likened the situation to some advice he received when he was learning to ride a motorcycle.

His instructor taught him that if an accident happened in front of him while he was riding on the highway, such as a semi truck flipping over, the worst thing to do would be to stare at the truck. Instead, his instructor said, he should focus on the point he needed to reach to avoid colliding with the truck.

"If you look at what you're trying to avoid, then you're going to run into it," said Libin, who previously founded Evernote. "You've got to look at where you want to be."

The tech industry would do well to follow that admonition when it comes to developing artificial intelligence, he said.

Years in the making, AI is starting to progress rapidly. It's being used by consumers in the form of intelligent assistants such as Amazon's Alexa to answer trivia questions and make purchases. And it's being used by corporations to help them make business decisions.

AI has the potential for good — and evil

Many observers think the technology could transform society in profound ways, and not necessarily for the better.

Indeed, there are some potentially dangerous and dystopian outcomes and uses of AI. It's already starting to be used in China as part of a mass surveillance scheme. It could be used to track people practically from birth onward, collecting intimate insights into their every thought and desire. It could be used to perpetuate or worsen discrimination against particular people or groups. And it could be used to power terrifying new weapons.

Technologists and policymakers ought not ignore such potential uses of the technology, Libin said. They should be aware of them. But the best way to avoid them, he said, is to concentrate on putting AI to socially beneficial uses.

"There really is a flipped-over truck, and there's all sorts of bad things that can happen. And we definitely need to work towards not hitting it," Libin said. "But the best way to do that is to [say] … this is where we want to go. Here's a vision of certain products that are like obviously good, and virtuous, and the world needs them, and they solve real problems, and let's make those products."

Indeed, that's what he sees as a big part of All Turtles' mission. One of the first projects the company helped incubate is a chatbot called Spot that is designed to make it easier for employees to document and report incidents of sexual harassment and discrimination. Another is Disco, a plug-in for the collaboration software Slack that helps employers give workers timely positive feedback.

The work All Turtles takes on "is all stuff that we should be able to, right from beginning, right by design, feel good about," he said.
