This former judge is heading the World Economic Forum's approach to AI — here's why she thinks regulation is unlikely, and what should be done about AI instead

Kay Firth-Butterfield, Head, Artificial Intelligence and Machine Learning at World Economic Forum, seen outside the Hilton Hotel in downtown San Francisco on August 21, 2018 after her speech at the Singularity University Global Summit.

  • Artificial intelligence poses both promise and perils for societies and their citizens.
  • Kay Firth-Butterfield, who heads the AI program at the World Economic Forum, is working with governments, companies, and non-profits to help them understand the issues surrounding the disruptive technology, and work through ways to maximize its benefits and minimize harms.
  • The most effective strategy for regulating AI will likely be through government and industry standard setting, she said.


If you ask Kay Firth-Butterfield about the promise and potential perils of artificial intelligence, she might start talking about toys, dolls, and action figures. 

Not the toys of today, necessarily, but those of the future, which will be powered by AI. Such toys could become quintessential educational tools, interacting with kids daily, gaining intimate knowledge about how they think and communicate, and using that information to help them learn.

"Personalized education using AI for kids is going to be a huge game changer," said Firth-Butterfield, who heads the artificial intelligence and machine learning program at the World Economic Forum, said in a conversation last week with Business Insider.

On the other hand, such toys raise a host of issues that policymakers are only starting to get a handle on. The privacy implications alone of potentially having a toy — or a succession of them — collect a child's every utterance from the time they can talk until adulthood are tremendous, she said.

"That is an issue that we really have to solve," she said.

And toys are just one of many areas where society will have to wrestle with both the potential and perils of AI.

The technology promises improvements to everything from industrial processes to agriculture to transportation, Firth-Butterfield said. But it also could lead to a raft of challenges and dangers, including massive job losses in a relatively short period of time, the illegitimate denial of goods or services thanks to flawed or biased algorithms, and citizens' loss of control of what was previously personal data, she said.

Firth-Butterfield works with governments and companies to think through AI issues

In her job with the World Economic Forum, Firth-Butterfield works with representatives of governments, corporations, civil society groups, and academic institutions to address some of those challenges. The projects she and her group lead are designed to come up with ways to govern AI that will allow countries and companies to reap its benefits while minimizing its harms.

"It's really important that we know that there are all these different tensions, because without addressing them, we are really left with, I suspect, a failing trust in the technology," she said. "What I certainly don't want to see are all the benefits of AI somehow being lost because we haven't put in the ethical underpinnings to help the public know that we're doing something safe."

Firth-Butterfield, who has served as a lawyer and judge in the United Kingdom, has been helping people and companies work through the legal and ethical implications of AI for years now, as a professor, corporate advisor, and consultant. She joined the World Economic Forum last fall.

For her, AI has "enormous" potential. In education, it promises to provide students personalized learning programs that are tailored to their individual needs, learning styles, and aptitude.

In industry and business, the technology could help companies significantly reduce the amount of energy they use, she said. Google, she noted, announced two years ago that its DeepMind machine-learning technology helped it reduce the amount of energy it uses to cool the servers in its data centers by 40%.

And in agriculture, AI could be used in tandem with Internet of Things devices to make farmers and agribusinesses more productive and efficient, she said. AI could use the data collected by sensors in fields to help farmers determine how much fertilizer, pesticide, or even just water their crops need.

"It's going to enable us to feed more people," she said.

AI has plenty of potential pitfalls

But she's equally concerned with the possible pitfalls of the technology. Algorithms that are flawed in design or in the data they rely on could lead to negative consequences for particular groups of people.

There's a long history of US lenders denying home loans to black people because of the color of their skin, for example. Software designed to automate loan approvals could end up perpetuating that prejudice if that bias is baked into the underlying algorithms, Firth-Butterfield warned. The same is true for discrimination in employment.

Harmful biases have already made themselves evident in artificial-intelligence software and tools. Two years ago, for example, the image recognition software built into Google Photos infamously labeled African-Americans as gorillas. Google also scuttled a video conferencing service intended for employees after the service's face-recognition software failed to detect the faces of people of color, Business Insider reported recently.

"It's really important" that we make sure that we're "not encoding own prejudices and taking them forward with us, because if do that, we will actually stultify the development of the world," she said.

Biased algorithms aren't the only thing she's worried about. AI poses a big threat to employment worldwide.

The world had decades to adapt to the upheavals of the second industrial revolution, the one that led to mass production of everything from steel to automobiles, Firth-Butterfield said. But artificial intelligence is developing and likely will be adopted much more rapidly — and the impact on the job market will likely be felt in short order too, she said.

"We don't have the luxury of a long time to actually even out the effects on job loss with this revolution, because it's happening so quickly," she said.

And then there are the ways that AI could erode privacy and potentially harm kids.

Firth-Butterfield favors standards, not regulation

The best way to maximize the benefits of AI while minimizing its harms is to have multiple stakeholders — governments, corporations, non-profit groups, and more — work through the issues and come up with ways to govern the technology, Firth-Butterfield said. That doesn't have to be through laws and formal government regulations, she said. In fact, the better way to regulate AI will be through government and industry standards, she argued.

By setting standards that attempt to minimize harms and take ethics into account, governments in particular can significantly influence the development of artificial intelligence, thanks in part to their huge purchasing power, she said. And setting standards tends to be a lot quicker and more flexible than crafting formal regulations or laws, so it can better respond to changing developments, she said.

"AI's running fast, and we need to run as fast with governance mechanisms," she said.

Those standards will need to focus on minimizing bias and protecting privacy, she said. They'll also need to make clear who or what entities are legally accountable for any harms that take place. And they'll need to ensure transparency, so citizens and consumers understand how the AI algorithms work and what they're doing.

To be sure, there will be cases where governments will need to put in place formal regulations, Firth-Butterfield acknowledged. Those will likely be when they need to protect the most vulnerable people in society, including kids, the disabled, and the elderly, she said.

Already some countries are ahead of the game in thinking through AI governance issues, Firth-Butterfield said. Among them: Brazil, China, India, and the United Kingdom.

"There are a number of countries that are already stepping up to the plate," she said.
