Searching for AI ground rules

About seven years ago, three researchers from the University of Toronto built a system that could analyze thousands of photos and teach itself to recognize everyday objects like dogs, cars and flowers.

The system was so effective that Google purchased the tiny startup these researchers were just getting off the ground. And their system soon sparked a technological revolution: suddenly, machines could “see” in a way that hadn’t previously been possible.

This made it easier for a smartphone app to search your personal photos and find the images you were looking for. It accelerated progress on driverless cars and other robotics. And it improved the accuracy of facial recognition services used by social networks like Facebook and by law enforcement agencies.

But researchers soon noticed that these facial recognition services were less accurate when used with women and people of color. Activists raised concerns about how companies gathered the huge amounts of data needed to train such systems. Others worried that these systems would ultimately lead to mass surveillance or autonomous weapons.

How should we, as a society, address these issues? Many have been asking that question, and not everyone agrees on the answers. Google sees things differently from Microsoft. A few thousand Google employees see things differently from Google. And the Pentagon has its own point of view.

Last week at the New Work Summit, hosted by The New York Times, conference participants worked in groups to compile a list of recommendations for the ethical development and deployment of artificial intelligence. The results are included here.

But even the existence of this list sparked controversy. Some participants, who had spent years studying these issues, questioned whether a group of randomly selected people was best suited to decide the future of artificial intelligence.
