
Friday 8 June 2018

Google pledges not to develop AI weapons, but says it will still work with the military


Google has released a set of principles to guide its work in artificial intelligence, making good on a promise it made last month following a months-long controversy over its involvement in a Department of Defense drone project. The document, titled “AI at Google: our principles” and published today on Google’s primary public blog, sets out the objectives the company is pursuing with AI, as well as the applications it refuses to participate in. It’s authored by Google CEO Sundar Pichai.
Notably, Pichai says his company will never develop AI technologies that “cause or are likely to cause overall harm”; that involve weapons; that are used for surveillance violating “internationally accepted norms”; or “whose purpose contravenes widely accepted principles of international law and human rights.” The company’s stated focuses for its AI research are to be “socially beneficial”; to “avoid creating or reinforcing unfair bias”; to be built and tested for safety; to be accountable to human beings and subject to human control; to incorporate privacy; to “uphold high standards of scientific excellence”; and to be made available only for uses that align with those previous six principles.
“At Google, we use AI to make products more useful—from email that’s spam-free and easier to compose, to a digital assistant you can speak to naturally, to photos that pop the fun stuff out for you to enjoy,” Pichai writes. “We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right.”
However, Pichai does not rule out working with the military in the future. “We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas,” he writes. “These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue.”
Gizmodo reported last week that Google plans to end its involvement with Project Maven, a government initiative that uses Google’s open-source machine learning libraries to parse drone footage. Google was not involved in the operation of drones, the company claims, but its involvement in any capacity with drone warfare on behalf of the US government was met with fierce backlash both inside and outside the company. Thousands of employees signed an open letter urging Google to cut ties with the program, and as of last month roughly a dozen employees had resigned over the company’s continued involvement.
Eventually, Google Cloud CEO Diane Greene told employees that the company would end its involvement with Project Maven when its contract expired in 2019. According to Wired, Google’s work with Project Maven would fall outside the work it plans to continue with the military, because using AI to analyze drone footage “doesn’t follow the spirit of the new guidelines.”
