Google pledges not to develop AI weapons, but says it will still work with the military - Frontline

Friday, 8 June 2018

Google has released a set of principles to guide its work in artificial intelligence, making good on a promise it made last month following a months-long controversy over its involvement in a Department of Defense drone project. The document, titled "AI at Google: our principles" and published today on Google's primary public blog, sets out the objectives the company is pursuing with AI, as well as the applications it refuses to participate in. It is authored by Google CEO Sundar Pichai.
Notably, Pichai says his company will never develop AI technologies that "cause or are likely to cause overall harm"; involve weapons; are used for surveillance that violates "internationally accepted norms"; or "whose purpose contravenes widely accepted principles of international law and human rights." The company's main principles for AI are that it be "socially beneficial"; "avoid creating or reinforcing unfair bias"; be built and tested for safety; be accountable to people and subject to human control; incorporate privacy; "uphold high standards of scientific excellence"; and be made available only for uses that align with those previous six principles.
"At Google, we use AI to make products more useful, from email that's spam-free and easier to compose, to a digital assistant you can speak to naturally, to photos that pop the fun stuff out for you to enjoy," Pichai writes. "We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right."
However, Pichai does not rule out working with the military in the future. "We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas," he writes. "These include cybersecurity, training, military recruitment, veterans' healthcare, and search and rescue."
Gizmodo reported last week that Google plans to end its involvement with Project Maven, a government initiative that uses Google's open-source machine learning libraries to parse drone footage. Google was not involved in the operation of drones, the company claims, but any involvement with drone warfare on behalf of the US government was met with fierce backlash both inside and outside the company. Thousands of employees signed an open letter urging Google to cut ties with the program, and as of last month roughly a dozen employees had resigned over the company's continued involvement.
Eventually, Google Cloud CEO Diane Greene told employees that the company would end its involvement with Project Maven when its contract expires in 2019. According to Wired, Google's work with Project Maven would fall outside the work it plans to continue with the military, because using AI to analyze drone footage "doesn't follow the spirit of the new guidelines."
