Google has released a set of principles to guide its work in artificial intelligence, making good on a promise to do so last month following a months-long controversy over its involvement in a Department of Defense drone project. The document, titled "AI at Google: our principles" and published today on Google's primary public blog, sets out the objectives the company is pursuing with AI, as well as the applications it refuses to participate in. It's authored by Google CEO Sundar Pichai.
Notably, Pichai says his company will never develop AI technologies that "cause or are likely to cause overall harm"; that involve weapons; that are used for surveillance violating "internationally accepted norms"; or "whose purpose contravenes widely accepted principles of international law and human rights." The company's main objectives for its AI work are that it be "socially beneficial"; "avoid creating or reinforcing unfair bias"; be built and tested for safety; be accountable to human beings and subject to human control; incorporate privacy protections; "uphold high standards of scientific excellence"; and be used only toward purposes that align with those previous six principles.
"At Google, we use AI to make products more useful, from email that's spam-free and easier to compose, to a digital assistant you can speak to naturally, to photos that pop the fun stuff out for you to enjoy," Pichai writes. "We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right."
However, Pichai does not rule out working with the military in the future. "We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas," he writes. "These include cybersecurity, training, military recruitment, veterans' healthcare, and search and rescue."
Gizmodo reported last week that Google plans to end its involvement with Project Maven, a government initiative that uses Google's open-source machine learning libraries to parse drone footage. The company says it was not involved in operating the drones themselves, but any association with drone warfare on behalf of the US government drew fierce backlash both inside and outside Google. Thousands of employees signed an open letter urging the company to cut ties with the program, and as of last month around a dozen employees had resigned over Google's continued involvement.
Eventually, Google Cloud CEO Diane Greene told employees that the company would end its involvement with Project Maven when its contract expires in 2019. According to Wired, Google's work on Project Maven would fall outside the work it plans to continue with the military, because using AI to analyze drone footage "doesn't follow the spirit of the new guidelines."