Google pledges not to use AI for weapons, but will complete work on Pentagon’s drone project

Anca Alexe 08/06/2018 | 09:15

Google has published a set of principles that will guide its work in artificial intelligence, following controversy over its involvement in a drone project with the US Defense Department, The Verge reports.

The company pledges never to develop AI for use in weaponry and sets out broad guidelines for its AI work, touching on issues such as bias, privacy and human oversight.

However, Google says it will continue to work with the military “in many other areas,” and its involvement in the Pentagon’s drone programme – Project Maven – which uses AI to analyse surveillance footage, will continue until the end of its contract in 2019.

Recently, thousands of Google employees signed an open letter urging the company to cut ties with Project Maven, while about a dozen people even resigned over Google’s involvement.

Google’s new principles also state that it will not work on AI surveillance projects that would violate “internationally accepted norms,” or on projects that contravene “widely accepted principles of international law and human rights.” The company says its AI work should be socially beneficial, avoid creating or reinforcing unfair bias, remain accountable to humans and subject to human control, uphold high standards of scientific excellence and incorporate privacy safeguards.
