Google announced on Tuesday that it is overhauling the principles governing how it uses artificial intelligence and other advanced technology. The company has removed language promising not to pursue "technologies that cause or are likely to cause overall harm," "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people," "technologies that gather or use information for surveillance violating internationally accepted norms," and "technologies whose purpose contravenes widely accepted principles of international law and human rights."
The changes were disclosed in a note appended to the top of the 2018 blog post that unveiled the guidelines. "We've made updates to our AI Principles. Visit AI.google for the latest," the note reads.
In a blog post published Tuesday, a pair of Google executives cited the increasingly widespread use of AI, evolving standards, and geopolitical battles over AI as the "backdrop" to why the company's principles needed to be overhauled.
Google first published the principles in 2018 as it moved to quell internal protests over the company's decision to work on a US military drone program. In response, it declined to renew the government contract and announced a set of principles to guide future uses of its advanced technologies, such as artificial intelligence. Among other provisions, the principles stated that Google would not develop weapons, certain surveillance systems, or technologies that undermine human rights.
But in Tuesday's update, Google did away with those commitments. The new webpage no longer lists a set of banned uses for Google's AI initiatives. Instead, the revised document offers Google more room to pursue potentially sensitive use cases. It states that Google will implement "appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights." Google also says it will work to mitigate "unintended or harmful outcomes."
"We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights," wrote James Manyika, Google senior vice president for research, technology, and society, and Demis Hassabis, CEO of Google DeepMind, the company's esteemed AI research lab. "And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security."
They added that Google will continue to focus on AI projects "that align with our mission, our scientific focus, and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights."
Several Google employees expressed concern about the changes in conversations with WIRED. "It's deeply concerning to see Google drop its commitment to the ethical use of AI technology without input from its employees or the broader public, despite long-standing employee sentiment that the company should not be in the business of war," says Parul Koul, a Google software engineer and president of the Alphabet Workers Union-CWA.
US President Donald Trump's return to office last month has galvanized many companies to revise policies promoting equity and other liberal ideals. Google spokesperson Alex Krasov says the changes had been in the works much longer.
Google lists its new goals as pursuing bold, responsible, and collaborative AI initiatives. Gone are phrases such as "be socially beneficial" and maintain "scientific excellence." Added is a mention of "respecting intellectual property rights."