Google will seek government contracts in areas such as cybersecurity, military recruitment, and search and rescue
Technology giant Google announced on July 12, 2018, that it will ban the use of its artificial intelligence software in weapons and in unreasonable surveillance efforts. The restriction could help Google management defuse months of protest by thousands of employees against the company's work with the US military to identify objects in drone video. "We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas," said Chief Executive Sundar Pichai in a blog post.
Google described the principles as a template that any software developer could put into immediate use. Though Microsoft Corp and others released AI guidelines earlier, the AI community has followed Google's efforts closely because of the internal pushback against the drone deal. The company recommended that developers avoid launching AI programs likely to cause significant damage if attacked by hackers, as existing security mechanisms are unreliable. It also acknowledged that enforcement would be difficult, since it cannot track every use of its tools, some of which can be downloaded free of charge and used privately.
In the recent past, breakthroughs in the cost and performance of advanced computers have carried AI from research labs into industries such as defense and health. Technology leaders are selling AI tools that enable computers to review large data sets, making predictions and identifying patterns at far greater speed than humans can. However, the potential of AI systems to pinpoint drone strikes better than military specialists, or to identify dissidents from mass collections of online communications, has raised concerns among academic ethicists and Google employees.