Posted 13 August 2021, 14:46

Modified 24 December 2022, 22:37

"Do no evil!" Vatican strongly opposed flying killer robots

This initiative can only be realized with broad support from technology companies, corporations, and academics.

Sergey Putilov

Local conflicts in recent years have been marked by increasingly active use of attack drones. Moreover, these drones are increasingly capable of deciding on their own whether to destroy a target, a trend driven by robotization and advances in artificial intelligence. Not everyone, however, agrees that the use of such lethal systems complies with the norms of humanitarian law and morality. The Permanent Mission of the Holy See to the UN in Geneva has issued a strong statement against the use of lethal automated weapons systems, according to Vatican News, the official publication of the Roman Catholic Church.

In Geneva, at a meeting of the Group of Governmental Experts on Lethal Autonomous Weapon Systems (LAWS), a group of Vatican experts presented their view of the issue. From the perspective of the Catholic Church, autonomous weapons systems capable of self-learning or self-programming "necessarily leave room for a certain level of unpredictability." For example, they may fail to account for civilians near a terrorist who could also suffer as a result of a strike. Unlike a human, a robot cannot apply the "principle of discrimination," the Vatican experts believe. The use of a swarm of small "kamikaze" drones in urban areas further increases the risks to the civilian population. "Without direct human control, such systems can err in selecting targets. The concept of a swarm of autonomous weapons further exacerbates this risk, as the massive nature of the swarm can lead to excessive damage and indiscriminate action, which is clearly contrary to international humanitarian law," the Holy See document says.

At the same time, the Vatican experts note, there is a growing understanding among prominent scientists, engineers, and researchers that the use of automated combat systems is contrary to moral norms. "Increasingly, for ethical reasons, employees and entrepreneurs are opposing certain projects related to the military use of artificial intelligence," the statement said.

Indeed, in 2018 Google said it would not allow its artificial intelligence (AI) products to be used in the arms industry. The corporation thereby tried to demonstrate that its unofficial motto, the "Corporation of Good," is not an empty phrase, even though billions of dollars in potential Pentagon funding were at stake. The company said it would not develop "technology that causes or is likely to cause harm," or AI technology for use in weapons. "These are not theoretical concepts," Google CEO Sundar Pichai said in a blog post, referring to the company's recently approved code of ethics, known as its "New Principles."

It all started when Google's management was criticized by its own employees over a contract to supply image-recognition technology to the US Department of Defense under the Project Maven program ("maven" meaning "expert" or "specialist"). It later became known that Google would not renew the contract with the military. Project Maven was initiated by the US Department of Defense; its purpose is to process the enormous volume of visual information coming from unmanned aerial vehicles operating in various parts of the world. The project is believed to have already been used by the Pentagon in the fight against ISIS (a terrorist organization banned in Russia) in Syria. At the same time, Google had previously bid on a multi-billion-dollar contract to move Pentagon data to cloud services. Known as JEDI (Joint Enterprise Defense Infrastructure), this contract could be worth billions of dollars in cloud computing over 10 years. The company's total lost profits, should it end cooperation with the military department, could reach 11 billion dollars. As stated in an open letter signed by 3,000 Google employees and published by The New York Times, the reputational risks for the "kindest" corporation in the world would be immeasurably greater than the loss of military contracts. As the company's pacifist employees note, cooperation with the military would cause irreparable damage to the Google brand and to its ability to compete for talented employees. "By entering into this contract, Google is joining companies such as Palantir, Raytheon, and General Dynamics (US defense companies), contrary to its own 'Do no evil' slogan. The argument that other firms such as Microsoft and Amazon are also involved does not make the situation less risky for Google," the authors of the letter note.

"The Vatican can only appeal to people's conscience. The practical implementation of these ideas is possible only if it is taken up by specific companies, corporations, and individual scientists. Even if we assume that Google's decision to abandon cooperation with the military was dictated not so much by peacemaking as by self-interested considerations (loss of a reputation that costs a great deal, the risk of losing hundreds of highly qualified specialists), such a step is nevertheless worthy of respect. On the other hand, the letter of protest, signed by more than 3,000 of the company's employees outraged by the deal with the Pentagon, speaks to these people's high degree of civic responsibility. In this sense, computer scientists and programmers from other countries have much to learn from their American colleagues," Vitaly Adamenko, head of the electronic library "Beyond Violence", told Novye Izvestia.