MEPs have adopted a new report on artificial intelligence. It covers many topics, including AI's use in a military setting, and makes ethics and human control a central requirement.
Military use features prominently
On January 19, 20 and 21, 2021, the European Parliament adopted a new report ahead of the Commission’s legislative proposal. The aim was for MEPs to further specify the framework for research on, and application of, artificial intelligence in several areas, including health, education and especially defense. MEP Gilles Lebreton, rapporteur for the text, explained that AI should remain a tool to support decision-making or action; it should in no way replace humans or relieve them of their responsibilities.
Although the report covers several fields, the use of AI in a military setting occupies an important place. For the MEPs who backed the document, the European Union’s activities in defense-related AI must respect human dignity and human rights. In other words, humans must be able to exercise “significant” control over these systems.
One of the issues addressed in the report concerns lethal autonomous weapon systems (LAWS). Also known as the controversial “killer robots”, these systems are weapons capable of identifying targets on their own. In this regard, the report calls on the EU to play a leading role on their development and in promoting ethics, collaborating with the UN and the international community.
Avoiding social scoring and deepfakes
The report also mentions AI in public sectors such as health and justice. However, the systems in question must in no way replace human contact. The point is also to avoid certain discriminatory practices, and to ensure that European citizens are always informed when they are the subject of a decision involving AI.
MEPs in favor of the report believe that attention must be paid to AI applications that drift into mass surveillance, whether in the civilian or the military sphere. One example is highly intrusive social scoring applications whose objective, as is the case in China, is not only to monitor citizens but also to score them.
Finally, the MEPs mentioned deepfakes (“hypertrucage” in French), an AI-based image synthesis technique. This technique is a source of concern because of its ability to produce fake footage, for example of news presenters, in order to influence elections or, worse, destabilize countries. One of the objectives will be to speed up research to counter this clearly undesirable phenomenon.