
Uses and abuses of AI in search of the point of discrimination

By editorial

Jun 4, 2021

Emotional artificial intelligence, or emotional AI, infers what we are feeling from our expressions, our movements and the levels of agitation we show in any physical way, however minimal or refined, and reacts accordingly. That can be useful, in games, in cars, in voice-assistant bots. But it can also be worrying: the motives and the technologies already exist to exercise forms of police control, in classrooms or at borders, in a space that lies between the lawful, the illegal and, above all, the unregulated. The possible forms of discrimination are many and subtle, starting with the identification of individuals.

The long-awaited proposal for a European regulation on artificial intelligence was presented a few weeks ago, in an attempt to intervene while preserving fundamental rights and European values without, at the same time, limiting technological development. The Regulation classifies uses by level of risk and reserves a series of prohibitions for the highest-risk applications: the sale and use of AI systems that deploy subliminal techniques to substantially distort a person's behaviour; the sale and use of AI systems that exploit vulnerabilities linked to the age or disability of a specific group of people; the use of social-scoring systems (the evaluation of citizens) by public authorities; and the real-time use of remote biometric identification systems, such as facial recognition, in publicly accessible spaces for law-enforcement purposes. All of this comes with several exceptions and, compared with the draft leaked earlier, the text drops the ban on the use of AI systems for indiscriminate, generalised mass surveillance, even though such uses are already limited by other rules.

That omission was criticised by the Reclaim Your Face campaign, which launched a specific petition on the issue: "We ask the European Commission to strictly regulate the use of biometric technologies in order to avoid undue interference with fundamental rights. In particular, we call on the Commission to prohibit, in law and in practice, indiscriminate or arbitrarily targeted uses of biometrics which can lead to unlawful mass surveillance. These intrusive systems must not be developed, deployed (even on an experimental basis) or used by public or private entities insofar as they can lead to unnecessary or disproportionate interference with people's fundamental rights."

A recent case concerns proctoring software: the tools, increasingly widely adopted, that monitor the physical or digital environment of a person taking an online exam. Here too the pandemic has accelerated the uptake of these technologies, especially at university level: the issue was raised at the University of Turin, but other universities, such as Bicocca in Milan and Roma Tre, have already adopted similar tools. Their functions range from locking the browser screen, preventing the candidate from opening other pages or launching other applications, up to facial recognition and machine-learning models that, drawing on emotional AI and the evaluation of a person's movements and behaviour, try to work out, for example, whether he or she is copying. How far such systems can go, despite the universities' assurances, while still respecting people's privacy remains to be defined.
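To give a concrete sense of the building blocks such tools rest on, here is a minimal, purely illustrative sketch in Python of a face-presence check, one of the simplest signals a proctoring system might combine with many others. It assumes only the open-source OpenCV library and its bundled Haar cascade; it is not the method of any of the products or universities mentioned above, and real systems layer gaze tracking, behaviour models and browser lockdown on top of checks like this one.

```python
# Illustrative sketch only: a coarse face-presence check of the kind
# proctoring tools build on, using OpenCV's bundled Haar cascade.
# Thresholds and messages are invented for the example, not taken
# from any actual proctoring product.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def check_frame(frame):
    """Return a coarse flag for a single webcam frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return "no face detected: candidate may have left the desk"
    if len(faces) > 1:
        return "multiple faces detected: possible second person in the room"
    return "ok: one face in frame"

cap = cv2.VideoCapture(0)  # default webcam
ok, frame = cap.read()
if ok:
    print(check_frame(frame))
cap.release()
```

Even this trivial check illustrates the privacy question the article raises: the camera must run continuously on the candidate's own machine, and every refinement, from gaze estimation to emotion inference, only widens what is being observed.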
The most extreme case recently reported concerns the Chinese autonomous region of Xinjiang, home to 12 million Uyghurs, an ethnic minority most of whom are Muslim. Members of this minority are interned in detention camps and "re-educated"; the German newspaper Die Welt has spoken of a real genocide hidden from the eyes of the world. The repression is also carried out through experiments with a camera system whose artificial-intelligence software starts from facial recognition and aims to detect emotional reactions. The case was brought to light by numerous testimonies reported by the BBC. In essence, it would amount to a sort of advanced lie detector: cameras detect and analyse even the slightest changes in facial expression, producing pie charts that highlight mental states such as anxiety.

Emotion detection is also spreading in everyday contexts, in and out of the car: in the research The Right to Privacy in the Age of Emotional AI, Andrew McStay reports that in the United Kingdom 50% of citizens disagree with the detection of emotions in any form, while 33% accept it as long as they cannot be personally identified. In short, the path towards balance and awareness has only just begun.