Between ethics and laws, who can govern artificial intelligence systems?

We have all begun to realize that the rapid development of AI is really going to change the world we live in. AI is no longer just a branch of computer science: it has escaped from research labs with the development of "AI systems", "software that, for human-defined purposes, generates content, predictions, recommendations or decisions influencing the environments with which it interacts" (European Union definition).

Bernard Fallery, University of Montpellier

AI applications are everywhere, introducing new risks, right down to the manipulation of behavior.
Pixabay, CC BY

The governance of these AI systems, with all its nuances of ethics, control, soft regulation and binding legislation, has become a crucial issue, as their development is today in the hands of a few digital empires (GAFA, NATU, BATX...), which have become the masters of real societal choices about automation and the "rationalization" of the world.

The complex interweaving of AI, ethics and the law is being woven out of a balance of power, and of collusion, between states and tech giants. But citizen engagement is becoming necessary in order to assert imperatives other than a technological solutionism in which "everything that can be connected will be connected and rationalized".

AN ETHICS OF AI? THE GREAT PRINCIPLES AT AN IMPASSE

Certainly, the three great ethical traditions help us to understand how a genuine bioethics has been built up since Hippocrates: the personal virtue of "critical prudence" (virtue ethics), the rationality of rules that could be made universal (deontology), and the evaluation of the consequences of our actions with regard to the general happiness (consequentialism).


For AI systems, these major principles have also been the basis of hundreds of ethics charters and committees: the Holberton-Turing Oath, the Montreal Declaration, the Toronto Declaration, the Unesco program... and even Facebook's! But AI ethics charters have never yet led to a sanction mechanism, or even the slightest reprimand.

On the one hand, the race for digital innovation is indispensable to capitalism in overcoming the contradictions of profit accumulation, and indispensable to states in developing algorithmic governmentality and a previously unhoped-for degree of social control.

But on the other hand, AI systems are always both remedy and poison (a pharmakon in Bernard Stiegler's sense), and so they continually create singular ethical situations that are not a matter of principles but require "complex thinking", a dialogic in Edgar Morin's sense, as shown by the analysis of the ethical conflicts around the Health Data Hub.

AN AI LAW? BETWEEN SOFT REGULATION AND BINDING LEGISLATION

Even if grand ethical principles will never be operational in themselves, it is from their critical discussion that an AI law can emerge. Here, the law runs up against particular obstacles: the scientific instability of any definition of AI, the extraterritorial character of digital technology, and the speed with which platforms roll out new services.

https://youtube.com/watch?v=kyWl69MR6io

In the development of AI law, two parallel movements can be seen. On the one hand, soft regulation through simple guidelines or recommendations, progressively integrating standards into law (from the technical to the legal, as with cybersecurity certification). On the other, binding legislation (from positive law down to the technical, as with the GDPR on personal data).

POWER RELATIONS... AND COLLUSION

Personal data is often described as a coveted new "black gold", since AI systems crucially need massive amounts of data to fuel statistical learning.

In 2018, the GDPR became a genuine European regulation of this data, its adoption propelled by two major scandals: the NSA's PRISM surveillance program and the misappropriation of Facebook data by Cambridge Analytica. The GDPR even enabled the activist lawyer Max Schrems, in 2020, to have the legal basis for transfers of personal data to the United States invalidated by the Court of Justice of the European Union. But collusion between states and digital giants remains rife: Joe Biden and Ursula von der Leyen are constantly trying to reorganize these contested data transfers through a new framework.

The GAFA, NATU and BATX monopolies are today driving the development of AI systems: they control possible futures through "predictive machines" and the manipulation of attention, they impose the complementarity of their services and, soon, the integration of their systems into the Internet of Things. States are reacting to this concentration.

In the United States, a trial that could force Facebook to sell Instagram and WhatsApp is due to open in 2023, and an amendment to antitrust legislation is to be voted on.

In Europe, from 2024 onwards, the Digital Markets Act (DMA) will regulate acquisitions and prohibit the large "gatekeepers" from self-preferencing or bundling their services. As for the regulation on digital services, the Digital Services Act (DSA), it will oblige "very large platforms" to be transparent about their algorithms, to deal swiftly with illegal content, and will ban advertising targeted on sensitive characteristics.

But the collusion remains strong, as each side also protects "its" giants by brandishing the Chinese threat. Thus, under threat from the Trump administration, the French government suspended collection of its "GAFA tax", even though parliament had passed it in 2019, and tax negotiations continue within the framework of the OECD.

AN UNPRECEDENTED EUROPEAN REGULATION ON THE SPECIFIC RISKS OF AI SYSTEMS

Spectacular advances in pattern recognition (for images as well as text, voice and location) are creating predictive systems that present growing risks to health, safety and fundamental rights: manipulation, discrimination, social control, autonomous weapons and more. After the Chinese regulation on the transparency of recommendation algorithms in March 2022, the adoption of the AI Act, the European regulation on artificial intelligence, will mark another milestone in 2023.

European risk classification of AI systems.
Yves Meneceur, 2021, Provided by the author

This original piece of legislation is based on the degree of risk posed by AI systems, in a pyramid approach similar to that used for nuclear risks: unacceptable risk, high risk, limited risk and minimal risk. Each level of risk is associated with prohibitions, obligations or requirements, which are specified in annexes still being negotiated between the Parliament and the Commission. Compliance and sanctions will be overseen by the competent national authorities and a European Artificial Intelligence Board.

CITIZEN ENGAGEMENT FOR AN AI LAW

To those who see citizen involvement in the construction of an AI law as utopian, we can first point to the strategy of a movement like Amnesty International: advance international law (treaties, conventions, regulations, human rights tribunals), then use it in concrete situations such as the Pegasus spyware case or the campaign to ban autonomous weapons.

Another successful example is the noyb (None of Your Business) movement: advancing European law (the GDPR, the Court of Justice of the European Union...) by filing hundreds of complaints every year against privacy-invasive practices by digital companies.

All these citizen collectives, working to build and to use an AI law, take very diverse forms and approaches: European consumer associations filing a joint complaint against Google's account management, 5G-antenna saboteurs refusing the total digitization of the world, Toronto residents defeating Google's major smart-city project, free-software activist doctors seeking to protect health data...

This assertion of different ethical imperatives, at once opposing and complementary, corresponds well to the complex ethical thinking proposed by Edgar Morin, which accepts resistance and disruption as inherent to change.

Bernard Fallery, Professor Emeritus in Information Systems, University of Montpellier

This article is republished from The Conversation under a Creative Commons license. Read the original article.