
Digital Rights of the local and global society

by Fabio Gnassi

The usage patterns and integration processes of the technologies we use on a daily basis seem to be shaped solely by the computational and aesthetic interfaces that mediate the interaction between user and instrument. Yet it is the normative, ethical, and legislative frameworks that should be at the center of the processes of educating, training, and shaping the development of the technologies we use.

In this interview Fabio Gnassi, on behalf of the editorial staff of The Bunker, speaks with jurist, entrepreneur and activist Luna Bianchi about the AI Act and the role of regulation in the interaction between society and technology.

The timeliness with which institutions move to regulate the use of new technologies is indicative of their ability to interact with and shape the most significant aspects of society. What is your outlook on the confrontation between technology and legislation?

Both technology and law are social constructs, products, that is, of a wholly human process, of a social negotiation that is often capable of dismantling and redirecting the interests and needs that had originally created them. This is true, above all, because it is society that determines the reasons for the development of certain technologies and certain legislative frameworks, by deciding which social groups to protect, which demands and needs to respond to, and which interests to allow itself to be guided by. But it is also true because they both model the societies to which they are applied: they influence and require specific behaviors on the part of the individuals and groups that are subject to them, contributing to the definition of the rules and, ultimately, of the political structure.

Lawrence Lessig wrote, as far back as 1999, that “code is law”: algorithms now contribute to regulating society exactly as laws do, determining conduct and defining the modes whereby one is (that is, exists and is recognized in one's identity) and lives in a given space. Unlike the law, however, an algorithmic code has a massive and immediate impact on the community, regardless of whether it is known or visible, and this creates a short circuit with respect to our ability to participate in the social negotiation that enables technology, like law, to evolve with the contexts to which it is applied.

An effective integration of artificial intelligence into the social environment requires not only an adequate legal framework, but an equally appropriate public information plan as well. What principles, in your opinion, should form the foundations of an educational project in this sector?

First of all, precisely because technology is not neutral but inherently political, it is a social construct, and therefore it is also up to us to control it. What this means is that it is the choices made by the human beings who commission, develop, implement and use a technology that determine where, when and how it will affect the shape of society. We have to be aware of this; it is fundamental, above all considering the consequences of those choices, by which I mean the impacts that technology will have on us, on our individual and collective dimension. Those impacts may be massive indeed, and it is no coincidence that the AI Act (the European regulation on artificial intelligence) places particular emphasis on this point: AI carries specific impacts and risks that are new and potentially dangerous, and that must be governed and managed appropriately.

The second point, then, revolves around the need to be informed and aware of the issues arising from the adoption of AI. I don’t mean that we have to be afraid of it, because it is a technology that is already part of our lives (even though we are often unaware of it), but neither should we welcome it with facile enthusiasm, treating it like any previous innovation. AI has an ability to influence us in ways of which we are often unconscious, and the introduction of AI into a context or a process changes the status of that context and that process. AI changes not only the way we do things, but also how we think and plan, by taking over and hacking many of the concepts, structures and movements familiar to us.

For this reason, it is very important for us to make a habit of looking at what we already know with fresh eyes, to see which additional structures require adjustment to respond to new socio-technical contexts (for example, certain legislative provisions). We should also take steps to apply a strengthened ethical system when we interface with AI, to bring the decisional process of the machine closer to the human standard, so that it can combine its complexity with human values and our experience of the world. I think these are the mental principles we need to concentrate on now, so as to be ready for a new tomorrow.

Luna Bianchi and Diletta Huyskes, founders of Immanence, a benefit corporation that assesses the ethical impacts and risks of AI and digital technologies
AI is a planetary phenomenon with cultural, social and political implications that go beyond those of the individual countries. The European Union was among the first political organizations to develop its own AI governance. What value do you give those regulations and what role do you think they could have in a broader international agreement?

What we are seeing in this early stage of the AI Act is the so-called “Brussels Effect”: the ability, that is, of Europe to influence international legislation in a globalized world. The legislative proposals circulating in many countries (the U.S., the U.K., Brazil, India, China, Indonesia, New Zealand, Peru, Singapore, Taiwan) take account of the impacts and risks addressed by the AI Act. The big issue, however, connected to the dynamics of power between private entities and institutions, will be: who controls AI? With the GDPR we have seen its substantial unenforceability against the big tech companies (and others), so we now have to wonder whether, with AI, we will be able to ensure respect for the AI Act in its fundamental reasoning, and whether that means altering the very approach we take toward innovation.

We expect our laws to be both clear and courageous (which is not entirely the case with the AI Act, partly because of the immense lobbying pressure brought to bear – naturally enough – by big tech, which managed to make the European Parliament back down on a number of key issues, like biometric recognition), but if there is no real enforcement, they serve no purpose other than to create confusion and frustration among users. Such a strong concentration of power and market has grown up around AI that it is urgent, at this point, to review the antitrust laws: up to now, we have not established effective institutions able to crack down on this mono/oligopolistic system. We need to understand how to reconsider governance and establish public decisional processes that act as a whole, over the long term, understanding how economic and political power and influence can be balanced against the interests and responsibilities of the public institutions.

The European countries are characterized by a particular paradox that combines the difficulty of assimilating what we perceive as new with a high propensity toward constructing rigid legislative frameworks which, rather than incentivizing development, limit its application. What are your thoughts in this regard?

We are living in a historic period of major social and technical revolutions, and we will have to try to create tools which, rather than setting up a rigid, narrowly coded structure, foster social negotiation for the continuous adaptation of technology and law to changing and specific socio-economic, political and cultural conditions. I think the secret lies in our ability to find flexible solutions that make it possible to align the goals and values of these artefacts with real emerging needs, and to balance human interests and rights against actual contingencies. What is justice in this context? Who should be protected? What type of innovation is necessary to improve the conditions of our society as a whole, starting from those who are most disadvantaged?

That means that technology too, like law, should – as the AI Act requires – undergo specific impact evaluations before being released on the market, according to an adequate plan of strategies and risk management, applying appropriate ethics codes defined at the outset for the specific technological advance in question. If, still today, new technologies like generative AI are put in our hands without a sufficient ex-ante analysis of the possible consequences, we can be certain that this is a very precise choice made for a specific political purpose, identifiable within the framework of the struggle for power we are witnessing, and that it is definitely not aimed at any real safeguard or wellbeing of humanity.