8M - AI does not silence women's voices



Algorithmic Bias – Automated Machismo

We have acquired the conviction (or the prejudice) that technology is aseptic, objective, and infallible. It is worth remembering that this is not the case. AI, for example, is not neutral. It feeds on historical data and on the data selected and loaded into it. If historically women have had less access to credit or leadership positions, the algorithm “learns” this and concludes that being a woman is a risk factor, a sign of lower aptitude, or of lesser relevance. It is a silent, hard-to-detect form of discrimination, and therefore difficult to report, because it hides beneath a false layer of mathematical objectivity.
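To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python (synthetic data and invented numbers, not any real credit-scoring system): a standard classifier trained on historically biased approval decisions ends up assigning a negative weight to the “woman” variable, even though both groups are equally creditworthy in the data.

```python
# Illustrative only: synthetic data, invented numbers, no real scoring system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
woman = rng.integers(0, 2, n)        # 0 = man, 1 = woman
income = rng.normal(30, 8, n)        # same income distribution for both groups

# Historical decisions: equally creditworthy applicants, but women were
# approved less often. The bias lives in the labels, not in merit.
approved = (income > 28) & ((woman == 0) | (rng.random(n) < 0.6))

model = LogisticRegression().fit(np.column_stack([income, woman]), approved)
print("learned weight on 'woman':", round(model.coef_[0][1], 2))  # negative
```

Nothing in the model is explicitly sexist; it simply reproduces the pattern present in the historical decisions it was given.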

It must also be noted that, due to the human biases of the developers themselves —mostly men— the original training data and the AI algorithm lean toward their preferences and references, producing distorted and potentially harmful results.

We can already find numerous examples in fields such as healthcare, where predictive algorithms undervalue the physiological and medical characteristics of women or minority groups, or in automated CV-screening systems that systematically discard racialised profiles, women, or older candidates.

‘Deepfakes’ and the control of the digital body

Reflecting the concern this issue has generated in our societies, 2024 saw the approval of the European regulation establishing harmonised rules on artificial intelligence (Regulation 2024/1689 of the European Parliament and of the Council of 13 June 2024), which is directly applicable in all Member States. It is the first legislation of its kind at the international level. The regulation promotes the ethical and responsible use of artificial intelligence, classifies different types of AI and their risks, and requires clear labelling of artificially generated content.

In parallel, that same year the European Parliament and the Council approved Directive 2024/1385 of 14 May on combating violence against women and domestic violence. It requires transposition by Member States before it becomes applicable, but it is the first European-level law against violence towards women and the first attempt to tackle, at a European scale, the violence they face online and to prosecute the sharing of non-consensual intimate images (including those generated by AI).

More risk, more awareness

These pieces of legislation have been a major step forward, but the explosion of generative AI over the past two years and the speed of its technological development have multiplied the creation of non-consensual content. Today, the generation of fake images through AI is not only an attack on privacy but has also become a tool of social discipline, with women as the main victims (99% of pornographic deepfake content features images of women). Such content is used to silence women with a public voice of their own, whether in politics, journalism, or activism.

All of this has already mobilised many women and the feminist movement, which has become aware that AI may become a new space where long-standing sexist attacks are reproduced and amplified through new tools.

Ideas and proposals such as feminist technological sovereignty, consent-based technology, and digital self-defence are beginning to gain ground both in the digital sphere and in feminist activism.



Gemma Galdón, algorithm auditor

How can we audit the algorithms used by public administrations to ensure they are not denying benefits or services to women based on historical biases?

It depends on who “we” are. In public administration it is easier because there is access to the data, although third-party providers are sometimes used and even they do not have access to all the data. But in general, those who develop or use the algorithm can access these systems and check whether they are working properly or not. I believe that, with technical knowledge and auditing methodologies, it is perfectly feasible.

If what you are asking is how to audit from the outside, we have been working for many years with affected communities precisely to do what we call reverse engineering: working with the people impacted and, based on the data they have, demographic context data and service-provision data, we can formulate plausible hypotheses about how the system works and even reproduce it externally. Therefore, we can reach fairly robust conclusions about how the system is functioning, whether it works well or poorly, and whether benefits are being granted or denied based on protected attributes such as gender, sex, race, postal code or any other variable we want to introduce.

It all depends somewhat on the data, but yes: algorithms are auditable.
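As a purely illustrative companion to this answer, the sketch below shows the kind of outcome comparison such an outside-in, reverse-engineering audit might start from, using data gathered with the people affected. The column names, sample records and 4/5 threshold are assumptions for the example, not a description of any actual auditing methodology.

```python
# Illustrative only: invented records standing in for data collected
# with an affected community about a benefit that was granted or denied.
import pandas as pd

cases = pd.DataFrame({
    "gender":  ["woman", "man", "woman", "man", "woman", "man", "woman", "man"],
    "granted": [0, 1, 0, 1, 1, 1, 0, 1],
})

# Grant rate per group and the ratio between the worst- and best-treated group.
rates = cases.groupby("gender")["granted"].mean()
ratio = rates.min() / rates.max()

print(rates)
print(f"disparate-impact ratio: {ratio:.2f}")  # values well below ~0.8 warrant scrutiny
```

The same comparison can be repeated for race, postal code, age or any other protected attribute, and crossed with demographic context data to build hypotheses about how the system decides.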

Given the speed of generative AI, is ex-post regulation (after harm occurs) sufficient, or do we need ethical blocking mechanisms before software is released?

This is a major issue: ex-ante versus ex-post control. The GDPR, the data-protection regulation, opted for ex-ante control and turned compliance into a highly bureaucratic process led mostly by lawyers rather than technologists. It has allowed the creation of a kind of legal-compliance theatre in which many documents and impact assessments are produced, and roles are created within organisations to ensure algorithms work properly (but then they don't), and therefore we have no real control over what happens when these systems actually reach the population.

There is now a shift in this reasoning, and the trend is to create ex-post control mechanisms. In fact, what we need is control throughout the entire process. We should think of AI like a new medicine: control when it is being developed, when it is being tested before reaching the market through clinical trials, and control once it is on the market through performance monitoring. The same applies to AI.

We focus heavily on impact auditing because many times we do not know what needs to be done during development until we see what happens when the system reaches society. That is why collaboration among all actors is essential, understanding that those who work before a system reaches the population will not have all the relevant data: there will be risks that are not addressed, and risks that are addressed but turn out not to be real. It is crucial to do this preliminary impact-assessment work and then the subsequent monitoring to ensure that we mitigate negative impacts and enhance positive ones.

Often the impacts are not immediate at the client level, which is what companies have incentives to verify. In the long term, there may be impacts on democratic erosion or the exclusion of women… but companies have no incentive to measure these consequences. They measure how the system performs and avoid major, obvious errors —such as an algorithm refusing credit to women— things that clearly affect their business model. But for more structural and contextual impacts, there are no incentives to monitor them. The challenge is to create auditing mechanisms that require companies to take these effects into account and to create mechanisms within public administration to do this work, which is currently not being done.

What legal responsibility should companies have when they create tools that facilitate the production of pornographic or defamatory deepfakes?

I believe full responsibility should lie with the companies that create them. If they develop these tools, they must be responsible for how they are used and proactively identify harmful uses. They must prevent people who misuse them from continuing to access these systems.

Not everything can be prevented because that would require an unacceptable system of surveillance and population control, but much can be done. When we audit, we see many AI systems that constantly detect misuse and redirect or stop it without invading privacy. Sometimes we talk about very abstract things and lose sight of what can already be done: companies can identify that a user or an IP address is generating harmful or inappropriate content.

Do we have proactive mechanisms to control how digital tools are used? We must remember that this is a complex area because it can affect privacy and freedom of expression. But at the same time, there are extremely serious uses, such as harming another person, harassment, or fraud. All of these are illegal uses, and those who develop the tools must take responsibility for doing everything within their power to prevent them.

Teresa Serra, pioneer of computing at Barcelona City Council

You experienced the early days of computing in public administration: at what point did technology stop being seen as an administrative tool and become a space of power where women have lost ground?

Public administration has had women in technological leadership roles since its beginnings —particularly the Barcelona City Council.

It is true that, in general, the most technical positions tend to be preferred by men, but this bias is smaller in public environments. Meanwhile, women manage user and client relations very well and are more open to listening and incorporating their needs.

The first goal is to increase the number of women in STEM studies. Why mathematics, biotechnology… but not engineering and computer science?

Universities and technology associations are doing important work to provide visibility and training to incorporate more women into the sector, but progress is slow.

What is the most important lesson that new generations of computer scientists should learn from the mistakes (and successes) of data systems from the 1980s and 1990s?

For me, when we talk about data —regardless of whether they are handled by men or women— the most important thing is to anticipate and guarantee their governance, and everything that implies, as well as their quality. Especially in public environments, where data is highly sensitive and administrations are the responsible stewards.

How has the perception of women’s data security changed from the first digital files to today’s global cloud?

AI is destined to impact every aspect of our lives, and it is essential that women’s perspectives and voices are integrated from the very beginning of its development.

Artificial intelligence offers multiple benefits for women by automating tasks, personalising healthcare and promoting equality, helping optimise time and strengthen skills across different fields.

Some examples: women’s health and diagnostics, labour empowerment and entrepreneurship, efficiency and time management, bias reduction and gender equity, education and accessibility, etc.

Simona Levi, digital rights activist and founder of XNET

AI is being used, among other purposes, to manipulate public opinion. What kind of “digital self-defence protocol” could women in the public sphere adopt to avoid being pushed out by bot campaigns and deepfakes?

I don’t think there is any difference between digital space and public space. Just as machismo wants women at home and silent, it also doesn’t want active women in digital spaces. So the recipe and strategies are the same as always: resist and move forward.

How can we defend ourselves from digital sexist violence without falling into censorship or excessive control of the internet by governments or ad hoc bodies?

We must be brave and avoid falling into victimhood or into reactionary discourses that use us to justify rights restrictions. If we receive verbal aggression in the street, we don’t ask for the streets to be closed or militarised —or at least we shouldn’t. The same applies to the internet: the existence of machismo does not mean we should demand restrictions on digital rights.

Is the fight against disinformation also a feminist struggle in today’s context?

The feminist struggle has always been a struggle against propaganda, against narratives about who we are and who we should be. It is a struggle against hegemonic information. Now we must also confront a hegemonic victimist feminism that distorts the luminous struggle of women.

Gina Rigol, journalist and communicator, @Ineditdiari

In a context where AI not only automates content but also influences which narratives are amplified or silenced, how is AI reshaping narrative power around women and feminisms in the media ecosystem, and what democratic risks does this transformation entail?

It’s important not to forget that behind artificial intelligence there are people. In fact, the most critical voices say it is neither intelligence nor artificial. Therefore, whenever society has a bias —gender bias in this case— it will be reproduced in AI systems. Even more so if the people behind the configuration and ownership of these algorithms —which is even more important— are white cis heterosexual men, as we know they often are. Their perspective will inevitably be reflected in AI, with all the consequences this has for narratives about women.

The obvious risk is that when human bias becomes entrenched in machines, we add the problem that society tends to see machines as perfect. The term “artificial intelligence” carries weight, and the perception will be that machines cannot be wrong. We can always doubt a person; doubting a machine seems harder. That is the risk: that these biases become entrenched, reproduced and static.

The opacity of algorithmic systems often hinders public scrutiny. What role should journalism (especially gender-aware journalism) play in holding AI accountable, and what tools or alliances are needed to do so rigorously and effectively?

On the one hand, in daily news production, mechanisms for detecting false information should be incorporated. This is already beginning to happen, not only to debunk narratives built from AI-generated products but also to detect AI-generated content itself, since at a glance not everyone can distinguish it.

On the other hand, in more in-depth work, journalism should have the time and space to scrutinise who is giving instructions to these algorithms. As I said, algorithms are made by people, and people should be accessible in one way or another.

There is a very exciting initiative worth mentioning: AlgoRace, a collective dedicated to investigating and publishing on AI systems that include racist biases. This and similar initiatives can access information about the instructions algorithms receive today for everyday tasks. The police, for example, already train their own AI.

Generative AI is producing images, texts and representations that may reproduce sexist or hypersexualised imaginaries. How can feminism intervene in dataset definition, moderation criteria and ethical standards so that automated symbolic production does not reinforce stereotypes but challenges them?

The answer is the same as to “how can feminisms intervene in society?”. AI simply reproduces and entrenches biases that already exist.

The first and most important tool is regulation. How these algorithms are defined should be public information and should not be controlled by private companies that, without public oversight, have the power to decide how these machines “think” —in heavy quotation marks— and what narratives they reproduce.

First, regulation to the greatest extent possible. Also, the development of public artificial intelligence would be a good idea. And, of course, education: not only feminist education to counter stereotypes about women, but also digital literacy to understand the internet.

Young people (young women and not-so-young women) must be very aware of who is behind the information provided online or by AI. How it is made, with what narratives, with what tools. It's not just about feminist self-defence but about understanding the ownership and development of the internet and AI itself. With luck, this education may lead to more responsible use of these tools. If we even consider that they should continue to be used… my opinion would probably be no.

