Let me introduce you to Marie. Marie is a 28-year-old professional who, on her way home from work, is chatting with a TikTok follower about the French elections. This follower has an uncanny ability to touch on the subjects that mean the most to her. Almost overnight, Marie's social media feeds become increasingly filled with political themes, until, by election day, her vote has already been heavily influenced.
The trouble is that the TikTok follower is not a person but an artificial intelligence-driven bot, exploiting personal yet publicly available data about Marie to manipulate her opinion. In this case, her right to unbiased sources of information, a fundamental tenet of democracy and vital to her voting decision, has been violated.
The scenario described isn't five years down the road; it's already happening. Manipulation of voters, astroturfing, and domestic and foreign interference on a scale we have not seen before are all increasingly likely with AI-driven political campaigns. You don't have to look back more than two weeks: Russia has already used AI technologies to generate fake blogger profiles to spread disinformation about the war in Ukraine.
As the European Union drafts the principles that will regulate artificial intelligence in the bloc, under the name of the Artificial Intelligence Act, or AIA, rules to protect the democratic process from AI-driven manipulation are largely absent. The original draft, as well as the amendments by the Slovenian and French presidencies, has failed to classify uses of AI that jeopardise democratic processes as an unacceptable risk.
The AIA is structured around risk. Uses of AI that carry an unacceptable risk are prohibited, but these are limited to uses that can cause physical or psychological harm: if an AI system can be used to induce suicide, for example, it is deemed an unacceptable risk. Uses that entail high, limited or minimal risk, on the other hand, are not prohibited.
High risk would, for instance, cover AI systems used in law enforcement. An AI system that categorises the likelihood of an individual committing tax evasion would be high risk: allowed, but only under certain requirements. For limited or minimal risks, the measures to mitigate potential problems are less extensive.
The problem is that the risks contemplated in the AIA are assessed solely in terms of harm to the individual and the consumer, not to society as a whole. The draft fails to protect democratic discourse and freedoms. In its current state, the AIA opens a window of opportunity for the malicious use of AI engines to manipulate public opinion and political discourse by altering the content and information a person can access.
If we have learnt anything from the last few years, it is that most digital threats to democracy come from the malicious use of social media by political actors.
For example, the AIA prohibits the use of AI to manipulate human behaviour, but only when it might cause physical or psychological harm. Under this definition, manipulating voting behaviour would be permitted, falling outside the unacceptable uses of AI. In addition, the draft contains only mild measures to prevent tactics like the use of bots or deepfakes.
One suggestion proposed by some experts is to increase transparency even beyond the recommendations of the Digital Services Act, the EU's upcoming legislative package on the fundamental rights of users online, and to make those who develop bots liable and accountable for their actions.
More aggressive labelling of how and when a bot is being used could also help ensure that bots are not used to deceive humans. The same goes for deepfakes and for emotion recognition (the use of AI to detect emotions through facial expressions, body language or even heartbeat).
Another key action would be a stronger obligation to trace and explain the behaviour of an AI system, especially when it is involved in political campaigning. When it comes to political content, high-risk AI systems should not only have human oversight but should be designed with interpretability as a central focus: it should be possible to understand why an AI system has taken a decision.
To complement interpretability, the AIA should introduce more accountability. If political campaigns deploy AI systems, they should be liable for them and conduct the necessary compliance assessments.
To close the circle, potentially affected persons should have better mechanisms to lodge complaints, as well as full access to information on how an AI system makes its decisions.
All these changes would reinforce what remains the best approach: adding democracy-harming uses of AI to the AIA's list of prohibitions. A point of reference here is the set of EU values enshrined in the Lisbon Treaty and in the International Covenant on Civil and Political Rights. EU values have always placed high importance on access to unbiased, non-manipulated information as a precondition for a healthy democratic process.
We have seen the dangers of opaque algorithms feeding already highly polarised societies. With far-sighted and timely action, this time we could get it right. We have an opportunity to leave a long-lasting protection mechanism for democracy in Europe, and potentially beyond. Let's not waste it.
Alberto Fernandez Gibaja is a Senior Programme Officer at International IDEA, a Stockholm-based intergovernmental organisation that aims to strengthen democratic political institutions and processes around the world. He focuses on the crossroads of technology and democracy and is a regular contributor and commentator in diverse media outlets, primarily on topics related to technology, democracy and the policies and regulations of online political campaigns.
For more information on the Artificial Intelligence Act and Data Act, be sure to check out our report from Foundation Forum 2021.