Just two days before Slovakia's elections, an audio recording was posted to Facebook. On it were two voices: allegedly, Michal Šimečka, who leads the liberal Progressive Slovakia party, and Monika Tódová from the daily newspaper Denník N. They appeared to be discussing how to rig the election, partly by buying votes from the country's marginalized Roma minority.
Šimečka and Denník N immediately denounced the audio as fake. The fact-checking department of the news agency AFP said the audio showed signs of having been manipulated using AI. But the recording was posted during a 48-hour moratorium ahead of the polls opening, during which media outlets and politicians are supposed to stay silent. That meant, under Slovakia's election rules, the post was difficult to widely debunk. And because the post was audio, it exploited a loophole in Meta's manipulated-media policy, which dictates that only faked videos, in which a person has been edited to say words they never said, go against its rules.
The election was a tight race between two frontrunners with opposing visions for Slovakia. On Sunday it was announced that the pro-NATO party, Progressive Slovakia, had lost to SMER, which campaigned to withdraw military support for its neighbor, Ukraine.
Before the vote, the EU's digital chief, Věra Jourová, said Slovakia's election would be a test case of how vulnerable European elections are to the "multimillion-euro weapon of mass manipulation" used by Moscow to meddle in elections. Now, in its aftermath, countries around the world will be poring over what happened in Slovakia for clues about the challenges they too could face. Nearby Poland, which a recent EU study suggested was particularly at risk of being targeted by disinformation, goes to the polls in two weeks' time. Next year, the UK, India, the EU, and the US are set to hold elections. The fact-checkers trying to hold the line against disinformation on social media in Slovakia say their experience shows AI is already advanced enough to disrupt elections, while they lack the tools to fight back.
"We're not as ready for it as we should be," says Veronika Hincová Frankovská, project manager at the fact-checking organization Demagog.
During the elections, Hincová Frankovská's team worked long hours, dividing their time between fact-checking claims made during TV debates and monitoring social media platforms. Demagog is a fact-checking partner for Meta, which means it works with the social media company to write fact-check labels for suspected disinformation spreading on platforms like Facebook.
AI has added a new, challenging dimension to their work. Three days before the election, Meta notified the Demagog team that an audio recording of Šimečka proposing to double the price of beer if he won was gaining traction. Šimečka called the recording a fake. "But of course the fact-checking can't be based just on what politicians say," says Hincová Frankovská.
Proving the audio had been manipulated was hard. Hincová Frankovská had heard about AI-generated posts, but her team had never actually had to fact-check one. They traced where the recording came from, discovering that it had first been posted on an anonymous Instagram account. Then they started calling experts, asking whether they considered the recording likely to be fake or manipulated. Finally, they tried out an AI speech classifier made by an American company called Eleven Labs.
After a few hours, they were ready to confirm that they believed the recording had been altered. Their label, which is still visible on Slovak-language Facebook when visitors come across the post, says: "Independent fact-checkers say that the photo or image has been edited in a way that could mislead people." Facebook users can then choose whether they want to see the video anyway.
Both the beer and vote-rigging audios remain visible on Facebook, with the fact-check label. "When content is fact-checked, we label it and down-rank it in feed, so fewer people see it, as has happened with both of these examples," says Ben Walter, a spokesperson for Meta. "Our Community Standards apply to all content, regardless of whether it is created by AI or a person, and we will take action against content that violates these policies."
This election was one of the first consequential votes to take place after the EU's Digital Services Act was introduced in August. The act, designed to better protect human rights online, introduced new rules that were supposed to force platforms to be more proactive and transparent in their efforts to moderate disinformation.
"Slovakia was a test case to see what works and where some improvements are needed," says Richard Kuchta, an analyst at Reset, a research group that focuses on technology's impact on democracy. "In my view, [the new law] put pressure on platforms to increase the capacities in content moderation or fact-checking. We know that Meta hired more fact-checkers for the Slovak election, but we will see if that was enough."
Alongside the two deepfake audio recordings, Kuchta also saw two other videos featuring AI audio impersonations posted on social media by the far-right party Republika. One impersonated Michal Šimečka, and the other the president, Zuzana Čaputová. These audios did include declarations that the voices were fake: "These voices are fictitious and their resemblance to real people is purely coincidental." However, that statement does not appear until 15 seconds into the 20-second video, says Kuchta, in what he felt was an attempt to trick listeners.
The Slovakian election was being watched closely in Poland. "Of course, AI-generated disinformation is something we are very scared of, because it's very hard to react to it fast," says Jakub Śliż, president of the Polish fact-checking group the Pravda Association. Śliż says he is also worried by the trend in Slovakia for disinformation to be packaged into audio recordings, as opposed to video or images, because voice cloning is so difficult to identify.
Like Hincová Frankovská in Slovakia, Śliż also lacks tools to reliably help him identify what's been created or manipulated using AI. "Tools that are available, they give you a probability score," he says. But these tools suffer from a black-box problem: he doesn't know how they decide a post is likely to be fake. "If I have a tool that uses another AI to somehow magically tell me this is 87 percent AI generated, how am I supposed to convey this message to my audience?" he says.
There has not been a lot of AI-generated content circulating in Poland yet, says Śliż. "But people are using the fact that something can be AI generated to discredit real sources." There are two weeks until Polish voters decide whether the ruling conservative Law and Justice party should stay in government for an unprecedented third term. This weekend, a giant crowd gathered in Warsaw in support of the opposition, with the opposition-controlled city government estimating that the crowd reached 1 million people at its peak. But on X, formerly known as Twitter, users suggested videos of the march had been doctored using AI to make the crowd look bigger.
Śliż believes this type of content is easy to fact-check by cross-referencing different sources. But if AI-generated audio recordings start circulating in Poland in the last hours before the vote, as they did in Slovakia, that would be much harder. "As a fact-checking organization, we don't have a concrete plan for how to deal with it," he says. "So if something like this happens, it's going to be painful."