Russia is having less success at spreading misinformation on social networks


A few days after Russia invaded Ukraine, several social media platforms, including Facebook, Twitter and YouTube, announced that they had dismantled coordinated networks of accounts spreading misinformation. These networks, which consisted of fabricated accounts disguised with fake names and AI-generated profile images, as well as hijacked accounts, shared suspiciously similar anti-Ukraine talking points, leading the platforms to conclude they were controlled by centralized sources linked to Russia and Belarus.

Russia’s Internet Research Agency used similar disinformation campaigns in an attempt to sway the 2016 U.S. election. But the scale of those campaigns did not become clear until after the election, and at the time they faced little resistance from the social media platforms. “There was a feeling that the platforms just didn’t know what to do,” says Laura Edelson, a misinformation researcher and Ph.D. candidate in computer science at New York University. Since then, she says, platforms and governments have become more adept at fighting this type of information warfare, and more willing to deplatform bad actors who deliberately spread disinformation. Edelson spoke with Scientific American about how the information war is being waged as the conflict continues.

[An edited transcript of the interview follows.]

How do social media platforms deal with accounts that spread misinformation?

Disinformation campaigns like these, in which actors mislead users about the source of the content, are very easy for platforms to act on, because Facebook has a real-name policy: misleading users about who you are is a violation of Facebook’s rules. But there are [other] things that shouldn’t be hard to take down, things Facebook has historically struggled with, and those are actors such as RT. RT is Russian state media, and Facebook has genuinely wrestled with what to do about it. That’s what was so impressive about seeing [Facebook and other platforms] really start taking action against RT last week, because this has been going on for so long. And frankly, [social media platforms] were given cover by governments: when governments in Europe banned Russian state media, that enabled Facebook, YouTube and other major platforms to do the same. In general, banning anyone, but especially a media outlet, is not a step that should be taken lightly. But RT and Sputnik [another Russian state-backed media outlet] are not ordinary media outlets: they have such a long history of polluting the information space.

What else can be done to combat harmful false information?

One of the things the U.S. has done very well in this conflict, and part of why the first week went so well, at least in terms of [controlling] misinformation, is that the U.S. government was really aggressive about releasing what it knew about the realities on the ground in Russia and Ukraine. That really helped create a space where it was difficult for Russia to spread misinformation about those same things. Because the U.S. government was so forthcoming, it didn’t leave much room; there was no information vacuum the Russians could step into and fill. And then the Ukrainian government has been extremely savvy at telling the story of the Ukrainian resistance. Of course, there are times when this has crossed the line into propaganda. But in general, it has made sure the world sees the Ukrainian resistance and the fight the Ukrainian people are prepared to wage. This [helps] people see what is happening and understand that the people fighting there are real people who were not fighters until recently. They were civilians, and now they are defending their country.

I think both of these things will be hard to sustain over time. But if they are not sustained, a window for Russian disinformation will open. The problem we all have to contend with is that this war will not end in the next few days, yet the news cycle cannot maintain this level of attention to these events. It’s shocking to say, but in three weeks you will have hours when you are not thinking about it. And that is when people’s defenses come down. If someone is trying to spread [disinformation], maybe the Russians fabricate some fake Ukrainian atrocity or something like that, that is when the world will become receptive to this kind of thing. And that is when we have to remember to ask: “Who is telling me this story? Do I trust them? How verifiable is this account?” That will be part of how the conflict unfolds. But this is new for all the actors involved, and everyone will have to get used to keeping up with the information war, not just the kinetic war.

Some people have also noted an apparent reduction in other forms of misinformation, such as conspiracy theories about vaccines, since sanctions restricted Russia’s Internet infrastructure and payment networks. What is going on there?

I have not seen a large-scale analysis of this yet. But there have been quite a few anecdotal reports that disinformation in other areas has dropped markedly over the past week. We cannot say for sure that this is because of curtailed Internet access in Russia. The takeaway is not that all of this material was coming out of Russia. The conclusion that can be drawn from these anecdotal reports is that Russia’s Internet infrastructure was an important part of the tool kit of the people who spread disinformation. Many pieces of this economy run out of Russia: bot networks, for example; networks of people who buy and sell stolen credit card information; a whole economy around buying stolen [social media] accounts. That is because Russia has historically harbored a lot of cybercrime: either the state turns a blind eye, or many of these groups actually work for the Russian state or serve as its contractors.

How can we avoid taking in and spreading misinformation?

The bottom line is that people shouldn’t have to do this. It’s like asking, “My car has no seat belts. What can I do to protect myself in an accident?” The answer is: your car should have seat belts, and it shouldn’t be your job to compensate. But, unfortunately, that is where we are. With that caveat, remember that the most successful misinformation works by appealing to emotions rather than reason. If misinformation can travel that emotional pathway, you will never question it, because it feels good, and if it feels good, it feels close to the truth. So the first thing I recommend: if something makes you emotional, especially if something makes you angry, then before sharing it or engaging with it, really ask yourself, “Who is promoting this, and do I trust them?”

What is the most important thing platforms can do to install those metaphorical seat belts?

I think the most important thing platforms should do, especially in moments of crisis like this, is [recognize that they] should not promote content solely on the basis of engagement. You have to remember that misinformation is really engaging. It is engaging for some of the reasons I have talked about: it makes a very emotional appeal, bypassing the rational mind and going straight to the gut. That is a really effective tactic of deception. So this is when platforms need to weight the quality of content more heavily than how engaging it is. That is the first thing they could do, and almost everything else pales in comparison.
