
On Thursday, the Bloomberg campaign posted a misleadingly edited video to its candidate’s Twitter account. The video shows the former New York City mayor asking his opponents in Wednesday night’s debate if any of them had ever started a business — followed by a 30-second pause, as the other candidates shift silently and awkwardly behind their podiums. In reality, no such awkward pause followed Bloomberg’s question on Wednesday night.
What can be done about misleading videos like this one? Starting March 5, Twitter will flag tweets with “synthetic and manipulated media” that has been “significantly and deceptively altered or fabricated.” (A spokesperson told the Guardian that Bloomberg’s video would “probably” qualify.) Such tweets will, at the very least, receive warning labels. If the platform determines that they pose risks to public safety or may cause serious harm, they may be removed as well.
As we hurtle toward a new presidential election, Twitter is trying out a variety of new ideas aimed at containing or counteracting misinformation. Yesterday NBC News reported that the company is testing the addition of colorful labels to any misleading tweet — not just misleadingly manipulated media. What this program would look like in practice (if it ever exits testing) is still vague, but it seems as though misleading tweets would be flagged by fact-checkers, journalists, or participants in a Wikipedia-like “community reports” program, and that other tweets refuting or contextualizing the “misleading” information would be appended to the original tweet.
Will this new warning-label strategy work to combat misinformation? Well … what do you mean by “work”? One problem is that misinformation is never purely a problem of bad product design, and it won’t ever be “fixed” by tweaks to a platform’s operations. Sure, reworking your platform might help mitigate the spread of harmful rumors or political smears — but belief in misinformation is also a function of social context. People who are convinced by misinformation are never persuaded simply because they happened to read an untrue tweet. They’re convinced because the misinformation comports with ideas and descriptions of the world promulgated by trusted people and institutions: friends, neighbors, politicians, newspapers, TV news, and so on. Twitter could eliminate misinformation from its platform entirely, but so long as cable hosts, news websites, Facebook, and politicians’ Instagram accounts are locked in a mutually beneficial cycle of epistemological reinforcement, very little will change about the problem of misinformation writ large.
Of course, that’s not to say that misinformation on Twitter is not a problem. Twitter’s willingness to take decisive editorial action is admirable, even if it won’t suddenly restore consensus reality. But even within the narrower context of Twitter’s own platform, I’m not entirely convinced that this is the right method. When asking whether Twitter’s warning-label strategy will work to combat misinformation, it also matters how you define “misinformation.”
Take the Bloomberg video. It’s true that the video is “misleadingly edited” and would be flagged under Twitter’s new rules about manipulated media. But is it “misinformation”? I suppose a few people might be briefly convinced that the faked, awkward debate moment actually happened. But there’s no institutional ecosystem that would reinforce the lie enough to persuade a large population of people.
I suspect the Bloomberg campaign is aware of this. In fact, I would guess that its goal in creating the video was not really to “misinform” voters about the debates at all. Rather, I think the video was created to underline one of the campaign’s talking points: Michael Bloomberg is a successful businessman; his opponents are not. And, in sharing, bemoaning, and debunking the video, fact-checkers and journalists are doing him the favor of restating that talking point.
This is why the warning-label strategy seems to me to have the potential to backfire. Often the stuff that qualifies as “misinformation” isn’t designed specifically to fool or mislead people, but simply to be spread — reinforcing talking points, raising profiles, or just muddying the debate. Hanging a big, colorful “HARMFUL” label on such a tweet seems more likely to make that problem worse.