On Misinformation
03-30-2025
Roger Berkowitz
Audrey Tang is one of the leading thinkers and practitioners of using technology to enhance public discourse. As Taiwan's Digital Minister for nearly a decade, Tang built many digital platforms that enhanced public debate. The great danger of social media platforms is their deployment of algorithms designed to maximize addictiveness by recommending content that either confirms your bias or enrages you. Social media becomes a personalized “rage machine,” and this will only get worse as artificial intelligence optimizes the algorithms that feed you content, and even creates content personally tailored to push your buttons. The challenge is to change our habits on social media. That may include pushing social media platforms, through pressure or regulation, to promote content that actually makes people more thoughtful. Yascha Mounk interviews Tang and asks her how misinformation works today and how we can fight the rage machines. Tang answers:
I served as the Digital Minister of Taiwan from 2016 to last year. Now I'm the Cyber Ambassador, and in Taiwan, we're ranked top in Asia when it comes to internet freedom. We're also top in Asia in terms of civic space, and so on and so forth. We've never believed in censorship because we had martial law for almost four decades and people don't want to go back. I do feel that the term “misinformation” is a little bit misinforming, if you will. Since 2016, when I went into the cabinet and we tackled this issue, we always called it “contested information.” That is to say, it’s not about whether it's absolutely true or absolutely false, but rather about its potential to drive engagement through enragement, so to speak. Instead of the usual metrics like fact-checking and things like that, we measure how polarized people become when they receive such information, and how much they retweet or repost out of the enragement derived from it.
The reason is that since 2015, many of the online platforms have switched their algorithm from a shared feed or a follower feed to what's called a “for you” feed. This feed maximizes only one metric: addictiveness, which is essentially how much time you spend on the touchscreen. Along with that came a lot of autoplay and recommendation algorithms that strip the social fabric so that people no longer have shared experiences. That drives this individualized kind of rage machine, so that people can waste a lot of time shadow-boxing on the extremes. If you look at the content itself, it is not necessarily true or false. Sometimes it has nothing to do with factual information. All it has to do with is polarization. That is also because Taiwan is one of the most internet-connected countries. In 2014, the trust level between the citizens and the government was 9%. In a country of 24 million people, anything President Ma said at the time had 20 million people against him. We want to fight that polarization instead of specific bits of misinformation….
Within X, there's also an algorithm that is more bridge-making, and that is the community notes algorithm. Basically, what it does is that for each trending post, people can volunteer to add more context to clarify. So they're not necessarily fact-checking, but rather just providing useful context. It's not that the most upvoted note will be displayed. Rather, the algorithm first separates people into different clusters: one cluster is people who consistently upvote certain kinds of notes, the other is people who consistently upvote other kinds of notes, and they don't overlap. Now, if votes from both sides propel a note to the top, that note is more likely to stick and will be displayed next to the post. The original poster cannot take it down. That's the model that X uses instead of third-party independent fact-checkers. This model has been adopted by YouTube and very soon by Meta—on Facebook—as well, as a different, more horizontal approach compared to the vertical institutions of specialized journalists and fact-checkers. Within X, two algorithms coexist—for the main post, there is the divisive one, but for the clarification, the community notes, there's the bridge-making algorithm.
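To make the bridging logic Tang describes concrete, here is a minimal sketch: group raters by their voting history, then surface only the notes that draw support from both groups, rather than the notes with the most raw upvotes. The two-cluster k-means step and the 50% support threshold are illustrative assumptions, not the actual Community Notes scoring code, which is considerably more elaborate.

```python
# Illustrative sketch of a "bridging" note-ranking rule: cluster raters by
# their voting patterns, then keep only notes upvoted in *both* clusters.
# The clustering method and threshold are assumptions for illustration.
import numpy as np
from sklearn.cluster import KMeans

def bridging_notes(votes, support_threshold=0.5):
    """votes: array of shape (n_raters, n_notes); +1 upvote, -1 downvote, 0 no vote."""
    # Step 1: split raters into two clusters based on how they tend to vote.
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(votes)

    selected = []
    for note in range(votes.shape[1]):
        support = []
        for c in (0, 1):
            group = votes[clusters == c, note]
            rated = group[group != 0]            # raters in this cluster who voted on the note
            if len(rated) == 0:
                break
            support.append((rated == 1).mean())  # fraction of upvotes within the cluster
        # Step 2: keep the note only if both clusters independently support it.
        if len(support) == 2 and min(support) >= support_threshold:
            selected.append(note)
    return selected

# Hypothetical example: 6 raters, 3 notes; only note 2 is upvoted across both camps.
votes = np.array([
    [ 1, -1,  1],
    [ 1, -1,  1],
    [ 1,  0,  1],
    [-1,  1,  1],
    [-1,  1,  0],
    [-1,  1,  1],
])
print(bridging_notes(votes))  # expected: [2]
```

The point of the design is in step 2: a note that only one camp loves never reaches the threshold, which is why, as Tang notes, the displayed notes tend to be context both sides find useful rather than partisan rebuttals.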
Mounk: I think quite highly of the community notes system and it sounds like you do as well. I was really struck, and perhaps this is itself part of the polarized reactions we now have to everything in our politics, that when Meta announced it was no longer going to fact-check but rather rely on the community notes model, the reaction to that from my kind of circles and from people who care about democratic institutions was nearly uniformly negative. I understand that it was in a context where Mark Zuckerberg was making these overtures to Donald Trump. But I thought—and I know less about this topic than you do—that there was a mistake, that it was a knee-jerk reaction to say that we must prioritize fact-checkers over any other kind of system. Even though the community notes model to me seems like the best feature of the new Twitter, people who may be skeptical might be coming to this conversation thinking, no, Zuckerberg is giving up democracy by getting rid of the fact-checkers. What case would you make for the benefits of a community notes model?
Tang: Full disclosure, I'm in touch with the people who implement the Meta community notes and also another group in Meta that implements so-called community forums, which is a deliberative platform, like a jury system where people can go online and steer the Meta system. I would note that neither of these two systems is completely sabotage-proof, so if you really want to take one over, mount an attack, and pollute it with a lot of resources, there's a chance that you will succeed. So I'm not saying that this is a foolproof system. With that said, our experience in Taiwan with both the bridge-making algorithm and the online community forums did show that it is actually possible to strengthen civic muscles and people's solidarity across differences if you implement these kinds of mechanisms.
We first tried out an in-person way of facilitating conversations to uncover common ground between people who are drastically polarized in March 2014, when we occupied our parliament peacefully to show that this kind of method works. Since 2015, we’ve been working with the cabinet on an online version of this process. In 2015, when Uber first came to Taiwan, we asked people: how do you feel about a driver with no professional driver’s license picking up strangers they met on an app and charging them for it? People just shared their own feelings. And just like community notes, the poll system that we used has upvotes and downvotes, but there's no room for trolls to grow. There's a visualization that shows whether you're in the camp of pro-Uber drivers or pro-taxi unions, and so on. There are different clusters that are grouped automatically. Every day, people see more and more of these bridging statements. For example: undercutting existing meters is very bad, but surge pricing is fine. That is one sentiment that actually all sides can agree on. After a while, people start competing on the widely acceptable bridging items. At the end of the process, the nine or so statements that got more than 85% approval from all the different groups became the agenda. Then, we made a law based on those consensus ideas.
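The consensus step Tang describes can be sketched in a few lines: once participants are grouped into opinion clusters, only the statements that clear a high approval bar in every cluster make the agenda. The sample votes and cluster labels below are invented for illustration, and the 85% threshold simply follows the figure Tang cites; this is not the actual platform Taiwan used, just a sketch of the selection rule.

```python
# Minimal sketch of cross-cluster consensus selection: a statement makes the
# agenda only if every opinion cluster approves it at or above the threshold.
from collections import defaultdict

# (participant, cluster, statement, agreed?) -- hypothetical votes
votes = [
    ("p1", "pro-uber", "surge pricing is fine", True),
    ("p2", "pro-uber", "surge pricing is fine", True),
    ("p3", "pro-taxi", "surge pricing is fine", True),
    ("p4", "pro-taxi", "surge pricing is fine", True),
    ("p1", "pro-uber", "ban app dispatch", False),
    ("p3", "pro-taxi", "ban app dispatch", True),
]

def consensus_statements(votes, threshold=0.85):
    # Tally agree / total votes per (statement, cluster) pair.
    tallies = defaultdict(lambda: [0, 0])
    for _, cluster, statement, agreed in votes:
        tallies[(statement, cluster)][0] += int(agreed)
        tallies[(statement, cluster)][1] += 1

    clusters = {cluster for _, cluster, _, _ in votes}
    statements = {statement for _, _, statement, _ in votes}

    agenda = []
    for s in statements:
        rates = []
        for c in clusters:
            agree, total = tallies.get((s, c), (0, 0))
            rates.append(agree / total if total else 0.0)
        # Keep the statement only if the *least* supportive cluster still approves it.
        if min(rates) >= threshold:
            agenda.append(s)
    return agenda

print(consensus_statements(votes))  # -> ['surge pricing is fine']
```

Because the bar is set by the least supportive cluster, participants are nudged to draft statements the other side can also accept, which is the "competing on widely acceptable bridging items" dynamic Tang describes.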