What did I learn about misinformation in 2021?

Arushi Saxena
4 min read · Jan 1, 2022

This past year, I spent a lot of time and energy on the topic of misinformation. It started with my digital-literacy thesis, the #EkMinute Project, which was both inspired and affected by misinformation in India, and culminated in my taking a full-time role on Twitter’s Societal Health (misinfo + civic/political integrity) team. Given COVID, climate change, and major political transitions around the world, it was a relevant, rewarding, and challenging endeavor.

I was also grateful to participate in many summits and conferences. I presented my work at MIT’s Lincoln Labs, All Tech is Human’s Responsible Tech Summit, metaLab’s Creative Virtual workshops, Berkman Klein Center’s Rebooting Social Media, and my own undergraduate alma mater, the Haas School of Business.

Overall, I learned a lot this year and am excited to share some of those lessons below. These are my own opinions and are not meant to reflect the views of Twitter.

Photo: Conversation with Courtney Buechert’s Topics In Social Impact Marketing course at the Haas School of Business (October 2021)

First, misinformation isn’t always static or permanent. “Facts” can evolve or change as we learn more: something considered misinformation in one situation or timeframe may be proven true or become correct later down the line. That means acting too fast can actually be harmful. My mind goes to the March 2020 CDC example, where, for a number of reasons, public health officials advocated against wearing masks. With societal context and scientific evidence evolving, we’ve clearly come a long way since then.

Should that stop us from taking action in the meantime? Definitely not.

Here’s a framework to consider (with a toy sketch in code after the three questions):

Can the content ever be verified or proven false, or is it just an opinion? If it’s an opinion, we may never be able to disprove it; rather, we should focus on harm reduction.

What is its potential reach? Will it be seen by a handful of people or can it reach millions?

What is its potential for harm? Is it related to a time-sensitive and/or critical topic in the public interest?
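To make the framework concrete, here’s a toy sketch in Python of how the three questions might combine into a single triage decision. Every field name, weight, and threshold below is an assumption for illustration, not any platform’s real moderation policy:

```python
from dataclasses import dataclass

# Hypothetical triage model for the three questions above.
# All categories, weights, and thresholds are illustrative.

@dataclass
class Claim:
    verifiable: bool      # Can it ever be proven true or false?
    potential_reach: int  # Estimated audience size
    time_sensitive: bool  # Tied to a fast-moving event?
    public_interest: bool # Health, elections, safety, etc.

def triage(claim: Claim) -> str:
    # Pure opinion: we may never disprove it, so focus on harm reduction.
    if not claim.verifiable:
        return "harm reduction (opinion, not checkable)"

    # Weight reach and harm; the numbers are arbitrary illustrations.
    score = 0
    if claim.potential_reach > 100_000:
        score += 2
    elif claim.potential_reach > 1_000:
        score += 1
    if claim.time_sensitive:
        score += 1
    if claim.public_interest:
        score += 2

    if score >= 4:
        return "prioritize for fact-check"
    if score >= 2:
        return "queue for review"
    return "monitor"

print(triage(Claim(verifiable=True, potential_reach=2_000_000,
                   time_sensitive=True, public_interest=True)))
# -> prioritize for fact-check
```

The point isn’t the particular numbers; it’s that reach, urgency, and public interest can be weighed together instead of treating every false claim identically.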

Second, it becomes more complicated once you realize that misinformation comes in many different shapes and sizes. It exists on a spectrum, ranging from memes and altered images, to isolated statements taken only slightly out of context, to completely false statements from public officials. It can even spread through hex codes or derived languages known only to a select group. How do we keep up with it in all its forms? While academics and technologists are developing sophisticated detection capabilities with human fact-checkers in the loop, platforms have traditionally taken a reactive approach to misinformation: they might label disputed information, share debunking posts from authoritative sources, or remove content outright. But even then, by the time action is taken, much of the harm is already done.
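As a rough illustration of that reactive menu, here’s a minimal Python sketch; the verdict categories and responses are my own hypothetical assumptions, not any platform’s actual enforcement logic:

```python
from enum import Enum

# Hypothetical fact-check verdicts and escalating responses,
# for illustration only; real policies are far more nuanced.

class Verdict(Enum):
    DISPUTED = "disputed"
    FALSE = "false"
    HARMFULLY_FALSE = "harmfully false"

def reactive_action(verdict: Verdict) -> str:
    # Every branch here fires after the fact; by the time any of
    # these interventions run, the content has already circulated.
    actions = {
        Verdict.DISPUTED: "attach a warning label linking to context",
        Verdict.FALSE: "label and surface debunks from authoritative sources",
        Verdict.HARMFULLY_FALSE: "remove the content and limit resharing",
    }
    return actions[verdict]

print(reactive_action(Verdict.FALSE))
```

What the sketch makes visible is the structural problem: every intervention is downstream of the spread.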

The interesting part is that “misinfo debunks” can sometimes be… counterproductive. Practitioners believe that the more you see a certain topic, theme, or sentence, the more you process and internalize the idea, even when it’s explicitly framed within a debunk or fact-check (a pattern closely related to the illusory truth effect).

So what proactive approach are experts considering?

Inoculation theory (yes, definitely meta given the existence of vaccine misinfo). It reimagines the concept of vaccines from healthcare, suggesting that we proactively expose people to the characteristics and harms of misinformation. Whether it’s a short educational video, an online pop-up banner, a sound-bite, or a mini ad that explicitly names misinformation, these small nudges can help users build an underlying awareness: when we venture online, we’ll inevitably encounter fake-news tidbits, but we can be empowered to identify and flag them for ourselves and our communities.

Teaching people about digital literacy isn’t easy, and no one likes to be lectured to. Overall, I believe inoculation should be customized to the reader and their demographics and attributes: different people are influenced in different ways, and most need to see the same thing repeatedly before they start believing it. In fact, this was the whole premise behind the #EkMinute Project.

What’s next in the world of misinformation?

Human choice, personalization, and “middleware”

There’s talk of giving social media users more choice and customization over the level of content moderation they’d prefer. Academics are suggesting the creation and adoption of “middleware”: third-party software that can be integrated into social media platforms to curate and organize the content users see. Users would choose among competing middleware algorithms, selecting the providers they find trustworthy and relevant. Middleware companies would be subject to government regulation and would, in theory, reduce the platforms’ power and editorial control over social media communication.
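To ground the idea, here’s a minimal sketch of what a middleware contract could look like, assuming a hypothetical rank interface between a platform and third-party providers; all names and fields are illustrative:

```python
from typing import Protocol

# Hypothetical middleware contract: the platform supplies candidate
# posts, and a user-chosen third-party provider decides ordering and
# filtering. Names and fields are assumptions for illustration.

class Middleware(Protocol):
    name: str
    def rank(self, posts: list[dict]) -> list[dict]: ...

class FactCheckFirst:
    """Example provider: pushes fact-checker-flagged posts to the bottom."""
    name = "factcheck-first"

    def rank(self, posts: list[dict]) -> list[dict]:
        # False sorts before True, so undisputed posts rise to the top.
        return sorted(posts, key=lambda p: p.get("disputed", False))

def build_feed(posts: list[dict], provider: Middleware) -> list[dict]:
    # The platform delegates curation to whichever provider the user picked.
    return provider.rank(posts)

posts = [
    {"id": 1, "text": "Breaking claim", "disputed": True},
    {"id": 2, "text": "Local news update", "disputed": False},
]
print(build_feed(posts, FactCheckFirst()))  # id 2 surfaces first
```

The design choice worth noting is that the platform only defines the contract; the ranking logic lives entirely with the provider the user selected, which is exactly where the proposed shift in editorial power comes from.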

And what’s next for me?

Like many of us, I have a love-hate relationship with social media. I derive a lot of good from it, but I also spend time thinking about its extractive tendencies and its problematic stewardship of customer data. So in 2022, I’m hoping to learn more about data rights as well. Data empowerment will only matter more as social media becomes more personalized: consumers should be empowered to care about, know, and choose how their data gets used, and what they’re giving up to get what they want.

***

I’m always open to conversations, project collaborations, introductory chats, and public speaking opportunities. Please reach out via LinkedIn if you’d like to connect. Wish you all the best 2022 has to offer!


Arushi Saxena

Harvard Master in Design Engineering (MDE) Candidate. Responsible tech, data ethics, and cultural studies. Live to eat. Recent Silicon Valley emigrant.