Combating Misinformation In The Age of Technology

BU Experts
8 min read · Jul 29, 2022


Fake news, disinformation, and misinformation, oh my. Two Boston University researchers discuss how misinformation spreads online, its impacts, and how to spot it.

By: Katherine Gianni and Giana Carrozza

“Pope Francis shocks world, endorses Donald Trump for president.” “Coronavirus Bioweapon — How China Stole Coronavirus From Canada And Weaponized It.” “Obama Signs Executive Order Banning The Pledge Of Allegiance In Schools Nationwide.”

From politics to public health and everything in between, these headlines capture just a few of the countless articles circulating online that contain rampant misinformation. What’s more, each of these stories has collected hundreds of thousands of engagements across social media platforms like Facebook and Twitter. According to the Pew Research Center, 23% of Americans have shared a fake news story. The researchers also found that a majority of Americans believe fake news has left people increasingly confused about basic facts. Here’s a fact worth considering: accurate information is essential for making informed decisions. Without it, we can face damaging, toxic, and even dangerous consequences. As misinformation continues to spread online, what measures can people take to spot it and better understand the truth?

Photo by visuals on Unsplash.

To learn more, we turned to Dr. Michelle Amazeen and Dr. Gianluca Stringhini. Dr. Amazeen is an associate professor in the department of mass communication, advertising, and public relations at Boston University’s College of Communication. She is also the director of BU’s Communication Research Center (CRC), where she and other CRC fellows study media effects and communication. Dr. Stringhini is an assistant professor at BU’s College of Engineering. His research focuses on data-driven methods to better understand and mitigate malicious activity online, including cybercrime, online harassment, trolling, and misinformation.

Can you please define misinformation?

Michelle Amazeen: Misinformation is the broad term for content that is inaccurate without taking into consideration the intent of the message sender. For example, people could share inaccurate information simply because they were honestly mistaken about something, such as the date by which one needs to be registered to vote in an election. When inaccurate information is conveyed to deliberately mislead or deceive someone, that is called disinformation (a specific subset of misinformation). For instance, so-called “crisis pregnancy centers” that tell women that abortions cause mental illness or infertility are examples of disinformation because there is no scientific basis for these claims and the intent is to dissuade women from considering abortion as an option among their reproductive health choices.

Gianluca Stringhini: Misinformation is incorrect or inaccurate information. It is often shared by people who genuinely believe in it, but sometimes it is spread by malicious actors with a clear intent to deceive. In this case it is called disinformation.

What aspect of online misinformation does your research and scholarship focus on?

Gianluca Stringhini: While misinformation is an important threat facing our society, we still do not have a good understanding of how false information is created, how it spreads on social media, and what types of misinformation are particularly effective and dangerous. In my research, I develop computational approaches to automatically trace and monitor misinformation. The techniques that my group has developed allow us to monitor several online platforms at once (e.g., Twitter, Reddit, 4chan, Gab) and trace misinformation content across them. This allows us to identify online communities that are particularly influential in spreading misinformation, and to pinpoint emerging misinformation narratives that are likely to go viral and should therefore be fact-checked and moderated.
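To make that cross-platform tracing concrete, here is a minimal Python sketch of one common building block: normalizing shared URLs so that the same story posted on different platforms maps to a single key, then ordering the shares in time. This illustrates the general idea only, not Dr. Stringhini’s actual pipeline; the platforms, timestamps, and URLs below are hypothetical.

```python
from collections import defaultdict
from urllib.parse import urlparse

# Hypothetical post records (platform, ISO timestamp, shared URL);
# in practice these would come from platform APIs or data dumps.
posts = [
    ("4chan",   "2022-01-01T09:00", "http://example-news.test/story?utm_source=x"),
    ("reddit",  "2022-01-01T12:30", "https://example-news.test/story"),
    ("twitter", "2022-01-02T08:15", "https://example-news.test/story/"),
]

def canonical(url: str) -> str:
    """Normalize a URL so the same story shared on different platforms
    maps to one key (ignore scheme, query string, trailing slash)."""
    p = urlparse(url)
    return p.netloc + p.path.rstrip("/")

# Group shares of the same link, then sort by time to see which
# community posted it first -- a rough proxy for where it originated.
timeline = defaultdict(list)
for platform, ts, url in posts:
    timeline[canonical(url)].append((ts, platform))

for link, shares in timeline.items():
    ts, platform = min(shares)
    print(f"{link}: first seen on {platform} at {ts}")
```

Real systems also have to match images and reworded text rather than just links, and to do so across millions of posts, which is considerably harder than this sketch suggests.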

Michelle Amazeen: My research is focused on the origins, nature, and effects of disinformation and interventions that may be effective in helping people recognize and resist it. Of particular interest is the use of “native advertising”: paid content designed to look like online news articles. Also referred to as sponsored content, paid posts, or other euphemisms, this type of material is run by nearly all major news organizations in order to generate revenue. In many cases, not only do news outlets post this type of content, but they also have in-house studios that create it.

The reason native advertising is considered disinformation is twofold. First, research has repeatedly demonstrated that the vast majority of media consumers are unable to distinguish this type of paid content from real news articles. Second, because the content is paid for, it is biased in favor of the sponsor and may even be wholly inaccurate. For example, the fossil fuel company Exxon Mobil is being sued by the Massachusetts Attorney General’s Office for misleading consumers about its efforts to combat climate change.

Photo by Oğuzhan Akdoğan on Unsplash.

What role does social media play in spreading misinformation?

Michelle Amazeen: In the case of native advertising, social media helps amplify these disinformation efforts. While the Federal Trade Commission requires that advertising content be clearly and conspicuously disclosed as such on news sites, research has shown that these disclosures disappear more than half the time when the content is shared on social media.

Gianluca Stringhini: The ease of communication enabled by social media exposes people to false information, and often social media platforms do not provide easy tools to check whether this information is accurate. As a result, false information is often taken at face value, and people re-share it and make it spread like wildfire. In this scenario, platforms often play catch-up, moderating or debunking misinformation after the fact.

How is online misinformation impacting society at large? What specific consequences and future implications do we need to be aware of?

Gianluca Stringhini: Online misinformation can affect how people act in the real world. Probably the most serious example of this has been the public reaction to the COVID-19 pandemic, where false or unproven cures for the disease were promoted on social media and adopted by the public, often with harmful consequences.

Michelle Amazeen: In democratic societies, journalism is one of the institutions that is supposed to keep populations accurately informed to facilitate representative governance. However, by taking money from advertisers and then disguising the content to look like journalistic articles, publishers are jeopardizing the reputation of their newsrooms. Research shows that when people recognize they have been viewing native advertising, it tarnishes how they perceive other content from that publisher, even if that content is not sponsored in any way. Thus, at a time when public confidence in media continues to erode, it seems that participating in the practice of native advertising is not a good way to restore trust.

There is also evidence that the use of native advertising compromises the independence of newsrooms. While there has always been tension between advertisers and news, today’s native advertisements are being created by the news organizations rather than by outside creative agencies. This is something I am expanding upon with a team of BU researchers (including Gianluca) as part of a new project on climate change misinformation. We are examining how fossil fuel companies may be using native advertising to disinform the public about climate change as well as how these ads affect both social media and mainstream news coverage.

How can individuals better recognize and combat online misinformation?

Michelle Amazeen: My research has shown that people who are more news media literate are better able to identify disinformation efforts such as native advertising. Media literacy skills entail paying close attention to the source of news articles to see whether they are from legitimate publications and to see if there is a reporter byline indicating the author of the article. Even with a byline, one must carefully scan the webpage for a disclosure indicating that the content may be sponsored by a corporation, political party, or special interest group. Because the FTC does not require any standardization in disclosure language, it can be difficult to understand that words such as “partner content” mean advertising. Moreover, publishers often use disclosures that are not very prominent. By using small font sizes and colors that blend in with the background, the disclosures can be easy to miss.
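As a rough illustration of that kind of scanning, the Python sketch below checks a page’s text for common disclosure euphemisms. The term list is illustrative, drawn from labels like those mentioned above; real disclosures vary widely in wording, so a keyword check like this is a starting point for a careful reader, not a reliable detector.

```python
# Illustrative list of euphemisms publishers use for paid content;
# not exhaustive, and the exact wording varies by outlet.
DISCLOSURE_TERMS = [
    "sponsored content", "partner content", "paid post",
    "presented by", "promoted by", "brand studio",
]

def find_disclosures(page_text: str) -> list[str]:
    """Return any disclosure-style phrases appearing in a page's text."""
    lowered = page_text.lower()
    return [term for term in DISCLOSURE_TERMS if term in lowered]

sample = "Partner Content: Five ways our energy future is getting cleaner"
print(find_disclosures(sample))  # ['partner content']
```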

Gianluca Stringhini: It is often difficult for people to check whether a piece of information they come across is accurate or not. There are, however, some actions that can be taken to verify online information. One is checking fact-checking websites for known debunked narratives. Sites like Snopes and PolitiFact constantly post information about claims that have proven to be false, including evidence of why those narratives are not true. Another useful practice is looking up images that look misleading or false on reverse image search engines, like Google Image Search or TinEye. These websites are able to find other places online where a given image has been posted, including fact-checking websites that might have investigated it and potentially debunked it.
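Reverse image search engines rely on perceptual fingerprints that stay stable when an image is resized or recompressed. The Python sketch below shows one simple, well-known variant of the idea, an “average hash”; it illustrates the general technique and is not how TinEye or Google actually implements matching. It assumes the Pillow imaging library, and the file names are hypothetical.

```python
from PIL import Image  # requires the Pillow library

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to size x size grayscale, then set one bit per pixel
    that is brighter than the mean. Visually similar images
    (resized, recompressed) tend to get near-identical hashes."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (px > mean)
    return bits

def hamming(a: int, b: int) -> int:
    """Count differing bits; a small distance suggests the same image."""
    return bin(a ^ b).count("1")

# Hypothetical usage: compare a suspicious image to a known original.
# if hamming(average_hash("viral.jpg"), average_hash("original.jpg")) <= 5:
#     print("likely the same underlying image")
```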

Photo by Sean Robbins on Unsplash.

What policies and initiatives need to be put in place to address and prevent the spread of online misinformation on a societal level?

Gianluca Stringhini: The main issue with the current approach of debunking misinformation is that the process is reactive, and by the time a fact check about a certain narrative is issued, a large number of people have already been exposed to the misinformation. This problem is made even worse by the need for human analysis in fact checking information, which limits the number of fact checks that can be issued, allowing unconfirmed information to thrive on social media. In our research we are working on techniques to automatically identify social media posts that are related to existing fact checks. Our goal is to reduce the human effort needed to moderate these posts, enabling a quicker response.
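A simple baseline for this kind of matching is text similarity between posts and the claims in existing fact checks. The Python sketch below uses TF-IDF vectors and cosine similarity from scikit-learn; the fact checks, posts, and the 0.3 threshold are all hypothetical, and production systems are considerably more sophisticated (handling paraphrase, images, and multiple languages).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical fact checks and posts; real systems use far larger corpora.
fact_checks = [
    "No, the Pope did not endorse Donald Trump for president.",
    "Claim that the coronavirus was stolen from a Canadian lab is unfounded.",
]
posts = [
    "Pope Francis endorses Donald Trump for president!",
    "Lovely weather in Boston today.",
]

# Represent every text as a TF-IDF vector over a shared vocabulary.
vec = TfidfVectorizer(stop_words="english")
matrix = vec.fit_transform(fact_checks + posts)
fc_vecs, post_vecs = matrix[:len(fact_checks)], matrix[len(fact_checks):]

# Flag a post for human review when it is close enough to a fact check;
# the 0.3 threshold is arbitrary and would be tuned in practice.
sims = cosine_similarity(post_vecs, fc_vecs)
for i, row in enumerate(sims):
    best = row.argmax()
    if row[best] >= 0.3:
        print(f"post {i} may relate to fact check {best} (score {row[best]:.2f})")
```

The point of automating this first pass is exactly what is described above: surfacing likely matches so that scarce human fact-checking effort goes to posts that need it.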

Another issue is that misinformation thrives in an environment where several competing and incoherent narratives are present. For example, during the COVID-19 pandemic we have seen multiple and often conflicting guidelines about masking, quarantining, vaccines, etc. To avoid this, government agencies should work on developing a single, coherent communication strategy that leaves no room for alternative interpretations and misinformation. This is quite challenging, though, especially for topics where the scientific consensus is constantly evolving.

Michelle Amazeen: For the disinformation I have been discussing, native advertising, there are already some policies in place, such as Section 5 of the Federal Trade Commission Act. These policies are directed at the content creators which, in this case, are the news publishers. However, the problem is that there is very little enforcement of the already existing policies. Very few publishers have been penalized for omitting disclosures of commercial content, either on their websites or when the content is posted to their social media accounts.

In addition to better enforcement, the existing federal policies need improvement. For instance, disclosures need to be standardized so that all native advertising content uses the same language. The prominence of disclosures should also be standardized, with minimum thresholds for font size and for color contrast between the disclosure and the surrounding content (one way such a check could be computed is sketched below). Moreover, publishers should be required to embed digital watermarks in their native advertising content so that disclosures cannot be modified or removed when the content is reposted elsewhere. From a consumer standpoint, providing comprehensive media literacy education in schools and public libraries will also go a long way toward teaching the public to be more critical consumers of media content.
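To illustrate what a standardized prominence rule could look like, the Python sketch below combines the WCAG 2.x color contrast ratio (a widely used accessibility standard) with a disclosure-to-body font-size ratio. The specific thresholds are hypothetical and are not actual FTC requirements.

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB color (0-255 channels)."""
    def lin(c):
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (lin(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors, from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Hypothetical policy thresholds, not actual FTC rules: WCAG AA contrast
# plus a minimum disclosure-to-body font-size ratio.
MIN_CONTRAST = 4.5
MIN_FONT_RATIO = 0.8

def disclosure_is_prominent(fg, bg, disclosure_px, body_px):
    return (contrast_ratio(fg, bg) >= MIN_CONTRAST
            and disclosure_px / body_px >= MIN_FONT_RATIO)

# Light gray on white in a tiny font fails; dark gray near body size passes.
print(disclosure_is_prominent((170, 170, 170), (255, 255, 255), 10, 16))  # False
print(disclosure_is_prominent((51, 51, 51), (255, 255, 255), 14, 16))     # True
```

A rule written this way would make prominence measurable: a regulator or platform could check a disclosure automatically instead of arguing case by case about whether gray-on-white fine print counts as “conspicuous.”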

For additional commentary by Boston University experts, follow us on Twitter at @BUexperts. Follow Dr. Amazeen on Twitter at @commscholar. Follow Dr. Stringhini at @gianluca_string. For research news and updates from BU’s College of Communication, follow @COMatBU and the Communication Research Center at @BUCOMResearch. Follow BU’s College of Engineering at @BUCollegeofENG.

