THE LATEST FROM TOW

Five Days of Facebook Fact-Checking
By Priyanjana Bengani and Ian Karbal

Mark Zuckerberg, Facebook’s CEO, was back in Congress last week to talk about the need to reform how social-media platforms moderate content—a realm within which Facebook is trying to establish a “best in class” reputation. New research from the Tow Center shows that the platform is falling short.

In order to understand how consistently and swiftly Facebook applies fact-checks to its namesake platform and Instagram, the Tow Center reviewed fact-checking labels assigned by the company to posts between October 1 and 5—the period spanning President Donald Trump’s COVID-19 diagnosis to his release from Walter Reed National Military Medical Center following three nights of observation and treatment. During that time, speculation about Trump’s condition dominated a news cycle already stretched by coverage of the ongoing pandemic, the upcoming presidential election, and claims of voter fraud pushed by the Trump administration. In the absence of reliable information, rumors and conspiracy theories flooded social media; they included claims that “doomsday planes” had launched in order to ward off geopolitical enemies, and that Trump’s diagnosis was actually a “con job.”

Our review of fact-checking labels on Facebook during this five-day period found that the company failed to consistently label content flagged by its own third-party partners. Facebook’s ten US fact-checking partners debunked over seventy claims. We identified over 1,100 posts across Facebook and Instagram containing the debunked falsehoods; fewer than half bore fact-checking labels, even when the unlabeled posts deviated only slightly from content the fact-checkers had already vetted.

Fact-checking isn’t a panacea for the large-scale problems of misinformation on Facebook. However, the social-media giant has touted its efforts with fact-checking partners since its Third-Party Fact-Checking Program launched in 2016. At the time, critics saw the initiative as a public relations effort to combat the narrative that rampant misinformation on the platform helped swing the election for Trump. “I don’t think their fact-checking has been any more than very cheap, very cut-rate public relations,” says Brooke Binkowski, a former managing editor at Snopes who was involved in the inception of the site’s fact-checking partnership with Facebook. (Binkowski is currently the managing editor at the fact-checking site TruthOrFiction.com.) In February, a Popular Information article about the scope and impact of the program found that “a majority of fact-checks were conducted far too slowly to make a difference,” as information can reach millions of people in a matter of hours.

Facebook continues to tout its use of artificial intelligence to automate fact-checking across “thousands or millions of copies” of similar posts spreading the same piece of information, thereby enabling their fact-checkers to focus on “new instances of misinformation rather than near-identical variations of content they’ve already seen.”
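
Facebook has not published how this matching works, but the capability it describes is standard near-duplicate detection. As a rough, minimal sketch (not Facebook’s actual pipeline), flagging near-identical text variants can be done with character n-gram TF-IDF vectors and cosine similarity; the example posts, the scikit-learn approach, and the similarity threshold below are all assumptions for illustration.

```python
# Illustrative only: Facebook's real matching systems are proprietary.
# This sketch flags near-identical text posts using character n-gram
# TF-IDF vectors and cosine similarity; the example posts and the
# similarity threshold are assumptions, not Tow's or Facebook's.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

debunked = "Doomsday planes were launched to ward off geopolitical enemies."
posts = [
    "Doomsday planes were launched to ward off geopolitical enemies!!",     # trivial edit
    "BREAKING: doomsday planes launched to ward off geopolitical enemies",  # near-identical variation
    "The president was released from the hospital after three nights.",    # unrelated
]

# Character n-grams tolerate small edits (punctuation, casing, added words)
# far better than exact string matching does.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
matrix = vectorizer.fit_transform([debunked] + posts)
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

THRESHOLD = 0.75  # assumed cut-off; a production system would tune this empirically
for post, score in zip(posts, scores):
    verdict = "label as debunked" if score >= THRESHOLD else "leave unlabeled"
    print(f"{score:.2f}  {verdict}  {post[:60]}")
```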

Our analysis found that Facebook still struggles to recognize similar content at scale. For example, we found 23 instances of a meme that attributed the murder of a Black Chicago teenager to “other Black kids.” Although an October 2 fact-check by the Associated Press found that the murder investigation in question was still open, Facebook’s similarity-matching algorithms, which rely on natural language processing and computer vision, failed to recognize the other iterations of the meme.

Figure 1: A Facebook meme of which we found 23 instances; only two were fact-checked. The images are remarkably similar, with two major differences: the fact-checked images have a white border on the right-hand side, and some of the non-fact-checked images attribute the source of the meme to a subreddit.
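
The Tow data cannot show which signal Facebook’s matcher missed, but the white border described in the caption is exactly the kind of small edit that defeats exact matching while barely changing a perceptual image hash, one common building block for this kind of image-similarity work. A minimal sketch, assuming the open-source Pillow and imagehash libraries and a hypothetical file name:

```python
# Illustrative only: a perceptual hash summarizes an image's coarse
# structure, so small edits such as an added border change only a few
# bits of the 64-bit hash. The file name here is hypothetical.
from PIL import Image, ImageOps
import imagehash

original = Image.open("meme_fact_checked.jpg")  # hypothetical path
# Simulate the kind of variant Tow observed: a white strip added on the right.
variant = ImageOps.expand(original, border=(0, 0, 40, 0), fill="white")

h_original = imagehash.phash(original)  # 64-bit perceptual hash
h_variant = imagehash.phash(variant)

# Subtracting two hashes gives the Hamming distance (number of differing bits);
# small distances indicate near-duplicate images.
distance = h_original - h_variant
THRESHOLD = 10  # assumed cut-off
print(distance, "near-duplicate" if distance <= THRESHOLD else "treated as different")
```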

As part of a recent year-long study on Facebook’s review system, Avaaz, a US-based non-profit that promotes international activism online, highlighted how misinformation peddlers can make minor tweaks to evade Facebook’s algorithms. This allows malicious pages to slip under Facebook’s radar or avoid attaining “repeat offender” status. 

For a platform the size of Facebook, the economics of automation are critical. Currently, the inconsistent manner in which similar pieces of misinformation are algorithmically identified and labeled leaves a lot to be desired.

So does the automation around Facebook’s new Voting Information Center, to which the company promised to direct users who discuss voting on the platform. Based on our dataset, we found that the company failed to do this about 90 percent of the time.

When reached for comment, a Facebook spokesperson said, “If one of our independent fact-checking partners determines a piece of content contains misinformation, we use technology to identify near-identical versions across Facebook and Instagram.” However, the spokesperson did not explain the multiple inconsistencies Tow shared with them.

In 2018, Mike Ananny, a USC Annenberg professor of communications, reported on the state of Facebook’s fact-checking partnership for the Tow Center. At the time, Facebook focused solely on identifying links to false claims and had yet to expand its fact-checking program to individual posts, memes, and videos. Facebook gave fact-checkers at partner organizations access to a queue of suspect links and dubious posts; the items in this queue were identified by user flagging and a series of algorithmic inputs that Facebook did not disclose even to its own fact-checkers. Ananny found fact-checkers were frustrated by Facebook’s clumsy handling of similar content on both sides of the fact-checking process. “[Fact-checkers] didn’t have the ability to say, ‘Here’s a class of things; these look similar,’” Ananny said in a recent interview, which meant they had to attach the link for each debunked claim to individual posts one by one. On the other side of the queue, the fact-check was supposed to propagate through to identical and similar content algorithmically. Based on our conversations with multiple fact-checking partners, the process today remains largely similar, but the scope of fact-checking now covers the whole gamut of content: links, posts, memes, and videos.
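
In concrete terms, the propagation step Ananny describes amounts to attaching an existing fact-check to every item that scores above some similarity threshold against the debunked original. The sketch below is hypothetical: the FactCheck and Post structures, the pluggable similarity function, and the 0.8 threshold are illustrative assumptions, not Facebook’s implementation.

```python
# Hypothetical sketch of the propagation step described above: once a
# fact-checker debunks one item, the label is meant to spread to identical
# and similar content automatically. The FactCheck and Post structures,
# the similarity function, and the threshold are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class FactCheck:
    claim: str
    verdict: str      # e.g. "False" or "Missing context"
    source_url: str

@dataclass
class Post:
    post_id: str
    text: str
    labels: list = field(default_factory=list)

def propagate(fact_check: FactCheck,
              debunked_example: Post,
              corpus: list[Post],
              similarity: Callable[[str, str], float],
              threshold: float = 0.8) -> list[Post]:
    """Attach the fact-check to every post similar enough to the debunked one."""
    newly_labeled = []
    for post in corpus:
        if post is debunked_example:
            continue
        if similarity(debunked_example.text, post.text) >= threshold:
            post.labels.append(fact_check)
            newly_labeled.append(post)
    return newly_labeled
```

A text or image similarity measure like the ones sketched earlier would slot into the similarity argument; as the Tow findings suggest, the difficulty lies less in the mechanics than in applying them consistently at Facebook’s scale.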

In the five-day period that we looked at, we found fact-checkers had debunked ten claims related to the President’s diagnosis, fifteen claims related to COVID-19 and vaccinations in general, and sixteen claims related to the election and voting (including claims about former Vice President Joe Biden). “Other”—the catch-all bucket—includes myriad claims, from false earthquake alerts to George Soros being banned from six countries. Figure 2 illustrates the classification of claims our analysis covered. We provide details on all the claims, the searches we ran, and our overall methodology below. Keep reading on CJR.

WEEKLY ANALYSIS

News Corp’s Knewz Pushes New York Post’s Hunter Biden Story
By Pete Brown

Controversial moves by Facebook and Twitter to limit the spread of a disputed New York Post story about Joe Biden’s son Hunter put the Post’s owner, News Corp, at odds with Big Tech.

Twitter users were unable to share the story, while Facebook placed restrictions on linking to it, moves that led to accusations of censorship, particularly among the political right.

But one platform the company did not have to worry about was Knewz, the news aggregator that News Corp itself launched in January 2020 as a “Groundbreaking News Platform Free of Filter Bubbles.”

At launch, News Corp. outlets were keen to position Knewz as a direct competitor to tech platforms like Facebook, Twitter, and Google. The New York Post said Knewz would “compete against Big Tech, which is decried for selling ads against news stories—and for profiting off the work of journalists they don’t pay.” According to The Wall Street Journal, “News Corp. said it developed the site, in part, to give exposure to media outlets that the company felt were often demoted in Google’s search results and Facebook Inc.’s social feed.”

News Corp’s pitch is that Knewz combines “cutting edge, proprietary artificial intelligence with experienced editors” to “curate a selection of headlines that provide a broad perspective on stories of the day.”

Analysis of recent editions of the Knewz editors’ daily Knewzletter strongly suggests that they felt the contentious story of Hunter Biden’s laptop was – and remained – a top priority in the run-up to the election.

Eleven Knewz newsletters have been published since the Post’s first Hunter Biden story on October 14th. Three of these were election editions of the Knewzletter that directed readers to curated sections of the website rather than promoting individual stories. The remaining eight typically followed the same formula: five headline topics, each with four curated links to different outlets’ reporting on the story. (October 29th’s newsletter broke from this template, containing twelve rather than twenty total links – four each to three different stories.)

Of the 152 links promoted in the eight ‘Knewzletters’ under examination (twenty in each of seven newsletters, plus the twelve in the October 29th edition), 35 were about Hunter Biden – almost one in every four.

Tantalizing, tabloid-friendly promises of smoking guns, bombshells, and scandal enabled the editors to keep the flame alive over multiple days. The editors’ own summary on October 19th described the New York Post’s story as a “bombshell report.”

The younger Biden’s name featured in the subject line of six of these eight newsletters:

‘Hunter Biden email release’ (October 14th)
‘Hunter's emails exposed and story suppressed’ (October 15th)
‘Uproar over Hunter Biden’ (October 19th)
‘Hunter's big problems’ (October 21st)
‘Hunter's BIG problems’ (October 22nd)
‘Hunter in the hot seat’ (October 28th)

Headlines pertaining to Hunter Biden led the newsletter on four occasions:

‘Hunter's emails exposed’ (October 15th)
‘Uproar over Hunter Biden’ (October 19th)
‘Hunter's big problems’ (October 21st)
‘Hunter in the hot seat’ (October 28th)

The sources of these stories also call into question News Corp’s claims about promoting news “Free of Filter Bubbles… from Publishers Large & Small, Left & Right, Here, There & Everywhere.” 20 of the 35 stories – just over half – came from just six conservative-leaning outlets: The Daily Caller, Daily Mail, Fox News, National Review, New York Post, and The Sun.

The full breakdown: 

Daily Mail (5)
Fox News (5)
National Review (3)
New York Post (3)
Politico (2)
The Daily Caller (2)
Bloomberg News (1)
BuzzFeed News (1)
CBS News (1)
Daily Beast (1)
New Republic (1)
Newsmax (1)
The Daily Wire (1)
The Sun (1)
The U.S. Sun (1)
The Washington Post (1)
The Washington Times (1)
TheBlaze (1)
Washington Examiner (1)
WJLA Washington (1)
Yahoo News (1)

On Wednesday, October 28th, with just under a week to go until Election Day, the Knewz editors’ summary of the big stories of the day reported, “Senator Ted Cruz ripped into Twitter boss Jack Dorsey, accusing him and his social media platform of improperly censoring reporting from the New York Post that reflected poorly on Joe Biden and his son, Hunter Biden.”

No such allegations of censoring the New York Post’s big scoop could be directed at Knewz.

STORIES YOU MAY HAVE MISSED
  • On our Medium page, Tow researcher Ishaan Jhaveri looks into the rise of the pro-police Thin Blue Line flag at Trump rallies over the last four years. 

  • At The Markup, Adrianne Jeffries tracks the ways major social-media platforms are planning to handle misinformation and other kinds of interference on Election Day.

  • "It has been a profitable time for cable news, a record-breaking year for political books and, generally, a bonanza for the legacy media that live rent-free in the president's head. That may be ending,” writes New York Times media columnist Ben Smith in a consideration of the industry’s post-election future.
Copyright © 2020 Tow Center for Digital Journalism, All rights reserved.

