ANNOUNCEMENTS

Improving Social Media: Reducing Gender-Based Violence Online
Deadline: May 27th, 2021


Click the link below to explore opportunities to get involved with ONA21. Many include complimentary registration.

More info
EVENTS

Taking back the narrative: Changing how Asian Americans are represented in the news, on Wikipedia, and beyond
May 26th, 2021 | 1 PM ET


The Asian American Journalists Association (AAJA) and the Wikimedia Foundation are co-hosting a virtual event on the topic of increasing representation of Asian American and Pacific Islander (AAPI) communities in the news, Wikipedia, and across the information landscape.

This past year we saw a rise in acts of violence and xenophobia perpetrated against people of Asian descent, further exacerbating biases and the lack of representation of AAPI voices in the media and across the web. Speakers will discuss how Wikipedia and journalists can act as partners to increase the representation of AAPI voices in the news media and online landscape, address biases, and encourage everyone to get involved in closing these critical gaps.

Register here

Bolster Your Digital Safety: An Anti-Hacking, Anti-Doxing Workshop

May 27th, 2021 | 5:30 PM ET

Everyone with an online presence is at risk of abuse—especially those who identify as women, LGBTQ+, and/or BIPOC. From impersonation and hacking to doxing (the publishing of private info), abusive trolls join forces to intimidate, discredit, and silence. But there are concrete steps you can take to protect yourself and be an ally to others.

This training is designed for the Pulitzer Center’s community of educators and journalists, who teach and/or report on sensitive topics. With your devices in hand, join us for a hands-on workshop with PEN America and the Freedom of the Press Foundation where we’ll teach you how to audit your social media accounts, tighten your privacy settings, and track your personal information online. We’ll also discuss how we can build community and stand in solidarity to fight back against abuse and continue speaking out on the issues that matter most.

Register here
WEEKLY NEWSLETTER

How tech platforms decide what counts as journalism
By Emily Bell, Founding Director of the Tow Center for Digital Journalism

This article first appeared last week on CJR. It has been edited for length and clarity.  
 

In the aftermath of the deadly Capitol insurrection, Facebook, Twitter, and Google acted as they never had before. Twitter flagged Donald Trump’s incendiary lies, removed some posts, then suspended his account; Facebook banned him for inciting violence. Thousands of conspiracy theory accounts disappeared from the internet. The platforms that had delivered and sustained a toxic swarm of misinformation were now abandoning their most mendacious hitmakers.
 

The great deplatforming of January 2021 felt like a turning point that technology companies had long resisted. Last March, Facebook announced a “coronavirus information center” that would place “authoritative information” at the top of news feeds. In May, Twitter put a warning label on a Trump post for the first time, alerting users that it contained “potentially misleading information about voting processes.” Later that month, after police killed George Floyd, Trump made racist comments that Twitter hid behind a barrier; a warning label stated that the post had violated rules against glorifying violence.
 

All of that came after a forty-year period of media deregulation, as I recently told the House of Representatives, that created an environment “optimized for growth and innovation rather than for civic cohesion and inclusion.” But putting a stop to disinformation and extremism—and preventing another attack on a government building—will require more than content removal. Technology companies need to recalibrate how they categorize, promote, and circulate everything under their banner, particularly news. They have to acknowledge their editorial responsibility. After the Capitol riot, there’s no going back to the way things used to be.
 

Between 2016 and 2020, Facebook, Twitter, and Google made dozens of announcements promising to increase the exposure of high-quality news and get rid of harmful misinformation. They claimed to be investing in content moderation and fact-checking; they assured us that they were creating helpful products like the Facebook News Tab. Yet how the platforms actually sort news sources from everything else remains opaque: consider the mystery behind blue-check certification on Twitter, or the absurdly wide scope of the “Media/News” category on Facebook. The platforms’ lack of clarity “comes down to a fundamental failure to understand the core concepts of journalism,” said Gordon Crovitz, a former publisher of the Wall Street Journal and a cofounder of NewsGuard, which applies ratings to news sources based on their credibility.
 

Still, researchers have managed to put together a general picture of how technology companies handle news sources. According to Jennifer Grygiel, an assistant professor of communications at Syracuse University, “We know that there is a taxonomy within these companies, because we have seen them dial up and dial down the exposure of quality news outlets.” Internally, platforms rank journalists and outlets. These scores help develop algorithms for personalized news recommendations and products.
 

Very occasionally, these designations are used to apply labels. One such example came last year, when Facebook and Twitter announced they would flag state media on their platforms. Facebook also announced it would block state media from targeting US residents with advertising, including on Instagram.  In practice, the application has been inconsistent. Even if a page is flagged on Facebook, individual posts continue to float around without a label. And Facebook has refused to identify Voice of America as state media—which posed a problem last year when Trump replaced its staff with loyal propagandists.


Early attempts at labeling have been shaky; how far, then, are social media platforms prepared to go in categorizing other pages that are just as manipulative but less glaring? Grygiel doesn’t like the notion of tech giants certifying journalists, but does feel a need to draw lines and to focus on misinformation-spewing websites that have ties to political funders or partisan think tanks. “We don’t want credentialing for news,” Grygiel told me, “but we can apply tests for what is definitely not news.”


Take the case of Texas Scorecard, which identifies on Facebook as a “Media/News Company.” On election night this past November, Texas Scorecard published one of the most viral stories about ballot counting—an easily debunked article about the “suspicious” movement of large cases into and out of a Detroit voting center. (“The ‘ballot thief’ was my photographer,” Ross Jones, an investigative reporter for WXYZ Detroit, tweeted.) Its inaccuracy was the product not of poor reporting, but political interest; Texas Scorecard is a project of Empower Texans, a right-wing lobbyist group, and the categorization as “Media/News” was self-applied—on Facebook, almost anyone is permitted to call themselves a publisher. Texas Scorecard has effectively disguised itself as a legitimate local news source to its nearly two hundred forty-five thousand followers—almost a hundred thousand more than the highly reputable Texas Tribune.


Over on Google, by contrast, Texas Scorecard did not show up in the “News” tab. Unlike Facebook’s honor system, Google’s search engine deploys an algorithm and a panel of human beings to decide who falls into the “news source” category. But that doesn’t mean Google has solved the disinformation problem: the “news source” label doesn’t consistently reflect veracity; even the Epoch Times, the conspiracy-driven pro-Trump Falun Gong–linked newspaper, meets the standard. 
 

Politically funded local “news” sites like Texas Scorecard became a signature of the 2020 campaign cycle and represent a new model for using the trappings of journalism to wield dark-money influence. At the Tow Center for Digital Journalism, we conducted extensive research into this phenomenon, examining how platforms have struggled with these false purveyors of “journalism” in their labeling and flagging processes. Just last year, Facebook announced that it would prevent sites with “direct, meaningful ties” to political organizations from claiming to be news and using its platform for promotion. Yet Texas Scorecard remains “Media/News.”


In deciding where and how to apply labels, tech companies are, in a sense, defining what journalism is. As Jillian York—the author of a new book, “Silicon Values: The Future of Free Speech Under Surveillance Capitalism”—pointed out to me, this is not a novel concern. “It feels as though we had many intense discussions around the issue of ‘Who is a journalist?’ in around 2010, when we were considering how to think about organizations like WikiLeaks,” she said. Since then, tech companies’ reluctance to get involved in editorial matters has provided us with a working definition of journalism—a confused one that undermines journalism and offers only a weak gesture toward “balance.” Facebook has practiced this technological false equivalence as recently as 2018, when Mother Jones, a lauded print magazine that produces hard-hitting investigative journalism, learned that it was subject to an algorithm change that weighted its site negatively. Meanwhile, the Daily Wire, a right-wing site, was weighted positively.
 

Even with the inconsistencies, scholars haven’t given up on the potential of labeling news on platforms altogether. “We perhaps need new language for some of these digitally native, wildly popular disinformation sites,” Ethan Zuckerman, a media scholar at the University of Massachusetts at Amherst, told me. Zuckerman believes that tech platforms should make use of the news quality assessment work done by organizations like NewsGuard and the Trust Project.
 

Of course, the people who will ultimately be making these designations are tech executives, who tend to espouse both a profound faith in the idea of free speech and an extreme skepticism of journalists. How they settle their approach to labeling matters; the proven harm of failing to distinguish between truth and fiction, or to account for the motivations and funders of those who aim to mislead, requires that platforms be more open with news producers. But unless the platforms want to change (which would include altering their revenue systems), the odds don’t look good.

 

STORIES YOU MAY HAVE MISSED
 
  • Earlier this month, the Institute for Nonprofit News published its yearly impact report highlighting the growing influence of nonprofit newsrooms in preserving and diversifying local news across the country. According to the report, there are currently more than 2,500 nonprofit reporters in newsrooms across the U.S. INN predicts that these newsrooms, over the next ten years, “will produce a significant share of the news consumed by most Americans about our civic life.” INN also points to the potential of nonprofit models to help ease the pain of decimated ad-based revenue models. Nonprofit newsrooms, it argues, are more financially sustainable and more adept at reporting on underserved communities and beats. The Tow Center continues to research the role of nonprofit models in the journalism industry.

 
  • Trusting News, a blog and newsletter that aims to address the “education” gap between journalists and their audiences, featured last week’s Tow newsletter piece in an article on how “newsrooms...can help assess newsroom staff, processes, coverage and trust.” Tow Fellow Letrell Deshan Crittenden’s rubric, on which the article is based, allows media organizations and researchers to rate how staff members, communities, and readers of color are treated. On the problem of how newsroom representation can fail communities, Trusting News writes that “when users...ask questions or make observations about staff diversity, they are too often ignored or given inauthentic or defensive answers.” And on the issue of coverage of these same communities, Trusting News (and Crittenden’s rubric) encourages newsrooms and researchers to source from a wider range of perspectives, engage more regularly with local residents, and eliminate coverage that’s tokenizing or harmful to people of color. In Oakland, one news site has already implemented these types of assessments.

 
Copyright © 2021 Tow Center for Digital Journalism, All rights reserved.


