Marketers see progress in Twitter’s efforts to stamp out hate

While testifying before Congress in September 2018, Twitter CEO Jack Dorsey said he wanted to “increase the health of public conversation” and be more proactive about removing hateful content rather than relying on user reports. Marketers say they are seeing that commitment firsthand, even if the platform still routinely struggles with brand-safety issues.

“There isn’t a single person at Twitter who doesn’t talk about how health is a priority. So the top-down message from Jack [Dorsey] has resonated. They recognize they need to make Twitter a place for civil discourse without abuse or letting it be a tool or weapon for bad actors,” said Joshua Lowcock, global brand safety officer at UM.

UM’s Lowcock said he was particularly impressed by Twitter’s decision to “get off the MAU/DAU treadmill” by making moves that could hurt its user numbers. For example, Twitter has repeatedly committed to eliminating fake and bot accounts from the platform and removing inactive accounts from users’ follower counts, and it recently changed its reported metric to “monetizable” daily active users instead of monthly active users.

“While it might adversely impact user numbers that Wall Street obsesses about, it’s the right step to take if you want to be a responsible platform,” Lowcock said.

Twitter, like its peers, has been investing in content moderation, both by hiring human moderators and by building technical solutions. Unlike Facebook and YouTube, however, Twitter doesn’t tout how many human moderators it has hired; a Twitter spokesperson declined to share the size of the team. Publicly, as in Dorsey’s congressional testimony, the company discusses organizational improvements and technological investments. For example, in June 2018, Twitter acquired the anti-abuse startup Smyte, which identifies unwanted online behavior including fake accounts and hate speech. Technology that can quickly identify harm is especially useful during a global crisis such as the recent shooting in New Zealand.

Joe Barone, GroupM’s managing partner for brand safety in the Americas, said he worked with Twitter in the wake of the New Zealand shooting.

“I can’t tell you if [Twitter’s] AI and machine learning is as good as YouTube, but they deploy a lot of the same types of capabilities to snuff out the content. More generally, they have a real human-curation approach when it comes to products like pre-roll,” Barone said.

Twitter lets brands work with a curated list of more than 200 publishers to buy pre-roll ads or other sponsorships, according to its pitch deck. That’s ideal for more risk-averse brands that may be less willing to tweet their own content and risk being adjacent to hateful content in the feed or having bad actors reply to the ad.

Twitter also allows brands to blacklist accounts that have made harmful or negative comments about the brand in the past, so those accounts won’t be in the targeting group for an ad, Barone said.

Twitter is also testing a feature that would allow comment moderation on organic posts as part of its push to improve conversation in the app. The feature, which Digiday saw in an app demo at Twitter’s New York office in October, was spotted by Jane Manchun Wong in February, prompting a response from Twitter senior product manager Michelle Yasmeen Haq.

That moderation feature has yet to be widely released; Haq tweeted that it may be tested publicly in the coming months. A marketer who works in-house at a technology brand said they would welcome the ability to moderate comments, as they can on Facebook and Instagram.

In fact, this marketer has been hesitant to invest ad dollars in Twitter. They have repeatedly raised concerns about Twitter’s health and safety with the company’s sales reps, in conversations and via email, but for the most part haven’t been impressed by the feedback.

“They sent me some BS propaganda on ‘How Twitter is actually a really positive place’ and it boils down to: OK, great, donut dad had a busy store for a few days. Hate and white supremacy means people die and crazy folks send bombs and plan to kill journalists,” this marketer said.

Unlike with Facebook and YouTube, marketers haven’t been as quick to pull their ad dollars during high-profile incidents on Twitter. One marketer told Adweek they paused their ad spend after their sponsored tweets appeared on Twitter profiles promoting the illegal sale of narcotics.

A recent Digiday survey of 71 media buyers ranked Twitter eighth out of 13 platforms on brand safety.

Marketers at agencies say they try their best to use Twitter without engaging with any hateful content. Abby Eckel, social media strategist at DEG, Linked by Isobar, said her clients don’t shy away from Twitter, but they make sure that their tone on the platform aligns with the brand and that the users they respond to are legitimate.

“We do our due diligence and make sure that we’re not giving the trolls the attention that they think they deserve and we’re not engaging in misinformed conversation. We pay attention to people who are engaging with us, what they engaged in previously,” Eckel said.

But while brands and agencies can manage their own behavior on Twitter, marketers say they would appreciate more effort from Twitter to rid the platform of distasteful content such as white supremacist material.

“Twitter, like all platforms, struggles with making proactive decisions around content moderation outside of what is being reported by users and the community. Everyone looks to someone else to lead, then they are fast followers,” UM’s Lowcock said.

That’s not just a Twitter problem. Every platform has to decide whether to make the editorial call to ban particular content or to accept responsibility for promoting it.

“We need to stop treating platforms as neutral vehicles when they are engineered to amplify the content posted on them and keep users engaged in the content,” Lowcock said.

Barone, who recently met with Twitter’s vp of client solutions Jean-Philippe Maheu, said he believes the company recognizes that larger responsibility.

Brands “don’t want the egregious content to be there at all. It’s more about how the platforms can excise the negative content, as much as we want separation from [negative content and] the brands, it’s more about the social responsibility,” Barone said.

