How agencies adapt as bots evolve

Illustration of a robot talking to a person.

Social media bots may represent just a sliver of an app’s total users, but it turns out they may be generating more content than we thought.

While media agencies find bot content concerning, some say it won’t become a higher priority until both platforms and advertisers sound the alarm. At the same time, media firms and agencies are employing artificial intelligence and developing broader social strategies to ensure brand safety, as bot content becomes more widespread across social media.

“We simply need to try to keep an eye on it,” said Drew Himmelreich, senior analyst at digital agency Barbarian. “It remains an open question to what degree brands actually want to know what percent of their engagement is authentic…Our clients tend to focus on more standard performance metrics and haven’t expressed an appetite to allocate additional resources toward trying to quantify or contextualize the role of bots or inauthentic activity.”

Research by analytics platform Similarweb recently determined that bots generate somewhere between 20.8% and 29.2% of the content posted to Twitter in the U.S., while accounting for some 5% of the platform’s monetizable daily active users. That means a small number of accounts actually generate a substantial amount of content on the social site, with other studies estimating that bots produce 1.57 times more content than human users.

“I’d say what all that bot-generated content really endangers is the engaging experience advertisers want to be part of,” said David F. Carr, senior insights manager at Similarweb. “If Twitter users sense that too many of the accounts they interact with are robotic rather than genuine — or they get turned off by what they’re reading in the media about bot activity — they’re likely to use Twitter less or engage with a lot more skepticism.”

Similarweb points out that other platforms, including Meta’s Facebook, also deal with bots on their platforms. “The problem certainly is not unique to Twitter,” Carr said.

Using AI for prevention

Put simply, bots are programs used to perform repetitive tasks, which can range from posting spam comments to clicking links. On social platforms, this can mean fake accounts that post frequently, or bots that manipulate information in conversations — both of which are potentially harmful for any associated brand content.

“The bad ones are responsible for those spam comments and messages you’re always seeing on feeds or can even scrape website content, among other things,” said Matt Mudra, director of digital strategy at B2B agency Schermer. “The question is, how can brands and agencies prevent their content from being affected?”

At Barbarian, for example, Himmelreich said analysts use automated alerts and tools to flag unusual social media activity. In this case, the automation serves as an added layer for human reviewers, who are still necessary when looking at large spikes in conversations or other major abnormalities on these apps. Barbarian also uses different measures for certain channels, based on varying platform and account risks.

“Our analysts know to be on the lookout for red flags when they are doing performance reporting, and we have automated alerts in place for our clients’ brands that inform us of unusual social conversation activity,” Himmelreich said.
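Barbarian doesn't disclose how its alerting works, but a minimal version of "flag unusual social conversation activity" can be sketched as a rolling z-score check on daily mention counts. The function name, window and threshold below are illustrative assumptions, not the agency's actual tooling:

```python
from statistics import mean, stdev

def flag_spikes(daily_mentions, window=7, threshold=3.0):
    """Flag days whose mention count deviates sharply from the
    trailing window's average (a simple z-score anomaly check)."""
    alerts = []
    for i in range(window, len(daily_mentions)):
        history = daily_mentions[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and (daily_mentions[i] - mu) / sigma > threshold:
            alerts.append(i)  # index of the anomalous day
    return alerts

counts = [120, 115, 130, 125, 118, 122, 119, 121, 650, 117]
print(flag_spikes(counts))  # [8] -- the 650-mention spike is flagged
```

An alert like this only raises the flag; as Himmelreich describes, a human reviewer still decides whether a spike is an organic moment or coordinated bot activity.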

Brian David Crane, founder of digital marketing fund Spread Great Ideas, added that focusing on preventative measures is key for agencies. Using automation and machine learning as part of the bot management solution is becoming more prevalent, and that includes bot monitoring tools like Bot Sentinel and Botometer. In other words, bots policing bots.

“In the wrong hands, automated bots on platforms like Twitter can manipulate information and create glitches in the social fabric of trends and conversations,” Crane said. “It can be very challenging for brands or agencies to tackle them head-on since bots are easy to code, can be implemented from the shadows and can be hard to track back to the source.”

Developing best practices

Increasingly, agencies and creative firms are incorporating best practices to combat bot problems as part of their brand safety measures. And there are many safeguards that don’t require AI or additional information technology training, some of which continue to evolve as brands more heavily invest in social channels.

Tyler Folkman, chief technology officer for influencer marketing company BEN Group, said that agencies and brands can follow some simple guidelines even as bots get more sophisticated. These include looking for shallow engagement, such as single emojis, looking for accounts with a small following but that follow a large number of accounts, and weeding out accounts with “poor profile pictures.”

“It’s a place to start to help brands be smarter,” Folkman said.
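Folkman's guidelines translate naturally into a simple scoring rule. The sketch below is a hypothetical illustration; the field names and thresholds are assumptions, not BEN Group's actual criteria:

```python
def bot_risk_score(account):
    """Count how many of the rule-of-thumb red flags an account
    trips; higher scores mean more bot-like behavior."""
    score = 0
    # Shallow engagement: comments that are a single emoji or near-empty.
    if account["avg_comment_length"] <= 2:
        score += 1
    # Small following, but the account follows a large number of others.
    if account["followers"] < 100 and account["following"] > 2000:
        score += 1
    # Missing or default profile picture.
    if not account["has_profile_photo"]:
        score += 1
    return score

suspect = {"avg_comment_length": 1, "followers": 12,
           "following": 5400, "has_profile_photo": False}
print(bot_risk_score(suspect))  # 3 -- all three red flags present
```

None of these signals is conclusive on its own; real accounts can trip one or two, which is why Folkman frames them as "a place to start" rather than a verdict.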

Agencies can also use internet protocol (IP) filtering and blocking to stop traffic from addresses associated with spam and bot activity, Mudra added. A related tactic, frequency filtering, limits the number of times a single visitor can view an ad or website.

“For context, any viewing numbers past three times is most likely a bot. Another easy one is blocking sources that may show suspicious behavioral patterns. Remember that bots behave differently than humans would,” Mudra said.
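Combining IP blocking with frequency filtering along the lines Mudra describes might look like the sketch below. The addresses are documentation placeholders and the three-view cap follows his rule of thumb; real ad servers implement this at the infrastructure level:

```python
from collections import Counter

VIEW_CAP = 3  # per the three-views rule of thumb quoted above
views = Counter()
blocklist = {"203.0.113.7"}  # IPs already tied to spam or bot activity

def should_serve(ip):
    """Refuse traffic from known-bad addresses, and stop serving
    once a visitor exceeds the view cap (frequency filtering)."""
    if ip in blocklist:
        return False
    views[ip] += 1
    return views[ip] <= VIEW_CAP

results = [should_serve("198.51.100.9") for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

The cap does double duty: it saves ad spend on repeat impressions and starves bots that hammer the same creative from a single address.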

When it comes to search engine optimization, which remains a major focus in social strategies, Baruch Labunski, CEO of SEO marketing firm Rank Secure, said bad bots can actually steal an agency or brand’s content and harm their reputation if left unchecked. Some of the ways to combat this include simply searching for copies of your content through tools like Copyscape, and regularly getting rid of spam comments and bad links.
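Copyscape doesn't publish its matching method, but the underlying idea of searching for copies of your content can be illustrated with a toy shingle-overlap check. Everything here, names and sample text included, is hypothetical:

```python
def shingles(text, n=5):
    """Break text into overlapping n-word phrases ("shingles")."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def copy_ratio(original, candidate, n=5):
    """Fraction of the original's shingles that reappear verbatim in
    a candidate page; values near 1.0 suggest lifted content."""
    a, b = shingles(original, n), shingles(candidate, n)
    return len(a & b) / len(a) if a else 0.0

post = "our five step guide to building a brand safe social strategy this year"
scraped = "our five step guide to building a brand safe social strategy this year with ads"
print(copy_ratio(post, scraped))  # 1.0 -- every shingle reappears
```

Production plagiarism checkers add hashing and web-scale indexing on top of this idea, but the overlap measure is the same in spirit.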

“There are also good bots that can do this automatically, depending on the platform,” Labunski added. “Block both unknown IP addresses and known bots. Test your site’s speed so you will know if it slows down. A slowdown can indicate you have some bad bots.”
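Labunski's speed check can be automated with a simple baseline comparison. The sketch below assumes you already collect page load times in milliseconds; the 1.5x slowdown factor is an arbitrary illustrative threshold:

```python
import statistics

def slowdown_alert(baseline_ms, recent_ms, factor=1.5):
    """Compare recent page load times against an established baseline.
    A sustained slowdown can hint at bad-bot traffic, though it can
    equally be a hosting or code problem."""
    baseline = statistics.median(baseline_ms)
    recent = statistics.median(recent_ms)
    return recent > baseline * factor

print(slowdown_alert([220, 240, 230], [410, 390, 450]))  # True
```

Medians are used rather than means so that a single slow outlier request doesn't trigger a false alarm.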

But as noted, the bot challenge extends beyond Twitter. Himmelreich said bot issues seem more pronounced on that platform, but that it is “rarely the most important social channel in the marketing mix.”

“Bots seem to be most prominent on Twitter, but inauthentic activity more broadly, like orchestrated campaigns by agitators or abuse of a platform’s algorithms, we also see as risks inherent to social media as a marketing vertical,” Himmelreich said.

Experts believe TikTok, Instagram and Facebook are also tackling their own bot problems, with Mudra adding this will “most likely intensify” in the social space and beyond. Instagram may be particularly vulnerable.

“If you’ve noticed on your social feeds over the past 12 to 24 months, there’s been a large uptick of bots spamming content on Instagram posts,” Mudra said. “I also suspect many blog sites, wikis and forums are seeing higher occurrences of bot traffic and bot activity.”

One point of agreement: bots are sticking around, so now it’s a matter of sorting the good from the bad.
