With Snapchat and Meta’s new tools, generative AI enters the social media space
With Snapchat and Meta both recently debuting new artificial intelligence capabilities, social media’s race to incorporate generative AI is gaining traction.
Snap yesterday released a new chatbot for Snapchat called “My AI,” which is powered by OpenAI’s ChatGPT and generates text-based messages to answer trivia questions, write haikus, come up with recipe ideas and plan trips. But despite all the potential fun, Snap, which is making the feature available to paying users through its Snapchat+ subscription, was also upfront with users in warning that things could still go wrong.
“Please be aware of its many deficiencies and sorry in advance,” read Snap’s blog post. “All conversations with My AI will be stored and may be reviewed to improve the product experience. Please do not share any secrets with My AI and do not rely on it for advice. While My AI is designed to avoid biased, incorrect, harmful, or misleading information, mistakes may occur.”
Although the feature is only for paying subscribers, the number of Snapchat+ users isn’t necessarily small. According to Snap, 2.5 million users now pay for early access to features such as My AI, and they will also help provide feedback as the company builds out its AI capabilities.
Integrating ChatGPT into Snapchat could give users another reason to spend time in the app, and could even give non-users a reason to try it. Snap also worked with OpenAI to train the chatbot to match Snapchat’s tone and personality as an app and to adhere to the platform’s trust and safety guidelines.
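Snap hasn’t published the technical details of that integration, but the general pattern of layering a platform-specific persona and basic guardrails on top of a general-purpose chat model is straightforward to sketch. The example below is illustrative only, assuming OpenAI’s public Python client and chat completions endpoint; the persona prompt and model name are hypothetical and are not Snap’s actual configuration.

```python
# Illustrative sketch only: Snap has not disclosed how My AI is built.
# Assumes OpenAI's public Python client; the persona text and model name
# are hypothetical examples of shaping tone and safety via a system prompt.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Hypothetical system prompt encoding a playful in-app tone and basic guardrails.
PERSONA = (
    "You are a friendly, upbeat in-app assistant. Keep replies short and "
    "casual. Decline requests for harmful, biased or misleading content."
)

def reply(user_message: str) -> str:
    """Return a persona-shaped reply to a single user message."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # hypothetical model choice
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": user_message},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(reply("Write a haiku about snow days."))
```

In practice, the system prompt is where a platform’s voice and trust-and-safety rules would live, while the underlying model stays general purpose.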
Although marketers praised Snap for moving early with a generative AI feature, there don’t yet seem to be any clear use cases for brands to tap into.
“Until they get to some audience, I’m not sure if brands are going to be clamoring to get in there,” said Brian Yamada, chief innovation officer for VMLY&R. “There’s plenty for brands to explore on their own outside [of Snapchat].”
The news comes just days after Meta provided a glimpse of how it is thinking about generative AI. But rather than features for everyday Facebook and Instagram users, Meta’s announcement has more immediate potential for researchers. On Friday last week, the social giant announced its latest large language model, LLaMA, short for Large Language Model Meta AI. It’s also smaller than some comparable models: OpenAI’s GPT-3 has 175 billion parameters, while Meta’s LLaMA family ranges from 7 billion to 65 billion parameters.
In its blog post about LLaMA, the company also acknowledged there’s still more work to be done in researching and addressing the risks of bias, toxic comments and wrong answers. It also said it plans to grant access on a case-by-case basis to academic researchers and others affiliated with governments, civil society organizations and industry labs.
Alex Olesen, vp of vertical strategy and product marketing at Persado, a marketing AI company, said Meta’s new large language models, and the bots built on them, have the potential to help resource-strapped companies. Others pointed out that Meta’s news doesn’t yet help businesses with their own AI efforts.
“It’s critical that businesses ensure that generative AI is trained on trusted, pre-qualified enterprise data before using it for marketing or customer service,” Olesen said via email. “In the past few weeks, there have been numerous reports about generative AI bots serving up content that is wildly off.”
There’s still a question of whether businesses and everyday users want this type of AI functionality. In a Morning Consult survey earlier this month, just 10% of consumers said they found generative AI outputs “very” trustworthy, 42% thought the technology can’t be easily controlled and 44% didn’t think it will be developed responsibly. When asked about specific concerns, 74% said they were “very” or “somewhat” worried about their personal data privacy, while 70% said they were wary of misinformation showing up in AI-generated search results.
Marketers and AI experts say Meta is wise to limit the size of its large language model. Doing so could help Meta mitigate some of the risks that observers warn of when it comes to generative content, thereby avoiding a repeat of some of the mistakes it has made in the past with data privacy and misinformation.
By allowing limited access to its large language model, Meta is making it “a bit more of a controlled substance than a street drug,” according to Steve Susi, director of brand communication at the brand strategy firm Siegel+Gale.
“Imagine a tennis court the size of the universe and the referees are really only at maybe one certain section of it,” Susi said. “And all the rest of that tennis court, that whole real estate, is where new cool weird games are being made that don’t follow tennis rules. There’s no one there to watch, but it’s still an incredibly powerful AI apparatus that could be misused.”