Publishers that want to experiment with generative AI to build products and features, such as chatbots and data-analysis tools, have to evaluate which large language models best fit the bill.
One of the biggest factors in these evaluations, it turns out, is how easy it is to integrate an LLM into a company's tech systems, such as its product suites and content management platforms, according to conversations with three publishing execs. That often means choosing LLMs owned by companies with which they already have enterprise technology or content licensing agreements.
For example, a spokesperson at one publisher, who asked to remain anonymous, told Digiday their company isn't experimenting with an array of different LLMs and is primarily using OpenAI's models. The company has a content licensing deal with OpenAI that followed a successful project to build a chatbot using OpenAI's GPT model, and it has continued to use GPT for other needs, like productivity tools.
Another publishing executive, who spoke on the condition of anonymity, said they are using OpenAI’s GPT model for internal use instead of other LLMs, simply because they have a content deal with OpenAI.
And a third publishing exec, who also requested anonymity, said their company is using LLMs owned by companies like Microsoft and Google because it is already paying to use their enterprise software, including Microsoft Office 365 and Google Workspace.
This makes it "very easy for us to integrate it into our full development ecosystem," the third publishing exec said. As a customer of the Microsoft Office 365 product suite, the company can "integrate [Copilot] natively into the tools that we already use, like Outlook, Excel, PowerPoint [and] Word," they added.
Nate Landau, chief product and technology officer at TheSkimm, said using technology from companies they already have agreements with means they don't have to build their own solutions.
“Some of our partners, like [data storage company] Snowflake, offer their own AI integrations and we prioritize those over building solutions from scratch where applicable,” he said.
And because TheSkimm doesn’t have an exclusivity deal with an AI tech company, they have the flexibility to work with different models, Landau added.
“Given the rapidly evolving nature of this space and the varying strengths and weaknesses of different models, we haven’t centralized on a single provider,” he said. “For each use case, we evaluate multiple providers to ensure the best fit.”
Other factors media companies take into account when evaluating which LLMs to use include cost, the performance and quality of the models' outputs, and privacy and security considerations.
“It is extremely important to us that the models are not trained on our users’ information or inputs and that any interactions with AI products are safe and protected,” said Vadim Supitskiy, Forbes’ chief digital and information officer.
Testing performance
LLMs have to work well for publishers’ use cases in order for all of this to be worth their time and money.
When choosing which LLM to use for different processes, TheSkimm runs side-by-side tests of the models and compares their outputs “to ensure they align with our brand voice and editorial standards,” Landau said. That’s because the major differences between these models are their “voice, tone and accuracy across various use cases,” he added.
For example, Claude has a “softer, more natural tone” and is “particularly accurate,” which is why Landau prefers it for tasks where response voice and tone are especially important. But for tasks requiring less creativity — such as working with TheSkimm’s datasets — he prefers models like Meta’s Llama “for their reliability in providing accurate, actionable responses and avoiding hallucinations.”
However, as LLMs continue to rapidly evolve, these distinctions are “becoming increasingly subtle,” Landau said.
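In practice, the kind of side-by-side evaluation Landau describes can be as simple as running one prompt through several providers and collecting the outputs for editorial review. A minimal sketch in Python, with stub functions standing in for real model API calls (the provider names and responses are illustrative assumptions, not TheSkimm's actual pipeline):

```python
# Minimal side-by-side LLM comparison harness.
# Stub functions stand in for real provider API calls.

def call_claude(prompt: str) -> str:
    return f"[claude] response to: {prompt}"

def call_gpt(prompt: str) -> str:
    return f"[gpt] response to: {prompt}"

def call_llama(prompt: str) -> str:
    return f"[llama] response to: {prompt}"

PROVIDERS = {"claude": call_claude, "gpt": call_gpt, "llama": call_llama}

def compare(prompt: str) -> dict[str, str]:
    """Run the same prompt through every provider and collect the
    outputs so editors can review them side by side for voice,
    tone and accuracy."""
    return {name: fn(prompt) for name, fn in PROVIDERS.items()}

results = compare("Summarize today's top story in our brand voice.")
for name, text in results.items():
    print(f"{name}: {text}")
```

In a real pipeline, the stubs would be replaced by calls to each vendor's SDK, and the collected outputs would be scored against the brand's editorial standards.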
TheSkimm uses LLMs for three things in particular: data analysis, audience acquisition and experimentation, according to Landau. These models help TheSkimm surface key trends and cohorts from its data, and help create audience segments to target messaging and products, such as for its shopping and commerce businesses and for sponsored content.
The third publishing exec said their company uses Google Gemini and its related services for product development, such as building chatbots, and uses Microsoft Copilot for internal efficiency tools. The company is also evaluating Gemini's AI features integrated into Google Workspace (which includes products like Gmail, Docs and Meet).
Cost considerations
Mark Howard, chief operating officer at Time, said he considers the financial incentives to work with one LLM over another (Time has a content licensing deal with OpenAI), as well as the opportunity to have a seat at the table in future product development and help shape new products that can benefit publishers.
“Some of [these deals] are more about being part of new marketplaces that they’re developing. And to me, the part that’s the most interesting — companies looking to really build something that doesn’t currently exist. As they’re building those, I would rather be a part of that,” Howard said. Time also has deals with Perplexity and ProRata, to participate in the former’s ad revenue share program and the latter’s per-use compensation structure.
TheSkimm is currently using a combination of open-source models (such as Meta's Llama and models from France-based Mistral) and commercial products, including Anthropic's Claude and OpenAI's GPT models.
The company weighs the “pros and cons” of paying fees to access private LLMs’ APIs and the potentially more cost-efficient option of hosting their own open-source models, Landau said.
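The trade-off Landau weighs comes down to break-even arithmetic: per-token API fees scale with usage, while self-hosting an open-source model is closer to a fixed monthly cost. A back-of-the-envelope sketch in Python (all prices and volumes are illustrative assumptions, not actual vendor rates):

```python
# Back-of-the-envelope break-even between pay-per-token API access
# and a fixed-cost self-hosted open-source model. All figures are
# illustrative assumptions, not real vendor pricing.

API_RATE_PER_1K_TOKENS = 0.002   # assumed $ per 1,000 tokens via a hosted API
HOSTING_COST_PER_MONTH = 2000.0  # assumed $ per month to self-host a model

def api_cost(tokens_per_month: float) -> float:
    """Monthly cost of a metered API at the assumed rate."""
    return tokens_per_month / 1000 * API_RATE_PER_1K_TOKENS

def break_even_tokens() -> float:
    """Monthly token volume at which self-hosting starts to win."""
    return HOSTING_COST_PER_MONTH / API_RATE_PER_1K_TOKENS * 1000

volume = 500_000_000  # assumed monthly usage: 500M tokens
print(f"API cost at {volume:,} tokens/month: ${api_cost(volume):,.2f}")
print(f"Break-even volume: {break_even_tokens():,.0f} tokens/month")
```

Under these assumed numbers, a publisher processing well under a billion tokens a month would pay less via the metered API; self-hosting only pays off past the break-even volume, before accounting for engineering overhead.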