‘Not a big part of the work’: Meta’s LLM bet has yet to touch its core ads business
Using large language models for ranking is a future bet, not a current reality at Meta.
The company’s CFO, Susan Li, made the admission at the Morgan Stanley Technology, Media and Telecommunications conference in San Francisco on Wednesday (March 4). In an industry where execs routinely overstate how far along their AI transformation actually is, Li’s candor was notable.
Meta runs one of the most sophisticated content and advertising targeting systems ever built. Every time someone opens Instagram or Facebook, algorithms decide in milliseconds what they see. That system, known internally as ranking and recommendation, is the invisible engine behind the company’s $160 billion-plus annual revenue. It is also, according to Li, not yet meaningfully powered by LLMs — the same class of tech behind ChatGPT, Claude and Gemini.
“We are not by and large using LLM architecture to do ranking and recommendations work yet,” Li said at the event. LLMs, she added, are “not a big part of the work in core ranking and recommendations” today.
The hope is that one day they will be, because LLMs don’t just optimize what already works; they can reason about content and context in ways that current systems fundamentally cannot. Today’s ranking engines are built on engagement signals, such as likes, shares and watch time. Those signals require scale, and scale requires time. It’s a feedback loop that works extraordinarily well at Meta’s size, but it has a ceiling: it can only optimize for what users have already responded to, making it inherently backward-looking and blind to content or context it hasn’t encountered before.
LLMs could bypass that loop entirely, reasoning in real time about whether a piece of content is likely to interest a specific user based on what the system already knows about them, without needing engagement history to learn from first. That’s the capability Meta wants to bring to its core products, and the reason doing so could make today’s already formidable ads business look primitive by comparison.
“That is something that is a little bit of a longer-term research effort,” Li said. “We don’t know exactly what that will look like, but we think it’s worth investing in.”
Until then, LLMs are being used inside Meta in two relatively limited ways. The first is content understanding: running posts, videos and other material through language models to better predict whether a piece of content will interest a given user, without waiting for engagement signals to accumulate. The second is on Threads, Meta’s text-based X rival, which Li described as “a little bit further ahead” in applying LLM-based approaches to ranking — a natural fit given that language models are built for text.
“The traditional recommendation engine relies a lot on engagement signals, and then you need a lot of engagement to happen to get the engagement signals,” said Li. “LLMs can reason in real time about whether this is a piece of content that would likely be interesting to you.”
The infrastructure problem
The real reason for the delay is compute. Building and deploying LLMs at the scale Meta operates — billions of users across Facebook, Instagram, WhatsApp and Threads — requires infrastructure that doesn’t yet exist in sufficient quantity.
Li acknowledged that what the company thought would be adequate capacity even 24 months ago has proven badly wrong. Data centers have long lead times, and much of what Meta is building now won’t come online until 2027 or later. In the meantime, the company has resorted to creative workarounds. Li mentioned that Meta has deployed industrial-grade tents — rated to last 25 years and withstand tornadoes — to house servers and get capacity online faster.
The harder problem, Li suggested, is forecasting what comes next. Training a model is a one-time cost. Running it continuously — in real time, at the scale of billions of daily users — is an ongoing and far less predictable one.
“I worry almost that we will underestimate it,” Li said.
When Meta integrates something new into its family of apps, it instantly reaches a user base that most tech companies will never approach. Infrastructure timelines don’t bend to that kind of demand.
What the current system can do, and why that matters
None of this is to say Meta’s existing ranking system is standing still. If anything, Li’s account of the current state of play was a reminder of how much headroom remains in the pre-LLM era.
Meta tracks advertising performance through an internal metric called irev, and every six months its teams arrive with a new list of experiments, each one incrementally pushing that number higher and compounding on the gains that came before. Li called it “one of the maybe modern wonders of the world.” In the fourth quarter alone, product ranking improvements on Facebook drove a 7% lift in organic content views, the highest revenue-impact launch the company has had in two years. Each gain makes ads perform better, which lowers costs for advertisers, which brings more budget onto the platform, which funds the next cycle of improvements.
It is this flywheel — not LLMs — that currently generates Meta’s profits. And it is the proceeds of that flywheel that are funding the research, the talent and the data centers that might one day replace it.