As a Digiday+ member, you were able to access this article early through the Digiday+ Story Preview email.
Despite the market disruption wrought by the technical feats of China-based DeepSeek’s new R1 large language model, privacy experts warn companies shouldn’t be too quick to dive in head-first.
Market opinion is already split. Some privacy experts, marketers and tech execs are advocating for more testing and better guardrails before companies adopt DeepSeek’s latest AI model. Meanwhile, DeepSeek’s progress has shaken the psyche of Silicon Valley — and its investors.
Following last week’s release of the open-weight LLM, the young China-based AI startup has quickly caught attention for its low cost, fast speed, and high performance. DeepSeek’s own chatbot — a ChatGPT rival — has also risen to become the top free app in Apple’s app store. (DeepSeek also released a new AI image model on Monday called Janus-Pro.)
DeepSeek was founded in 2023 by Liang Wenfeng, who also founded the China-based quantitative hedge fund High-Flyer, reportedly one of DeepSeek’s investors.
R1’s rise comes as Chinese tech companies face more U.S. scrutiny over data privacy and national security issues. While TikTok and CapCut face regulatory purgatory, others — including the gaming and social media giant Tencent — have recently been added to a list of companies with alleged ties to China’s military.
Tech and marketing experts are excited about the prospect of a cheaper alternative to LLMs from OpenAI, Anthropic, Google and Meta. However, privacy professionals warn about potential risks to user privacy, content censorship, and corporate IP theft. Will marketers rally around an AI model from China or hold off amid privacy concerns and regulatory uncertainty?
Key privacy considerations
According to DeepSeek’s own privacy policy, there are a number of terms that experts say could threaten U.S. user privacy. Some examples:
- DeepSeek user data is stored in China
- DeepSeek may share information collected through your use of the service with our advertising or analytics partners
- DeepSeek collects personal information through cookies, web beacons and pixel tags
- Collected data also includes chat history, device model, IP address, keystroke patterns, operating system, payment information, and system language
DeepSeek’s privacy policy allows it to share info with its corporate group, noted Carey Lening, a privacy expert with the Ireland-based consultancy Castlebridge. She also pointed out that DeepSeek’s policy allows it to share data with third parties as part of “corporate transactions,” though the policy doesn’t elaborate further. Furthermore, DeepSeek says its partners may also share data with the startup “to help match you and your actions outside of the service.” That includes:
- Activities on other websites and apps or in stores
- Products or services purchased online or in-person
- Mobile identifiers for ads, hashed email addresses, phone numbers and cookie identifiers
DeepSeek collects and shares data much like its rivals, but the companies’ marketing-related data policies differ. For example, Google uses plenty of data for ad-targeting, but its policy says it doesn’t use Gemini conversations for that purpose. Perplexity’s policy says it may disclose user data to third parties, including business partners and companies that run ads on its platforms or “otherwise assist with the delivery of ads.” OpenAI’s policy, by contrast, says it avoids sharing user content for marketing purposes and that it doesn’t build user profiles for ad-targeting.
DeepSeek did not immediately respond to Digiday’s request for comment.
Split opinions
“We think TikTok is just the thin end of a large wedge,” said Joe Jones, director of research and insights at the International Association of Privacy Professionals. “We’re seeing a lot more hawkishness in terms of data going to countries where there are lower standards or even where countries are perhaps more adversarial.”
Despite concerns, some AI experts think R1 can be a secure and viable enterprise-grade LLM if it’s deployed through client-controlled environments like local installation on a laptop or run through servers hosted in the U.S. and Europe. The bigger risk, some say, is using DeepSeek’s API, chatbot app, or web version.
Concerns haven’t stopped some companies, like Perplexity, from moving forward with adoption. On Monday, the AI search platform made R1 available to help premium users with deep web research and provide R1’s reasoning capabilities. Addressing data concerns, Perplexity CEO and co-founder Aravind Srinivas wrote on X that all DeepSeek usage on Perplexity is “through models hosted in U.S. and European data centers.”
Some think data protection and security concerns have been largely overlooked amid all the hype. Philipp Hacker, a German law and ethics professor at European University Viadrina, noted that U.S. rivals collect plenty of data too, but have stronger privacy policies. In a LinkedIn post, Hacker asked why DeepSeek feels “particularly creepy.”
“We know from the US TikTok case that any Chinese company has to surrender its data to the Chinese government if the latter so wishes,” Hacker wrote. “Integrate DeepSeek in your products, and you enable a whole new level of industry espionage. Beyond what TikTok already facilitates.”
Guardrails and guidelines
Before adopting AI models, experts suggest companies run tests to make sure they don’t accidentally use data in ways that break privacy laws — such as those in Europe and various U.S. state laws.
Companies can improve privacy — and business value — by building it into systems proactively, said Ron De Jesus, field chief privacy officer at Transcend, which helps companies test data compliance when using various AI models and other tech. President Donald Trump’s recent decision to rescind then-president Joe Biden’s executive order for responsible AI policy has created more regulatory uncertainty, reduced guidance for responsible AI development and adoption, and left chief privacy officers anxious about compliance.
“We can’t keep banning companies because they’re based in China,” De Jesus said. “We need to have a better way to scrutinize [companies] and look at their compliance programs.”
Privacy experts are concerned about R1’s compliance with European AI and data laws, and worry it might weaken IP protection, increase content biases and enable Chinese content censorship. New AI efficiencies also have experts worried about AI-generated fraud, deepfakes, misinformation and national security risks.
Marketing execs have also expressed concern. One marketer testing DeepSeek in a personal capacity is Tim Hussain, global svp of product and solution design at Oliver. He observed DeepSeek’s app returning a “Let’s talk about something else” response when asked about actions of the Chinese state, such as its activities in the South China Sea or the Tiananmen Square massacre.
“How can we trust an AI that so blatantly censors itself?” Hussain wrote on LinkedIn. “While the LLM space continues to excite us with innovation and potential, DeepSeek’s example raises serious concerns—especially for businesses considering embedding such models. How do you ensure reliability and integrity when the results are clearly manipulated?”