McAfee’s CTO on AI and the ‘cat-and-mouse’ game with holiday scams
Black Friday and Cyber Monday ushered in holiday shopping, but the jingle-jangle of AI has shoppers wrapped in worry about seasonal scams.
In a recent survey conducted by McAfee, 88% of U.S. consumers think hackers will use AI to “create compelling online scams” during the holidays. Meanwhile, 57% expect scam emails and messages will be more believable, while 31% think it’ll be harder to tell whether messages from retailers or delivery services are real. (The report includes answers from 7,100 adults surveyed this past September in the U.S., Australia, India, U.K., France, Germany and Japan.)
Concerns about AI could also lead some to shop less online; 19% of respondents who expressed worry about AI said they plan to shop less online this year as a result. However, that doesn’t seem to be the case for everyone. According to Adobe Analytics, U.S. shoppers spent a record $12.4 billion on Cyber Monday this year, a 9.6% increase over 2022.
Using AI to craft a well-written email or other correspondence has become popular with fraudsters, explained McAfee CTO Steve Grobman. Holiday shopping also poses risks for digitally active consumers who are buying gifts on websites they might not usually visit, Grobman said, adding that “sometimes a deal that’s a great deal can play at odds with your sense of caution.”
Since the beginning of the year, McAfee has developed a number of tools for detecting and preventing scams, including a way to automatically alert people about dangerous links in messages. The company is also developing AI models that predict whether a text someone receives has the potential to be related to a scam. (For example, if an online shopper is asked to provide their Social Security number while buying a vinyl record.)
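McAfee hasn’t published how these models work, but the idea of scoring a message for out-of-place requests can be sketched with a toy heuristic. The patterns, weights and function names below are invented for illustration; a production system would use trained models rather than keyword rules:

```python
import re

# Hypothetical sketch only: McAfee's actual models are proprietary.
# This toy heuristic flags requests that are out of place in a normal
# retail checkout, like the vinyl-record example above.
SUSPICIOUS_PATTERNS = {
    r"social security|ssn": 0.6,                 # no retailer needs an SSN
    r"gift card": 0.4,                           # common scam payment method
    r"remote (access|into your computer)": 0.5,  # tech-support scam staple
    r"urgent|act now|immediately": 0.2,          # manufactured urgency
}

def scam_risk_score(message: str) -> float:
    """Return a rough 0-1 risk score for a shopping-related message."""
    text = message.lower()
    score = sum(weight for pattern, weight in SUSPICIOUS_PATTERNS.items()
                if re.search(pattern, text))
    return min(score, 1.0)

# Example: the out-of-place SSN request pushes the score up.
msg = "To finish your vinyl record order, please confirm your Social Security number."
print(f"risk: {scam_risk_score(msg):.2f}")
```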
“My goal is to constantly monitor what the bad actors are doing,” Grobman said. “And then build models that are able to detect AI-generated information, which sometimes is possible and sometimes is more difficult.”
Grobman spoke with Digiday about current AI trends and how the tech has evolved since the boom began early this year. While companies have made progress detecting some AI content formats, like photos, audio and video, it’s still hard to detect AI-generated fraudulent text.
Editor’s note: This interview has been edited for brevity and clarity.
How has the world of GenAI fraud evolved this year?
We’re just inundated with this constant set of fake messages or scams. The general public has gotten pretty good at [noticing] the ones that are deployed en masse. But as they get more targeted — whether they’re using fake audio like a distress call from a loved one or something that is very specific about the individual — I think those are the ones consumers need to be much more on guard about. Because it’s very easy for scammers to manipulate emotions so that people act with a sense of urgency.
Would it be easy for McAfee to build its own bot to help people vet whoever they’re messaging with to see if something is authentic or a scam?
That’s the exact type of technology that we’re working on. I think it’s important to call out that it’s not a one-and-done, solve-the-problem [situation]…Part of the challenge is AI can be trained to have an objective. So one of the things scammers are able to do is use whether victims fall for different acts as inputs to train their models to get better. So the scammers can use the technology, try it on very large populations of users, and actually get better over time.
Can new tools for detecting GenAI be used to stop other types of scams? Or are they unique to generative AI?
I think it can be used for a wide range of circumstances. Part of the challenge with fraud is that it often has adjacencies with legitimate interactions. A good example of that is romance scams. We could detect that there’s a significant probability that an interaction is trying to gain confidence in a relationship in order to ultimately exploit a victim. But there are legitimate online connections that are not malicious or fraudulent. And it can be very difficult, so part of it is getting the general public to understand that a lot of the world we’re moving into is not black and white. There’s going to be a lot of gray.
We need to think about it more like when the weather forecaster tells you there’s a 70% chance of rain. That doesn’t mean it’s definitely going to rain tomorrow. So if we say, ‘Hey, this looks really suspicious,’ there could be some scenarios where it’s legitimate. Similarly, if we say there’s a 30% chance this is related to fraud or a scam, don’t read that as you’re home free, go ahead and give them all your information. Because there is that 30% chance. I don’t know if the word will resonate with consumers, but I think of it as non-deterministic. Like, you don’t actually know what the outcome is from the input. You can get some advice on how suspicious you should be, how guarded you should be, and then make a judgment based on your own personal level of risk.
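That weather-forecast framing maps naturally to tiered guidance rather than a binary verdict. A minimal sketch; the thresholds and wording are assumptions for illustration, not anything McAfee has published:

```python
# Illustrative only: thresholds and advice text are invented. The point
# is graded advice from a probabilistic fraud score, not a yes/no answer.
def advice(fraud_probability: float) -> str:
    if fraud_probability >= 0.7:
        return "Likely a scam: don't share information or click links."
    if fraud_probability >= 0.3:
        return "Suspicious: verify the sender through an official channel."
    return "Lower risk, but stay guarded; a low score isn't a guarantee."

for p in (0.8, 0.3, 0.05):
    print(f"{p:.0%} -> {advice(p)}")
```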
What are the advertising implications of all of this? Whether it’s efforts like watermarking AI content or other ways companies can help consumers not fall victim.
One thing advertisers or legitimate companies can do is to remind users to always start at their explicit site. Googling can be dangerous, especially Googling for things like ‘support.’ A very, very common scam is scammers will buy up search terms so that when you’re looking up ‘Amazon support’ for your Kindle, it takes you to somebody that says, ‘In order to get support, I need a gift card for payment.’ Or ‘the problem is actually on your computer. Let me remote into your computer.’ Some of these sound fairly absurd, right? But for a lot of people, they sound professional.
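The “start at the explicit site” advice boils down to checking a link’s hostname against the brand’s real domains before trusting a search result. A minimal sketch; the allowlist here is a made-up example, not an actual product list:

```python
from urllib.parse import urlparse

# Assumed, illustrative allowlist: in practice this would come from a
# maintained database of known-good brand domains.
OFFICIAL_DOMAINS = {"amazon.com", "www.amazon.com"}

def is_official_link(url: str) -> bool:
    """True only if the link's hostname is one of the brand's own domains."""
    host = (urlparse(url).hostname or "").lower()
    return host in OFFICIAL_DOMAINS or host.endswith(".amazon.com")

print(is_official_link("https://www.amazon.com/gp/help"))       # True
print(is_official_link("https://amazon-support.example.com"))   # False: lookalike
```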
Earlier this year, we talked about how open-source AI has a unique set of opportunities and challenges. How is that space evolving, especially as companies look to diversify the large language models (LLMs) they use?
There’s some really great technology from multiple sources. It’s not just one provider that’s going to power large language models. If you look at the work that Meta has done with Llama 2, that’s a very powerful set of language models that are meant to be used for beneficial purposes. And given that they can be tuned and used by legitimate entities for a wide range of purposes, it’s great. But we do see derivatives that have made their way into the underground and are being used to write phishing [scams] or to make malware more lethal with generative code. And it’s not going to stop.
The data sets to train these models are now becoming more and more readily available. And bad actors really don’t care about things like licenses, restrictions, regulations [or] executive orders. If you’re a cyber criminal, you basically do whatever you want with whatever you can get your hands on, which adds to the asymmetry in the way this technology will be used by bad actors and legitimate companies…It really gives the bad actors an upper hand.
Any other ways you think the landscape has changed?
People are starting to recognize the limitations. Large language models and other generative AI are exceptionally good at certain tasks, but they’re also not great at others. As people use LLMs more and more, they’re starting to understand some of those limitations and even dealing with some of the problems that we’ve known about from the beginning…So I think that’s one thing: Just recognizing the technology is not perfect — it’s very powerful, but also has challenges we’ll need to continue to work through.