
Execs are ignoring the dangers of ‘confidently incorrect’ AI – and why that’s a massive problem


This story was first published by Digiday sibling WorkLife

Why don’t scientists trust atoms? Because they make everything up. 

When Greg Brockman, president and co-founder of OpenAI, demonstrated the possibilities of GPT-4 – Generative Pre-trained Transformer 4, the fourth-generation autoregressive language model that uses deep learning to produce human-like text – at its launch on Mar. 14, he tasked it with creating a website from a notebook sketch.

Brockman prompted GPT-4, the model on which ChatGPT is built, to select a “really funny joke” to entice would-be viewers to click for the answer. It chose the gag above. Presumably, the irony wasn’t intentional, because the issues of “trust” and “making things up” remain massive, despite the impressive, even entrancing, capabilities of generative artificial intelligence.

Many business leaders are spellbound, said futurist David Shrier, professor of practice (AI and innovation) at Imperial College Business School in London. And it’s easy to understand why: the technology can build websites, invent games, create pioneering drugs, and pass legal exams – all in mere seconds.

Those impressive feats are making it harder for leaders to stay clear-eyed, said Shrier, who has written books on nascent technologies. In the race to embrace ChatGPT, companies and individual users alike are “blindly ignoring the dangers of confidently incorrect AI.” As a result, he warned, significant risks are emerging as companies re-orient themselves around ChatGPT while unaware of – or willfully ignoring – its numerous pitfalls.

