IBM’s global chief design officer on generative AI and creativity

Long before the current generative AI hype cycle began, Billy Seabrook experienced the evolution of technology and design firsthand, first at various agencies and then at major brands like eBay and Citi. Seabrook, who joined IBM in 2017 as global chief design officer for IBM iX, now oversees a 7,000-person consulting team — which includes 1,600 designers — at a pivotal time, as IBM markets its own AI capabilities while the broader design world wrestles with how to adopt new AI tools. (In May, the company announced a new platform called watsonx, which includes a number of new AI tools for companies: a development studio, a data store and a governance toolkit.)

So far this summer, IBM has announced a number of new AI partnerships, including a content supply chain deal with Adobe and new AI integrations for FYI, an app developed by artist will.i.am to help artists with various business-related tasks. IBM is also showcasing its generative AI tools at major sporting events, developing a new AI commentator for this month’s Wimbledon tennis tournament after using the tech in a similar way this spring during the Masters.

In an interview with Digiday, Seabrook spoke about IBM’s partnerships, but also about how he sees generative AI creating new opportunities and new challenges for designers and other roles across the marketing industry.

“All of a sudden, you’re going to have this sort of collapse of creative talent into these hybrid roles,” Seabrook said. “It’s going to be more about your ability to adapt, your curiosity, your stamina. It’s gonna be those types of traits, not necessarily that you’re an expert in Illustrator.”

Note: This interview has been edited for length and clarity.

Whether you’re thinking about your role at IBM or if you were still leading design teams at Citi and eBay, how do you see GenAI changing workflows?

When you think about the rhythm of the creative process and how people work together and collaborate, it takes time for a reason. People need to digest an idea, they need to think about what they want to write, they want to think about what the art direction is. When it happens [at the speed of generative AI], it can lead to burnout. Because when you’re done with a project so quickly, you’re going to be asked to work on the next project immediately.

How do you work at that pace, switch your mindset and jump from project to project to project at that velocity to keep up with what GenAI can do? That is a concern: whether humans can keep up with the scale and the speed that the technology is now allowing. We might have to find a happy medium there… How do you ensure that everyone wants to work at that same velocity and keep those rhythms in place?

How do you make sure you don’t end up with too much sameness based on the training data?

Creative homogeneity is a risk. Just the notion that if everybody’s using the same models, then everything’s going to look the same. Where’s the creativity? And I think that then gets into a conversation around how you mix the models, if you will, the foundation models, the LLMs. How do you create different models for different purposes? And then the IP differentiator will become which models you’re using. So a big brand like Coca-Cola, or even IBM, will have a base foundation model that’s sort of trained on all of the information in the world, that everybody can use.

But then we’ll have our own private cloud with our own language model that’s trained on our specific stuff, like our brand, or the history of our brand, our industry expertise. And it’s that mix that will create some original work, theoretically. And you can always plug in other open-source models as well, and sort of experiment with what this AI cocktail will come out with. The ones that really hone those models and get almost proprietary data could be at a competitive advantage if they get really great results out of it.
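Seabrook’s “AI cocktail” can be pictured as a simple fan-out: the same creative brief is routed to a general base model, a privately tuned brand model, and an open-source model, and the drafts are compared. The following minimal Python sketch is purely illustrative; the model functions are hypothetical stand-ins, not real IBM, watsonx or open-source APIs.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical stand-ins for real model clients. Each takes a brief and
# returns generated text; in practice these would wrap a general
# foundation model, a privately fine-tuned brand model, and an
# open-source model, respectively.
def base_foundation_model(brief: str) -> str:
    return f"[general model] draft for: {brief}"

def brand_tuned_model(brief: str) -> str:
    return f"[brand model] on-brand draft for: {brief}"

def open_source_model(brief: str) -> str:
    return f"[open-source model] experimental draft for: {brief}"

@dataclass
class ModelMix:
    """A named 'cocktail' of models to fan a creative brief out to."""
    models: Dict[str, Callable[[str], str]]

    def generate(self, brief: str) -> Dict[str, str]:
        # Send the same brief to every model in the mix so the
        # outputs can be compared side by side.
        return {name: model(brief) for name, model in self.models.items()}

if __name__ == "__main__":
    mix = ModelMix(models={
        "base": base_foundation_model,
        "brand": brand_tuned_model,
        "open_source": open_source_model,
    })
    for name, draft in mix.generate("holiday campaign tagline").items():
        print(f"{name}: {draft}")
```

The point of the structure, under these assumptions, is that the differentiator lives in which models go into the dictionary, not in the fan-out logic itself, which stays the same as the mix changes.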

What’s your process for adopting generative AI within your department and elsewhere at IBM?

The transition sort of [has] three things to it. There’s learning the skills, understanding the platforms, how they work, what the architecture of a large language model is, our watsonx platform. There’s a little bit of that technical training, then there’s a little bit of the creative training… What are the techniques that you can use to kind of manipulate these new tools? And how and when would you use these tools in your current process to actually get an advantage [and] speed up some of the slow-going stuff that we deal with in the creative process?

Parts of the design process can be automated, and that can actually take a lot of stress out of the system if you apply it in the right ways. There are some advantages where this is time-intensive work that a creative would frankly love to automate. The third area is more the mental health side of it. It’s having a lot of conversations with the broader teams, with sort of an open-door policy, to discuss any hopes and fears that people are feeling through this transition, which shouldn’t be underestimated.

This topic reminds me of the concerns raised this spring when IBM made news by saying it would cut jobs, or pause hiring, for roles AI could do instead.

If you look back at the industry, we used to have webmasters and web designers. That was basically it. Then IA [information architecture] started, and then UX started, and then it just exploded into hundreds of different roles and titles in the creative and digital business. And I’ve seen a collapse happen over the years, too. UI and UX are really becoming one; people just want one designer who can do both. There is this sort of convergence now of all these different skill sets… It’s like the classic T shape of someone’s skill set: there might be a really deep ‘T’, like their true raw talent might be in visual design or something, but GenAI can fill in the rest of their ‘T’ really fast.

All of a sudden, you’re going to have this sort of collapse of creative talent into these hybrid roles… It’s going to be more about your ability to adapt, your curiosity, your stamina. It’s gonna be those types of traits, not necessarily that you’re an expert in Illustrator.
