Could chief diversity officers tackle companies’ growing AI decisions?
This story was first published by Digiday sibling WorkLife
Chief diversity officers are in the hot seat.
After the boom in demand over the last two years for CDOs to place diversity, equity and inclusion at the heart of their companies, the effectiveness of having a single role responsible for moving the needle on DE&I targets has come under fire.
The talent pool has also changed: diversity executives exited as company priorities shifted, as The Wall Street Journal reported this summer. High-profile DE&I executives at companies including Netflix, Disney and Warner Bros. Discovery resigned or were fired over the summer, thousands of diversity-focused workers have been laid off since last year, and some companies have scaled back their racial justice commitments.
As some organizations set their budgets for next year, there's potential for an overlap between their investments in generative artificial intelligence and their DE&I goals, one that could renew the importance of a CDO.
As the new year approaches, questions could include: how executives ensure AI is fair and unbiased, who should be at the table when organizations make tech decisions, which vendors they should support, and how to use the technology in a way that isn't harmful to marginalized groups. Who's more familiar with bias and fairness than a CDO?
What we know so far about how AI can impact marginalized groups
Most companies use some form of AI in their hiring processes. But these processes can still be prone to bias if the wrong tech is used, which can result in certain pools of candidates being overlooked — a focus for any CDO.
Choosing the right tech and overseeing the necessary audits could both be folded into the CDO's remit, especially as legislation enforces this process: New York City's Local Law 144, for example, requires a bias audit of any automated employment decision tool before it is used.
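At their core, the bias audits Local Law 144 calls for compare how often a tool advances candidates from different demographic groups. As a rough, hypothetical illustration only (not the law's full methodology, and with made-up column names and data), a short Python sketch of that kind of calculation might look like this:

```python
# Illustrative sketch of an impact-ratio calculation of the sort used in
# bias audits of automated hiring tools. The column names ("group",
# "selected") and the data are hypothetical; a real Local Law 144 audit
# has further requirements (e.g., intersectional categories, an
# independent auditor).
import pandas as pd

outcomes = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "C", "C"],
    "selected": [1,   0,   1,   0,   0,   1,   0,   1,   1],
})

# Selection rate per group: the share of candidates the tool advanced.
selection_rates = outcomes.groupby("group")["selected"].mean()

# Impact ratio: each group's selection rate divided by the highest rate.
impact_ratios = selection_rates / selection_rates.max()

print(pd.DataFrame({"selection_rate": selection_rates,
                    "impact_ratio": impact_ratios}).round(2))
```

An impact ratio well below 1 for any group would flag the tool for closer scrutiny before an employer relies on it, which is exactly the kind of review a CDO is positioned to oversee.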
Selecting a proper AI vendor is just the start. Then there needs to be oversight of how all employees use the tech, to ensure it’s equally leveraged across age groups and genders.
New research from Charter, a media and services company focused on the future of work, found some gaps in how AI has affected historically marginalized groups. The research, conducted in August and based on a literature review, expert interviews and a 1,173-person survey, noted differences in opinion among respondents.
Charter’s data found that over half of Black respondents were concerned about AI replacing them in their jobs in the next five years, a share 14 percentage points higher than among white respondents. Female respondents (35%) were less likely than male respondents (48%) to be using generative AI tools in their jobs, and individuals aged 18 to 44 were much more likely than colleagues aged 55 and older to have used generative AI in their work to date.
“As I reflect on this, there is a real watchout space around gender and ageism,” said Emily Goligoski, head of research at Charter. “I worry about the intersection of those two things, and what does it mean for those workers’ participation and mastery of generative AI tools?”