Gen Z workers speak out on AI ethics, data security, career disruption

This story was first published on Digiday sibling WorkLife

This article is part of WorkLife’s special edition, which examines how the jobs and careers of Generation Z professionals will be reshaped and evolve in the AI-informed era. More from the series →

Gen Z is the newest generation in the workforce. They’re also digital natives and therefore accustomed to constant change. That means they’re bringing their generative AI chops to work, and are excited about how the tech can enhance their jobs rather than replace them.

Some recently took AI classes at college before graduating and joining the workforce, while others are playing around with the tools on their own time and even founding AI-powered startups. But they are savvy to AI’s current pitfalls too.

We asked five Gen Z workers to share their thoughts on AI, including how they use it, what their biggest concerns are, what excites them, how it might influence their careers, and more.

Answers have been edited for clarity and flow.

Maya Bingaman, 25, communications and content officer at social entrepreneurship marketplace MIT Solve

What do you use AI for?

I use AI for lower-lift content, or things that are already fairly templatized within my own arsenal of materials. So, for example, cover letters. If I have to submit a cover letter, and I already have 20 versions, which I have had, and I just want to refresh it, I may just tweak it with a prompt like ‘can you update this mentioning XYZ here, or make it shorter?’ ChatGPT is really good at cutting words and taking stuff that you’ve already created down to what you’re hoping for it to be. I personally feel like it sounds like a robot, but maybe it’s because I’m a writer. I can say pretty confidently that there’s never been a single thing that ChatGPT has produced that I haven’t had to retool and rework myself. It’s just a starting block. 

Do you think AI will lead to less human connection amongst coworkers or supervisors?

No, not really. I think AI is just a tool that we’re using to make our lives a little more efficient and easier. And if anything that should allow for more space for that communication, for that planning, for those in-person meetings and conversations about things that maybe a robot can’t spit out, where they don’t understand the nuances of your organization. So I wouldn’t say that it’s going to take things away. It’s just giving us the time and space to make meaningful connections on the side now that we have a little more time back. 

Do you have any ethical concerns around AI?

From a writing standpoint, as someone working in PR, I’m really used to people taking my work and claiming it as their own with their name on a byline. That’s just the name of the game, and I’ll say it’s ethical because I give them the permission to do that. But the fact that no one has ever paused and said, ‘Well, should we give PR people credit for when they draft commentary or bylines or blogs,’ but now people are stopping and saying should we give robots and AI and ChatGPT and the engineers credit? I would say no, because we haven’t stopped and thought about the humans that have been doing this for decades and decades before ChatGPT and stuff like that even existed. So that’s my take on the ethical side, looking at plagiarism specifically.

Read the full story on WorkLife.
