We are gen Z – and AI is our future. Will that be good or bad?
Our panel responds: the more we know about this technology, the more it becomes a source of both hope and worry. We have views that must be heard
What’s fact, what’s fiction? Will we know?
Sumaiya Motara
An older family member recently showed me a video on Facebook. I pressed play and saw Donald Trump accusing India of violating the ceasefire agreement with Pakistan. If it weren’t so out of character, I would have been fooled too. After cross-referencing the video with news sources, it became clear to me that Trump had been the subject of an AI-generated fake. I explained this, but my family member refused to believe me, insisting that it was real because it looked real. If I hadn’t been there to dissuade them, they would have forwarded it to 30 people.
On another occasion, a video surfaced on my TikTok homepage. It showed male migrants climbing off a boat, vlogging their arrival in the UK. “This dangerous journey, we survived it,” says one. “Now to the five-star Marriott hotel.” This video racked up almost 380,000 views in one month. The 22 videos posted from 9 to 13 June on this account, named migrantvlog, showed these men thanking Labour for “free” buffets, feeling “blessed” after being given £2,000 e-bikes for Deliveroo deliveries and burning the union flag.
Even when a man’s arm didn’t disappear midway through a video or a plate vanish into thin air, I could tell the content was AI-generated from the blurred backgrounds and strange, simulation-like characters. But could the thousands of other people watching? Unfortunately, it seemed not many of them could. Racist and anti-immigration posts dominated the comment section.
I worry about this blurring of fact and fiction, and I see this unchecked capability of AI as incredibly dangerous. The Online Safety Act focuses on state-sponsored disinformation. But what happens when ordinary people spread videos like wildfire, believing them to be true? Last summer’s riots were fuelled by inflammatory AI visuals, with only sources such as Full Fact working to cut through the noise. I fear for less media-literate people who succumb to AI-generated falsehoods, and the heat this adds to the pan.
AI can help tell great stories – but who controls the narrative?
Rukanah Mogra
The first time I dared use AI in my work, it was to help with a match report. I was on a tight deadline, tired, and my opening paragraph wasn’t working. I fed some notes into an AI tool, and surprisingly it suggested a headline and intro that actually clicked. It saved me time and got me unstuck – a relief when the clock was ticking.
But AI isn’t a magic wand. It can clean up clunky sentences and help cut down wordiness, but it can’t chase sources, capture atmosphere or know when a story needs to shift direction. Those instinctive calls are still up to me.
What’s made AI especially useful is that it feels like a judgment-free editor. As a young freelance journalist, I don’t always have access to regular editorial support. Sharing an early draft with a real-life editor can feel exposing, especially when you’re still finding your voice. But ChatGPT doesn’t judge. It lets me experiment, refine awkward phrasing and build confidence before I hit send.
That said, I’m cautious. In journalism it’s easy to lean on tools that promise speed. But if AI starts shaping how stories are told – or worse, which stories are told – we risk losing the creativity, challenge and friction that make reporting meaningful. For now, AI is an assistant. But it’s still up to us to set the direction.
Author’s note: I wrote the initial draft for the above piece myself, drawing on real experiences and my personal views. Then I used ChatGPT to help tighten the flow, suggest clearer phrasing and polish the style. I prompted the AI with requests such as: “Rewrite this in a natural, eloquent Guardian-style voice.” While AI gave me useful suggestions and saved time, the core ideas, voice and structure remain mine.