When 16-year-old Lara Jeetley checks her phone in the morning, she scrolls through the messages her friends have sent her and then, if there’s something on her mind, she opens ChatGPT, asks a question out loud and listens to the answer.
“Sometimes I ask things I was thinking about overnight,” she says. “Just random thoughts. Or if I had an interesting dream I might ask about that.”
Lara uses ChatGPT every day, multiple times a day. Everyone she knows at school does the same. According to the latest poll by Ofcom, four out of five 13- to 17-year-olds in the UK are using generative AI. Early attempts to ban the technology in schools have given way to acceptance (or resignation) that this is an inescapable part of the world that students are growing up in.
Understanding the way teenagers use technology is not just helpful for keeping an eye on what they get up to, it can be a form of divination too. In 2009, a 15-year-old intern at Morgan Stanley caused a stir when he wrote a research report about media, surprising adults by revealing that he never read a paper, didn’t watch much TV and rarely listened to the radio — all habits that are now widespread.
Lara’s preference for ChatGPT over Google (“with Google you have to click on websites and you have cookies and adverts — it’s annoying”) and voice over text could be an indication that one day, we will all access AI this way.
Because they are less enmeshed in existing structures, teenagers tend to be more willing to play around with new technology — finding their own shortcuts and use cases. Playing around is how Lara discovered that generative AI works best when you give it a short prompt then refine your query later. She and her friends were using AI image generators like Ideogram AI, creating a game in which they thought of silly prompts and sent the image back and forth.
It is also how she found that she could push back on generative AI’s mistakes. “When it’s not perfect I get annoyed but it doesn’t stop me from using it,” she says. “I try to make it better.”
Lara has been using ChatGPT since it was released by OpenAI in 2022. She remembers typing a question about her biology homework and seeing the reply appear “like magic”. Now it’s even better. “It used to be repetitive in the way it replied . . . now it speaks informally to me, it jokes with me, it adapts to my tone.”
As generative AI improves, more students are using it to write their essays and complete homework assignments. Lara’s parents are trying to guide her away from this sort of over-reliance (“they tell me it can cause intellectual atrophy”) so she uses it to create checklists before writing or to suggest refinements instead. She also used ChatGPT to create a tailored exam revision timetable and have practice conversations for her Mandarin GCSE.
ChatGPT is her favourite app but in computer science class she has tried out Anthropic’s AI assistant Claude to spot mistakes in her code. For school she has note-taking app Notion AI and she uses Descript to edit the podcast she set up. If she wants to find new AI apps she looks on Futurepedia. If she’s looking for ideas about how to improve prompts she checks GeniePT.
She has less time for the AI apps that promote themselves as companions, such as Replika and Meta AI. A few of Lara’s friends use Snapchat’s ‘My AI’ like this. “They tell it their problems and ask for advice about things that have happened.”
Unmonitored, hyper-personal interactions between AI and teens are something Mhairi Aitken, a senior ethics fellow at The Alan Turing Institute, worries about more than teenagers using AI to help with their schoolwork. Last year, a mother in Florida sued chatbot creator Character.ai, claiming that it contributed to her 14-year-old son’s suicide by exacerbating his depression. The company responded by saying that its rules prohibited the promotion of suicide and that it would add more safety features for young users.
“AI companions are designed to affirm whatever the user’s worldview is,” says Aitken. “In some cases that leads to amplifying harmful views.” She is concerned about how close young people might get to chatbots. “It’s no longer sci-fi that their first romantic relationship might be with AI.”
Lara knows social media can be bad for your mental health so she steers clear of that. She’s a little worried about privacy on AI tools too. And accuracy and energy use. But she can’t see a future in which generative AI isn’t in most people’s lives. “It can already do almost anything,” she says. “By the time I get to university I think it will be part of everything.”