Estonian youths increasingly see LLMs as friends

A study conducted last year reveals a significant shift in the digital habits of young people in Estonia: artificial intelligence is increasingly being used not just as a learning tool, but also as a conversational partner.
The recent EU Kids Online study shows that Estonian children's and adolescents' digital behavior has changed significantly in recent years. Compared to the previous survey conducted in 2018, internet use has increased, especially for schoolwork, and digital environments have become an integral part of young people's daily lives. With this increase in use, exposure to online risks has also grown. However, this does not automatically mean that young people's overall well-being has declined.
University of Tartu sociology professor and lead author of the study Veronika Kalmus said that a considerable amount of time has passed since the last survey. In addition to internet use, researchers also examined, for the first time, the use of artificial intelligence and young people's mental well-being.
One of the clearest changes is the rise in internet use specifically for learning. According to Kalmus, there's a logical explanation for this. "The COVID-19 period significantly boosted the importance of online learning and that's where the growth stems from. On top of that, AI has emerged, which is also used in online environments, often specifically to complete school assignments," she said.
The increase in internet use isn't limited to schoolwork; young people are also spending more time online overall, increasingly through smartphones. According to the study, this rise in screen time is accompanied by more frequent encounters with online risks — for instance, exposure to harmful content. There are also slightly more signs of excessive internet use than before.
Kalmus said this development is to be expected. "The more the internet is used, the more likely one is to encounter risks. It's inevitable," she noted. One of the more serious online risks is cyberbullying, which often leads to emotional harm. In this area, the study did not show significant changes.
"Cyberbullying remains at the same level as in the previous survey. That was somewhat surprising, considering how much work has been done in Estonia on this issue — campaigns, anti-bullying programs in schools and so on," the professor said. This suggests that while awareness may have increased, young people's behavioral patterns have not changed significantly in this regard.
The EU Kids Online 2025 study also addressed the use of artificial intelligence. It is the first study in Estonia to examine the topic in such depth. The findings show that AI use is no longer a fringe phenomenon among young people. Among 15–16-year-olds, 83 percent have already tried AI tools, and even younger children aged 9–11 show surprisingly high levels of engagement.
So far, AI in Estonia has been mostly discussed in the context of schoolwork — whether and how it should be used in completing assignments. However, the new study reveals that young people's relationship with AI is far more diverse and complex. One of the most striking findings is the emergence of so-called AI companions — chatbots that young people interact with as friends, confidants or even romantic partners.
University of Tartu media studies professor and study co-author Andra Siibak said this was one of the biggest surprises in the report. "Normally we see international tech trends reach Estonia with a three- to four-year delay. This time, it turns out that these chatbot companions are already clearly present among our youth," she said.
In qualitative interviews, young people described instances where someone had created an AI-based fictional companion, such as a boyfriend or girlfriend, with whom they chatted daily. According to Siibak, this may not be problematic for everyone, but for some young people, the line between the real and virtual worlds may start to blur.
"For some, it's simply a way to experiment — to try out different forms of communication and roles. But for a young person already struggling with mental health issues or social isolation, it can lead to a situation where interaction with AI starts to replace real relationships," she explained.
International examples from Asia show that in some social groups, AI-based romantic partners have already become normalized. Estonia hasn't reached that point yet, but the study suggests it's time for a public conversation before these issues become more widespread.
Internationally, AI is also increasingly being used to create sexualized or otherwise harmful visual content, including material that depicts minors. According to Siibak, this kind of practice has not yet become widespread among Estonian youth.
"In neither the quantitative survey nor the interviews did we find many examples of AI being used to create sexualized images. Internationally, however, it's already a very serious and rapidly growing problem," she noted.
That's precisely why she considers this moment especially important. Since the issue hasn't yet taken root widely, there's still time to act proactively rather than reactively. "Now is the critical moment to start regulating this. By the time such content becomes commonplace, it will already be too late," she emphasized.
In Siibak's view, it should be clearly established that the creation, distribution and storage of AI-generated sexualized imagery, especially involving children, must be unequivocally prohibited and punishable under criminal law. This, she said, would send a clear message to society about which practices are unacceptable, regardless of whether the images are real photos or AI-generated.
"We can't rely on young people to always recognize boundaries or foresee the long-term consequences of such content. That's exactly why clear and understandable regulations are needed," she said.
She added that international experience shows that if regulation and public discourse lag behind, the consequences can be severe — both for the victims and for the young people who create or share such content, often without understanding its legal and moral implications.
Girls likelier to use AI than boys
When looking more broadly at AI use, the study found that girls tend to use artificial intelligence more frequently. According to Andra Siibak, there is no definitive explanation for this yet, but possible reasons may be linked to how AI is being used.
"AI, in a sense, requires interaction. If we look more broadly at technologies that involve communication, like social media or messaging apps, girls are more active users there as well," she said.
Use related to schoolwork may also play a role. "Girls are often more conscientious and try harder to keep up with school tasks. For them, AI may function as a kind of helper, not just a tool to save time," Siibak added. At the same time, she emphasized that this interpretation reflects her personal view.
Study aid or a convenient way out
The question of whether artificial intelligence supports learning or simply does the work for students cannot be answered definitively based on the study. In practice, both uses seem to coexist. In-depth interviews conducted as part of the research revealed that some young people use AI as a sort of substitute teacher — for instance, asking it to explain topics they didn't understand in class or to catch up on material they missed.
At the same time, the study makes no secret of the other side of the coin: AI is also used to quickly dispatch homework tasks that students see as tedious or pointless. "We heard examples from absolutely every subject, from math and science to essays and creative assignments," Siibak noted.
One area of concern that emerged from the study, according to Siibak, is that young people often lack the skills to use AI in ways that genuinely support thinking and learning. "Young people themselves say they primarily use AI to save time and for convenience, but their understanding of how AI could enhance their analytical skills and knowledge development is still quite limited," she said.
Young people's awareness of how AI works and its potential risks also tends to be limited. The most commonly reported problem was receiving incorrect, so-called hallucinated answers, which young people only realized were false after checking other sources.
To some extent, young people were also aware of privacy risks, but these were not considered especially significant. There was far less awareness of broader issues such as environmental impact, bias, authorship or ethical considerations. "Their understanding was largely limited to what has been covered in school so far — mainly things like how to phrase prompts and how to check the accuracy of answers," Siibak explained.
Confusion in schools
The study also reveals a contradictory picture when it comes to schools. Teachers' attitudes toward and knowledge of artificial intelligence vary widely and internal school policies are often lacking. "Each teacher seems to decide on the spot when AI use is acceptable and when it's not. This creates confusion among students," said Andra Siibak.
According to Siibak, a recent analysis showed that out of more than 700 schools in Estonia, only 15 had any mention of AI use in their internal rules. While she does not consider nationwide standardized rules to be necessary, she emphasizes the importance of establishing clear school-level agreements developed in collaboration with students.
From the perspective of children's rights, the study highlights the limited involvement of young people in AI-related decision-making. Many students feel that adults are making these decisions without consulting them. "When young people's voices are ignored, it leads to resentment in the long run," Siibak said.
--
Editor: Marcus Turovski