Gregor Kulla: AI Leap into the unknown

The AI leap is somewhat flawed, as it throws fuel onto the unresolved problems of an education system already dragging its feet. The goal seems to be to boast, "we were first," without analyzing whether AI is needed in education at all, writes Gregor Kulla in a commentary originally published in Õpetajate Leht.
I recently reflected on my life experience as a so-called Generation Z member in a text for the Trickster magazine, written to accompany artist Madlen Hirtentreu's exhibition. Where and how do I live? That is where the essay's title, "To Live Online," came from. Like many other young people, I spend a great deal of time behind a screen: we grew up with it. I was born at the turn of the millennium and my coming of age was marked by the rapid development of technology. Today I couldn't imagine myself without Tumblr, Orkut, Neti, Rate, Facebook, YouTube, Blogger or Vine. Some of them don't even exist anymore.
Without them, school bullying probably would have hit me harder and I might still feel ashamed of my thoughts and appearance. Those who know, know what it's like to grow up different in a small town. The internet was a huge support for me. And it still is.
Forcing AI into education
It's often the case that when you grow up, you start to see your caretakers in a different light. The same has happened with the internet, under whose shadow I've long hidden and which I've come to know quite closely — how it behaves and how it works.
Around 2010, it was still friendly, but not so much anymore. There are so many traps, so much anger and confusion. Wherever I click, someone wants something from me. Some are after my contact details, some want me to agree to miles-long terms and conditions that aren't meant to be read or to download countless apps. Often there isn't even much room left for other people, because everything is crowded with bots begging me to ask them something. And the answers are almost always as barren as the forest outside Põlva.
I remember how annoyed I was in high school whenever I had to do something in Moodle or read a textbook on a screen. I didn't understand the point. A book is so much easier to handle and it's more comfortable to write on paper. Not to mention Miksike — which I hated in middle school because it felt like I was competing against a computer. Creativity was missing.
In my final years of high school, we had to present our internship experience as a website, which already then seemed like a waste. Who would ever visit that site? It didn't feel practical and no one explained why it might be useful, what it could teach me. With today's push for artificial intelligence in education, I wonder whether it's really any different from what I experienced back then.
Browsing through articles, it seems not much has changed. For instance, Piret Oppi and Mikk Granström wrote on ERR's portal that a little over half of the teachers surveyed have already used AI tools, but they haven't integrated them much into teaching. According to researchers, the issue isn't a lack of skills but rather readiness and perceived usefulness. Just like me as a student, teachers also want to understand why.
Experience shows that AI tends to hold back teaching: a robot has no imagination. Take article writing, for example. I can demand extremely thorough research from AI, but in the end I still have to go through it all myself — make sense of it, create connections and arrive at some kind of opinion or idea. And for that, imagination is needed, something a chatbot doesn't have.
The process often ends up taking longer, because when I read the material myself, I simultaneously develop a perspective and internalize the knowledge. Machine-like language and connections that have nothing to do with me only reinforce the feeling of how much I don't know — which in turn affects the balance between relying on my own experience and knowledge versus the information produced by AI. And that information is often not flawless either.
Connections and memories not formed
A student in 11th grade at Pelgulinna High School was asked what they use AI for. "Sometimes for homework, sometimes for tests. [AI] does the tasks quickly, but over time you lose the sense of writing, the sense of language."
That's exactly my point — it gets the work done easily and comfortably, but unlike earlier forms of cheating, it doesn't involve effort. When making a cheat sheet, you had to process the material, condense it, write it down, invent abbreviations so you could fit as much information as possible into the smallest space. Asking AI for an answer rarely involves that kind of processing — who would bother? AI generates, predicts, searches, but doesn't study or verify. And if you don't think and don't create connections, nothing sticks.
A study conducted this year at the Massachusetts Institute of Technology (MIT) had different groups write analytical essays — one with AI assistance, the other without. It turned out that in the early stage, 83 percent of the so-called AI group reported difficulty recalling citations and not a single person produced a correct citation. In a later test, when the AI support was removed, 78 percent of that group couldn't cite anything at all (11 percent gave a correct citation). In comparison, in the so-called brain group, the figures were 11 percent and 78 percent. That's a significant difference.
After several rounds of writing, it also became clear that the more they had to write, the more the AI group leaned on AI. By the third and fourth attempts, they had started copy-pasting. And these participants were adults.
The researcher who led the study published it before peer review, fearing that some politician might push AI into preschools and that young people, whose brains are still developing, might not develop much at all: "The developing brain is the most at risk." That fear was also voiced on ERR by neuroscientist Jaan Aru (Õunapuu, J., Sept. 18, 2024). One of the greatest dangers of using AI, he said, is that students may lose their ability to think and reason. The MIT study was one of the first of its kind. More are needed to determine how the use of chatbots affects learning and brain development. For now, the signs aren't good.
Good for searching, bad for researching
It's clear that AI use has to be learned, and the only way to do that is by actually using it. The question is how and where to do so. It doesn't fit into our current performance-based education system, because performance is exactly where AI excels. It always performs, but it does so by predicting.
It's also good for searching, but not for researching. Facts need to be verified: AI makes mistakes and it's a "yes-man" that agrees with the user, which makes it unsuitable as a partner for bouncing around ideas. Nor does it understand how to answer "I don't know" or "I can't find it"; instead, it always produces some kind of response, the content of which often varies.
What's more, no matter how much we try to adapt it to our education system, both teachers and students will always have the option of using so-called unadapted AI. So it's a catch-22 situation. For teachers, it's a challenge too, because on top of their own work they now have to do the job of the Ministry of Education and Research: AI turns performance-based learning upside down.
"Leap" is a fitting description, because it certainly isn't a smooth transition. But a sensible leap requires at least some vision of where we're jumping to. That means solid, up-to-date digital knowledge, which the current curriculum doesn't provide and which the ministry also lacks. The curriculum didn't provide it back when I was in school either. The AI training sessions held for teachers in August, whose content still remains a secret, will not patch up more than ten years of the ministry's inaction.
Landing zone resembling a quagmire
A good leap also requires a solid starting point. Right now, the ground looks more like a swamp — mostly in the form of social media. The internet is no longer very friendly: it has been cleverly retooled to influence users for profit, benefiting businesses and sometimes politicians. There's a fitting term for this: the political economy of technology. If the internet once seemed self-running, now it feels like strings are being pulled: not just young people but also those in power have realized how much weight it carries.
Clever people have used technology to their advantage and as a result the internet has become not only unsafe but has also reshaped our offline world, including politics. Because with it, opinions can be shaped, attitudes directed — and of course, money made.
When I think back to the platforms I mentioned at the beginning, they really were quite pleasant once. I miss them. I could see what my friends were doing, what they were talking about, look for my own communities and so on. The feed was chronological, not algorithmic, and there weren't nearly as many ads. Or maybe I just didn't notice them. Today I can't even imagine an ad-free internet.
It's comparable to the bus stop at Tallinn's bus station, where the buildings are covered with Prisma and Lidl logos, and in worse times, the slogans of the Center Party and Reform Party. The difference is that online, each person can be shown a different ad — the one most likely to influence them.
Say you've just been talking with a friend or colleague about how you can't be bothered to rake leaves anymore this fall and it's time to buy a leaf blower — even if everyone will hate you for it. The moment your conversation partner walks out the door, you open Facebook and bam! — a Makita leaf blower at Stokker is only €200! Are they eavesdropping on me?
They probably aren't listening in, but they — meaning the big corporations — know you well enough to predict what's on your mind, based on the massive amounts of data they've gathered and transmitted almost in real time. Think about all the accounts you've created online, the terms and conditions you've never read but accepted; think about your Google searches, the videos you've watched, the social media scrolling, the cookies you've accepted... Every single step is traceable.
The data we generate with our every move is like gold for advertisers and companies to find their buyers. But it's also valuable to banks, leasing companies and real estate firms, so they can know who to give a loan to, what price offer to make and so forth. In recent years, alongside financial profiling and risk assessments, a new trend has emerged: social media profiling, offered as a service. This means the automatic collection and analysis of data about users from social networks and related services, in order to draw conclusions about their traits, interests and behavior patterns.
No privacy
Anyone with a newer car should beware: it's not just a shiny ride, it's also a data shark. Modern cars don't just know what you say, what your voice sounds like, where you drive, what's around you or what you look like — some even know your genetic information. Based on the data, an algorithm (AI) can also predict your abilities, personality and intelligence. After all, cars connect to your phone.
Nissan's privacy policy even stated outright that they are allowed to collect information about a user's sex life and sexual orientation. With Subaru, you agree to their terms the moment you sit in the car. No box check required. If your friend owns a Subaru, the company may get a detailed profile of you just from you riding along in the passenger seat.
For those without a smart car, there's a good chance you've picked up a smart washing machine, refrigerator, oven tray, robot vacuum, Google Home or Alexa, smart TV, printer, smart lamp or smartwatch. The vast majority of new technology collects data — ostensibly to improve device algorithms, but also to target advertising.
This information is also sold to data brokers, who compile so-called deep data profiles of users. These profiles stretch from online behavior to health information, personal traits and lifestyle choices. They even know about your pets. It's already been confirmed that the business model of major social media platforms, like Facebook (Meta), outright depends on extensive data collection and targeted advertising. And it's no coincidence that the likes of Elon Musk (X) and Mark Zuckerberg (Meta) are among the richest people in the world. Musk, in fact, is number one.
How strong is data protection?
Now, if we think about what kind of data students and teachers will start entering into the future AI application of a major U.S. corporation, the question arises: how strong is our country's data protection and sensitivity really? Especially if no agreement is reached with OpenAI on model separation or on ensuring that data from our national AI system is not used to train OpenAI's public model.
The chancellor of justice's investigation, which revealed that over a year and a half government agencies made tens of thousands of queries — more than 30,000 by the Police and Border Guard Board and nearly 2,000 by the Financial Intelligence Unit — without any legal basis, suggests that the protection isn't very strong.
I live in Vienna, which means I'm loosely familiar with the politics of the German-speaking world. There, it's been discussed how X and TikTok amplified content from the far-right party AfD, even to users who hadn't shown any interest in right-wing material.
It was found that up to 78 percent of recommended political content on TikTok and 64 percent on X favored the AfD, helping the party insidiously reach young people. That's how the extremist AfD became Germany's second most popular party. Those who remember will recall how Facebook once heavily influenced the Brexit referendum by targeting its users with right-wing content.
This happens because extremist content is profitable for certain companies, especially social media platforms. Governments and police are also clearly interested in data, as a way of spying on citizens. Major platforms prioritize content that grabs attention and keeps you online longer. That content is often far-right, misogynistic and full of hate. It is particularly aimed at (young) male users: they are the target group. Just like you were the target group for that leaf blower ad not long ago.
A Media Matters report from this year showed that last year, nine out of the ten most popular websites in the U.S. leaned right, with more than 197 million followers across various channels; right-wing videos made up 65 percent of all YouTube views, totaling 65 billion views. By now, we all know about boys being radicalized online. This fact has been acknowledged and it undermines social cohesion, democracy and human rights. Yet authorities have done nothing to stop it. Data continues to be collected and money continues to be made. Always at the expense of someone — or something. That is the ground from which we are leaping.
No up-to-date knowledge
At this point, I'll repeat myself: the problem isn't technology — it's the people, the corporate leaders who use it maliciously. Technology itself is neutral and it surrounds us everywhere anyway. What matters is knowing how to handle it carefully.
The AI leap, in my view, is flawed in that it pours fuel onto the unresolved problems of an education system that is already dragging its feet. All so that someone can later say, "we were first," without asking whether AI is needed in education at all. If technology is to be brought into schools, it should only be when it adds something that screen-free methods cannot. Such situations are rare, concludes Grete Arro.
What's missing is up-to-date knowledge — not the digital tools that are supposed to get us to that knowledge. Education Minister Kristina Kallas said at the World Education Forum in London that Estonians, teachers included, are open to digital tools. Yet on the AI Leap's own website, teachers at Nõo High School speak instead of digital fatigue.
Honestly! Instead of ensuring that young people and teachers gain the knowledge they need in a saturated digital world — how technology works, how it affects the environment, how to protect themselves, how to defend democracy and how to develop digital critical thinking, which is as scarce alongside fast-developing technology as health insurance and money are for an artist — the state is rushing into decisions whose fruits rot before they ripen.
Things seemed a little simpler before, I think. e-School, Stuudium, the e-textbook — those were our own platforms, though they had their problems too: no one monitors the content, no one takes ownership of educational materials, there's no quality control or guarantee that they'll still exist the following year, copyright disputes are constant and so on.
With AI and chatbots, the situation is different, because they belong to large American corporations — hungry for money, hungry for data. We also don't know exactly how chatbots affect young people's development and learning, nor can we be sure what data they collect, how much, for what purposes or how long they store it. And who is going to oversee all this sustainably? There have been too many cases where information that was supposedly never meant to be retained ends up in a chatbot's "knowledge."
The EU's General Data Protection Regulation requires a data protection impact assessment in cases of high-risk data processing, especially when it concerns minors, since that poses a significant risk to individuals' rights and freedoms. Yet it seems our state has not followed this requirement in launching the AI Leap. I, at least, couldn't find any such impact assessment.
Very old curriculum
Estonia is, at least, starting somewhere: the plan is to develop an AI supposedly tailored for education. To me, that feels like a strange starting point. Wouldn't it make more sense first to develop the education system itself — where AI has already entered and meddled — so that young people have the knowledge to protect their freedom of thought, their data and their rights? That, in turn, would safeguard the country's freedom of thought, data and rights in the future. The computer science curriculum is 15 years old. We don't even have computers that old anymore.
--
Editor: Marcus Turovski










