Ivo Visak: Doing nothing is not a strategy

Students are using AI anyway and schools must guide that use with clear rules — whether through restrictions or permissions. Pedagogy and technology must be intertwined in a way that keeps the human in charge, not the other way around, writes Ivo Visak.
A nationwide student survey conducted at the beginning of 2024 (n = 15,631; grades 6–12) confirms that the use of artificial intelligence is widespread at the upper secondary school level. Over 90 percent of students have at least tried using it in their studies, and a large share use it on a weekly basis.
The most popular tools have been freely accessible conversational models (primarily ChatGPT, followed by Gemini, which was known as Google Bard at the time of the survey), often used with default settings that allow the models to be further trained based on the conversations.
This is our reality — not a hypothesis about the future. Nearly a year separates the student survey from the official announcement of the AI Leap.
Wild West web
If the majority of students are using free generative AI models, then we are living in a world of default settings — where data training, server locations and the filtering of conversation content (including on topics like mental health) are under the control of no one but the tech giants who own these models.
Recent observations show that platforms have loosened their warnings (for example, that a language model is not a professional psychologist) to build trust and increase usage. All of this directly affects young people in Estonia. The question is simple: are we okay with that?
The AI Leap does not romanticize generative AI technology, but neither does it pretend the technology doesn't exist. Our starting point is the facts: if more than 90 percent of students are already using AI, then the state's response cannot be to sit idly by. That would be — and in many cases already is — an implicit legitimization of poor learning habits.
Technology now commonplace, and all the more serious for it
Arvind Narayanan and Sayash Kapoor's view of "AI as ordinary technology" helps disentangle utopian and dystopian thinking. "Ordinary" doesn't mean harmless — it means treating AI the way we treat electricity or the internet: powerful, but clearly demystified. This kind of approach forces us to ask: what are the use cases, what are the risks and how are control and responsibility distributed?
None of this rules out the political dimension of new technologies. Dan Bogdanov has emphasized that artificial intelligence benefits those who are able to control it. If education becomes dependent on a handful of global service providers, our ability to shape the digital future of the Estonian language and culture will shrink.
We need to engage more deeply with European supercomputing clusters, build up our language resources and actively promote our own (open) models. This cannot be left to a few foundations, institutes or concerned individuals — it must be a clearly state-coordinated effort, with control mechanisms and a plan that extends beyond a single election cycle.
The language risk is sobering: Wikipedia in smaller languages is flooded with machine translations, and large language models then learn this artificial junk as if it were the truth. This traps vulnerable small languages in a vicious cycle. If we don't invest in high-quality Estonian-language datasets and models tailored to our linguistic environment, there will be no "market solution." Small languages are responsible for their own fate here, and Iceland's approach offers a valuable example.
Non-hierarchical intertwining of pedagogy and technology
"Which comes first — technology or pedagogy?" Tim Fawns offers a clarification: it's not a question of sequence, but of entanglement. The quality of learning emerges from goals, values, assessment, context, the agency of teachers and learners and the tools used. The emphasis is on the fact that agency is shared between the teacher, the learners and the institution; tools are chosen based on goals and pedagogical approach — not the other way around.
That's why the core of the AI Leap is pedagogical, not gadget-driven. It's about strengthening teachers' professional agency (through learning circles and peer-to-peer learning), teaching learning strategies and self-regulation, and reshaping assessment so that the focus is on the thinking process, not just the final result. The goal is to shift default behaviors in the classroom — not simply add "one more app."
Three yardsticks
How do we know we're succeeding?
First, by the quality of thinking. Learning is born from effort and a transparent thinking process. That's why we design activities where the model can't "do it all" and the student instead has to explain, compare, justify, rephrase and create their own examples.
Second, by equal opportunity. If we leave things to the so-called Wild West, we reinforce existing educational inequalities: strong family support and better digital skills lead to success, while others feel pressure to seek "quick fixes." Educational tools must therefore be accessible to all upper secondary students.
Third, by the strength of the Estonian language and culture. If we do not produce high-quality Estonian-language training data and collaborate with European supercomputing clusters, a generation will grow up using tools that distort their linguistic intuition. This is cultural policy, not just an IT procurement issue.
Underlying all of this are safety and privacy. Schools cannot direct children to use global platforms with default settings. The AI Leap aims for use cases tailored to the Estonian language and under the control of schools (the logic: less data risk, more pedagogical control). Data policy must be firmly rooted in Europe until comprehensive solutions are developed within Estonia itself.
What the AI Leap will actually do (and what it won't)
The AI Leap isn't about bringing a "new AI gadget" into the classroom. AI has already been there for years — in various forms and largely uncontrolled. The AI Leap aims to change how the machine behaves: to pose questions that don't come with ready-made answers, to promote processes that require real thinking and to foster assessment that values explanation and proper sourcing.
Yes, we're patching things — for now. But not to cover up a wound. We're doing it to give teachers the tools they need while we build a more lasting solution: Estonian-language tools, clear rules, AI learning circles for teachers and modules for teaching critical AI literacy.
Yes, we're collaborating with the big players — while preparing our own path: better Estonian-language corpora, an open ecosystem with visible source code and public oversight that demands the attention of lawmakers. This also helps us avoid vendor lock-in — that is, becoming dependent on a single service provider.
What will schools be responsible for?
A school must not delegate the task of education to artificial intelligence. No techno-fix and no techno-phobia can relieve us of the responsibility to raise young people who know how to think — who can not only find information, but assess its reliability; who do not operate mechanically, but understand the impact of the tools they use on themselves and on others.
The AI Leap is an education reform program centered on human development. The teacher remains in the human role; the model remains a tool. As Max Tegmark aptly put it, the benefits of civilization are the result of human intelligence; amplifying that intelligence with AI is useful only as long as we keep it human-centered.
That's precisely why the AI Leap's core is focused on the professional agency of teachers — not the introduction of new technology. We don't ask which model you use; we ask how the student makes their thinking visible. How do you evaluate reasoning, not just the end result?
Doing nothing is not a solution
Tanel Mällo is right to point out that the AI hype can serve the interests of those in power and turn society into a kind of "testing ground."
The risks are real: opaque algorithms, data vulnerabilities and cultural dependency. But that doesn't mean the best education strategy is to do nothing. On the contrary, the best defense is a pedagogically strong school — one that sets its own rules and boundaries. The AI Leap doesn't aim to colonize the classroom with artificial intelligence; it reinforces the teacher's agency and the learner's responsibility within it.
--
Editor: Marcus Turovski