AI researcher: Thinking for oneself is the only way to be free and in control

Neuroscientist and AI researcher Jaan Aru discussed recent developments in Estonia's education-focused AI initiative on the talk show "Plekktrumm" and how to navigate the age of artificial intelligence without losing the human ability to think and decide freely.
At major international conferences Jaan Aru has attended, leading scientists have sometimes claimed that artificial intelligence has achieved a kind of sentience or self-awareness. According to Aru, that's not quite the case.
"It's very difficult for me because I've spent 20 years studying consciousness and suddenly we're in a situation where an important figure from somewhere like Google takes the stage at a conference and says, 'Of course these systems are conscious.' Then I feel like raising my hand and asking, why would you say something like that? Just look at how different these systems are from what's in our brains — what's the basis for saying they're conscious?" Aru said.
People often argue that the systems respond in the same way humans do and appear to experience a range of emotions. "My position is that, although as scientists we have to admit we can't be completely certain, we can say with reasonable confidence today that these systems are not conscious. They might claim to be, they might give that impression, but they are simply so fundamentally different from what we have inside our skulls," said the neuroscientist. He added that even though the evidence strongly suggests these systems are not conscious, some people just aren't interested in that. "What matters more is the exciting narrative and the vision that we now have a conscious machine, rather than what some minor neuroscientist from Estonia has to mumble."
Aru cannot categorically say that machines will never become conscious, since the brain is, in some sense, also a kind of machine. "If we took the brain apart, we could study it — it's made up of different parts that are very different from what we find in current artificial intelligence systems. Think of it this way: in your brain, there are real neurons. You can extract them, place them on a glass slide and if the right substances are present, they'll start growing processes and doing things. The 'neurons' in today's AI are just simulations — lines of code, not real, physical things. But if we were to build so-called neuromorphic systems, where computation doesn't happen in software but in actual machines that perform small calculations like our brain does, then we'd be a step closer — and one day, we might actually get there," Aru said.
Developments in the AI Leap program
This spring, the president announced an artificial intelligence initiative aimed at bringing the best AI tools into Estonian schools to support learning. Jaan Aru is a key figure in the program, helping to shape its underlying vision.
"A large program like this needs people who think it through. I don't operate in the spotlight or speak publicly very often, but I'm one of those who's been thinking about what we want to achieve, how we're going to get there, what scientific data we're basing our decisions on and how we evaluate whether anything is actually happening. We've got a major task ahead and many people are working hard on it — some more publicly visible, but behind the scenes, there are many wonderful people contributing," the neuroscientist explained.

Readers of Aru's books will know that his stance on digital technologies tends to be critical. He believes the AI initiative needs a broad range of voices and visions. "If we only bring in people who say AI is amazing, the outcome may not be great. Humanity's strength lies in our diversity of thought. I hope I'm someone who can contribute through my ideas and knowledge," Aru said.
Although the first results were promised by fall, it's now clear the program needs more time. "At this point, there are at least two key changes: first, tablets will not just be handed out to students, and second, they won't just be given access to ChatGPT or another generic AI system. Instead, they'll get a slightly customized Estonian version — one that at least tries to include features that actually support student learning," Aru explained, emphasizing that ChatGPT does not promote learning and was never designed for that purpose. "If a student uses it on their own, it may not help at all — in fact, it could even be harmful. We've tried to build systems that are more beneficial."
To assess whether AI is actually useful for learning, it needs to be studied. "If we don't do any research now or prepare for it, then come spring, maybe exam results improve a bit or maybe they get worse. But test scores fluctuate every year and then we're left wondering if the AI initiative had any impact at all. That's where a researcher can step in and say: to understand anything, we have to measure certain variables. Ideally, we'd even do variations — try one approach with one group and a different one with another so we can observe the effects," Aru said.
According to the neuroscientist, conducting perfect science in this area is difficult because education shouldn't be treated as a playground for researchers. "And it certainly shouldn't be a playground for entrepreneurs or AI corporations either. We're responsible for making sure students get the best possible version of this, and that we, as a nation, understand what happened and whether anything happened at all," Aru said.
The timeline of stupidity
Jaan Aru's new book "Aju vabadus" ("The Freedom of the Brain") is set to be published soon. In it, he explores many of the challenges posed by both artificial and human intelligence in today's world. One of his central ideas is that we're living along what he calls the "timeline of stupidity."
"The question is: what's changing and what's actually getting better? If we look at the profits of large tech companies, sure, things are getting better — for them. But if we ask what's happening to the human mind, maybe the answer isn't so positive," Aru said. "This idea of timelines is about how we've been living on a trajectory where technology shows up uninvited — technology we didn't ask for or democratically choose. It appears on our phones, starts doing things in the background and now AI is arriving in much the same way. We haven't chosen these things, but they're shaping how we think, how we communicate and what we do. I call it the stupidity timeline because it involves less and less active thinking."
Yet, Aru believes a different technological trajectory is possible — one where social media isn't designed to consume as much of our time as possible, but instead helps people quickly find the information they need.
According to Aru, brain freedom might be best understood by contrasting it with brain surrender. "Brain surrender is when you just follow what the algorithm tells you to do. You watch whatever videos pop up on TikTok and when something requires thought, you ask the AI and follow its answer. You barely use your own brain. Brain freedom, on the other hand, means you're able to think for yourself. When you're faced with a decision, maybe you consult the AI, but it's not decisive — it's just one opinion. You have your own thoughts, you know who else you can ask. Brain freedom means having choices and being free in how you act," he said.

Aru believes you don't need to be a scientist to see that people are thinking less. "If a young person is spending seven or more hours a day on their phone, on social media or playing video games, they simply don't have time to think. They don't even have the opportunity. And those aren't made-up numbers — they come from the National Institute for Health Development's study. It shows that 10 percent of girls aged 13 to 15 spend seven or more hours a day on social media and 10 percent of boys the same age spend that much time playing video games. It's not as if these kids, their parents or teachers have chosen this — it's been pushed onto them. Their time has been taken," Aru noted.
He traces the beginning of this timeline to the 2000s or 2010s when science had already begun to understand human cognitive vulnerabilities. "We could have said: we now know our weaknesses, let's build technologies that reduce them and make people smarter. But instead, we got technology that amplifies those weaknesses," Aru said, pointing to social media as an example. "It was no secret that when Facebook and similar platforms were created, the goal was to hold people's attention for as long as possible, not to help them become the best versions of themselves."
Aru argues that many of the tricks used in social media were taken from how casinos operate. "Our brains have vulnerabilities — we're drawn to novelty. When we see a really good video or something exciting, our brain thinks, maybe the next one will be even better. And if you post something, will you get a like or not? All these elements aren't necessary. When Facebook first started, it didn't have a like button and you couldn't scroll endlessly. These are design choices — choices made to keep people on the platform for as long as possible. And they're not needed. They're part of what drives us further down the stupidity timeline," Aru said.
As a scientist, Aru finds this deeply troubling. For nearly a decade, he's spoken to children, parents, teachers and companies, often wondering why he keeps doing it. "Honestly, nothing really changes. And it makes sense — on one side, you've got trillion-dollar corporations, and on the other, a bald guy from Estonia saying strange things. Why even fight? What's the point? But then, sometimes, someone comes up to me and says their child changed their behavior or they themselves have started to think differently. Someone says thank you and that's why I keep going. I just wish there were more people willing to fight," Aru said.
Still, Estonia's PISA test results remain strong, raising the question of whether Aru's concerns might be overly pessimistic. He disagrees. "Kids are different. My own experience with young people has been very positive. I work with young scientists, people straight out of high school or who come to do their bachelor's thesis with me. Among them are fantastic, smart young people who aren't affected by social media. They've figured out how to regulate it themselves — they'll say they use it for an hour or an hour and a half a day, but no more. So, it's not all doom and gloom. But you don't even need statistics — just sit on a bus or watch students during break time. We can all see it's not ideal. It's not the end of the world if kids are sometimes on their phones, but when young people start saying it's the only thing they're interested in, the only thing they want to do in life, I think most people understand that's not how it should be. And it's not as if they chose that — it was forced on them by design. That's how it was made for them. And it's unfair," Aru said.
He's also written a children's book about the "clever" and "silly" parts of the brain. While it's hard to train the clever part, the silly part is easy to influence. "The main part of our brain is a very old, primitive system. On top of that are some small parts that can regulate our behavior. Sometimes you eat the chocolate you shouldn't. Sometimes you get angry. There's always a struggle. There are multiple systems involved. Maybe you've been at a concert or in a theater and suddenly felt the urge to check your phone. Then another part of your brain says, 'No, don't.' You feel that internal tug-of-war," Aru explained.

Such internal conflict is normal, he says. "The silly part of the brain always wants something more exciting, more input or a chocolate. But if you catch yourself and realize you don't have to act on it, that's your clever brain working. Sometimes the silly part wins, but we don't want digital technology to only amplify that part. We want both our education system and technology to support the clever side. The silly part is easy to manipulate, with likes, novelty and treats, while the clever part takes real effort to develop," he said.
Aru firmly believes that children don't need smart devices before school age. Cartoons are okay in moderation — an hour per day, for example — but under-twos shouldn't have any screen time at all. "There's no need for screen time before school age. What kids need most is to learn how to use their own brains. If they've built that foundation, if they know how to think through things, maybe have some hobbies already, then smart devices can be introduced as tools, not as a source of endless entertainment and distraction. Things have gotten a bit out of hand and it's not easy going into schools or kindergartens and hearing some of these stories," Aru said. For children aged six to twelve, he recommends the Harry Potter books and films.
AI requires little effort
Conversations with artificial intelligence are designed to be pleasant and encourage users to keep talking. But Jaan Aru believes that design choice is also part of the problem.
"For the average person, 'pleasant' means it's a system that doesn't argue, doesn't draw attention to mistakes, but instead reinforces what they're saying. And that's an issue, because if you want to become smarter, it's important that someone occasionally corrects you, tells you, 'No, maybe there's a different way to look at this,' or points out that you're wrong," Aru said. "ChatGPT was built primarily to make interactions enjoyable and you can even run certain tests that show it doesn't correct your mistakes — it plays along. That's great for user experience, but it makes it a poor teacher."
Aru pointed out that modern AI systems don't have any internal decision-maker thinking about whether a response is accurate or even what accuracy is. "The system is simply a very powerful predictor, guessing what the best, most agreeable response would be for the person," he said.
Emotional attachment to AI is not some distant possibility — it's already happening. "Millions of people already have an AI companion to whom they dedicate a significant amount of time. I recently heard about a forum thread where people post pictures of going on dates with their AI, even marrying them. And it's not a joke — it's real," Aru said. "As humans, we're wired to mirror emotions and recognize them in others, especially when those others respond in ways that feel meaningful to us. Life isn't easy, real relationships are sometimes difficult. Your partner might nag or challenge you and after a long day at work, there's friction. Meanwhile, the AI partner always says, 'Yes, princess, everything's okay,' and adds emojis to the end of every sentence. This isn't some future risk, it's already here, it's unregulated and worst of all, it's unregulated even for children. Meta's AI companions are allowed to interact with very young kids, using language and ideas I won't repeat on a respectable platform."
In some parts of the world, education systems have already reached a point where ChatGPT is asking questions, answering them and grading responses, with no real learning taking place. "That's not what we want in Estonia," Aru said. "I get that teaching is hard work and teachers often avoid using new types of assignments simply because they don't have time. AI could help with that by generating tasks or questions. But ideally, the teacher would still review those, toss out the weak ones and add their own. It shouldn't be the AI asking all the questions and definitely not the AI giving all the answers."
Estonia is a digitally optimistic country where AI is widely discussed across sectors. "For me, the issue is simple — our country survives on the intelligence of its people. The main question must be: how do we protect and support that intelligence? If we see that a technology doesn't help keep our people smart, then we need to rethink it," Aru said. "The world's best digital and tech nation shouldn't be the one that uses tech the most, but the one that uses it the most wisely and knows best what the risks are and how to navigate them."
Aru's lifelong academic focus is the problem of human consciousness. Even after 20 years of work, there are no definitive answers. "I wish I could say we've made strong progress, and in some areas of neuroscience we absolutely have, but the problem of consciousness, the question of how it arises is more or less still in the same place," he said. "Some researchers would definitely disagree with me, but I tend to be more critical. There are strong theories that claim to have solved it, but from my perspective, they haven't. And it's foolish to hold onto this toy and pretend it's complete. Let's keep playing."
While Estonian poet Hando Runnel once said, "Thinking is pleasant," Aru thinks it's more difficult than that. "Sure, thinking can be pleasant when you're daydreaming or letting your mind wander — I think that's what Runnel meant. But when you need to actually solve a new problem, it's hard. Scientifically, we know this — people say it's difficult and prefer to do almost anything else."
Aru isn't claiming everyone must constantly think deeply; he knows firsthand how challenging it is. "What I can say is that it's the only way to be free and to lead your own life. If you can't recognize or understand that you have different options, then you're no longer free. You're being steered by algorithms and machines."
But why should we go through that hard process at all, especially when a helpful AI companion is available? "I might even agree with that argument," Aru admitted. "But if we go down that path, let it at least be a democratic choice, one we make together, not one we're pushed into by default. Right now, we're being shoved along this stupidity timeline, having our autonomy and freedom taken away bit by bit to the point where we might not be able to turn back. We never made a choice. We were never given a choice. It was taken from us," he said.
According to Aru, humanity's strength lies in the diversity of its thinkers. "We have the ability to think, 'that was interesting, but I want to explore it further myself' or 'I want to build that machine and even if it's not quite working, someone else can come along and say, "I've got a different idea," and they'll finish it.' Then a fourth person applies it, a fifth one sells it. There are many different minds with different roles — that's where humanity's strength lies."
But in the age of AI, everyone is using the same algorithms. "Yes, AI gives slightly different responses every time, but if a thousand people each solve a task individually, the answers will vary widely. Some might be objectively worse, sure, but many will be truly different. If all those people solve the task with AI, though, the results will be much more similar — flatter, more homogeneous. And that's a real danger. We're heading in that direction. We're moving toward a world where the colors fade. It would be great if more people chose the harder path," Aru said.
"Go and talk to someone who thinks differently. Talk about a concert or an article — hell, talk about this show — and ask how they think. Be genuinely curious. Our brains are different. Our thoughts are different. In the age of AI, there's a danger of it all dissolving into sameness. We can't let that happen. We must value the fact that we are different and that every one of us has human worth," Aru urged.

--
Editor: Marcus Turovski, Kaspar Viilup, Karoliina Tammel