GPT-5 has just been unveiled, and the world is already busy testing it. Some dream of even smarter suggestions; others imagine finally automating their lives entirely. We share that interest: it is hard to picture a single working day without AI. Yet amid the enthusiasm there are worrying notes: recent studies suggest that heavy use of AI can impair critical thinking.
Of course, this is not a reason to abandon the technology. But it is a reason to understand why this happens and how to use AI in ways that develop our minds rather than letting our intellectual muscles atrophy.
Are some brain functions really atrophying due to the development of technology?
Yes, but not in all cases and not everywhere. In the early 1980s, psychologist James Flynn discovered that IQ scores had been rising throughout the 20th century, by roughly three points per decade in developed countries.
This increase lasted roughly from the 1930s to the 1980s. Flynn hypothesised that it was influenced by better nutrition, access to education, improved healthcare, greater gender equality, and the growing need to adapt to technological progress in daily life.
But here’s the interesting part: since the 1990s, the picture has changed. The Flynn effect has gradually faded in many developed countries, and in some it has even reversed: instead of rising, measured IQ has begun to decline. This phenomenon is known as the ‘reverse Flynn effect’. Possible factors include declining education quality, media influence, migration, and… yes, technology.
A 30-year study conducted in Norway, published in 2018, showed a decline in IQ scores among men born after 1975. The study examined tests of more than 730,000 Norwegian men, using data from male military personnel, since IQ testing is mandatory before military service in Norway.
The researchers compared not only different families but also brothers from the same family to reduce the margin of error caused by differing ‘starting conditions,’ such as family wealth. Even among siblings, however, IQ scores were higher in those born earlier. So, the cause was likely not genes or family wealth, but environmental factors.
The more we rely on AI, the less critically we think?
Reflections on the degradation of mental abilities took a new turn this summer with the publication of a preprint by the Massachusetts Institute of Technology (MIT). The study claimed that ChatGPT increases human cognitive debt — a state in which excessive dependence on artificial intelligence replaces independent thinking.
During the study, EEG equipment recorded participants’ brain activity. Researchers observed lower brain activity in those who used ChatGPT than in those who relied solely on their own minds, and worry that this is how cognitive debt accrues: with the brain less engaged, new skills are not acquired. The study has drawn criticism, however: it has not been peer-reviewed, and its small sample limits how far the conclusions can be generalised.
However, other researchers have reached similar conclusions. In particular, Microsoft, in a joint study with Carnegie Mellon University, noted that the more people rely on AI, the less they engage in critical thinking.
Still, the general idea of delegating routine tasks to AI to free up resources for more important work can be effective if AI use is kept within limits. A study of 4,800 developers from Fortune 100 companies found that those who used AI within specified boundaries completed 26% more tasks and compiled their code 38% more often. Importantly, the researchers observed no obvious negative consequences of using the technology at work.
AI is indeed changing the way we work — just as dozens of previous technologies have done. Concentration spans may decrease as the brain adapts to new rules, and widespread AI use may eventually contribute to the partial loss of some skills, such as critical thinking. On the other hand, the brain gains new abilities: it processes information faster and learns to perform multiple tasks simultaneously.
Regarding IQ tests, there is ongoing debate in scientific circles about their relevance and their true impact on human success. Perhaps we are not becoming less intelligent — we are adapting.
7 ideas for using AI wisely
So, AI can be both a personal trainer and a ‘mental crutch.’ It all depends on how we use it. Below are 7 ideas for using artificial intelligence to help preserve and develop your mental abilities, rather than trading them for the illusion of comfort.
1. Treat AI content as a draft
It is not enough to simply generate text with AI and copy it. Evaluate each output carefully: verify factual accuracy, assess conciseness, and check that the tone is right. Rewrite the information in your own style, and never pass off an AI-generated summary as your own without checking every point. Treat AI content as a starting point for deeper research.
2. First, try to solve the problem yourself
Offloading mental work to tools predates AI: the ‘Google effect’, also known as digital amnesia, describes exactly this. Why memorise something when a search engine can find it in seconds?
Recent research confirms that frequent use of AI for quick answers can reduce critical thinking skills. Therefore, attempt to solve problems independently first, then use AI to check your results. This way, your brain continues to train its analytical abilities.
3. Check answers and ask for explanations
Generative models can hallucinate. If false information provided by AI becomes the basis for further work, you may need to redo everything.
Models improve with guidance, and carefully designed prompts (e.g., “Use only verified facts and provide sources for each point”) reduce mistakes. However, a Stanford study found that models do not check themselves unless prompted to, and their confident tone can foster unwarranted trust.
After receiving an answer, ask: “Why this option?” or “What are the alternatives?” and verify multiple sources. This disciplines thinking and reduces the risk of AI hallucinations.
4. Formulate queries that stimulate thinking
Instead of asking AI to simply ‘give an answer,’ ask it to provide multiple options, justify each, and offer counterarguments.
For example: “Give three approaches with pros and cons and explain which is best and why.” This forces your brain to compare, analyse, and draw conclusions. Passive consumption puts the brain in ‘energy-saving mode.’
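If you use this pattern often, it is worth keeping as a reusable template. A minimal sketch in Python; the `build_comparison_prompt` helper and the exact wording are illustrative assumptions, not any standard API:

```python
def build_comparison_prompt(question: str, n_options: int = 3) -> str:
    """Wrap a plain question in a prompt that asks the model to compare
    options instead of handing back a single ready-made answer."""
    return (
        f"Question: {question}\n"
        f"Give {n_options} distinct approaches. For each, list the pros "
        "and cons, then explain which you would choose and why. "
        "Finish with one counterargument against your own choice."
    )

# The resulting prompt can be pasted into any chat model.
print(build_comparison_prompt("How should I cache API responses?"))
```

Reading the model's counterargument against its own recommendation is where the comparison work shifts back to you.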
5. Use AI as a tool for reflection, not just as a source of answers
Generative AI can act as an informal mentor. With pre-designed prompts, it can simulate a dialogue with a coach, stimulating deeper understanding, analysis, and self-assessment.
This approach encourages not just information consumption but also self-knowledge. The effectiveness again comes down to the quality of your prompt.
6. Limit ‘instant help’
Even a short period of independent work before receiving a hint reduces passive copying. A Princeton University study showed that students who had time to think before receiving a hint performed 22–36% better on tests.
The best approach is to present AI with specific ideas or hypotheses. Another method is the ‘cognitive mirror’: explain your train of thought to AI before asking for an answer. This improves understanding and helps systematise information. Then compare your reasoning with the AI’s response.
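The ‘cognitive mirror’ step can likewise be captured as a small template. A sketch, assuming a hypothetical `mirror_prompt` helper whose wording is purely illustrative:

```python
def mirror_prompt(problem: str, my_reasoning: str) -> str:
    """Present your own reasoning first, so the model critiques it
    instead of replacing it with a ready answer."""
    return (
        f"Problem: {problem}\n"
        f"My reasoning so far: {my_reasoning}\n"
        "Do not give me the answer yet. First point out where my "
        "reasoning is weak or incomplete, then ask me one question "
        "that would help me get further on my own."
    )

print(mirror_prompt(
    "Why is my service slow under load?",
    "I suspect the database, since CPU on the app servers stays low.",
))
```

The explicit “do not give me the answer yet” line is what keeps the hint delayed and your own thinking in the loop.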
7. Gradually reduce hints from AI
Effective learning requires strong support at the start, followed by gradual reduction. Research on integrating generative models into education confirms this approach.
Early on, you can actively use AI prompts, but with each new task, reduce their use and move toward independent work. This prevents the risk of ‘unlearning’ how to think for yourself.