After LLM comes LRM

Filip Tichý | 17.11.2025 | News

The authors of this article, Filip Tichý (Partner at Grant Thornton Slovakia) and Jakub Chudík (Co-Founder at Assetario), take you through the world of artificial intelligence in the AI Breakfast series. This article was written without the use of AI.

When using LLMs such as ChatGPT or Gemini, you have probably noticed that the model produces answers at lightning speed, and those answers sound convincing and professional, yet it is often apparent that they are not very "smart": there are obvious errors or missed connections.


This is a logical consequence of how LLMs work: they quickly assemble words (or tokens) on a statistical basis, without any "real" thinking. Despite this, such models are of course still extremely powerful and useful.
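
To make "assembling words on a statistical basis" concrete, here is a minimal sketch of next-token sampling. The tiny vocabulary and the probabilities are invented purely for illustration; a real LLM scores tens of thousands of candidate tokens with a neural network at every step.

```python
import random

# Invented toy distribution over possible next tokens (illustrative only).
next_token_probs = {
    "revenue": 0.45,   # most likely continuation
    "profit": 0.30,
    "growth": 0.20,
    "penguins": 0.05,  # unlikely, but never impossible
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick the next token at random, weighted by the model's probabilities."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Each call may return a different, statistically plausible continuation.
print(sample_next_token(next_token_probs))
```

The text reads fluently because the likely tokens usually win, but nothing in this loop checks whether the result is actually correct.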


When building AI agents, there was a growing need not only for fast and productive models, but also for models that think systematically. This need led to the introduction of the first LRMs, or Large Reasoning Models, at the end of 2024. These include DeepSeek R1, Gemini Thinking, and o3 (from OpenAI). The arrival of LRMs represents a significant shift in how artificial intelligence can solve complex, multi-step tasks.


Unlike LLMs, which are set up to predict the next word based on large amounts of data, LRMs are designed for systematic thinking, problem solving, and structured planning. Their main advantage is the ability to analyze individual steps and build a comprehensive plan of action, which significantly increases the accuracy of the results.


How does it work technically? An LRM is essentially the same technology as an LLM, built on the same transformer architecture, but configured differently, with different procedures and differently set goals. To put it very simply, it is as if we fed the LLM a hundred prompts such as "think again" and "think about why." It is really a new application of the same technology. The result of this new configuration is that while the output of an LLM is probabilistic text, the output of an LRM is a deterministic solution.
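
A very simplified sketch of that "hundred prompts like 'think again'" idea follows. The call_llm function is a hypothetical stand-in for an ordinary chat-completion call, not a specific vendor API; the point is only the control flow, in which the same model repeatedly critiques and revises its own draft before the final answer is returned.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a single call to an ordinary LLM."""
    return f"(model output for: {prompt[:40]}...)"

def answer_with_reasoning(question: str, rounds: int = 3) -> str:
    """Draft an answer, then critique and revise it several times."""
    draft = call_llm(f"Answer the question: {question}")
    for _ in range(rounds):
        critique = call_llm(
            f"Question: {question}\nDraft answer: {draft}\n"
            "Think again: what is wrong or missing in this draft?"
        )
        draft = call_llm(
            f"Question: {question}\nDraft: {draft}\nCritique: {critique}\n"
            "Write an improved answer."
        )
    return draft

print(answer_with_reasoning("What is driving the drop in Q3 margins?"))
```

Real reasoning models learn this critique-and-revise behaviour during training rather than through literal extra prompts, but the shape of the loop is the same: more computation is spent before the answer is committed.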


An LRM can then divide a problem into smaller steps and search for solutions to each of them. LRMs improve not only with more data but also with more time: the longer they are allowed to work, the better the result. A simple illustration: ask an LLM and an LRM to solve a crossword puzzle. The LLM will return it filled in within a few seconds, but with obvious errors. The LRM will take a few minutes (an eternity in IT), but it will be almost completely accurate, because it does not just try to generate text; it also aims to understand what the goal is, what counts as a correct or incorrect result, and to apply that knowledge to the proposed output.
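
A toy version of the crossword example, with an invented word list and an invented grid pattern: the "LLM-style" answer takes the first fluent guess, while the "LRM-style" answer checks every candidate against the known constraints and keeps only the words that actually fit.

```python
def fits(word: str, pattern: str) -> bool:
    """True if the word matches a crossword pattern such as 'c_rd'."""
    return len(word) == len(pattern) and all(
        p == "_" or p == w for p, w in zip(pattern, word)
    )

candidates = ["care", "cord", "card", "cart"]   # fast, fluent-sounding guesses
pattern = "c_rd"                                # what the grid actually requires

fast_answer = candidates[0]                                    # first guess, right or wrong
checked_answers = [w for w in candidates if fits(w, pattern)]  # verified guesses

print(fast_answer)       # 'care' -> fluent but does not fit the grid
print(checked_answers)   # ['cord', 'card'] -> only answers that satisfy the grid
```

The verification step is what costs the extra minutes, and it is also what removes the obvious errors.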


It is precisely this approach that holds the greatest strength of LRM models: they can "think" about a problem in a way similar to humans and respond flexibly to new information or changed conditions. Their approach is closer to logic and analytical thinking, which allows them to handle demanding tasks where LLMs often fail.


LRM models are also a game changer for AI agents, where it is often useful to have an LRM alongside a powerful LLM, for example for planning tasks or steps that require working out a solution. Beyond AI agents, LRMs will certainly find wide application in various business processes, just as LLMs have.
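
As a closing illustration, here is a minimal sketch of that division of labour inside an agent, assuming a setup in which a reasoning model handles the slow planning step and a fast LLM executes the routine steps. Both model calls are hypothetical stand-ins, not a specific vendor API.

```python
def call_reasoning_model(prompt: str) -> list[str]:
    """Stand-in for an LRM call that returns a step-by-step plan."""
    return ["gather the input data", "check it for gaps", "draft the report"]

def call_fast_llm(prompt: str) -> str:
    """Stand-in for a cheap, fast LLM call that executes one routine step."""
    return f"done: {prompt}"

def run_agent(task: str) -> list[str]:
    plan = call_reasoning_model(f"Plan how to: {task}")   # slow, careful planning
    return [call_fast_llm(step) for step in plan]         # fast execution of each step

print(run_agent("prepare the monthly management report"))
```

In practice, the expensive reasoning call is reserved for the points where a wrong decision is costly, and the cheaper model does the rest.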
