Hierarchical Reasoning Model
Juan Vera
August 2025
Abstract
Reading Hierarchical Reasoning Models by Wang et al.
Reasoning is still a critical challenge in AI. The Chain of Thought (CoT) of language models suffers from brittle task decomposition, extensive data requirements ($O(n^2)$ for self-attention), and high latency.
This stems from autoregressive inference, as a language model samples as $p(x_t \mid x_{<t})$, where $x_t$ is the $t$-th token in the sequence and $x_{<t}$ is the set of tokens before it. Small hallucinations in early steps can therefore compound into larger errors in downstream steps. See more by Yann LeCun here.
Directly inspired by the human brain, HRMs execute sequential reasoning in a single forward pass through two interdependent recurrent modules: a high-level module for slow, abstract planning, and a low-level module for rapid, detailed computations.
With 27M parameters and ~1000 training examples, HRM beats larger transformer-based models on ARC-AGI-(1 & 2), Sudoku-Extreme, and Maze-Hard.
Introduction
Language models, despite being heavily over-parameterized, with many transformer blocks stacked on top of each other, are paradoxically shallow in computational depth.
This paper proves that language models can be simulated by uniform constant-depth threshold circuits, meaning they are in the class $TC^0$.
$TC^0$ is the class of functions computed by uniform families of Boolean circuits of constant depth and polynomial size, with unbounded fan-in AND, OR, NOT, and threshold (MAJORITY) gates.
$TC^0$ generally sits lower in the complexity hierarchy, meaning (under standard complexity assumptions) such models can't solve all problems in $P$ (polynomial time), placing fundamental limits on what they can compute directly. There are simply some functions or patterns transformers cannot represent (more here).
LLMs are therefore not Turing-complete: Turing machines can solve all problems in $P$ (and beyond), so LLMs cannot compute a significant portion of problems efficiently, or even at all.
Chain of Thought reasoning can also easily break down: a single misordered step can collapse the entire reasoning process.
In this paper, they explore latent reasoning, where the model reasons within its internal hidden state, aligning with evidence that the human mind largely reasons separately from language.
Recurrent Neural Networks use such a hidden state, but they are computationally expensive, not parallelizable, and suffer from the vanishing gradient problem due to backpropagation through time.
The human brain is a compelling blueprint: it organizes computation hierarchically rather than as a single flat sequence, across regions operating at different speeds, which enables deep, multi-stage reasoning.
The HRM is constructed similarly: a high-level module handles abstract, deliberate reasoning while a low-level module handles rapid, detailed computations. This division helps avoid the premature convergence typical of recurrent models, a mechanism the authors coin "hierarchical convergence".
The higher-level module only advances after the lower-level module has completed multiple steps and reached a stable state.
They also propose a one-step gradient approximation for training the HRM, which eliminates the need for backpropagation through time and substantially decreases the memory and compute requirements of training the recurrent model.
Due to its architecture, the HRM offers excellent performance using only 1,000 training examples, without any pre-training or CoT SFT, even learning solutions to problems that are intractable to LLMs.
Architecture
The model is inspired by three principles:
- Hierarchical Processing, where the brain processes information across a hierarchy of cortical regions, separating slow, long-horizon reasoning from fast, reactive computation.
- Temporal Separation, where different regions of the brain operate at different frequencies (Hz), with slower rhythms coordinating faster ones.
- Recurrent Connectivity, where many connections in the brain are recurrent, allowing for iterative, context-sensitive refinement.
The model consists of:
- An input network, $f_I$,
- A low-level recurrent module, $f_L$,
- A high-level recurrent module, $f_H$,
- An output network, $f_O$.
An inference pass over the HRM is done through $N$ high-level cycles of $T$ low-level timesteps each, meaning $N \times T$ total timesteps.
$f_L$ and $f_H$ each keep a hidden state: $z_L$ for the low-level module and $z_H$ for the high-level module.
Given an input vector $x$, the HRM maps it to an output prediction vector $\hat{y}$ through the following process:
First, the input network projects the input into a working representation:
$$\tilde{x} = f_I(x; \theta_I)$$
At each timestep $i$, the low-level module updates its hidden state conditioned on its previous state $z_L^{i-1}$, the current high-level state $z_H^{i-1}$, and the projected input $\tilde{x}$, while $f_H$ updates its state only after a full cycle of $T$ low-level steps, using its previous state $z_H^{i-1}$ and the low-level state $z_L^i$, without $\tilde{x}$:
$$z_L^i = f_L\!\left(z_L^{i-1}, z_H^{i-1}, \tilde{x}; \theta_L\right)$$
$$z_H^i = \begin{cases} f_H\!\left(z_H^{i-1}, z_L^i; \theta_H\right) & \text{if } i \equiv 0 \pmod{T} \\ z_H^{i-1} & \text{otherwise} \end{cases}$$
Meaning unless $i$ is divisible by $T$ with no remainder, $z_H$ is not updated.
Once $N$ full cycles are completed, a prediction $\hat{y}$ is extracted from the final high-level hidden state via the output network:
$$\hat{y} = f_O\!\left(z_H^{NT}; \theta_O\right)$$
The entire set of $N \times T$ timesteps represents a single forward pass through the HRM.
The HRM is designed to counteract the premature convergence exhibited by standard RNNs: the high-level module only advances after the low-level module has completed a full cycle of $T$ timesteps, which allows the low-level module to reach a "local equilibrium" before the high-level state is updated. After each high-level update, the low-level module effectively restarts on a new sub-problem, restoring its capacity to make progress.
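To make this schedule concrete, here is a minimal sketch of the nested update loop in PyTorch-style Python. The module names f_I, f_L, f_H, f_O and their call signatures are assumptions following the notation above, not the authors' released code.

```python
import torch.nn as nn

def hrm_forward(f_I: nn.Module, f_L: nn.Module, f_H: nn.Module, f_O: nn.Module,
                x, z_L, z_H, N: int = 2, T: int = 4):
    """One HRM forward pass: N high-level cycles of T low-level timesteps each."""
    x_tilde = f_I(x)                      # project the input into the working representation
    for _ in range(N):                    # slow, high-level cycles
        for _ in range(T):                # fast, low-level timesteps
            z_L = f_L(z_L, z_H, x_tilde)  # low-level state sees z_H and the input
        z_H = f_H(z_H, z_L)               # high-level state updates once per cycle, no input
    y_hat = f_O(z_H)                      # read the prediction off the final high-level state
    return y_hat, (z_L, z_H)
```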
Approximate Gradient Methods
Recurrent models typically use BPTT to compute gradients, but BPTT requires storing every hidden state from the forward pass and combining them with gradients during the backward pass. This is expensive in memory, with $O(T)$ complexity growing linearly in the number of timesteps $T$, which in turn forces small batch sizes.
Consider an RNN, $f$, that is applied repeatedly to the same input, $x$, over many timesteps, updating its hidden state at every step.
Eventually, the hidden state will converge to a fixed point, $z^\star$, where $z^\star = f(z^\star, x)$.
Meaning after enough recursive applications on the same input, the RNN reaches a fixed hidden state, or in the case of the HRM, a local equilibrium.
So if we consider the HRM, where the high-level state serves as conditioning to the low-level module $f_L$, we can observe that the ideal HRM would reach a fixed point, $z_L^\star$, where $z_L^\star = f_L(z_L^\star, z_H, \tilde{x}; \theta_L)$, and only then does the high-level module update its hidden state as $z_H \leftarrow f_H(z_H, z_L^\star; \theta_H)$.
This is because during every low-level iteration, $z_L$ is conditioned on a fixed $z_H$ and the input $\tilde{x}$, meaning the only variable being updated is $z_L$, and since the same map is applied recursively, it will eventually converge to a stabilized, fixed point.
Let's define $\mathcal{F}$ as the transformation containing the updates of both the high-level and the low-level module, acting on the combined state $z = (z_H, z_L)$.
The fixed point where we have the equilibrium can be written as
$$z^\star = \mathcal{F}(z^\star, x; \theta).$$
$J_{\mathcal{F}} = \frac{\partial \mathcal{F}}{\partial z}$ is the Jacobian matrix of $\mathcal{F}$ with respect to $z$.
If the matrix $(I - J_{\mathcal{F}})$ is invertible at $z^\star$ and $\mathcal{F}$ is continuously differentiable, the implicit function theorem then allows for the computation of the exact gradient at that point without explicit backpropagation:
$$\frac{\partial z^\star}{\partial \theta} = \left(I - J_{\mathcal{F}}\right)^{-1} \frac{\partial \mathcal{F}}{\partial \theta}\bigg|_{z^\star}$$
Computing the inverse of $(I - J_{\mathcal{F}})$ is computationally expensive, so we can take the Neumann series expansion of the inverse:
$$\left(I - J_{\mathcal{F}}\right)^{-1} = \sum_{k=0}^{\infty} J_{\mathcal{F}}^{k} = I + J_{\mathcal{F}} + J_{\mathcal{F}}^{2} + \cdots$$
and truncate the series at its first term, $I$, which leads us to the one-step approximation of the gradient at $z^\star$:
$$\frac{\partial z^\star}{\partial \theta} \approx \frac{\partial \mathcal{F}}{\partial \theta}\bigg|_{z^\star}$$
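In code, this amounts to running all but the last update of each module without building a computation graph, and backpropagating only through the final applications of $f_L$ and $f_H$. A minimal sketch, assuming the same hypothetical modules as above:

```python
import torch

def hrm_forward_one_step_grad(f_I, f_L, f_H, f_O, x, z_L, z_H, N: int = 2, T: int = 4):
    """Forward pass with the one-step gradient approximation.

    The first N*T - 1 timesteps run under torch.no_grad(), so memory stays O(1)
    in the number of timesteps; gradients only flow through the final low-level
    and high-level updates (plus the input and output networks).
    """
    x_tilde = f_I(x)
    with torch.no_grad():
        for step in range(N * T - 1):
            z_L = f_L(z_L, z_H, x_tilde)
            if (step + 1) % T == 0:       # high-level update at the end of each cycle
                z_H = f_H(z_H, z_L)
    # Final timestep: the only part of the recurrence that is backpropagated through.
    z_L = f_L(z_L, z_H, x_tilde)
    z_H = f_H(z_H, z_L)
    return f_O(z_H), (z_L.detach(), z_H.detach())
```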
Deep Supervision
Given a data sample $(x, y)$, you run multiple forward passes (segments) of the HRM, where $M$ is the total number of segments and $m$ denotes the $m$-th segment.
For each segment $m$, you compute the prediction, the loss, and the parameter update similarly to standard gradient descent:
$$(z^m, \hat{y}^m) = \mathrm{HRM}(z^{m-1}, x; \theta), \qquad L^m = \mathrm{Loss}(\hat{y}^m, y), \qquad \theta \leftarrow \mathrm{OptimizerStep}(\theta, \nabla_\theta L^m)$$
with the caveat that the incoming state $z^{m-1}$ is detached and not involved in the computation of gradients: since we approximate the gradient with a single step, we never need to backpropagate through earlier segments.
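Put together, one deep-supervision training step might look like the sketch below. It reuses hrm_forward_one_step_grad from the previous sketch, and the segment count M, loss, and optimizer are illustrative assumptions rather than the paper's exact training configuration.

```python
import torch.nn.functional as F

def deep_supervision_step(modules, optimizer, x, y, z_init, M: int = 4):
    """Run M supervised segments on one (x, y) pair, updating after each segment."""
    f_I, f_L, f_H, f_O = modules
    z_L, z_H = z_init
    losses = []
    for _ in range(M):
        y_hat, (z_L, z_H) = hrm_forward_one_step_grad(f_I, f_L, f_H, f_O, x, z_L, z_H)
        loss = F.cross_entropy(y_hat.view(-1, y_hat.size(-1)), y.view(-1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        losses.append(loss.item())
        # (z_L, z_H) come back detached, so the next segment treats them as constants
        # and no gradient flows across segment boundaries.
    return losses
```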
Adaptive Computational Time
They incorporate a halting strategy into the HRM via $Q$-learning, enabling the HRM to dynamically select the number of segments based on the complexity of the task.
The $Q$-head uses the final state of the high-level module, $z_H$, to predict the Q-values as:
$$\hat{Q}^m = \left(\hat{Q}^m_{\text{halt}},\ \hat{Q}^m_{\text{continue}}\right) = \sigma\!\left(\theta_Q^\top z_H^{mNT}\right)$$
where $\sigma$ is the sigmoid applied to derive the Q-values for halting and continuing.
Let:
- $M_{\max}$ be the maximum number of segments
- $M_{\min}$ be the minimum number of segments, where, with probability $\epsilon$, $M_{\min}$ is sampled uniformly from the set $\{2, \ldots, M_{\max}\}$, and with probability $1 - \epsilon$ it is $1$.
The criteria for halting are:
- The segment count surpasses the maximum threshold $M_{\max}$, or
- The halt value exceeds the continue value ($\hat{Q}_{\text{halt}} > \hat{Q}_{\text{continue}}$) and the segment count has reached the minimum threshold $M_{\min}$.
The Q-head is trained through Q-learning, defined by a Markov decision process with a state space $\mathcal{S}$, an action space $\mathcal{A} = \{\text{halt}, \text{continue}\}$, and a reward function $R(s, a)$, where $s$ is the state and $a$ is the action.
Once the Q-head halts, it returns a prediction $\hat{y}$ and a corresponding binary reward $\mathbb{1}\{\hat{y} = y\}$ for its prediction. Continuing returns a reward of $0$.
For each possible action, the Q-learning target $\hat{G}^m$ is given by:
$$\hat{G}^m_{\text{halt}} = \mathbb{1}\{\hat{y}^m = y\}, \qquad \hat{G}^m_{\text{continue}} = \begin{cases} \hat{Q}^{m+1}_{\text{halt}} & \text{if } m \geq M_{\max} \\ \max\!\left(\hat{Q}^{m+1}_{\text{halt}},\ \hat{Q}^{m+1}_{\text{continue}}\right) & \text{otherwise} \end{cases}$$
Meaning we halt if the steps we've taken exceed the maximum steps, or if the halt value exceeds the continue value.
The loss function is then the prediction loss plus a binary cross-entropy term between the predicted Q-values and their targets:
$$L^m_{\text{ACT}} = \mathrm{Loss}(\hat{y}^m, y) + \mathrm{BCE}(\hat{Q}^m, \hat{G}^m)$$
The stability of Q-learning is often questionable, but under certain conditions, such as Post-Normalization and weight decay (both present in the HRM's architecture and optimizer), stability can be achieved.
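As a small sanity check on how these pieces fit together, here is a sketch of the stochastic minimum-segment sampling and the halting decision. The names epsilon, m_min, m_max, q_halt, and q_continue mirror the quantities above; this is my reading of the scheme, not the authors' implementation.

```python
import random

def sample_m_min(m_max: int, epsilon: float = 0.1) -> int:
    """With probability epsilon, draw M_min uniformly from {2, ..., M_max};
    otherwise use 1. This keeps some exploration pressure toward longer runs."""
    if random.random() < epsilon:
        return random.randint(2, m_max)
    return 1

def should_halt(q_halt: float, q_continue: float, m: int, m_min: int, m_max: int) -> bool:
    """Halt at the hard cap, or once halting looks better than continuing
    and the minimum segment count has been reached."""
    if m >= m_max:
        return True
    return q_halt > q_continue and m >= m_min
```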
Architecture Details
The HRM is a sequence-to-sequence architecture: the input and output are both sequences of tokens, which are then mapped into vectors.
The model includes an embedding layer that converts tokens into vectors, and an output head that transforms the hidden state of the final timestep into the output probability vector $\hat{y}$.
The low-level and high-level modules are implemented as encoder-only Transformers with identical architectures and dimensions. They include enhancements found in modern LLMs, such as Rotary Position Embeddings (RoPE), Gated Linear Units (GLU), and RMSNorm in a Post-Norm arrangement.
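For intuition about what "Post-Norm RMSNorm plus a gated MLP" looks like in practice, here is a rough block sketch. RoPE is omitted for brevity, nn.RMSNorm requires a recent PyTorch, and none of the dimensions come from the paper; this is an illustration of the block style, not the authors' module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PostNormBlock(nn.Module):
    """One encoder-style block: self-attention and a gated (GLU-style) MLP,
    each followed by a residual connection and RMSNorm (Post-Norm)."""

    def __init__(self, dim: int = 512, n_heads: int = 8, hidden: int = 2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm1 = nn.RMSNorm(dim)
        self.norm2 = nn.RMSNorm(dim)
        self.w_gate = nn.Linear(dim, hidden, bias=False)   # gate path
        self.w_up = nn.Linear(dim, hidden, bias=False)     # value path
        self.w_down = nn.Linear(hidden, dim, bias=False)   # projection back to dim

    def forward(self, x):
        attn_out, _ = self.attn(x, x, x, need_weights=False)
        x = self.norm1(x + attn_out)                        # normalize after the residual
        mlp_out = self.w_down(F.silu(self.w_gate(x)) * self.w_up(x))
        x = self.norm2(x + mlp_out)
        return x
```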
Results
- 40.3% accuracy on ARC-AGI-1
- 5.0% accuracy on ARC-AGI-2
- 55.0% accuracy on Sudoku-Extreme
- 74.5% accuracy on Maze-Hard
These results beat DeepSeek R1, Claude 3.7 Sonnet, and o3-mini-high across all four benchmarks.