Speeding up RL with high-leverage samples
We learn the most from a problem when we struggle but eventually solve it. We learn what works and what doesn’t, preparing us for the next one. But not all parts of this process teach us the same amount: the rare flashes of insight teach the most.
Unlike humans, who attempt a problem once and then solve it or give up, language models generate many independent attempts, or rollouts. Naive reinforcement learning algorithms assume that all of these attempts contain the same amount of information, and thus use all of them to update the model. But just as humans learn the most from rare insight, models learn the most from rare rollouts. If we give a model one hundred attempts at a difficult math problem and it solves it only ten times, those ten attempts teach the model far more than the ninety others.
In this blog, we formalize this intuition and find that in a problem with a 10% success rate like the one above, each successful rollout is 81 times more valuable than a failed one! More generally, we introduce sample leverage, which quantifies how much training signal a rollout contains in the binary reward setting. Using it, we construct the leverage thresholding algorithm, which improves compute-efficiency by identifying and selectively training on high-leverage rollouts.
Three sources of noise in the RL policy gradient
Researchers use large batches of rollouts to accurately estimate the policy gradient in each step of RL. Understanding the sources of noise in this gradient allows us to determine which rollouts give the cleanest gradient estimates.
For a given rollout $\tau$, define its reward as $r(\tau)$. Then, the policy-gradient theorem gives us the gradient of the expected reward as:

$$\nabla_\theta \, \mathbb{E}_{\tau \sim \pi_\theta}[r(\tau)] = \mathbb{E}_{\tau \sim \pi_\theta}\left[ r(\tau) \, \nabla_\theta \log \pi_\theta(\tau) \right]$$
We can estimate the gradient via REINFORCE as follows:
- Sample $m$ rollouts $\tau_1, \dots, \tau_m$ from our policy $\pi_\theta$,
- Compute rewards $r(\tau_j)$ for each rollout,
- Estimate the gradient by averaging $r(\tau_j) \, \nabla_\theta \log \pi_\theta(\tau_j)$.
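The steps above can be sketched for a toy softmax policy over discrete actions (a stand-in for a full language model; the function names and setup here are illustrative, not the blog's actual training code):

```python
import numpy as np

def reinforce_estimate(theta, reward_fn, n=1000, rng=None):
    """Average r(tau) * grad log pi(tau) over n sampled actions.

    theta: logits of a softmax policy over discrete actions.
    reward_fn: maps a sampled action to a scalar reward.
    """
    rng = rng or np.random.default_rng(0)
    probs = np.exp(theta - theta.max())
    probs /= probs.sum()
    grad = np.zeros_like(theta)
    for _ in range(n):
        a = rng.choice(len(theta), p=probs)
        # gradient of log softmax at action a: one_hot(a) - probs
        glogp = -probs.copy()
        glogp[a] += 1.0
        grad += reward_fn(a) * glogp
    return grad / n

# Toy check: reward only on action 0, so the estimate should push theta[0] up.
theta = np.zeros(3)
g = reinforce_estimate(theta, lambda a: 1.0 if a == 0 else 0.0)
```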
Let’s break down the error in this gradient estimate. Let the training distribution of input prompts be $D_{\text{train}}$ and the real-world distribution be $D_{\text{real}}$. Say we sample $n$ datapoints $x_1, \dots, x_n \sim D_{\text{train}}$, and for each datapoint $x_i$, we sample $m$ rollouts $\tau_{i,1}, \dots, \tau_{i,m} \sim \pi_\theta(\cdot \mid x_i)$. Our gradient estimate is then:

$$\hat{g} = \frac{1}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} r(\tau_{i,j}) \, \nabla_\theta \log \pi_\theta(\tau_{i,j} \mid x_i)$$
We’ll analyze this expression with the following three levels of gradient estimates:
- Set $g(\tau)$ to be the gradient estimate of a trajectory $\tau$, i.e. $g(\tau) = r(\tau) \, \nabla_\theta \log \pi_\theta(\tau)$.
- Set $g(x)$ to be the gradient estimate of a datapoint $x$, i.e. $g(x) = \mathbb{E}_{\tau \sim \pi_\theta(\cdot \mid x)}\left[ g(\tau) \right]$.
- Set $g(D)$ to be the gradient estimate of a distribution of datapoints $D$, i.e. $g(D) = \mathbb{E}_{x \sim D}\left[ g(x) \right]$.
The error, or noise, in our gradient estimate can be broken down into three components, corresponding to the three levels above:

$$\hat{g} - g(D_{\text{real}}) = \left( \hat{g} - \frac{1}{n}\sum_{i=1}^{n} g(x_i) \right) + \left( \frac{1}{n}\sum_{i=1}^{n} g(x_i) - g(D_{\text{train}}) \right) + \left( g(D_{\text{train}}) - g(D_{\text{real}}) \right)$$
These three terms correspond to rollout noise, datapoint noise, and distribution noise, respectively. Many important RL optimizations reduce one or more of these noise sources:
- Larger batch size: by sampling more rollouts per batch or more datapoints per batch, we can reduce our rollout or datapoint noise, respectively. In practice, we do not train on the entire dataset each step, so the noise does not precisely follow the above formula.
- Higher data quality: with higher quality data, we align the train and real-world distributions more closely (reducing distribution noise) and can also reduce the datapoint noise if the values of $g(x)$ are tightly clustered.
- Accurate reward baselines: advantage estimators, such as Group Relative Policy Optimization (GRPO), reduce the rollout noise by modifying the quantity $g(\tau)$, typically by adjusting the reward of a trajectory based on the datapoint $x$ it was sampled from.
Our work focuses on the first source of noise. We analyze the gradient variance of different trajectories and use it to define the sample leverage, which captures the signal-to-noise ratio of an RL rollout. By only training on the highest-signal rollouts, we free up more resources for producing rollouts.
Models learn more from rare outcomes
Depending on their reward, different trajectories provide gradient estimates of different magnitudes. Since higher gradient magnitudes correspond to higher signal-to-noise ratios, trajectories with different rewards have different noise levels.
Consider a single prompt with a binary reward (e.g., a math problem). RL traditionally optimizes the probability of success $p$: the probability that the model gets a reward of 1 on any given attempt. As a reminder, the policy gradient theorem gives:

$$\nabla_\theta \, p = \mathbb{E}_{\tau \sim \pi_\theta}\left[ r(\tau) \, \nabla_\theta \log \pi_\theta(\tau) \right]$$
Let’s analyze what happens if we condition on the value of $r(\tau)$. We drop the $\theta$ subscripts for readability. Since $\nabla p = \mathbb{E}\left[ r(\tau) \, \nabla \log \pi(\tau) \right] = p \cdot \mathbb{E}\left[ \nabla \log \pi(\tau) \mid r(\tau) = 1 \right]$, conditioning on $r(\tau) = 1$ gives:

$$\mathbb{E}\left[ \nabla \log \pi(\tau) \mid r(\tau) = 1 \right] = \frac{\nabla p}{p}$$
We can similarly condition on $r(\tau) = 0$, using the fact that the unconditional mean $\mathbb{E}\left[ \nabla \log \pi(\tau) \right]$ is zero, to get:

$$\mathbb{E}\left[ \nabla \log \pi(\tau) \mid r(\tau) = 0 \right] = -\frac{\nabla p}{1 - p}$$
Within a rollout, only some tokens meaningfully contribute to the reward and thus to the mean gradient. However, all tokens contribute to the gradient noise. Thus, conditioning on the reward should not affect most tokens’ contribution to the gradient noise, so we posit that the noise is the same for all types of rollouts:

$$\mathrm{Var}\left[ \nabla \log \pi(\tau) \mid r(\tau) = 1 \right] = \mathrm{Var}\left[ \nabla \log \pi(\tau) \mid r(\tau) = 0 \right] = \sigma^2,$$

where $\sigma^2$ is the total variance $\mathrm{Var}\left[ \nabla \log \pi(\tau) \right]$. We will justify this assumption both theoretically and empirically. Specifically, it allows us to derive GRPO and provides the basis for an experimentally faster RL algorithm.
Before we get into that, let’s do some basic analysis of these formulas. Recall that the gradient estimate of a trajectory is:

$$g(\tau) = r(\tau) \, \nabla \log \pi(\tau)$$

Define the noise of this gradient estimate as its deviation from the conditional mean. Then the signal-to-noise ratio is $\frac{\lVert \nabla p \rVert}{p \, \sigma}$ for a sample with reward 1 and $\frac{\lVert \nabla p \rVert}{(1-p) \, \sigma}$ for a sample with reward 0.

For example, at $p = 0.1$, the gradient signal-to-noise ratio is 9 times higher for a sample with reward 1 than for one with reward 0. This discrepancy allows us to selectively train on the samples with the highest signal while controlling the gradient noise.
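A quick numeric check of this 9x figure. We assume, as above, that both reward outcomes share the same noise $\sigma$; the common factor $\lVert \nabla p \rVert / \sigma$ then cancels in the ratio:

```python
import math

def snr_ratio(p):
    """SNR(reward 1) / SNR(reward 0) under the equal-noise assumption.

    SNR(reward 1) = |grad p| / (p * sigma) and
    SNR(reward 0) = |grad p| / ((1 - p) * sigma), so the ratio is (1 - p) / p.
    """
    return (1 - p) / p

# At a 10% success rate, a success carries a 9x cleaner gradient signal.
ratio = snr_ratio(0.1)
```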
Deriving GRPO from relative gradient signals
In the previous section, we assumed that the gradient logprobs of rollouts with different rewards have the same noise. We’ll find that optimizing for the lowest gradient noise with this assumption leads us to GRPO.
Let $\tau_0, \tau_1$ be trajectories sampled from our model with rewards 0 and 1 respectively. Define the following estimators:

$$g_0 = -(1 - p) \, \nabla \log \pi(\tau_0), \qquad g_1 = p \, \nabla \log \pi(\tau_1)$$

Both quantities are unbiased estimators of $\nabla p$, with variances:

$$\mathrm{Var}(g_0) = (1 - p)^2 \sigma^2, \qquad \mathrm{Var}(g_1) = p^2 \sigma^2,$$

where $\sigma^2$ is the shared rollout noise defined above.
Define the estimator $g = w_0 g_0 + w_1 g_1$ for weights $w_0, w_1$. We'll find the best way to choose these weights to get an unbiased, low-variance estimate of $\nabla p$.

- Unbiased: $\mathbb{E}[g] = (w_0 + w_1) \, \nabla p$, so we must have $w_0 + w_1 = 1$ to get an unbiased estimate.
- Low-variance: We want to minimize the variance $w_0^2 (1-p)^2 \sigma^2 + w_1^2 p^2 \sigma^2$ subject to $w_0 + w_1 = 1$. Setting the derivative to $0$ gives $w_0 (1-p)^2 = w_1 p^2$.

Thus, for some normalization constant $c$,

$$w_0 = \frac{c}{(1-p)^2}, \qquad w_1 = \frac{c}{p^2}$$
In other words, the optimal estimate of the gradient given rollouts $\tau_0, \tau_1$ is proportional to

$$\frac{1}{p} \, \nabla \log \pi(\tau_1) - \frac{1}{1-p} \, \nabla \log \pi(\tau_0).$$

These weights are exactly the ones GRPO assigns, up to a shared positive scale!
The math in this section scales to multiple rollouts. It can be shown that if we sample $n_0$ and $n_1$ rollouts with rewards 0 and 1, respectively, we should weight each rollout with reward 0 proportional to $-\frac{1}{1-p}$, and each rollout with reward 1 proportional to $\frac{1}{p}$: the same weights as GRPO, with $p$ estimated as $\frac{n_1}{n_0 + n_1}$. Thus, assigning advantages to minimize gradient noise is equivalent to GRPO across a batch.
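We can sanity-check this equivalence numerically: the variance-minimizing per-rollout weights should match GRPO's advantages up to a shared positive scale. A minimal check, with helper names of our own choosing:

```python
import numpy as np

def grpo_advantages(rewards):
    """GRPO advantage: (r - mean) / std, computed within the group."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / r.std()

def min_variance_weights(rewards):
    """Per-rollout weights from the variance-minimization derivation:
    1/p for reward-1 rollouts, -1/(1-p) for reward-0 rollouts,
    with p estimated as the group mean reward."""
    r = np.asarray(rewards, dtype=float)
    p = r.mean()
    return np.where(r == 1, 1.0 / p, -1.0 / (1.0 - p))

rewards = [1, 0, 0, 0, 0]          # estimated success rate p = 0.2
adv = grpo_advantages(rewards)      # [2.0, -0.5, -0.5, -0.5, -0.5]
w = min_variance_weights(rewards)   # [5.0, -1.25, -1.25, -1.25, -1.25]
ratio = w / adv                     # constant: the two weightings are proportional
```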
Rollout value is captured by the sample leverage
As demonstrated above, the signal-to-noise ratio is $\frac{\lVert \nabla p \rVert}{p \, \sigma}$ for a sample with reward 1 and $\frac{\lVert \nabla p \rVert}{(1-p) \, \sigma}$ for a sample with reward 0.

Correspondingly, we can calculate how many samples with reward 1 yield the same ratio as a sample with reward 0. Since taking $m$ samples increases the signal-to-noise ratio by a factor of $\sqrt{m}$, one sample with reward 0 is worth $\left(\frac{p}{1-p}\right)^2$ samples with reward 1.

For instance, if $p = 0.9$, each reward-0 sample is worth as much as 81 reward-1 samples. On the other hand, if $p = 0.1$, each reward-1 sample becomes 81 times as valuable as a reward-0 sample. This result is our key takeaway.
Formally, we define the leverage of a sample $\tau$ as:

$$L(\tau) = \frac{\left( r(\tau) - p \right)^2}{p \, (1 - p)}$$
The leverage has a few nice properties:
- The average leverage of samples from a given prompt is $1$.
- The leverage of a sample is proportional to its GRPO advantage squared.
- The optimal gradient estimate from a set of samples $\tau_1, \dots, \tau_k$ has variance proportional to $\left( \sum_{i=1}^{k} L(\tau_i) \right)^{-1}$.
Property 3 means that if we sample a group of 128 rollouts from the same datapoint and train on a subset of them with total leverage 64, we get the same train signal as having sampled and trained on 64 rollouts! Thus, the leverage is the effective number of samples we train on. We can convert this property into an efficient RL algorithm by sampling many rollouts and selectively training on the ones with high leverage.
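These properties are easy to verify numerically for the squared-GRPO-advantage form of the leverage, $(r - p)^2 / (p(1-p))$:

```python
def leverage(reward, p):
    """Sample leverage for a binary reward: (r - p)^2 / (p * (1 - p)),
    i.e. the squared GRPO advantage."""
    return (reward - p) ** 2 / (p * (1 - p))

# A problem solved 10% of the time:
p = 0.1
lev_success = leverage(1, p)   # ~9
lev_failure = leverage(0, p)   # ~1/9
ratio = lev_success / lev_failure                     # ~81: one success ~ 81 failures
mean_lev = p * lev_success + (1 - p) * lev_failure    # property 1: equals 1
```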
Discarding low-leverage samples
Let’s analyze how the leverage scales as we train on an increasing fraction of the sampled rollouts. Suppose a problem has a probability of success $p$. Then we can selectively train on its highest-leverage rollouts, getting more leverage per sample without introducing bias. For any datapoint, if we take the $k$ rollouts with the highest leverage, their total leverage will be greater than $k$. Across different success rate distributions (simulated with beta distributions), we get the following plot:

Let’s examine some numbers for the uniform success rate distribution. By using only 20% of the data, we already capture 60% of the leverage. Once we use more than 50% of the data, the graph is linear, and training on an extra datapoint gains comparatively little leverage. We can exploit this tradeoff by allocating more compute resources towards generating rollouts and selecting the highest-leverage ones for training.
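The uniform curve can be reproduced with a short simulation. This is our own toy setup, not the blog's exact one: success rates drawn uniformly, 128 rollouts per prompt, and leverage computed from the empirical success rate:

```python
import numpy as np

rng = np.random.default_rng(0)

def captured_leverage(frac, n_prompts=2000, group=128):
    """Fraction of total leverage captured by keeping only the top `frac`
    of rollouts (ranked by leverage) within each group."""
    total = kept = 0.0
    for p in rng.uniform(0.01, 0.99, size=n_prompts):
        r = (rng.random(group) < p).astype(float)
        p_hat = r.mean()
        if p_hat in (0.0, 1.0):   # degenerate group: zero advantage everywhere
            continue
        lev = np.sort((r - p_hat) ** 2 / (p_hat * (1 - p_hat)))[::-1]
        total += lev.sum()
        kept += lev[: int(frac * group)].sum()
    return kept / total

share_20 = captured_leverage(0.2)  # roughly 0.6 for uniform success rates
```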
How to use sample leverage to speed up RL
We introduce the leverage thresholding algorithm:
- Set a leverage proportion threshold $T$.
- When the inference engine returns a group of rollouts from the same datapoint, reduce the list of rollouts to the smallest set which meets the leverage threshold (i.e., has total leverage at least $T \cdot G$, where $G$ is the group size). Note that we estimate the success rate and compute advantages before reducing our rollout list, since the reward distributions of the original and reduced lists are different.
- Collect selected rollouts into a training batch and train on them.
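The per-group selection step can be sketched as follows. This is our own minimal implementation, assuming binary rewards and a group-mean estimate of the success rate:

```python
import numpy as np

def leverage_threshold_select(rewards, T):
    """Reduce a group of binary-reward rollouts to the smallest subset whose
    total leverage is at least T * group_size. Leverages (and, in a real
    trainer, advantages) are computed on the FULL group before discarding."""
    r = np.asarray(rewards, dtype=float)
    G = len(r)
    p = r.mean()                        # estimated success rate
    if p in (0.0, 1.0):
        return []                       # zero advantage everywhere: skip group
    lev = (r - p) ** 2 / (p * (1 - p))  # per-rollout leverage
    order = np.argsort(-lev)            # highest leverage first
    total, selected = 0.0, []
    for i in order:
        if total >= T * G:
            break
        selected.append(int(i))
        total += lev[i]
    return selected

# 8 rollouts, one success: p_hat = 1/8, the success has leverage 7, failures 1/7.
idx = leverage_threshold_select([1, 0, 0, 0, 0, 0, 0, 0], T=0.9)
```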
By giving the training process less work to do, leverage thresholding allows us to shift compute from the training to the inference engines. The threshold $T$ controls the strength of this shift: lower values lead the training process to strongly prioritize high-leverage samples and discard the rest. Using a per-group threshold ensures that every datapoint is represented in our training batch. If we just collected the highest-leverage samples from across datapoints into a batch, some datapoints would have very few or no samples selected, reducing our effective dataset size.
We use the following train setup:
- Advantage estimator: GRPO
- Model: Qwen3-8B
- Compute: One 8xB200 node
- Algorithm: Fully asynchronous RL
The list of experiments is:
| Run type | Training Process GPUs | Inference Engine GPUs | Leverage thresholds |
|---|---|---|---|
Baseline | 3 | 5 | - |
Baseline | 2 | 6 | - |
Leverage thresholding | 2 | 6 | 0.88, 0.9 |
Leverage thresholding | 1 | 7 | 0.5, 0.6 |
In baseline runs, we find that with 3 GPUs, the training process outpaces the inference engine, and with 2 GPUs, it is slower. Thus, one of these options is the optimal baseline split for our workload. Leverage thresholds are set empirically to balance the training and inference speeds. (See the appendix for batch and dataset details.)
Leverage thresholding leads to improved runs in practice
Training with the leverage thresholding algorithm results in each step containing less total leverage. However, leverage thresholding runs process more total leverage per unit time, leading to better eval performance.
All figures in this section use the following recipe for readability:
- We omit the baseline run with 2 train GPUs since it underperforms the baseline run with 3 train GPUs in all metrics.
- We believe that the differences between leverage thresholding runs with the same compute split are largely due to variance, so we plot the mean value and shade the range across runs with the same compute split.
- We use ‘base-x-y’ to denote a baseline run with $x$ train GPUs and $y$ rollout GPUs, and ‘lev-x-y’ to denote the leverage thresholding runs with $x$ train GPUs and $y$ rollout GPUs.
Refer to the appendix to see each run’s performance individually.
The training process’ focus on high-leverage samples leads to faster steps, as shown in Figure 2. For runs with leverage thresholding, each train step has total leverage less than the batch size (768), so we also plot the cumulative leverage (Figure 3).

Both plots show that the leverage thresholding algorithm performs better. Finally, every single leverage thresholding run had an eval score at least as good as that of the baseline runs:

Conclusion: additional ways to apply sample leverage
“Garbage in, garbage out” has always been true; that’s why researchers spend months painstakingly constructing quality datasets. But unlike identifying the best prompts—a largely qualitative task—identifying the best rollouts is easy. We have a formula for it! The leverage thresholding algorithm provides automatic trash disposal, exploiting the gap between batch leverage and batch size to get an efficiency win.
We hope that the formalism provided by sample leverage will lead to more exciting work. Here are some of the possible extensions:
- Generalizing sample leverage past the binary reward setting is difficult since conditioning on a certain reward value leads to a biased estimate of the gradient. Thus, arbitrarily removing samples from a batch also biases the gradient. However, it may be possible to extend sample leverage to work with rubric-based rewards, where the reward is a sum of independent binary rewards.
- Instead of using a fixed threshold $T$ as we did for our experiments, we could dynamically adjust $T$ at each step to keep both the training process and inference engines running continuously.
- Deploying a model in production may produce more data than is feasible to train on. Prioritizing training on samples with the highest leverage allows for maximizing the signal extracted per datapoint used.
The leverage thresholding algorithm provides a knob you can turn to better utilize a finite pool of computing resources by increasing the speed of your training process, allowing you to allocate more compute towards sampling. More generally, sample leverage is a framework for the analysis and optimization of dataset construction. We hope it allows the community to post-train models effectively and efficiently.
Appendix
1. Scaling leverage thresholding
We also examined the performance of the leverage thresholding algorithm with five nodes. We were unable to optimize the compute or leverage threshold hyperparameters at this scale, but we observed clear improvements with the parameters used. We trained Qwen3-8B with both our baseline and leverage thresholding setups with one node for training and four nodes for sampling, using a fixed leverage proportion threshold. Dataset details are further in the appendix. Our results suggest improved step time and effective sample processing speed:

The leverage thresholding run also exhibited better eval performance.

We found that the training process was much slower than the inference engine with this split, so there is room to improve the baseline by increasing the number of train GPUs. We also expect the leverage thresholding run to improve significantly with optimal hyperparameters. Despite the suboptimal hyperparameters, these results suggest that we can still selectively train on high-leverage samples for faster training at this scale.
2. Leverage thresholding shows gains across rollout lengths
As the model learns math, it is able to reason more efficiently and uses fewer tokens per rollout. Shorter rollouts mean faster steps, so step times generally decrease throughout a run. However, we observed significant run-to-run variance in how much rollout lengths decreased, adding noise to our step time comparisons. To control for this variation, we compared the step time to the mean rollout length of that step. We also plotted the effective step time by dividing the step time by the mean batch leverage proportion. Note that ‘lev-x-y-z’ refers to the leverage thresholding run with $x$ train GPUs, $y$ rollout GPUs, and a leverage threshold of $z$.

In both plots, we see an improvement from most leverage thresholding runs across most rollout lengths. Thus, the speedups are due to the algorithm itself and not rollout length variance.
3. Leverage thresholding balances the training and inference engines
We use the idle ratio of each run (that is, the proportion of the time the training process spends waiting for rollouts) to compare the relative speeds of the training and inference engines.

In the ‘base-2-6’ run, the training process hardly ever waits, showing that it is slower than the inference engine. On the other hand, with 3 GPUs, the idle ratio is always positive, so the training process is faster than the inference engine. The runs with leverage thresholding have intermediate idle ratios, indicating that both engines progress at nearly the same speed. For any compute allocation where the training process is slower than the inference engine, we can tune the leverage threshold to balance their speeds.
4. Dataset and batch details
We filtered out problems which were too hard or too easy from the DAPO math dataset by giving Qwen3-8B four attempts on each and removing any that it solved 0 or 4 times. We then split the remaining datapoints between training and evals. Each eval datapoint was run four times, and the reported eval score is the proportion of successes.
Our one-node and five-node runs each used batches consisting of a fixed number of datapoints, with a group of rollouts generated from each datapoint.
5. Additional plots
Earlier in the blog, we made plots more readable by compressing leverage thresholding runs with the same compute allocation into one curve. Here, we plot each run separately for the interested reader. Note that ‘lev-x-y-z’ refers to the leverage thresholding run with $x$ train GPUs, $y$ rollout GPUs, and a leverage threshold of $z$.
