Unifying RLHF Objectives

Reinforcement learning from human feedback (RLHF) tries to teach language models to optimize for human preferences, rather than the supervised perplexity objective used in pretraining. It does so by collecting a dataset of language model outputs and having humans rate which output is better (Do you prefer answer A or B?). Here, I describe several commonly-used RLHF algorithms in terms of their gradients.

Consider the problem of optimizing a language model $\pi_\theta$ on a preference dataset consisting of a context $x$ and two completions: the chosen completion $y_+$ and the rejected completion $y_-$. These represent, e.g., two different possible responses from a chatbot, one of which was chosen by a human.

We can view different RLHF algorithms by considering the gradient of their loss function:

\[- \nabla_\theta \mathcal{L}(\pi_\theta) = w_{+} \nabla_\theta \Bigl( \log \pi_\theta(y_+ | x) \Bigr ) - w_{-} \nabla_\theta \Bigl( \log \pi_\theta(y_- | x) \Bigr )\]

Intuitively, these algorithms typically increase the probability of the chosen completion and decrease the probability of the rejected completion. Different algorithms are distinguished by their choice of $w_+$ and $w_-$ (for methods that do not operate on paired data, simply take $w_- = 0$). They may also use a reward function $r(x, y)$ representing the “elo” of the full completion. Note that some simplifications are made for ease of comparison.
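To make this template concrete, here is a minimal PyTorch sketch of the weighted form (the function and argument names are my own, not from any library). It assumes per-sequence log-probabilities are already computed; the weights are detached so that only the log-probability terms carry gradient, which reproduces the gradient template above at the current parameters.

```python
import torch

def unified_rlhf_loss(logp_chosen: torch.Tensor,
                      logp_rejected: torch.Tensor,
                      w_plus: torch.Tensor,
                      w_minus: torch.Tensor) -> torch.Tensor:
    """Surrogate loss whose gradient matches the template above.

    logp_chosen / logp_rejected: log pi_theta(y | x), summed over tokens,
    with gradients attached.
    w_plus / w_minus: method-specific weights, detached so they act as
    constants and only the log-prob terms receive gradient.
    """
    # Minimizing this pushes log pi(y_+ | x) up and log pi(y_- | x) down,
    # each scaled by its weight.
    return -(w_plus.detach() * logp_chosen
             - w_minus.detach() * logp_rejected).mean()
```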

Summary

Supervised (weight on $\nabla_\theta \log \pi_\theta$ is always positive):

Unpaired (increase $w_+$ proportional to $r(x, y_+)$; assume $r(x, y_+) > 0$ for clarity):

Paired (push $y_+$ and $y_-$ apart):

Note that unpaired methods may also have negative weights when $r(x, y_+) < 0$. Thus, we can think of them as dynamically choosing which samples should receive negative weight, in contrast to paired methods, which set the negative weights directly based on the dataset.
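The simplest unpaired instance is a REINFORCE-style surrogate in which the weight on $\nabla_\theta \log \pi_\theta$ is just the reward. A sketch (again with my own naming), showing how a negative $r(x, y)$ dynamically turns a “push up” into a “push down”:

```python
import torch

def unpaired_pg_loss(logp: torch.Tensor, reward: torch.Tensor) -> torch.Tensor:
    """REINFORCE-style surrogate: the weight on grad log pi is r(x, y).

    logp: log pi_theta(y | x) for each sampled completion (with gradient).
    reward: r(x, y) for each completion; when it is negative, the same
    term acts as a negative weight, i.e. the sample is pushed down.
    """
    return -(reward.detach() * logp).mean()
```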

PPO derivation

I only include the derivations for PPO and reward modeling as illustrative examples.

PPO starts from a reference policy $\pi_\text{ref}$ at the beginning of training, which generates the dataset used for training, and enforces a KL divergence constraint $KL(\pi_\theta \| \pi_\text{ref})$ to ensure that $y_+ \sim \pi_\theta(\cdot | x)$ does not diverge too much from the data used to train the reward model. It does this by maximizing:

\[-\mathcal{L}(\pi_\theta) = \min\Bigl(\frac{\pi_\theta(y_+ | x)}{\pi_\text{ref}(y_+ | x)}, 1 + \epsilon \Bigr) \cdot r(x, y_+)\]

which immediately sets the derivative equal to zero when $\frac{\pi_\theta(y_+ | x)}{\pi_\text{ref}(y_+ | x)} > 1 + \epsilon$.

Then, take the derivative for the other case:

\[- \nabla_\theta \mathcal{L}(\pi_\theta) = \frac{1}{\pi_\text{ref}(y_+ | x)} \cdot r(x, y_+) \cdot \nabla_\theta \Bigl( \pi_\theta(y_+ | x) \Bigr)\]

We then apply the “policy gradient trick,” which follows from the chain rule, $\nabla_x f(x) = f(x) \nabla_x \log f(x)$, yielding the final weight:

\[w_+ = \begin{cases} \frac{\pi_\theta(y_+ | x)}{\pi_\text{ref}(y_+ | x)} \cdot r(x, y_+) & \frac{\pi_\theta(y_+ | x)}{\pi_\text{ref}(y_+ | x)} < 1 + \epsilon \\ 0 & \text{o.w.} \end{cases}\]

One can perform a similar derivation for the $1 - \epsilon$ side of the PPO surrogate objective. This one-sided derivation is not exactly right, but captures the spirit of the maximization.
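Under the same one-sided simplification (and still assuming $r(x, y_+) > 0$), the resulting weight can be written directly; this sketch uses my own naming and is not the full two-sided PPO surrogate. It can be plugged into the unified loss above with $w_- = 0$.

```python
import torch

def ppo_one_sided_weight(logp_theta: torch.Tensor,
                         logp_ref: torch.Tensor,
                         reward: torch.Tensor,
                         eps: float = 0.2) -> torch.Tensor:
    """w_+ from the case analysis above: ratio * r while the probability
    ratio pi_theta / pi_ref is below 1 + eps, and 0 once it is clipped."""
    ratio = torch.exp(logp_theta - logp_ref)  # pi_theta(y_+|x) / pi_ref(y_+|x)
    w_plus = torch.where(ratio < 1.0 + eps, ratio * reward,
                         torch.zeros_like(ratio))
    return w_plus.detach()
```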

We can see that, compared to APA, PPO maintains a positive $w_+$ until the $1 + \epsilon$ ratio is hit – enforcing the KL constraint irrespective of $r$ – while APA keeps $w_+$ positive until the log-ratio equals the reward.

Reward modeling derivation

In this case, we are considering only the task of training the reward model $r_\theta(x, \cdot)$ from preference data; thus, we take the derivative with respect to the parameters of $r_\theta$ rather than those of the policy $\pi$.

Using the Bradley-Terry model for pairwise comparisons (where $r_\theta$ can be interpreted as an “elo”), we optimize the objective:

\[-\mathcal{L}(r_\theta) = \log p_\theta(y_+ > y_- | x) = \log \sigma(r_\theta(x, y_+) - r_\theta(x, y_-))\]

We utilize some useful properties of the sigmoid function:

  1. $\sigma(x) = 1 - \sigma(-x)$
  2. $\nabla_x \sigma(x) = \sigma(x) (1 - \sigma(x)) = \sigma(x) \sigma(-x)$ (by applying (1))
  3. $\nabla_x \log \sigma(x) = \sigma(-x)$ (by applying the chain rule and (2))

This thus yields:

\[\begin{align*} - \nabla_\theta \mathcal{L}(r_\theta) &= \nabla_\theta \log \sigma(r_\theta(x, y_+) - r_\theta(x, y_-)) \\ &= \sigma(r_\theta(x, y_-) - r_\theta(x, y_+)) \nabla_\theta \Bigl( r_\theta(x, y_+) - r_\theta(x, y_-) \Bigr) \\ &= \sigma(r_\theta(x, y_-) - r_\theta(x, y_+)) \Bigl( \nabla_\theta \Bigl( r_\theta(x, y_+) \Bigr) - \nabla_\theta \Bigl( r_\theta(x, y_-) \Bigr) \Bigr) \end{align*}\]

which completes the derivation with $w_+ = w_-$. DPO follows a similar derivation using its implicit reward $\hat{r}_\theta = \log{\frac{\pi_\theta(y | x)}{\pi_\text{ref}(y | x)}}$, which intuitively means the policy $\pi_\theta$ “values” $y$ in proportion to its log-probability (relative to the reference).
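In code, the reward-model objective above is just a log-sigmoid of the reward difference; a minimal sketch (names are my own):

```python
import torch
import torch.nn.functional as F

def bradley_terry_loss(r_chosen: torch.Tensor,
                       r_rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise reward-model loss: -log sigma(r_theta(x, y_+) - r_theta(x, y_-)).

    F.logsigmoid is numerically stabler than torch.log(torch.sigmoid(...)).
    """
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```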

We can see DPO has a very similar formulation to APA, where both aim to softly increase $\pi_\theta(y_+ | x)$ until $\pi_\theta(y_+ | x) = e^{r(x, y_+)} \cdot \pi_\text{ref}(y_+ | x)$. This in turn is very similar to the PPO objective, except that PPO applies a hard clip once the ratio exceeds $1 + \epsilon$. RRHF also uses a hard clip, but replaces $\pi_\text{ref}(y_+ | x)$ with $\pi_\theta(y_- | x)$, ensuring $\pi_\theta(y_+ | x) \geq \pi_\theta(y_- | x)$.
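For comparison, a sketch of the DPO loss under the simplification above, substituting the implicit reward $\log \frac{\pi_\theta}{\pi_\text{ref}}$ into the same Bradley-Terry form (the $\beta$ temperature from the DPO paper is included; set $\beta = 1$ to match the simplified text):

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_theta_chosen: torch.Tensor,
             logp_theta_rejected: torch.Tensor,
             logp_ref_chosen: torch.Tensor,
             logp_ref_rejected: torch.Tensor,
             beta: float = 1.0) -> torch.Tensor:
    """DPO loss: Bradley-Terry applied to implicit rewards log(pi_theta / pi_ref)."""
    implicit_chosen = logp_theta_chosen - logp_ref_chosen        # r_hat(x, y_+)
    implicit_rejected = logp_theta_rejected - logp_ref_rejected  # r_hat(x, y_-)
    return -F.logsigmoid(beta * (implicit_chosen - implicit_rejected)).mean()
```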

In contrast, C-RLFT / Decision Transformer-style methods do not “push down” the $w_-$ term; rather, they condition on some notion of negative reward. Therefore, the suboptimal behavior is still in the model, but must be elicited via a negative prompt.
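A sketch of the conditioning idea (the tag strings are illustrative only, not OpenChat's actual format): the quality label moves into the prompt rather than into the loss weight.

```python
def conditioned_example(x: str, y: str, preferred: bool) -> dict:
    """Reward-conditioned supervised example (C-RLFT / Decision Transformer
    style): both completions are trained on with positive weight, and the
    quality tag in the prompt is what separates them at inference time."""
    tag = "<good>" if preferred else "<bad>"
    return {"prompt": f"{tag} {x}", "completion": y}
```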

Commentary

I chose the above methods because they have been used to train top models on the Chatbot Arena benchmark:

  1. SFT is present in many models, including Hermes
  2. PPO is used in top closed-source models such as ChatGPT and Gemini
  3. C-RLFT is used in OpenChat, the top 7B model as of Feb 2024 (used to initialize Starling)
  4. APA is the final stage of Starling, which builds on OpenChat
  5. DPO is popular in the open-source community, but performs relatively poorly in Chatbot Arena, with its best 7B model being Zephyr
  6. The authors of RRHF went on to build Qwen, which at the time of writing is the top open-source model on the leaderboard

Ultimately, their objective functions are conceptually very similar and all performant after tuning; the real power is obviously in the dataset (and how it is weighted). The more interesting question is how the objectives enable you to train on different kinds of datasets: offline vs. online, paired vs. unpaired.

Offline vs online training

Open-source methods typically use GPT-4 outputs as training data; since GPT-4 has already undergone online RL optimization, it seems they can get away with offline-only training. However, at the time of writing, it does seem necessary to have some sort of online optimization somewhere in the pipeline, and we continue to see online PPO used to train the world’s largest closed-source models.

Paired vs unpaired data

Furthermore, DPO only trains on paired data; while this enables its simplicity, it is also a limitation. On Hugging Face, there is a lack of open-source preference datasets for coding, math, and reasoning. Evaluating Zephyr (DPO) on MT-Bench, we see that it performs worse in these categories than comparable 7B open-source models that are not constrained to preference data.

It is relatively easy to obtain preference data by using the GPT-4 API as a judge on your own datasets, but real “human” preferences will likely remain limited compared to large closed-source labelling efforts (and finetuning is always limited compared to pretraining). Also, by definition, unpaired datasets will always contain strictly more data than paired ones. Thus, unpaired methods may be more flexible.

Paired reward model training

I think there’s an interesting question here: if the paired preference dataset is lacking for some task (say, $x$), a DPO-style paired training run might perform poorly on task $x$ – but can a good reward model (also trained on paired data) be trained for $x$ and then used with an unpaired method?

One idea might be that, conditioned on the task $x$, the policy only needs to upweight completions relative to other completions for the same task. Then, even if the reward model is not trained on $x$, as long as the final policy learning procedure trains on $x$, the relative weighting from the reward model might still be useful, or the reward can be ignored entirely. In this sense, as long as there are some examples of the task $x$ in the final training stage, the policy might not “forget” it (even when, effectively, $w_+ = 0$).
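One way to picture this relative weighting (my own illustration, not taken from any of the methods above) is to center the rewards across completions of the same prompt, so that only the ordering within task $x$ matters:

```python
import torch

def per_prompt_relative_weights(rewards: torch.Tensor) -> torch.Tensor:
    """rewards: r(x, y) for several completions of the SAME prompt x.

    Centering within the prompt keeps only the relative ordering; if the
    reward model is uninformative on x (near-constant rewards), the
    weights shrink toward zero rather than injecting a bad absolute scale.
    """
    return (rewards - rewards.mean()).detach()
```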

Unpaired training

Another approach is simply the C-RLFT-style method, where the model remembers everything ($w_+ > 0$ always). Other training recipes blend RLHF with an additional pretraining loss to try to avoid degradation; RRHF does something similar with its $w_+ = 1 + w_-$ term. However, for truly undesirable behaviors (toxicity, etc.), you probably do want $w_- > 0$ so that their probability is pushed all the way toward zero.

Anyway, there are a lot of moving pieces required for successful RLHF – I think this is just one interesting perspective on some of them.
