[RL Series, Part 3] The REINFORCE Leave-One-Out (RLOO) Algorithm (a REINFORCE policy-gradient algorithm based on a leave-one-out baseline)


1. Preface

Back to Basics: Revisiting REINFORCE Style Optimization for Learning from Human Feedback in LLMs

paper: https://arxiv.org/pdf/2402.14740

This paper proposes RLOO (REINFORCE Leave-One-Out), a reinforcement learning algorithm built on REINFORCE.

2. RLOO

2.1 REINFORCE Baseline

From: Williams (1992), https://people.cs.umass.edu/~barto/courses/cs687/williams92simple.pdf

For the baseline, a parameter-free choice is a moving average of past returns, i.e., a Moving Average Baseline. The average can be taken over a fixed window or over the whole history; in the windowless case, the baseline is the mean of all returns observed so far in training:

$$b_{\text{MA}} = \frac{1}{S} \sum_{s=1}^{S} R(\tau_s)$$

where $S$ is the number of training steps and $R(\tau_s)$ is the return at step $s$.
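As a concrete illustration, here is a minimal sketch in plain Python of the windowless moving-average baseline, assuming one scalar return per training step (the policy update that consumes the baseline is not shown):

# Running (windowless) moving-average baseline: the mean of all returns R(tau)
# observed so far during training.
returns_history = []

def moving_average_baseline(new_return: float) -> float:
    returns_history.append(new_return)
    return sum(returns_history) / len(returns_history)

for r in [1.0, 2.0, 3.0]:
    b_ma = moving_average_baseline(r)
print(b_ma)  # 2.0 -- the average of all returns seen so far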

Another option is an Exponential Moving Average Baseline (see this blog on exponential moving averages: https://zhuanlan.zhihu.com/p/670490330):

$$b_{\text{EMA}} \leftarrow \alpha \cdot b_{\text{EMA}} + (1 - \alpha) \cdot \bar{R}(\tau)$$

where $\alpha$ is the decay factor of the moving average, typically set to 0.9 or 0.99, and $\bar{R}(\tau)$ is the average return of the current training round (the mean return over the trajectories sampled at the current step).
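Correspondingly, a minimal sketch of the EMA baseline update, assuming each training step samples a small batch of trajectories and $\bar{R}(\tau)$ is their mean return:

# Exponential-moving-average baseline update.
def ema_baseline_update(b_ema: float, batch_returns: list, alpha: float = 0.9) -> float:
    r_bar = sum(batch_returns) / len(batch_returns)   # mean return of the current step's rollouts
    return alpha * b_ema + (1 - alpha) * r_bar        # b_EMA <- alpha * b_EMA + (1 - alpha) * R_bar

b_ema = 0.0
for step_returns in [[1.0, 3.0], [2.0, 4.0]]:
    b_ema = ema_baseline_update(b_ema, step_returns)
print(b_ema)  # 0.48 after two updates with alpha = 0.9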

2.2 REINFORCE Leave-One-Out

However, baselines like the moving average still leave a lot of variance. Compared with plain REINFORCE, the core implementation detail of RLOO is that the baseline for each sample is the average reward of the other samples in the batch, rather than the average over all rewards in the batch.

  • For a response-level reward, the action is the entire trajectory. In LLM tasks, the prompt is the state and the response is the action, with one reward per response, so that

$$R(\tau) = r(s, a)$$

  • For each prompt, multiple responses must be sampled (one state -> multiple actions / multiple trajectories)

For the RLOO baseline, given $K$ sampled trajectories or actions $a_1, \ldots, a_K$ for a given prompt $s$, the baseline for each sample of that prompt is:

$$b(s, a_k) = \frac{1}{K - 1} \sum_{\substack{i=1 \\ i \neq k}}^{K} r(s, a_i)$$

which yields the advantage for each sample:

$$A(s, a_k) = r(s, a_k) - b(s, a_k)$$

Equivalently, this can be written as:

$$A(s, a_k) = \frac{K}{K - 1} \left( r(s, a_k) - \frac{1}{K} \sum_{i=1}^{K} r(s, a_i) \right)$$
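A quick numerical check (a sketch with arbitrary rewards for a single prompt and K = 4 sampled responses) confirms that the leave-one-out form and the rescaled form give identical advantages:

import torch

# Rewards of K = 4 sampled responses to one prompt (arbitrary values).
K = 4
r = torch.tensor([1.0, 2.0, 5.0, 8.0])

# Leave-one-out form: r(s, a_k) minus the mean of the other K - 1 rewards.
adv_loo = r - (r.sum() - r) / (K - 1)

# Rescaled form: K / (K - 1) times (r(s, a_k) minus the mean of all K rewards).
adv_scaled = K / (K - 1) * (r - r.mean())

torch.testing.assert_close(adv_loo, adv_scaled)
print(adv_loo)  # tensor([-4.0000, -2.6667,  1.3333,  5.3333])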

3. Understanding the Code

import torch
local_batch_size = 3
rloo_k = 4

rlhf_reward = torch.tensor([
    1, 2, 3, # first rlhf reward for three prompts
    2, 3, 4, # second rlhf reward for three prompts
    5, 6, 7, # third rlhf reward for three prompts
    8, 9, 10, # fourth rlhf reward for three prompts
]).float() # here we have 3 prompts which have 4 completions each

# slow impl: rewards are laid out as rloo_k blocks of local_batch_size values
# (the k-th completion of every prompt); for each block, subtract the mean of
# the corresponding rewards from the other completions of the same prompt
advantages = torch.zeros_like(rlhf_reward)
for i in range(0, len(advantages), local_batch_size):
    other_response_rlhf_rewards = []
    for j in range(0, len(advantages), local_batch_size):
        if i != j:
            other_response_rlhf_rewards.append(rlhf_reward[j : j + local_batch_size])
    advantages[i : i + local_batch_size] = rlhf_reward[i : i + local_batch_size] - torch.stack(
        other_response_rlhf_rewards
    ).mean(0)
# first prompt's first completion: reward 1 minus the mean of its other completions (2, 5, 8)
assert abs(1 - (2 + 5 + 8) / 3 - advantages[0].item()) < 1e-6
# second prompt's third completion: reward 6 minus the mean of its other completions (2, 3, 9)
assert abs(6 - (3 + 2 + 9) / 3 - advantages[7].item()) < 1e-6

# vectorized impl
rlhf_reward = rlhf_reward.reshape(rloo_k, local_batch_size)   # row k = rewards of the k-th completion, one column per prompt
baseline = (rlhf_reward.sum(0) - rlhf_reward) / (rloo_k - 1)  # leave-one-out mean reward per prompt
vec_advantages = rlhf_reward - baseline
torch.testing.assert_close(vec_advantages.flatten(), advantages)  # identical to the slow impl

local_batch_size = 3: three different prompts, each with rloo_k = 4 sampled responses.

The baseline for each response is the mean reward of the other responses to the same prompt; stepping through the vectorized implementation with a debugger makes the intermediate values concrete:

After the reshape, rlhf_reward.sum(0) sums over the completion dimension, giving the total reward per prompt:

tensor([16., 20., 24.])

In baseline = (rlhf_reward.sum(0) - rlhf_reward) / (rloo_k - 1), the subtraction rlhf_reward.sum(0) - rlhf_reward removes each response's own reward from its prompt's total (this is the $i \neq k$ in the formula), and dividing by (rloo_k - 1) averages over the remaining responses. Since each prompt has four sampled responses, rloo_k is set to 4.

Hopefully this makes the RLOO advantage computation easier to understand.
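To close the loop, here is a minimal sketch of how these advantages would feed a REINFORCE-style policy-gradient loss, continuing from the snippet above. The variable logprobs is a hypothetical stand-in for the per-response sum of token log-probabilities under the current policy; this is not the TRL implementation.

# Hypothetical per-response log-probabilities (stand-in for the sum of token
# log-probs of each sampled completion under the current policy).
logprobs = torch.randn(local_batch_size * rloo_k, requires_grad=True)

# RLOO advantages from the snippet above, treated as constants in the loss.
adv = vec_advantages.flatten().detach()

# REINFORCE-style objective: maximize E[A(s, a) * log pi(a | s)].
pg_loss = -(adv * logprobs).mean()
pg_loss.backward()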

Ref

  • https://rlhfbook.com/c/11-policy-gradients.html#reinforce
  • https://huggingface.co/blog/zh/putting_rl_back_in_rlhf_with_rloo
