
feat(grpo_trainer.py): Variational Sequence-Level Soft Policy Optimization (VESPO)#5199

Draft
casinca wants to merge 4 commits into huggingface:main from casinca:VESPO

Conversation

@casinca
Contributor

@casinca casinca commented Feb 27, 2026

What does this PR do?

TODO

official impl: https://github.com/FloyedShen/VESPO/blob/main/recipe/vespo/code/core_algos.py
paper: https://huggingface.co/papers/2602.10693

Note:

  • The paper and the official implementation use different variable names; to keep things clear, the mapping is:

    • c1 = k = α
    • c2 = λ
  • Docstrings/comments are a mix of official impl and my writing.

 

Alternative options:

  • VESPO currently has 4 hparams (k_pos, lambda_pos, k_neg, lambda_neg), but I could reduce them to 2 tuples of 2 floats each, e.g. lambdas = (pos, neg), if that's preferred.
  • The original impl also returns w_seq for metrics. I can include it in the metrics, but that would force get_gamma_weights to return a tuple, or require removing @staticmethod. Not sure what the preference is here.
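To illustrate the first alternative, here is a minimal sketch of grouping the 4 hyperparameters into two (k, lambda) tuples, selected by advantage sign. The function name, defaults, and selection rule are assumptions for illustration, not the actual TRL implementation:

```python
# Hypothetical sketch: replace 4 scalar hparams (k_pos, lambda_pos, k_neg,
# lambda_neg) with two (k, lambda) tuples, picked per sequence by advantage
# sign. Defaults are illustrative placeholders, not values from the paper.
from typing import Tuple


def select_coeffs(
    advantage: float,
    pos: Tuple[float, float] = (1.0, 0.5),  # assumed (k_pos, lambda_pos)
    neg: Tuple[float, float] = (1.0, 0.5),  # assumed (k_neg, lambda_neg)
) -> Tuple[float, float]:
    """Return the (k, lambda) pair to use for a sequence."""
    return pos if advantage >= 0 else neg


# usage: positive-advantage sequences get the `pos` pair
k, lam = select_coeffs(0.8, pos=(2.0, 0.3), neg=(1.0, 0.7))
```

This keeps the config surface at two entries while still allowing asymmetric treatment of positive and negative advantages.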

 

For efficiency, the TRL VESPO implementation differs slightly from the official one. It is ~25% faster on GPU and has been tested for equivalence.

With importance_sampling_ratio:
-----------------------------------------------------------------
B x T         TRL_VESPO (ms)    OG_VESPO (ms)     Faster
-----------------------------------------------------------------
8 x 128         0.4290          0.5301          TRL_VESPO (1.24x)
16 x 256        0.4281          0.5302          TRL_VESPO (1.24x)
32 x 512        0.4283          0.5299          TRL_VESPO (1.24x)
64 x 512        0.4284          0.5294          TRL_VESPO (1.24x)
128 x 512       0.4286          0.5322          TRL_VESPO (1.24x)
32 x 1024       0.4473          0.5313          TRL_VESPO (1.19x)
64 x 1024       0.4285          0.5360          TRL_VESPO (1.25x)
128 x 1024      0.4240          0.5203          TRL_VESPO (1.23x)
-----------------------------------------------------------------
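For reference, a micro-benchmark of this kind typically pairs a timing loop with an equivalence check between the two implementations. The sketch below is a generic CPU harness with stand-in functions, not the actual VESPO kernels or the script that produced the table above (GPU timing would additionally need CUDA synchronization):

```python
# Hypothetical benchmark harness: warm up, time each candidate on the same
# input, and assert the outputs match. `impl_a`/`impl_b` are stand-ins.
import time


def bench(fn, *args, warmup=10, iters=100):
    """Return mean wall-clock time per call in milliseconds."""
    for _ in range(warmup):
        fn(*args)
    start = time.perf_counter()
    for _ in range(iters):
        fn(*args)
    return (time.perf_counter() - start) * 1000 / iters


def impl_a(xs):  # stand-in for the TRL variant
    return [x * x for x in xs]


def impl_b(xs):  # stand-in for the original variant
    out = []
    for x in xs:
        out.append(x * x)
    return out


data = list(range(10_000))
assert impl_a(data) == impl_b(data)  # equivalence before comparing speed
ms_a, ms_b = bench(impl_a, data), bench(impl_b, data)
```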

Fixes #5196

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@casinca casinca changed the title init feat(grpo_trainer.py): Variational Sequence-Level Soft Policy Optimization (VESPO) Feb 27, 2026

