Add GPU-side Gumbel-max sampling for CUDA graph compatibility #18844
Gasoonjia wants to merge 5 commits into cuda-graph from
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18844
Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV: There is 1 currently active SEV. If your PR is affected, please view it below.

❌ 19 New Failures, 2 Unrelated Failures as of commit 93bee20 with merge base a489707.

NEW FAILURES - The following jobs have failed:
BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
extern "C" {

AOTITorchError aoti_torch_cuda_rand(
Is this from PyTorch/ATen, or are we rolling our own?
example_prefill_len = config.max_seq_len - 1
prefill_tokens = torch.zeros((1, example_prefill_len), dtype=torch.long)
prefill_pos = torch.arange(example_prefill_len, dtype=torch.long)
prefill_tokens = torch.tensor([[0, 1]], dtype=torch.long)
Why do we need to revert these changes, or is this just a cherry-pick artifact?
def _sample(logits, temperature):
    """Sample from logits with temperature."""
    if temperature <= 0:
        return logits.argmax(dim=-1)
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).squeeze(-1)
Do we need unit tests to verify that the new sampler stays close to this reference implementation?
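One possible shape for such a test, as a sketch: draw many samples via the Gumbel-max path and compare empirical token frequencies against softmax(logits / T). The test name, vocab size, and tolerance below are illustrative, not taken from the PR.

```python
import torch

def test_gumbel_max_matches_softmax_distribution():
    torch.manual_seed(0)
    logits = torch.randn(8)            # tiny vocab so frequencies converge fast
    temperature = 0.8
    probs = torch.softmax(logits / temperature, dim=-1)

    n = 200_000
    u = torch.rand(n, logits.numel()).clamp(1e-20, 1.0)
    gumbel = -torch.log(-torch.log(u))
    samples = torch.argmax(logits / temperature + gumbel, dim=-1)

    freqs = torch.bincount(samples, minlength=logits.numel()).float() / n
    # Statistical check with a loose tolerance, not an exact equality.
    assert torch.allclose(freqs, probs, atol=0.01)
```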
using SizesType = executorch::aten::SizesType;

// Read a sampled token from the model output tensor [B, 1].
// The model performs Gumbel-max sampling on-device and returns a single
Can you add a comment with a reference (paper or link) for the Gumbel-max trick?
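For reference, the identity being relied on: if U_i ~ Uniform(0, 1) i.i.d. and g_i = -log(-log U_i), then argmax_i(log p_i + g_i) is distributed as Categorical(p). Since logits / T equals log softmax(logits / T) up to an additive constant, argmax(logits / T + g) is an exact sample from softmax(logits / T). Standard citations are Gumbel (1954) and Maddison, Tarlow, and Minka, "A* Sampling" (NIPS 2014).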
prev_token = cur_token;
stats.on_sampling_begin();
cur_token = llm::logits_to_token(*step_logits_ptr, FLAGS_temperature);
stats.on_sampling_end();
I guess we can't report sampling time separately anymore.
# GPU-side Gumbel-max sampling: argmax(logits/T + gumbel_noise)
# Equivalent to sampling from softmax(logits/T) but fully on-device.
logits = logits / temperature.clamp(min=1e-6)
noise = torch.rand_like(logits)
Does this one need rand.cu?
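One way to see where the random op ends up, as a sketch: export a stand-in module containing this sampling code and inspect the graph for the rand node the backend must lower (presumably the reason the PR adds a CUDA rand shim). `Sampler` below is a toy module, not the PR's model.

```python
import torch
from torch.export import export

class Sampler(torch.nn.Module):
    def forward(self, logits):
        u = torch.rand_like(logits).clamp_min(1e-20)
        return torch.argmax(logits - torch.log(-torch.log(u)), dim=-1)

ep = export(Sampler(), (torch.randn(1, 8),))
# The printed graph contains the random op (e.g. aten.rand_like / aten.rand,
# depending on decompositions) that the AOTI backend needs a kernel for.
print(ep.graph_module.graph)
```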
 * advances a GPU-resident counter for unique offsets, making this fully
 * compatible with CUDA graph capture and replay.
 */
AOTI_SHIM_EXPORT AOTITorchError aoti_torch_cuda_randint_low_out(
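For context on the pattern this doc comment describes, here is a pure-PyTorch sketch of the idea, not the shim's implementation (which would use a proper counter-based generator such as Philox): the only RNG state is a GPU tensor that the captured ops themselves advance, so each graph replay reads a fresh offset without any host-side update.

```python
import torch

class DeviceCounterRNG:
    """Toy counter-based RNG whose only state is a GPU tensor (illustrative)."""

    def __init__(self, seed: float, device="cuda"):
        self.seed = seed
        self.counter = torch.zeros((), dtype=torch.float32, device=device)

    def uniform_like(self, x: torch.Tensor) -> torch.Tensor:
        # The in-place add is part of the captured op sequence, so every
        # graph replay advances the counter before noise is derived from it.
        self.counter += 1.0
        idx = torch.arange(x.numel(), device=x.device, dtype=torch.float32)
        # Toy hash of (seed, counter, index); a real implementation would use
        # a counter-based generator such as Philox.
        phase = (idx + self.counter) * 12.9898 + self.seed
        return torch.frac(torch.sin(phase) * 43758.5453).reshape(x.shape)
```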
@claude review this PR.
Claude finished @digantdesai's task in 2m 46s.

PR Review: Add GPU-side Gumbel-max sampling for CUDA graph compatibility
Summary

This PR fuses sampling into the model's forward method.

Issues

1. Prefill exports with minimal example shape instead of max sequence length
If this comment is still accurate, the change will break prefill for prompts longer than 2 tokens. If the constraint has been relaxed, the comment should be updated. (A possible dynamic-shape export fix is sketched after this review.)

2.
3. Global mutable state without thread safety or cleanup
4.
5. Gumbel noise epsilon values are ad-hoc
6. No error checking on CUDA API calls in
7. Missing
Suggestions

8. Seeding with
9. Consider making the forward method composable. As the PR description notes, once the device support pipeline lands,
This is reasonable for the current CUDA-graph-focused use case but should be tracked for future cleanup.

10.
CI Failures

The

Addressing Reviewer Questions

Re: @digantdesai's questions:
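One possible direction for issue 1, sketched with a toy module and an assumed `max_seq_len`; the real prefill entry point, argument names, and shape bounds would come from the PR: export the prefill graph with a dynamic sequence dimension rather than a fixed 2-token example.

```python
import torch
from torch.export import Dim, export

class ToyPrefill(torch.nn.Module):
    # Stand-in for the actual prefill module (hypothetical).
    def __init__(self, vocab=32, dim=16):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab, dim)
        self.out = torch.nn.Linear(dim, vocab)

    def forward(self, tokens, input_pos):
        h = self.emb(tokens) + input_pos.to(torch.float32).unsqueeze(-1)
        return self.out(h)

max_seq_len = 128  # assumed config value
seq = Dim("seq_len", min=1, max=max_seq_len - 1)

ep = export(
    ToyPrefill(),
    (torch.zeros((1, 4), dtype=torch.long), torch.arange(4)),
    dynamic_shapes={"tokens": {1: seq}, "input_pos": {0: seq}},
)
```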
This PR replaces the CPU sampler with a CUDA sampler and fuses the sampler into the forward method, both to eliminate unnecessary data transfer and to improve sampling efficiency. Decode performance increases from 113.8 tokens/s to 119.5 tokens/s.
Once we land the device support pipeline, we should decompose the forward method and sampling into separate steps.
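A minimal sketch of the fusion described above, assuming a decoder that returns logits of shape [B, 1, vocab]; the wrapper, buffer, and epsilon values are illustrative and not the PR's actual code.

```python
import torch

class DecodeWithSampling(torch.nn.Module):
    """Wraps a decoder so its forward returns a sampled token id [B, 1]
    instead of logits, keeping sampling fully on-device (illustrative)."""

    def __init__(self, decoder: torch.nn.Module, temperature: float = 0.8):
        super().__init__()
        self.decoder = decoder
        self.register_buffer("temperature", torch.tensor(temperature))

    def forward(self, tokens, input_pos):
        logits = self.decoder(tokens, input_pos)         # [B, 1, vocab]
        scaled = logits / self.temperature.clamp(min=1e-6)
        u = torch.rand_like(scaled).clamp(1e-20, 1.0)
        gumbel = -torch.log(-torch.log(u))
        return torch.argmax(scaled + gumbel, dim=-1)     # [B, 1]
```

Keeping temperature as a buffer rather than a Python constant would also make it easier to turn it into a runtime input later, which could help with the decomposition mentioned above.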