fix: map chat finish_reason to responses status and preserve terminal stream semantics #1995
Conversation
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
core/schemas/mux.go (1)
Lines 1766-1950: ⚠️ Potential issue | 🟠 Major
`length` can still leave earlier text items marked as completed. This only fixes the text item status when it is closed in the terminal chunk. If assistant text is closed earlier at the tool-call transition on Line 1597, that path still emits
`response.output_item.done` with `Status: "completed"` on Line 1643. A stream that later ends with `finish_reason == "length"` will therefore produce `response.incomplete` while the already-closed text item says completed. Please defer that text `output_item.done` until the terminal chunk, or route the same terminal status through the earlier close path as well.
📒 Files selected for processing (3)
- core/providers/openai/openai.go
- core/schemas/mux.go
- core/schemas/mux_test.go

Summary
Implements proper handling of incomplete responses in the OpenAI provider by mapping `length` finish reasons to `incomplete` status and preventing duplicate finish reason forwarding in streaming responses.

Changes
- Maps the `length` finish reason to `incomplete` status with a `max_output_tokens` reason in both streaming and non-streaming responses
- Adds `responsesStatusFromChatFinishReason` and `responsesTerminalFromChatFinishReason` to centralize finish reason mapping logic
- Emits `response.incomplete` events instead of `response.completed` when appropriate

Type of change
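The two helpers named under Changes are not shown in this page excerpt; a minimal sketch of the mapping they centralize could look like the following. The function names come from the PR, but the bodies and the exact status strings here are assumptions based on the OpenAI Responses API:

```go
package main

import "fmt"

// responsesStatusFromChatFinishReason maps a Chat Completions finish_reason
// to a Responses API status. Sketch only; the real implementation lives in
// core/schemas/mux.go.
func responsesStatusFromChatFinishReason(finishReason string) string {
	switch finishReason {
	case "length":
		// Token limit hit: the response is incomplete, not completed.
		return "incomplete"
	default:
		return "completed"
	}
}

// responsesTerminalFromChatFinishReason returns the terminal stream event type
// and, for incomplete responses, the incomplete_details reason.
func responsesTerminalFromChatFinishReason(finishReason string) (event string, reason string) {
	if finishReason == "length" {
		return "response.incomplete", "max_output_tokens"
	}
	return "response.completed", ""
}

func main() {
	fmt.Println(responsesStatusFromChatFinishReason("length"))
	event, reason := responsesTerminalFromChatFinishReason("length")
	fmt.Println(event, reason)
}
```

Centralizing both mappings in one place is what prevents the streaming and non-streaming paths from disagreeing about the terminal status.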
Affected areas
How to test
Validate the finish reason mapping and incomplete response handling:
Test with OpenAI provider responses that hit token limits to verify the `incomplete` status is properly set with a `max_output_tokens` reason.

Screenshots/Recordings
N/A
Breaking changes
Related issues
Closes #1976
Closes #1821
Security considerations
No security implications - this change only affects response status mapping and metadata handling.
Checklist
I have read docs/contributing/README.md and followed the guidelines