fix(llmobs): swallow LLMObsAnnotateSpanError on auto-annotation in @llm decorator (#17093)
Conversation
Yun-Kim
left a comment
let's shorten the release note but otherwise lgtm
releasenotes/notes/llm-decorator-auto-annotation-error-a9ff1d25e3706cd3.yaml (outdated, resolved)
…5e3706cd3.yaml Co-authored-by: Yun Kim <35776586+Yun-Kim@users.noreply.github.com>
This change is marked for backport to 4.6 and does not conflict with that branch.
/merge
View all feedback in the Devflow UI.
This pull request is not mergeable according to GitHub. Common reasons include pending required checks, missing approvals, or merge conflicts — but it could also be blocked by other repository rules or settings.
The expected merge time in
… decorator (#17093)

## Summary
- Fixes a regression introduced in #16892 where the `@llm` decorator raised `LLMObsAnnotateSpanError: Failed to parse output messages` when a decorated function returned a value that couldn't be parsed as LLM messages (e.g. a plain string, integer, or non-messages dict).
- The decorator now catches `LLMObsAnnotateSpanError` from auto-annotation, logs a warning, and continues — the user's function still succeeds and the span is still created.
- Also adds the missing `operation_kind != "embedding"` guard from the 4.6 backport branch to `main`.

## Test plan
- [x] Two regression tests added (sync + async) verifying the warning is logged and no exception is raised
- [x] Full lint checks pass

Co-authored-by: zach.groves <zach.groves@datadoghq.com>
(cherry picked from commit 15e61ec)
… decorator [backport 4.6] (#17101)

## Summary
Manual backport of #17093 to the `4.6` release branch.

Fixes a regression introduced in #16892 where the `@llm` decorator raised `LLMObsAnnotateSpanError: Failed to parse output messages` when a decorated function returned a value that couldn't be parsed as LLM messages (e.g. a plain string, integer, or non-messages dict). The decorator now catches the error, logs a debug message, and continues.

Cherry-picked cleanly with no conflicts.

## Test plan
- [ ] CI passes on this branch

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Summary
- Fixes a regression where the `@llm` decorator raised `LLMObsAnnotateSpanError: Failed to parse output messages` when a decorated function returned a value that couldn't be parsed as LLM messages (e.g. a plain string, integer, or non-messages dict).
- The decorator now catches `LLMObsAnnotateSpanError` from auto-annotation, logs a warning, and continues — the user's function still succeeds and the span is still created.
- Also adds the missing `operation_kind != "embedding"` guard from the 4.6 backport branch to `main`.

Test plan
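The regression tests mentioned in the test plan verify that the warning is logged and no exception reaches the caller. A self-contained sketch of that check (using a minimal stand-in decorator and a plain logging handler rather than ddtrace's actual test harness, whose helpers are not shown here):

```python
import functools
import logging

log = logging.getLogger("llmobs_regression_sketch")


class LLMObsAnnotateSpanError(Exception):
    """Local stand-in for ddtrace's annotation error (for illustration only)."""


def llm(func):
    """Minimal stand-in for the fixed @llm decorator."""

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        try:
            # Hypothetical auto-annotation: non-list output is not valid messages.
            if not isinstance(result, list):
                raise LLMObsAnnotateSpanError("Failed to parse output messages")
        except LLMObsAnnotateSpanError:
            log.warning("Failed to parse output messages; skipping auto-annotation")
        return result

    return wrapper


@llm
def ask(prompt):
    return "just a string"


# Capture log records emitted by the decorator.
records = []
handler = logging.Handler()
handler.emit = records.append  # collect records instead of formatting them
log.addHandler(handler)
log.setLevel(logging.WARNING)

# Regression check: the call succeeds and the warning is logged.
result = ask("hi")
assert result == "just a string"
assert any("Failed to parse" in r.getMessage() for r in records)
```

The real tests in the PR presumably use pytest's log-capture fixtures and cover both sync and async decorated functions; this sketch only demonstrates the contract being asserted.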