Getting empty content while evaluating with LLM-as-a-judge #13205
Replies: 3 comments 3 replies
When setting up LLM-as-a-Judge evaluations, empty content in evaluation arguments typically indicates an incorrect variable mapping. Common causes:
How to fix:
A similar issue was reported previously.
Have another question? Just tag @inkeep.
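To make the failure mode concrete, here is a minimal, generic sketch (not Langfuse internals) of how `{{variable}}` placeholders get substituted from mapped values, and how a variable whose mapping resolves to nothing silently becomes empty content in the rendered judge prompt:

```python
import re

def render_prompt(template: str, mapped_values: dict) -> tuple[str, list[str]]:
    """Substitute {{variable}} placeholders; collect any that resolve empty."""
    empty = []

    def repl(match: re.Match) -> str:
        name = match.group(1)
        value = mapped_values.get(name)
        if value is None or value == "":
            # The mapping missed, or the field was empty on the trace.
            empty.append(name)
            return ""
        return str(value)

    rendered = re.sub(r"\{\{\s*(\w+)\s*\}\}", repl, template)
    return rendered, empty

template = "Query: {{query}}\nRouting: {{ava_routing}}"
# 'ava_routing' is absent here, as when its mapping points at a missing field.
rendered, empty = render_prompt(template, {"query": "book a flight"})
print(empty)  # lists the variables that arrived empty
```

A check like the `empty` list above is a quick way to confirm which specific variable is not being filled before suspecting the judge model itself.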
I checked against the common causes listed above; none of them applies in my case. Please help, @jannikmaierhoefer.
Above is a screenshot of the sample preview when setting up the LLM eval. In the preview, both arguments, {{query}} and {{ava_routing}}, are captured, but when the LLM eval actually executes, the {{ava_routing}} argument is not filled.
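One common explanation for "works in preview, empty at execution" is that the preview sample and the live traces shape the data differently, so the same mapping path resolves on one and not the other. The sketch below is purely illustrative (the trace payloads and the `get_path` helper are hypothetical, not Langfuse APIs), showing how a dotted mapping path can succeed on a preview-style payload yet return nothing on a production-style one:

```python
from typing import Any

def get_path(obj: Any, path: str) -> Any:
    """Walk a dotted path (e.g. 'output.ava_routing') through nested dicts."""
    current = obj
    for key in path.split("."):
        if isinstance(current, dict) and key in current:
            current = current[key]
        else:
            return None  # the mapped field does not exist on this payload
    return current

# Hypothetical payloads: the preview sample happens to carry 'ava_routing'
# under 'output', while live traces store it elsewhere (e.g. in metadata).
preview_trace = {"output": {"ava_routing": "agent_a", "answer": "..."}}
prod_trace = {"output": {"answer": "..."}, "metadata": {"ava_routing": "agent_a"}}

print(get_path(preview_trace, "output.ava_routing"))  # found
print(get_path(prod_trace, "output.ava_routing"))     # None -> empty argument
```

It may be worth comparing the raw JSON of the previewed sample trace against a trace from an actual eval run to confirm the field sits at the same path in both.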
Describe your question

Langfuse Cloud or Self-Hosted? Langfuse Cloud
If Self-Hosted: No response
If Langfuse Cloud: No response
SDK and integration versions: No response
Pre-Submission Checklist