evals #23
base: main
Conversation
@EbramTawfik please read the following Contributor License Agreement (CLA). If you agree with the CLA, please reply with the following information.
Contributor License Agreement
This Contribution License Agreement (“Agreement”) is agreed to by the party signing below (“You”),
✅ Integration Tests PASSED
Workflow:
🎉 All integration tests passed! This PR is ready for review.
| Actual AI Response: "{actualResponse}"
| How well does the actual response match the expected pattern (1-5 scale)?
It may be better to use some form of structured output. It may also be a good idea to ask the LLM to include chain-of-thought reasoning for the score, as this can generally help improve the quality of the evaluation scores produced by the LLM.
For example, the GroundednessEvaluator that ships as part of the Microsoft.Extensions.AI.Evaluation libraries employs some of the above techniques.
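For illustration only, here is a rough sketch of that suggestion, assuming a recent Microsoft.Extensions.AI version where IChatClient exposes GetResponseAsync and ChatResponse.Text; the ScoredVerdict type, the prompt wording, and the EvaluateWithReasoningAsync helper are hypothetical and not code from this PR:

```csharp
// Sketch only: ask the grading LLM for a JSON object that carries both the
// reasoning (chain-of-thought style) and the numeric score, then deserialize it
// instead of parsing a bare number out of free-form text.
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.Extensions.AI;

internal sealed record ScoredVerdict(string Reasoning, int Score);

internal static class StructuredScoringSketch
{
    public static async Task<ScoredVerdict?> EvaluateWithReasoningAsync(
        IChatClient chatClient, string expectedPattern, string actualResponse)
    {
        // $$""" keeps the single braces in the JSON template literal;
        // interpolations use double braces.
        string prompt =
            $$"""
            Expected pattern: "{{expectedPattern}}"
            Actual AI Response: "{{actualResponse}}"

            First explain step by step how well the actual response matches the
            expected pattern, then give a score from 1 to 5.
            Reply with JSON only, in this exact shape:
            { "reasoning": "<your reasoning>", "score": <1-5> }
            """;

        ChatResponse response = await chatClient.GetResponseAsync(
            new[] { new ChatMessage(ChatRole.User, prompt) });

        // Deserialize the structured verdict; callers can log Reasoning and use Score.
        return JsonSerializer.Deserialize<ScoredVerdict>(
            response.Text,
            new JsonSerializerOptions { PropertyNameCaseInsensitive = true });
    }
}
```

Where a structured-output helper such as GetResponseAsync&lt;T&gt; is available in the Microsoft.Extensions.AI version being used, it can replace the manual JsonSerializer step.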
| /// <param name="scenarioName">Name of the scenario being evaluated (for logging purposes)</param> | ||
| /// <param name="minimumAcceptableScore">Minimum score (1-5) to pass the evaluation (default: 3)</param> | ||
| /// <returns>The evaluation score (1-5) or null if evaluation failed</returns> | ||
| protected static async Task<int?> EvaluateResponseMatchAsync( |
It would be better to implement this as an IEvaluator. This function would essentially become IEvaluator.EvaluateAsync() for your evaluator. This function could then return an EvaluationResult that includes one or more metrics (in your case it could be a single NumericMetric with Name like "Match Score").
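As a rough, hypothetical skeleton of that shape (the exact EvaluateAsync signature and the NumericMetric/EvaluationResult constructors depend on the Microsoft.Extensions.AI.Evaluation version referenced, so verify against the package in use; ResponseMatchEvaluator is an illustrative name, not code from this PR):

```csharp
// Skeleton sketch: the existing EvaluateResponseMatchAsync logic moves into
// EvaluateAsync, and the score is reported as a single NumericMetric.
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;

public sealed class ResponseMatchEvaluator : IEvaluator
{
    public const string MatchScoreMetricName = "Match Score";

    public IReadOnlyCollection<string> EvaluationMetricNames { get; } =
        new[] { MatchScoreMetricName };

    public ValueTask<EvaluationResult> EvaluateAsync(
        IEnumerable<ChatMessage> messages,
        ChatResponse modelResponse,
        ChatConfiguration? chatConfiguration = null,
        IEnumerable<EvaluationContext>? additionalContext = null,
        CancellationToken cancellationToken = default)
    {
        var metric = new NumericMetric(MatchScoreMetricName);

        // TODO: call the grading LLM (via chatConfiguration?.ChatClient) with the
        // expected pattern (ideally supplied through additionalContext, see below)
        // and modelResponse, parse the 1-5 score, and set metric.Value = score.

        return new ValueTask<EvaluationResult>(new EvaluationResult(metric));
    }
}
```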
I would recommend reading through and running the samples under https://github.com/dotnet/ai-samples/blob/main/src/microsoft-extensions-ai-evaluation/api/README.md. These samples are structured as a series of unit tests where each test builds upon concepts introduced in previous tests.
I would recommend going through the READMEs, instructions and code for these sample tests one test at a time to understand how the various APIs, concepts and functionality in the Microsoft.Extensions.AI.Evaluation libraries work, and how this functionality can be used within your tests to set up your own offline eval pipelines and reporting.
For example (rough sketches of these points follow after the list):
- expectedResponsePattern should ideally be passed using a derived EvaluationContext type that is tied to your new IEvaluator above (similar to how GroundednessEvaluatorContext is used to pass the GroundingContext to the GroundednessEvaluator).
- minimumAcceptableScore should ideally be configured by including a default 'interpretation' as part of the NumericMetric returned from your IEvaluator (similar to how GroundednessEvaluator returns a NumericMetric with an interpretation that considers the metric poor if the score is less than 2 and exceptional if it is greater than 4).
- See the samples under the Reporting folder such as Example01_SamplingAndEvaluatingSingleResponse() to understand the general anatomy of a single test, including how the reporting configuration is set up to include the set of evaluators and the LLM connection (IChatClient), how tests create individual scenarios (each with its own unique scenarioName), how the scores for these scenarios get persisted, and how reports can be generated using this persisted data.
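For illustration, a minimal sketch of the first two points; the EvaluationContext base constructor, the EvaluationMetricInterpretation parameters, and the score thresholds below are assumptions about the current Microsoft.Extensions.AI.Evaluation surface and may need adjusting to the package version in use:

```csharp
// Sketch only: a derived EvaluationContext that carries expectedResponsePattern
// into the evaluator, plus a default interpretation for the "Match Score"
// NumericMetric in the spirit of GroundednessEvaluator.
using Microsoft.Extensions.AI.Evaluation;

public sealed class ResponseMatchEvaluatorContext : EvaluationContext
{
    // Base constructor shape (name + content) is an assumption; adjust as needed.
    public ResponseMatchEvaluatorContext(string expectedResponsePattern)
        : base("Expected Response Pattern", expectedResponsePattern) =>
        ExpectedResponsePattern = expectedResponsePattern;

    public string ExpectedResponsePattern { get; }
}

public static class MatchScoreInterpretationSketch
{
    // Maps the 1-5 score to a rating and marks scores below the acceptable
    // threshold (3 here, playing the role of minimumAcceptableScore) as failed.
    public static EvaluationMetricInterpretation Interpret(double? score)
    {
        if (score is not (>= 1 and <= 5))
        {
            return new EvaluationMetricInterpretation(
                EvaluationRating.Inconclusive, failed: true, reason: "Score was missing or out of range.");
        }

        EvaluationRating rating = score switch
        {
            > 4 => EvaluationRating.Exceptional,
            > 3 => EvaluationRating.Good,
            > 2 => EvaluationRating.Average,
            > 1 => EvaluationRating.Poor,
            _ => EvaluationRating.Unacceptable,
        };

        bool failed = score < 3;
        return new EvaluationMetricInterpretation(
            rating, failed, reason: failed ? "Match score below the acceptable threshold." : null);
    }
}
```

The evaluator from the earlier sketch could then read ResponseMatchEvaluatorContext out of additionalContext and, assuming Interpretation is settable on the metric, assign metric.Interpretation = MatchScoreInterpretationSketch.Interpret(metric.Value) before returning.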
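And a minimal sketch of the third point, loosely following Example01_SamplingAndEvaluatingSingleResponse; the DiskBasedReportingConfiguration.Create parameters, the ChatConfiguration constructor, and the ScenarioRun.EvaluateAsync overload shown here may differ between package versions, and the storage path, scenario name, and conversation content are placeholders:

```csharp
// Sketch only: reporting configuration -> scenario run -> evaluate -> persisted scores.
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Reporting;
using Microsoft.Extensions.AI.Evaluation.Reporting.Storage;

public static class MatchScoreReportingSketch
{
    public static async Task RunScenarioAsync(IChatClient chatClient)
    {
        // Which evaluators to run, which LLM connection LLM-based evaluators use,
        // and where the results are persisted on disk.
        ReportingConfiguration reportingConfiguration =
            DiskBasedReportingConfiguration.Create(
                storageRootPath: "./eval-results", // placeholder
                evaluators: new IEvaluator[] { new ResponseMatchEvaluator() },
                chatConfiguration: new ChatConfiguration(chatClient));

        // Each test creates a scenario with its own unique scenarioName; the scores
        // are persisted when the ScenarioRun is disposed.
        await using ScenarioRun scenarioRun =
            await reportingConfiguration.CreateScenarioRunAsync("Evals.ResponseMatch.Example");

        var messages = new List<ChatMessage> { new(ChatRole.User, "What is 2 + 2?") };
        ChatResponse response = await chatClient.GetResponseAsync(messages);

        // Runs the configured evaluators; the expected pattern travels via the
        // derived EvaluationContext from the previous sketch.
        EvaluationResult result = await scenarioRun.EvaluateAsync(
            messages,
            response,
            additionalContext: new[] { new ResponseMatchEvaluatorContext("The answer is 4") });
    }
}
```

Reports can then be generated from the persisted data, as shown in the later Reporting samples.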