Labels: enhancement (New feature or request)
Description
Is your feature request related to a problem? Please describe.
When asking the model to answer a true/false or multiple-choice question, a confidence score or probability distribution alone is not enough to produce the final decision. For example, if the model predicts confidence_score / probability_distribution = [0.9, 0.06, 0, 0.04] for an MCQ with four choices, how should we determine the final output?
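To make the question concrete, here is a minimal sketch (plain Python, not the library's actual API) of the obvious decision rule: take the argmax of the distribution as the answer and its probability as the confidence.

```python
import numpy as np

# Probability distribution over the four MCQ choices, from the example above.
probs = np.array([0.9, 0.06, 0.0, 0.04])

# One common decision rule: pick the highest-probability choice,
# and treat that probability as the model's confidence in the answer.
predicted_choice = int(np.argmax(probs))     # -> 0
confidence = float(probs[predicted_choice])  # -> 0.9
print(predicted_choice, confidence)
```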
Describe the solution you'd like
Can we simply set the threshold to 0.5 to determine whether the model is confident or not?
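For example, a hypothetical 0.5-threshold rule might look like the sketch below (the threshold value and the abstain behavior are my assumptions, not anything from the library):

```python
import numpy as np

# Hypothetical rule: keep the argmax answer only when the model is
# "confident enough" at a fixed threshold; otherwise abstain.
THRESHOLD = 0.5  # assumed value, taken from the question above

def decide(probs, threshold=THRESHOLD):
    best = int(np.argmax(probs))
    if probs[best] >= threshold:
        return best  # confident: emit the predicted choice
    return None      # not confident: defer to a human / fallback

print(decide([0.9, 0.06, 0.0, 0.04]))  # -> 0    (0.9 >= 0.5)
print(decide([0.4, 0.30, 0.2, 0.1]))   # -> None (0.4 < 0.5)
```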
Additional context
I read the blog post (https://www.refuel.ai/blog-posts/labeling-with-confidence) and am curious whether you used a threshold, and how you handled the choice of threshold when generating the final decisions and the AUROC plot. Thanks.
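For reference, here is a minimal sketch of how an AUROC plot can be computed without committing to a single threshold, assuming per-example correctness labels and confidence scores are available (scikit-learn is my choice for illustration, not necessarily what the blog used):

```python
from sklearn.metrics import roc_auc_score, roc_curve

# Per-example correctness of the model's chosen label (1 = correct)
# and the confidence the model assigned to that label. Toy values.
correct    = [1,    1,    0,    1,    0,    1]
confidence = [0.90, 0.80, 0.55, 0.70, 0.40, 0.95]

# The ROC curve sweeps over *all* candidate thresholds, so the AUROC
# plot itself never requires fixing a single one.
fpr, tpr, thresholds = roc_curve(correct, confidence)
auroc = roc_auc_score(correct, confidence)
print(f"AUROC = {auroc:.3f}")
```

This suggests the AUROC plot can be threshold-free even if the final accept/reject decision still needs one, which is why I am asking how that decision was made.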