
Conversation

@XiaoBoAI (Collaborator)

Add error handling for all grader evaluations in PaperReviewPipeline to gracefully handle failures instead of crashing. When a grader returns a GraderError, it is now logged and the pipeline continues with remaining evaluations.

OpenJudge Version

[The version of OpenJudge you are working on, e.g. import openjudge; print(openjudge.__version__)]

Description

[Please describe the background, purpose, changes made, and how to test this PR]

Checklist

Please check the following items before code is ready to be reviewed.

  • Code has been formatted with the pre-commit run --all-files command
  • All tests are passing
  • Docstrings are in Google style
  • Related documentation has been updated (e.g. links, examples, etc.)
  • Code is ready for review

@gemini-code-assist (Contributor)

Summary of Changes

Hello @XiaoBoAI, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves the resilience of the PaperReviewPipeline by introducing comprehensive error handling for various grader evaluations. Instead of crashing upon encountering an issue with a specific grader, the system will now gracefully log the GraderError and continue with the remaining evaluations, ensuring a more robust and uninterrupted review process.

Highlights

  • GraderError Import: The GraderError class from openjudge.graders.schema has been imported to facilitate specific error handling for grader evaluations.
  • Robust Grader Evaluation: Error handling has been implemented for correctness_grader, review_grader, and criticality_grader within the review_paper function. If a grader returns a GraderError, it is now logged, and the pipeline continues processing.
  • Enhanced Safety Checks: The _run_safety_checks method now includes GraderError handling for jailbreaking_grader and format_grader, ensuring that safety checks can proceed even if one of these graders encounters an error. (A hedged sketch of this pattern follows the list.)
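
For illustration, here is a minimal sketch of the check-and-continue shape these highlights describe, applied to the safety checks. Only GraderError, the aevaluate call, and the grader names come from this PR; the class skeleton, logger setup, and result fields below are hypothetical stand-ins, not the repository's real code.

```python
import logging

from openjudge.graders.schema import GraderError  # import added by this PR

logger = logging.getLogger(__name__)

class PaperReviewPipeline:
    # Sketch only: everything except GraderError, aevaluate(), and the
    # grader names is a hypothetical stand-in for the real implementation.
    async def _run_safety_checks(self, pdf_data, result):
        jailbreaking = await self.jailbreaking_grader.aevaluate(pdf_data=pdf_data)
        if isinstance(jailbreaking, GraderError):
            # New behavior in this PR: log the failure and keep evaluating.
            logger.error(f"Jailbreaking grader error: {jailbreaking.error}")
        else:
            result.jailbreaking = jailbreaking  # store the successful outcome

        fmt = await self.format_grader.aevaluate(pdf_data=pdf_data)
        if isinstance(fmt, GraderError):
            logger.error(f"Format grader error: {fmt.error}")
        else:
            result.format = fmt
```
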
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the root of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

@gemini-code-assist (bot, Contributor) left a comment


Code Review

This pull request adds error handling for grader evaluations in the PaperReviewPipeline, which is a valuable improvement for robustness. The implementation correctly handles GraderError by logging it and allowing the pipeline to continue. My review focuses on a significant amount of code duplication introduced by this change. I've provided a suggestion to refactor the duplicated logic into a single helper method to improve code clarity and maintainability.

Comment on lines 109 to +117
```diff
 correctness = await self.correctness_grader.aevaluate(pdf_data=pdf_data)
-result.correctness = CorrectnessResult(
-    score=correctness.score,
-    reasoning=correctness.reason,
-    key_issues=correctness.metadata.get("key_issues", []),
-)
+if isinstance(correctness, GraderError):
+    logger.error(f"Correctness grader error: {correctness.error}")
+else:
+    result.correctness = CorrectnessResult(
+        score=correctness.score,
+        reasoning=correctness.reason,
+        key_issues=correctness.metadata.get("key_issues", []),
+    )
```

Severity: medium

While the error handling is a great addition for robustness, this pattern of calling aevaluate, checking for GraderError, and then processing the result is repeated multiple times in this file (for correctness, review, criticality, jailbreaking, and format graders).

To improve maintainability and reduce code duplication, consider refactoring this logic into a helper method. This method could accept the grader, its arguments, and a callback function to process a successful result.

For example, you could define a helper like this:

```python
async def _run_grader(self, grader, on_success, *args, **kwargs):
    """Runs a grader, handles errors, and calls a success callback."""
    grader_name = grader.name.replace('_', ' ').capitalize()
    try:
        result = await grader.aevaluate(*args, **kwargs)
        if isinstance(result, GraderError):
            logger.error(f"{grader_name} grader error: {result.error}")
        else:
            on_success(result)
    except Exception as e:
        logger.error(f"An unexpected error occurred in {grader_name} grader: {e}", exc_info=True)
```

And then use it for the correctness check like this:

```python
if self.config.enable_correctness:
    logger.info("Running correctness detection...")

    def on_success(res):
        result.correctness = CorrectnessResult(
            score=res.score,
            reasoning=res.reason,
            key_issues=res.metadata.get("key_issues", []),
        )

    await self._run_grader(
        self.correctness_grader, on_success, pdf_data=pdf_data
    )
```

This approach would make the review_paper and _run_safety_checks methods much cleaner and easier to maintain.
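
One possible variation on this suggestion (an assumption for discussion, not part of the bot's proposal or the PR): have the helper return the successful result, or None on failure, so call sites assign fields directly instead of defining a per-grader callback. A minimal sketch, reusing the grader.name attribute and GraderError check from the suggestion above:

```python
from typing import Any, Optional

async def _run_grader(self, grader, *args, **kwargs) -> Optional[Any]:
    """Variant helper (hypothetical): return the grader's result, or None on failure."""
    try:
        result = await grader.aevaluate(*args, **kwargs)
    except Exception as e:
        logger.error(f"Unexpected error in {grader.name}: {e}", exc_info=True)
        return None
    if isinstance(result, GraderError):
        logger.error(f"{grader.name} error: {result.error}")
        return None
    return result

# Call site inside review_paper (field names as in the diff above):
correctness = await self._run_grader(self.correctness_grader, pdf_data=pdf_data)
if correctness is not None:
    result.correctness = CorrectnessResult(
        score=correctness.score,
        reasoning=correctness.reason,
        key_issues=correctness.metadata.get("key_issues", []),
    )
```

The trade-off is one extra None check per call site in exchange for flatter control flow, versus the callback version's single-statement call sites.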

Add error handling for all grader evaluations in PaperReviewPipeline
to gracefully handle failures instead of crashing. When a grader
returns a GraderError, it is now logged and the pipeline continues
with remaining evaluations.
@XiaoBoAI force-pushed the fix/paper-review-grader-error-handling branch from 26ff5a3 to a463792 on January 28, 2026 at 03:18
@helloml0326 self-requested a review on January 28, 2026 at 07:20
@ployts (Collaborator) left a comment

LGTM

@ployts merged commit c3c757a into main on January 28, 2026
2 checks passed