
Open-Judge Logo

Holistic Evaluation, Quality Rewards: Driving Application Excellence

🌟 If you find OpenJudge helpful, please give us a Star! 🌟

Python 3.10+ PyPI Documentation

πŸ“– Documentation | 🀝 Contributing | δΈ­ζ–‡



OpenJudge is a unified framework designed to drive LLM and Agent application excellence through Holistic Evaluation and Quality Rewards.

πŸ’‘ Evaluation and reward signals are the cornerstones of application excellence. Holistic evaluation enables the systematic analysis of shortcomings to drive rapid iteration, while high-quality rewards provide the essential foundation for advanced optimization and fine-tuning.

OpenJudge unifies evaluation metrics and reward signals into a single, standardized Grader interface, offering pre-built graders, flexible customization, and seamless framework integration.


✨ Key Features

πŸ“¦ Systematic & Quality-Assured Grader Library

Access 50+ production-ready graders featuring a comprehensive taxonomy, rigorously validated for reliable performance.

🎯 General

Focus: Semantic quality, functional correctness, structural compliance

Key Graders:

  • Relevance - Semantic relevance scoring
  • Similarity - Text similarity measurement
  • Syntax Check - Code syntax validation
  • JSON Match - Structure compliance
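To make the grader idea concrete, here is a toy, self-contained sketch of what a syntax-check grader does. This is not OpenJudge's implementation; it simply wraps Python's standard `ast` module and returns the score/reason pair shape shown in the Quickstart below.

```python
import ast

def syntax_check_grader(code: str) -> dict:
    """Toy syntax-check grader: score 1 if the snippet parses as
    valid Python, 0 otherwise, with a human-readable reason.
    Illustrative only -- not OpenJudge's SyntaxCheck grader."""
    try:
        ast.parse(code)
        return {"score": 1, "reason": "valid Python syntax"}
    except SyntaxError as e:
        return {"score": 0, "reason": f"syntax error: {e.msg} (line {e.lineno})"}
```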

πŸ€– Agent

Focus: Agent lifecycle, tool calling, memory, plan feasibility, trajectory quality

Key Graders:

  • Tool Selection - Tool choice accuracy
  • Memory - Context preservation
  • Plan - Strategy feasibility
  • Trajectory - Path optimization

πŸ–ΌοΈ Multimodal

Focus: Image-text coherence, visual generation quality, image helpfulness

Key Graders:

  • Image Coherence - Visual-text alignment
  • Text-to-Image - Generation quality
  • Image Helpfulness - Image contribution

  • 🌐 Multi-Scenario Coverage: Extensive support for diverse domains, including Agent, text, code, math, and multimodal tasks. πŸ‘‰ Explore Supported Scenarios
  • πŸ”„ Holistic Agent Evaluation: Beyond final outcomes, we assess the entire lifecycle, including trajectories, memory, reflection, and tool use. πŸ‘‰ Agent Lifecycle Evaluation
  • βœ… Quality Assurance: Every grader comes with benchmark datasets and pytest integration for validation. πŸ‘‰ View Benchmark Datasets

πŸ› οΈ Flexible Grader Building Methods

Choose the build method that fits your requirements:

  • Customization: Clear requirements, but no existing grader? If you have explicit rules or logic, use our Python interfaces or Prompt templates to quickly define your own grader. πŸ‘‰ Custom Grader Development Guide
  • Zero-shot Rubrics Generation: Not sure what criteria to use, and have no labeled data yet? Just provide a task description and optional sample queries, and the LLM will automatically generate evaluation rubrics for you. Ideal for rapid prototyping when you want to get started immediately. πŸ‘‰ Zero-shot Rubrics Generation Guide
  • Data-driven Rubrics Generation: Ambiguous requirements, but have a few examples? Use the GraderGenerator to automatically distill evaluation rubrics from your annotated data and generate an LLM-based grader. πŸ‘‰ Data-driven Rubrics Generation Guide
  • Training Judge Models: Have massive data and need peak performance? Use our training pipeline to train a dedicated Judge Model. This is ideal for complex scenarios where prompt-based grading falls short. πŸ‘‰ Train Judge Models
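When your criteria are simple rules, the Customization path can be a small class. The sketch below is illustrative only: `KeywordCoverageGrader` and `GraderResult` are hypothetical names, and a real custom grader would follow the interfaces described in the Custom Grader Development Guide.

```python
from dataclasses import dataclass

@dataclass
class GraderResult:
    # Hypothetical result container; mirrors the score/reason fields
    # returned in the Quickstart, but is not OpenJudge's class.
    score: float
    reason: str

class KeywordCoverageGrader:
    """Illustrative rule-based grader: scores a response by the fraction
    of required keywords it mentions. A real custom grader would follow
    the Custom Grader Development Guide's interfaces instead."""

    def __init__(self, required_keywords):
        self.required_keywords = [k.lower() for k in required_keywords]

    def evaluate(self, query: str, response: str) -> GraderResult:
        text = response.lower()
        matched = [k for k in self.required_keywords if k in text]
        score = len(matched) / len(self.required_keywords)
        return GraderResult(score=score, reason=f"matched keywords: {matched}")
```

For example, `KeywordCoverageGrader(["data", "model"])` scores 1.0 for a response mentioning both keywords and 0.0 for one mentioning neither.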

πŸ”Œ Easy Integration

Using mainstream observability platforms like LangSmith or Langfuse? We offer seamless integration to enhance their evaluators and automated evaluation capabilities. We're also building integrations with training frameworks like verl. πŸ‘‰ See Integrations for details




πŸ“₯ Installation

pip install py-openjudge

πŸ’‘ More installation methods can be found in the Quickstart Guide.


πŸš€ Quickstart

import asyncio
from openjudge.models import OpenAIChatModel
from openjudge.graders.common.relevance import RelevanceGrader

async def main():
    # 1️⃣ Create model client
    model = OpenAIChatModel(model="qwen3-32b")

    # 2️⃣ Initialize grader
    grader = RelevanceGrader(model=model)

    # 3️⃣ Prepare data
    data = {
        "query": "What is machine learning?",
        "response": "Machine learning is a subset of AI that enables computers to learn from data.",
    }

    # 4️⃣ Evaluate
    result = await grader.aevaluate(**data)

    print(f"Score: {result.score}")   # Score: 5
    print(f"Reason: {result.reason}")

if __name__ == "__main__":
    asyncio.run(main())

πŸ“š Complete Quickstart can be found in the Quickstart Guide.
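Because graders are awaited (as with `aevaluate` above), a batch of records can be scored concurrently with standard asyncio tooling. The sketch below uses a hypothetical `StubGrader` stand-in so it runs offline; only the `aevaluate` method name and the query/response fields come from the Quickstart.

```python
import asyncio

class StubGrader:
    # Hypothetical stand-in for an OpenJudge grader so this sketch
    # runs offline; only the aevaluate name mirrors the Quickstart.
    async def aevaluate(self, query: str, response: str):
        await asyncio.sleep(0)  # simulate an async model call
        return {"score": len(response) > 0}

async def grade_batch(grader, records):
    # Fan out one aevaluate call per record and await them together.
    tasks = [grader.aevaluate(**r) for r in records]
    return await asyncio.gather(*tasks)

records = [
    {"query": "q1", "response": "a useful answer"},
    {"query": "q2", "response": ""},
]
results = asyncio.run(grade_batch(StubGrader(), records))
```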


πŸ”— Integrations

Seamlessly connect OpenJudge with mainstream observability and training platforms:

| Category | Platform | Status | Documentation |
| --- | --- | --- | --- |
| Observability | LangSmith | βœ… Available | πŸ‘‰ LangSmith Integration Guide |
| Observability | Langfuse | βœ… Available | πŸ‘‰ Langfuse Integration Guide |
| Observability | Other frameworks | πŸ”΅ Planned | - |
| Training | verl | 🟑 In Progress | - |
| Training | Trinity-RFT | πŸ”΅ Planned | - |

πŸ’¬ Have a framework you'd like us to prioritize? Open an Issue!


🀝 Contributing

We love your input! We want to make contributing to OpenJudge as easy and transparent as possible.

  • 🎨 Adding New Graders - Have domain-specific evaluation logic? Share it with the community!
  • πŸ› Reporting Bugs - Found a glitch? Help us fix it by opening an issue.
  • πŸ“ Improving Docs - Clearer explanations or better examples are always welcome.
  • πŸ’‘ Proposing Features - Have ideas for new integrations? Let's discuss!

πŸ“– See full Contributing Guidelines for coding standards and PR process.


πŸ’¬ Community

Join our DingTalk group to connect with the community:

DingTalk QR Code

Migration Guide (v0.1.x β†’ v0.2.0)

OpenJudge was previously distributed as the legacy package rm-gallery (v0.1.x). Starting from v0.2.0, it is published as py-openjudge and the Python import namespace is openjudge.

OpenJudge v0.2.0 is NOT backward compatible with v0.1.x. If you are currently using v0.1.x, choose one of the following paths:

  • Migrate to v0.2.0: install the new package and update your imports to the openjudge namespace:

pip install py-openjudge

  • Stay on v0.1.x (legacy): keep using the old package:

pip install rm-gallery

We preserved the source code of v0.1.7 (the latest v0.1.x release) in the v0.1.7-legacy branch.

If you run into migration issues, please open an issue with your minimal repro and current version.


πŸ“„ Citation

If you use OpenJudge in your research, please cite:

@software{openjudge,
  title  = {OpenJudge: A Unified Framework for Holistic Evaluation and Quality Rewards},
  author = {The OpenJudge Team},
  url    = {https://github.com/modelscope/OpenJudge},
  month  = {07},
  year   = {2025}
}

Made with ❀️ by the OpenJudge Team

⭐ Star Us Β· πŸ› Report Bug Β· πŸ’‘ Request Feature
