liveresearch

DeepResearchBench scorer — LLM-as-judge for deep research quality.

Evaluates research output quality across four dimensions from the DeepResearchBench rubric: comprehensiveness, insight, instruction_following, and readability. Uses LLM-as-judge with per-task criteria when available, falling back to a generic research quality rubric.

Reference: https://github.com/Ayanami0730/deep_research_bench
Paper: https://arxiv.org/abs/2510.14240
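
The module-level constants used by the scorer are not shown on this page. A minimal sketch of what they would look like, assuming equal default weights (the 0.25 per-dimension fallback in rescore_from_metadata below suggests this; the exact values are an assumption):

# Sketch of the module-level constants referenced by the scorer.
# The equal 0.25 weights are assumed, inferred from the fallback
# DEFAULT_WEIGHTS.get(dim, 0.25) in rescore_from_metadata.
DIMENSIONS = ("comprehensiveness", "insight", "instruction_following", "readability")
DEFAULT_WEIGHTS = {
    "comprehensiveness": 0.25,
    "insight": 0.25,
    "instruction_following": 0.25,
    "readability": 0.25,
}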

Classes

LiveResearchBenchScorer

LiveResearchBenchScorer(judge_backend: InferenceBackend, judge_model: str)

Bases: LLMJudgeScorer

LLM-as-judge scorer for DeepResearchBench deep research tasks.

Evaluates research reports across four dimensions: comprehensiveness, insight, instruction_following, readability. Uses task-specific criteria when available from the benchmark data.

Source code in src/openjarvis/evals/core/scorer.py
def __init__(self, judge_backend: InferenceBackend, judge_model: str) -> None:
    # Store the backend and model name used for all judge calls.
    self._judge_backend = judge_backend
    self._judge_model = judge_model
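
A minimal construction sketch. Only the InferenceBackend protocol is referenced on this page, so the backend object, the judge model name, and the import path below are assumptions for illustration:

# Hypothetical usage; the import path and model name are assumptions,
# not confirmed by this page.
from openjarvis.evals.core.scorer import LiveResearchBenchScorer

backend = ...  # any InferenceBackend implementation from your setup
scorer = LiveResearchBenchScorer(judge_backend=backend, judge_model="gpt-4o")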

Functions

rescore_from_metadata

rescore_from_metadata(scoring_metadata, dimension_weights=None)

Re-derive (is_correct, updated_metadata) from a stored raw_judge_output.

Returns None if no raw judge output is present or no scores can be parsed. Used by the evals reparse-judge CLI to fix records that failed under the old, stricter parser.

Source code in src/openjarvis/evals/scorers/liveresearch.py
def rescore_from_metadata(scoring_metadata, dimension_weights=None):
    """Re-derive (is_correct, updated_metadata) from a stored ``raw_judge_output``.

    Returns ``None`` if no raw judge output is present or no scores can be
    parsed. Used by the ``evals reparse-judge`` CLI to fix records that
    failed under the old, stricter parser.
    """
    raw = scoring_metadata.get("raw_judge_output")
    if not isinstance(raw, str) or not raw.strip():
        return None

    parsed = _parse_judge_response(raw)
    scores = parsed.get("scores") or {}
    if not scores:
        return None

    weights = dimension_weights or scoring_metadata.get("dimension_weights")
    if not isinstance(weights, dict) or not weights:
        weights = DEFAULT_WEIGHTS

    # Weighted mean over the four dimensions; a missing dimension scores 0.
    weighted_total = 0.0
    total_weight = 0.0
    for dim in DIMENSIONS:
        dim_score = float(scores.get(dim, 0.0))
        dim_weight = float(weights.get(dim, DEFAULT_WEIGHTS.get(dim, 0.25)))
        weighted_total += dim_score * dim_weight
        total_weight += dim_weight

    if total_weight > 0:
        weighted_total /= total_weight
    # Judge scores are on a 0-10 scale; normalize to 0-1 and pass at >= 0.5.
    normalized = weighted_total / 10.0
    is_correct = normalized >= 0.5

    new_meta = dict(scoring_metadata)
    new_meta["score"] = normalized
    new_meta["dimension_scores"] = scores
    new_meta["dimension_weights"] = weights
    new_meta["weighted_total_0_10"] = weighted_total
    new_meta["notes"] = parsed.get("notes", new_meta.get("notes", ""))
    return is_correct, new_meta
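
A worked example of the re-scoring arithmetic, assuming equal weights and a raw judge output that _parse_judge_response can read (the JSON shape of raw_judge_output is an assumption about that parser):

# Hypothetical stored record; the raw_judge_output format is an assumption
# about what _parse_judge_response accepts.
meta = {
    "raw_judge_output": '{"scores": {"comprehensiveness": 8, "insight": 6, '
                        '"instruction_following": 9, "readability": 7}, '
                        '"notes": "solid report"}',
}
result = rescore_from_metadata(meta)
if result is not None:
    is_correct, new_meta = result
    # Equal 0.25 weights: (8 + 6 + 9 + 7) / 4 = 7.5 -> normalized 0.75 -> pass.
    assert is_correct
    assert abs(new_meta["score"] - 0.75) < 1e-9

Because the dimension weights are renormalized by their sum, any weights that are proportional to each other yield the same score; only the relative weighting of the four dimensions matters.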