ama_bench

AMA-Bench dataset loader.

Reference dataset: https://huggingface.co/datasets/AMA-bench/AMA-bench

Paper: https://arxiv.org/abs/2602.22769

This implementation follows the published schema, with fields like:

- episode_id
- task / task_type / domain / source / success / num_turns / total_tokens
- trajectory: list[{turn_idx, action, observation}]
- qa_pairs: list[{question, answer, question_uuid, type}]
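A record in this schema might look like the following sketch. Only the field names come from the schema above; every value is invented for illustration.

```python
# Illustrative AMA-Bench episode record. Field names follow the published
# schema; the values are made up for demonstration purposes only.
episode = {
    "episode_id": "ep-0001",
    "task": "book a flight",
    "task_type": "web",
    "domain": "travel",
    "source": "synthetic",
    "success": True,
    "num_turns": 2,
    "total_tokens": 3482,
    "trajectory": [
        {"turn_idx": 0, "action": "search flights", "observation": "3 results"},
        {"turn_idx": 1, "action": "select cheapest", "observation": "booked"},
    ],
    "qa_pairs": [
        {
            "question": "Which flight was booked?",
            "answer": "the cheapest one",
            "question_uuid": "q-123",
            "type": "retrieval",
        }
    ],
}
```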

Evaluation protocol follows the paper's long-context baseline: pack the trajectory into the model input, reserving space for the question and answer. When a trajectory exceeds the budget, truncation preserves the first 50% and last 50% of the token budget (matching Appendix B of the paper).
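The middle-out truncation described above can be sketched as follows. The helper name and signature are assumptions for illustration, not the repo's actual code; the invariant it captures is that the first and last halves of the token budget are preserved and the middle is dropped.

```python
# Sketch of middle-out truncation: when a trajectory exceeds the budget,
# keep the first half and last half of the token budget and drop the middle.
# `tokens` is a flat list of token ids. Hypothetical helper, not repo code.
def truncate_middle(tokens: list[int], budget: int) -> list[int]:
    if len(tokens) <= budget:
        return tokens  # fits within the budget; nothing to drop
    head = budget // 2
    tail = budget - head  # any odd remainder goes to the tail
    return tokens[:head] + tokens[-tail:]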

Classes

AMABenchDataset

AMABenchDataset(subset: str = 'default', cache_dir: Optional[str] = None, max_trajectory_tokens: Optional[int] = None)

Bases: DatasetProvider

AMA-Bench agent memory assessment benchmark.

Source code in src/openjarvis/evals/datasets/ama_bench.py
def __init__(
    self,
    subset: str = "default",
    cache_dir: Optional[str] = None,
    max_trajectory_tokens: Optional[int] = None,
) -> None:
    if subset not in ("default", ""):
        raise ValueError(
            f"AMA-Bench supports only subset='default', got {subset!r}",
        )
    self._cache_dir = Path(cache_dir) if cache_dir else None
    self._max_traj_tokens = max_trajectory_tokens or _DEFAULT_MAX_TRAJECTORY_TOKENS
    self._records: List[EvalRecord] = []
    self._episodes: List[List[EvalRecord]] = []
Functions
iter_episodes
iter_episodes() -> Iterable[List[EvalRecord]]

Yield grouped QA pairs per trajectory for episode mode.

Source code in src/openjarvis/evals/datasets/ama_bench.py
def iter_episodes(self) -> Iterable[List[EvalRecord]]:
    """Yield grouped QA pairs per trajectory for episode mode."""
    return iter(self._episodes)
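Episode mode groups the flat QA records by their parent trajectory. A minimal sketch of that grouping, using illustrative (episode_id, question) tuples in place of EvalRecord objects:

```python
from itertools import groupby

# Illustrative grouping of flat QA records into per-trajectory episodes.
# The tuple layout stands in for EvalRecord; records are assumed to be
# ordered so that all QA pairs of one trajectory are adjacent.
records = [
    ("ep-1", "q1"),
    ("ep-1", "q2"),
    ("ep-2", "q3"),
]
episodes = [list(group) for _, group in groupby(records, key=lambda r: r[0])]
```

Each element of `episodes` is then one list of QA records sharing an episode_id, which is the shape iter_episodes yields.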