
types

Backward-compatibility shim: `optimize.types` has moved to `learning.optimize.types`.

Classes

SearchDimension dataclass

SearchDimension(name: str, dim_type: str, values: List[Any] = list(), low: Optional[float] = None, high: Optional[float] = None, description: str = '', primitive: str = '')

One tunable dimension in the config space.

SearchSpace dataclass

SearchSpace(dimensions: List[SearchDimension] = list(), fixed: Dict[str, Any] = dict(), constraints: List[str] = list())

The full space of configs the optimizer can propose.
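As a rough usage sketch, a search space is built from a list of dimensions plus fixed parameters and textual constraints. The dataclasses below are minimal local stand-ins mirroring the documented signatures (the real classes live in `openjarvis.learning.optimize.types`), and the dimension names and fixed values are purely illustrative:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional

# Local stand-ins mirroring the documented signatures.
@dataclass
class SearchDimension:
    name: str
    dim_type: str
    values: List[Any] = field(default_factory=list)
    low: Optional[float] = None
    high: Optional[float] = None
    description: str = ""
    primitive: str = ""

@dataclass
class SearchSpace:
    dimensions: List[SearchDimension] = field(default_factory=list)
    fixed: Dict[str, Any] = field(default_factory=dict)
    constraints: List[str] = field(default_factory=list)

space = SearchSpace(
    dimensions=[
        SearchDimension("llm.temperature", "continuous",
                        low=0.0, high=1.0, primitive="llm"),
        SearchDimension("retriever.top_k", "integer",
                        low=1, high=20, primitive="retriever"),
    ],
    fixed={"llm.model": "gpt-4o-mini"},
    constraints=["retriever.top_k <= 20"],
)
print(len(space.dimensions))  # 2
```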

Functions
to_prompt_description
to_prompt_description() -> str

Render search space as structured text for the LLM optimizer.

Source code in src/openjarvis/learning/optimize/types.py
def to_prompt_description(self) -> str:
    """Render search space as structured text for the LLM optimizer."""
    lines: List[str] = []
    lines.append("# Search Space")
    lines.append("")

    # Group dimensions by primitive
    by_primitive: Dict[str, List[SearchDimension]] = {}
    for dim in self.dimensions:
        key = dim.primitive or "other"
        by_primitive.setdefault(key, []).append(dim)

    for primitive, dims in sorted(by_primitive.items()):
        lines.append(f"## {primitive.title()}")
        for dim in dims:
            lines.append(f"- **{dim.name}** ({dim.dim_type})")
            if dim.description:
                lines.append(f"  Description: {dim.description}")
            if dim.dim_type in ("categorical", "subset"):
                lines.append(f"  Options: {dim.values}")
            elif dim.dim_type in ("continuous", "integer"):
                lines.append(f"  Range: [{dim.low}, {dim.high}]")
            elif dim.dim_type == "text":
                lines.append("  Free-form text")
        lines.append("")

    if self.fixed:
        lines.append("## Fixed Parameters")
        for k, v in sorted(self.fixed.items()):
            lines.append(f"- {k} = {v}")
        lines.append("")

    if self.constraints:
        lines.append("## Constraints")
        for c in self.constraints:
            lines.append(f"- {c}")
        lines.append("")

    return "\n".join(lines)
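To see what the rendered prompt text looks like, here is a self-contained sketch that reproduces the grouping-by-primitive logic above on minimal stand-in dataclasses (description and fixed/constraint sections elided for brevity):

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional

@dataclass
class SearchDimension:
    name: str
    dim_type: str
    values: List[Any] = field(default_factory=list)
    low: Optional[float] = None
    high: Optional[float] = None
    primitive: str = ""

@dataclass
class SearchSpace:
    dimensions: List[SearchDimension] = field(default_factory=list)

    def to_prompt_description(self) -> str:
        # Same grouping logic as the source listing above.
        lines: List[str] = ["# Search Space", ""]
        by_primitive: Dict[str, List[SearchDimension]] = {}
        for dim in self.dimensions:
            by_primitive.setdefault(dim.primitive or "other", []).append(dim)
        for primitive, dims in sorted(by_primitive.items()):
            lines.append(f"## {primitive.title()}")
            for dim in dims:
                lines.append(f"- **{dim.name}** ({dim.dim_type})")
                if dim.dim_type in ("categorical", "subset"):
                    lines.append(f"  Options: {dim.values}")
                elif dim.dim_type in ("continuous", "integer"):
                    lines.append(f"  Range: [{dim.low}, {dim.high}]")
            lines.append("")
        return "\n".join(lines)

space = SearchSpace(dimensions=[
    SearchDimension("llm.temperature", "continuous",
                    low=0.0, high=1.0, primitive="llm"),
])
print(space.to_prompt_description())
```

For this one-dimension space the output contains a `# Search Space` header followed by a `## Llm` section with the dimension's name, type, and range.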

BenchmarkScore dataclass

BenchmarkScore(benchmark: str, accuracy: float = 0.0, mean_latency_seconds: float = 0.0, total_cost_usd: float = 0.0, total_energy_joules: float = 0.0, total_tokens: int = 0, samples_evaluated: int = 0, errors: int = 0, weight: float = 1.0, summary: Optional[Any] = None, sample_scores: List['SampleScore'] = list())

Per-benchmark metrics from a multi-benchmark evaluation trial.
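The `weight` field suggests benchmarks can contribute unequally to an overall score. A hypothetical aggregation (the `weighted_accuracy` helper below is not part of the library; it is a sketch of one plausible use of `weight`):

```python
from dataclasses import dataclass
from typing import List

# Minimal stand-in with just the fields this sketch needs.
@dataclass
class BenchmarkScore:
    benchmark: str
    accuracy: float = 0.0
    weight: float = 1.0

def weighted_accuracy(scores: List[BenchmarkScore]) -> float:
    """Hypothetical weight-proportional mean across benchmarks."""
    total_w = sum(s.weight for s in scores)
    if not total_w:
        return 0.0
    return sum(s.accuracy * s.weight for s in scores) / total_w

scores = [
    BenchmarkScore("gsm8k", accuracy=0.8, weight=2.0),
    BenchmarkScore("mmlu", accuracy=0.6, weight=1.0),
]
print(weighted_accuracy(scores))  # approximately 0.7333
```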

SampleScore dataclass

SampleScore(record_id: str, is_correct: Optional[bool] = None, score: Optional[float] = None, latency_seconds: float = 0.0, prompt_tokens: int = 0, completion_tokens: int = 0, cost_usd: float = 0.0, error: Optional[str] = None, ttft: float = 0.0, energy_joules: float = 0.0, power_watts: float = 0.0, gpu_utilization_pct: float = 0.0, throughput_tok_per_sec: float = 0.0, mfu_pct: float = 0.0, mbu_pct: float = 0.0, ipw: float = 0.0, ipj: float = 0.0, energy_per_output_token_joules: float = 0.0, throughput_per_watt: float = 0.0, mean_itl_ms: float = 0.0)

Per-sample metrics from an evaluation trial.

TrialFeedback dataclass

TrialFeedback(summary_text: str = '', failure_patterns: List[str] = list(), primitive_ratings: Dict[str, str] = dict(), suggested_changes: List[str] = list(), target_primitive: str = '')

Structured feedback from trial analysis.

ObjectiveSpec dataclass

ObjectiveSpec(metric: str, direction: str, weight: float = 1.0)

A single optimization objective.
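Each objective names a metric, a direction, and a weight, which is the usual shape for weighted scalarization of multiple objectives. As an illustrative sketch only (the `scalarize` function is hypothetical and not the library's API):

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ObjectiveSpec:
    metric: str
    direction: str  # "maximize" or "minimize"
    weight: float = 1.0

def scalarize(metrics: Dict[str, float],
              objectives: List[ObjectiveSpec]) -> float:
    """Hypothetical weighted sum; minimized metrics count negatively."""
    total = 0.0
    for obj in objectives:
        value = metrics.get(obj.metric, 0.0)
        sign = 1.0 if obj.direction == "maximize" else -1.0
        total += sign * obj.weight * value
    return total

objectives = [
    ObjectiveSpec("accuracy", "maximize", weight=1.0),
    ObjectiveSpec("total_cost_usd", "minimize", weight=0.1),
]
# 0.8 * 1.0 - 0.1 * 2.0, i.e. approximately 0.6
print(scalarize({"accuracy": 0.8, "total_cost_usd": 2.0}, objectives))
```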

TrialConfig dataclass

TrialConfig(trial_id: str, params: Dict[str, Any] = dict(), reasoning: str = '')

A single candidate configuration proposed by the optimizer.

Functions
to_recipe
to_recipe() -> Recipe

Map params back to Recipe fields.

Source code in src/openjarvis/learning/optimize/types.py
def to_recipe(self) -> Recipe:
    """Map params back to Recipe fields."""
    kwargs: Dict[str, Any] = {}
    for dotted_key, value in self.params.items():
        recipe_field = _PARAM_TO_RECIPE.get(dotted_key)
        if recipe_field is not None:
            kwargs[recipe_field] = value

    return Recipe(
        name=f"trial-{self.trial_id}",
        **kwargs,
    )
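The method keeps only params whose dotted keys appear in `_PARAM_TO_RECIPE`, silently dropping the rest. A self-contained sketch of that behavior, with a hypothetical `Recipe` and mapping (the real ones live elsewhere in openjarvis and will differ):

```python
from dataclasses import dataclass, field
from typing import Any, Dict

# Hypothetical stand-ins for illustration only.
@dataclass
class Recipe:
    name: str
    temperature: float = 0.7
    top_k: int = 5

_PARAM_TO_RECIPE = {
    "llm.temperature": "temperature",
    "retriever.top_k": "top_k",
}

@dataclass
class TrialConfig:
    trial_id: str
    params: Dict[str, Any] = field(default_factory=dict)
    reasoning: str = ""

    def to_recipe(self) -> Recipe:
        # Same filtering logic as the source listing above.
        kwargs: Dict[str, Any] = {}
        for dotted_key, value in self.params.items():
            recipe_field = _PARAM_TO_RECIPE.get(dotted_key)
            if recipe_field is not None:
                kwargs[recipe_field] = value
        return Recipe(name=f"trial-{self.trial_id}", **kwargs)

cfg = TrialConfig("007", params={"llm.temperature": 0.2, "unknown.key": 1})
recipe = cfg.to_recipe()
print(recipe.name, recipe.temperature)  # trial-007 0.2
```

Note that `unknown.key` has no entry in the mapping, so it never reaches the `Recipe` constructor and `top_k` keeps its default.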

TrialResult dataclass

TrialResult(trial_id: str, config: TrialConfig, accuracy: float = 0.0, mean_latency_seconds: float = 0.0, total_cost_usd: float = 0.0, total_energy_joules: float = 0.0, total_tokens: int = 0, samples_evaluated: int = 0, analysis: str = '', failure_modes: List[str] = list(), per_sample_feedback: List[Dict[str, Any]] = list(), summary: Optional[RunSummary] = None, sample_scores: List[SampleScore] = list(), structured_feedback: Optional[TrialFeedback] = None, per_benchmark: List[BenchmarkScore] = list())

Result of evaluating a trial, with both scalar and textual feedback.

OptimizationRun dataclass

OptimizationRun(run_id: str, search_space: SearchSpace, trials: List[TrialResult] = list(), best_trial: Optional[TrialResult] = None, best_recipe_path: Optional[str] = None, status: str = 'running', optimizer_model: str = '', benchmark: str = '', benchmarks: List[str] = list(), pareto_frontier: List[TrialResult] = list(), objectives: List[ObjectiveSpec] = list(DEFAULT_OBJECTIVES))

Complete optimization session.
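The `pareto_frontier` field holds the trials that are not dominated under the run's objectives. How the library computes it is not shown here; as a generic sketch over two objectives (higher accuracy, lower cost), a frontier filter might look like:

```python
from typing import Dict, List

def pareto_frontier(trials: List[Dict[str, float]]) -> List[Dict[str, float]]:
    """Keep trials no other trial beats on both accuracy (up) and cost (down).
    A generic sketch; the library may compute its frontier differently."""
    frontier = []
    for t in trials:
        dominated = any(
            o["accuracy"] >= t["accuracy"] and o["cost"] <= t["cost"]
            and (o["accuracy"] > t["accuracy"] or o["cost"] < t["cost"])
            for o in trials
        )
        if not dominated:
            frontier.append(t)
    return frontier

trials = [
    {"id": "a", "accuracy": 0.9, "cost": 3.0},
    {"id": "b", "accuracy": 0.8, "cost": 1.0},
    {"id": "c", "accuracy": 0.7, "cost": 2.0},  # dominated by b
]
print([t["id"] for t in pareto_frontier(trials)])  # ['a', 'b']
```

Trial `c` is strictly worse than `b` on both axes, so only `a` (best accuracy) and `b` (best cost) survive.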