llm_optimizer
LLM-based optimizer for OpenJarvis configuration tuning.
Uses a cloud LLM to propose OpenJarvis configurations, inspired by DSPy's GEPA approach: the search is guided by textual feedback extracted from execution traces rather than by scalar rewards alone.
Classes
LLMOptimizer
LLMOptimizer(search_space: SearchSpace, optimizer_model: str = 'claude-sonnet-4-6', optimizer_backend: Optional[InferenceBackend] = None)
Uses a cloud LLM to propose optimal OpenJarvis configs.
Inspired by DSPy's GEPA: uses textual feedback from execution traces rather than just scalar rewards.
Source code in src/openjarvis/learning/optimize/llm_optimizer.py
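A typical driver seeds the search with `propose_initial` and then alternates evaluation with `propose_next`. The sketch below mimics that interface with stand-in types: `TrialConfig` and `TrialResult` here are minimal illustrative dataclasses, and the deterministic `propose_next` is a placeholder for the actual LLM call, not the library's real logic.

```python
# Minimal stand-ins for the optimize loop; these dataclasses and the
# deterministic propose_next are illustrative placeholders, not the
# real openjarvis types or the actual LLM-backed proposal logic.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TrialConfig:
    params: Dict[str, str] = field(default_factory=dict)

@dataclass
class TrialResult:
    config: TrialConfig
    score: float
    feedback: str = ""  # GEPA-style textual feedback would live here

class StubOptimizer:
    """Mimics LLMOptimizer's propose_initial/propose_next interface."""

    def __init__(self, search_space: Dict[str, List[str]]):
        self.search_space = search_space

    def propose_initial(self) -> TrialConfig:
        # Seed with the first option of every primitive.
        return TrialConfig({k: v[0] for k, v in self.search_space.items()})

    def propose_next(self, history: List[TrialResult]) -> TrialConfig:
        # The real optimizer prompts an LLM with history, traces, and
        # frontier members; here we mutate one primitive of the best trial.
        best = max(history, key=lambda r: r.score)
        cfg = dict(best.config.params)
        key = sorted(self.search_space)[len(history) % len(self.search_space)]
        opts = self.search_space[key]
        cfg[key] = opts[(opts.index(cfg[key]) + 1) % len(opts)]
        return TrialConfig(cfg)

space = {"planner": ["react", "plan-act"], "memory": ["none", "vector"]}
opt = StubOptimizer(space)
history = [TrialResult(opt.propose_initial(), score=0.0)]
for _ in range(3):
    cfg = opt.propose_next(history)
    history.append(TrialResult(cfg, score=float(len(history))))  # fake scores
best = max(history, key=lambda r: r.score)
```

The loop shape (propose, evaluate, append to history) is the part that carries over to the real class; everything else is scaffolding.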
Functions
propose_initial
propose_initial() -> TrialConfig
Propose a reasonable starting config from the search space.
propose_next
propose_next(history: List[TrialResult], traces: Optional[List[Trace]] = None, frontier_ids: Optional[set] = None) -> TrialConfig
Ask the LLM to propose the next config to evaluate.
analyze_trial
analyze_trial(trial: TrialConfig, summary: RunSummary, traces: Optional[List[Trace]] = None, sample_scores: Optional[List[SampleScore]] = None, per_benchmark: Optional[List[BenchmarkScore]] = None) -> TrialFeedback
Ask the LLM to analyze a completed trial. Returns structured feedback.
Source code in src/openjarvis/learning/optimize/llm_optimizer.py
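The exact schema of `TrialFeedback` is not shown on this page; the sketch below illustrates the kind of structured output the analysis step produces. The field names (`diagnosis`, `suspected_primitives`, `suggested_changes`) are hypothetical, chosen only to show the shape of trace-derived textual feedback.

```python
# Hypothetical shape of the structured feedback analyze_trial returns;
# the field names are illustrative, not the library's actual schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TrialFeedback:
    diagnosis: str                      # what went wrong/right, in prose
    suspected_primitives: List[str] = field(default_factory=list)
    suggested_changes: List[str] = field(default_factory=list)

fb = TrialFeedback(
    diagnosis="Planner looped on ambiguous tool outputs.",
    suspected_primitives=["planner"],
    suggested_changes=["switch planner from react to plan-act"],
)
```

Feedback in this form is what lets later `propose_next` calls reason about *why* a trial scored as it did, rather than seeing only the number.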
propose_targeted
propose_targeted(history: List[TrialResult], base_config: TrialConfig, target_primitive: str, frontier_ids: Optional[set] = None) -> TrialConfig
Propose a config that changes only the single primitive named by target_primitive, holding the rest of base_config fixed.
Source code in src/openjarvis/learning/optimize/llm_optimizer.py
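The contract here is an ablation-style move: the returned config differs from `base_config` in exactly one primitive. A quick way to picture (and check) that invariant, with `diff_keys` as a helper defined purely for illustration:

```python
# Illustrative check of propose_targeted's invariant: the proposal differs
# from base_config in exactly the one targeted primitive. diff_keys is a
# helper defined here, not part of openjarvis.
def diff_keys(base: dict, proposal: dict) -> list:
    return [k for k in base if base[k] != proposal.get(k)]

base = {"planner": "react", "memory": "none", "tools": "web"}
proposal = {"planner": "react", "memory": "vector", "tools": "web"}
changed = diff_keys(base, proposal)  # expect exactly one changed primitive
```

Single-primitive moves make credit assignment easy: any score delta versus the base trial can be attributed to the targeted primitive.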
propose_merge
propose_merge(candidates: List[TrialResult], history: List[TrialResult], frontier_ids: Optional[set] = None) -> TrialConfig
Combine best aspects of frontier members into one config.
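One way to picture the merge (the real method delegates this judgment to the LLM): credit each primitive value by the mean score of the frontier members that used it, then keep the best-credited value per primitive. The sketch below implements that stand-in heuristic; it is an assumption about the intent, not the library's algorithm.

```python
# Illustrative merge of frontier configs: score each (primitive, value)
# pair by the mean score of candidates using it, keep the best per
# primitive. A crude stand-in for the LLM's judgment in propose_merge.
from collections import defaultdict

def merge(candidates):
    """candidates: list of (score, config) pairs from the frontier."""
    stats = defaultdict(list)  # (primitive, value) -> list of scores
    for score, cfg in candidates:
        for k, v in cfg.items():
            stats[(k, v)].append(score)
    merged = {}
    for k in candidates[0][1]:
        merged[k] = max(
            {v for (kk, v) in stats if kk == k},
            key=lambda v: sum(stats[(k, v)]) / len(stats[(k, v)]),
        )
    return merged

frontier = [
    (0.70, {"planner": "plan-act", "memory": "none"}),
    (0.60, {"planner": "react", "memory": "vector"}),
    (0.65, {"planner": "plan-act", "memory": "vector"}),
]
merged = merge(frontier)
```

Here "plan-act" wins the planner slot (mean 0.675 vs 0.60) and "none" wins memory (0.70 vs 0.625), so the merged config need not equal any single frontier member.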