
config

Configuration loading, hardware detection, and engine recommendation.

User configuration lives at ~/.openjarvis/config.toml. load_config() detects hardware, fills in sensible defaults, then overlays any user overrides found in the TOML file.
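A user override file only needs the keys being changed; section and field names match the dataclasses documented below (the values here are illustrative):

```toml
# ~/.openjarvis/config.toml
[engine]
default = "ollama"

[engine.ollama]
host = "http://localhost:11434"

[intelligence]
temperature = 0.5
max_tokens = 2048
```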

Classes

GpuInfo dataclass

GpuInfo(vendor: str = '', name: str = '', vram_gb: float = 0.0, compute_capability: str = '', count: int = 0)

Detected GPU metadata.

HardwareInfo dataclass

HardwareInfo(platform: str = '', cpu_brand: str = '', cpu_count: int = 0, ram_gb: float = 0.0, gpu: Optional[GpuInfo] = None)

Detected system hardware.

OllamaEngineConfig dataclass

OllamaEngineConfig(host: str = '')

Per-engine config for Ollama.

VLLMEngineConfig dataclass

VLLMEngineConfig(host: str = 'http://localhost:8000')

Per-engine config for vLLM.

SGLangEngineConfig dataclass

SGLangEngineConfig(host: str = 'http://localhost:30000')

Per-engine config for SGLang.

LlamaCppEngineConfig dataclass

LlamaCppEngineConfig(host: str = 'http://localhost:8080', binary_path: str = '')

Per-engine config for llama.cpp.

MLXEngineConfig dataclass

MLXEngineConfig(host: str = 'http://localhost:8080')

Per-engine config for MLX.

LMStudioEngineConfig dataclass

LMStudioEngineConfig(host: str = 'http://localhost:1234')

Per-engine config for LM Studio.

ExoEngineConfig dataclass

ExoEngineConfig(host: str = 'http://localhost:52415')

Per-engine config for Exo.

NexaEngineConfig dataclass

NexaEngineConfig(host: str = 'http://localhost:18181', device: str = '')

Per-engine config for Nexa.

UzuEngineConfig dataclass

UzuEngineConfig(host: str = 'http://localhost:8000')

Per-engine config for Uzu.

AppleFmEngineConfig dataclass

AppleFmEngineConfig(host: str = 'http://localhost:8079')

Per-engine config for Apple Foundation Models.

GemmaCppEngineConfig dataclass

GemmaCppEngineConfig(model_path: str = '', tokenizer_path: str = '', model_type: str = '', num_threads: int = 0)

Per-engine config for gemma.cpp.

LemonadeEngineConfig dataclass

LemonadeEngineConfig(host: str = 'http://localhost:8000')

Per-engine config for Lemonade.

EngineConfig dataclass

Inference engine settings with nested per-engine configs.

Attributes
ollama_host property writable
ollama_host: str

Deprecated: use engine.ollama.host.

vllm_host property writable
vllm_host: str

Deprecated: use engine.vllm.host.

llamacpp_host property writable
llamacpp_host: str

Deprecated: use engine.llamacpp.host.

llamacpp_path property writable
llamacpp_path: str

Deprecated: use engine.llamacpp.binary_path.

sglang_host property writable
sglang_host: str

Deprecated: use engine.sglang.host.

mlx_host property writable
mlx_host: str

Deprecated: use engine.mlx.host.

lmstudio_host property writable
lmstudio_host: str

Deprecated: use engine.lmstudio.host.

exo_host property writable
exo_host: str

Deprecated: use engine.exo.host.

nexa_host property writable
nexa_host: str

Deprecated: use engine.nexa.host.

uzu_host property writable
uzu_host: str

Deprecated: use engine.uzu.host.

apple_fm_host property writable
apple_fm_host: str

Deprecated: use engine.apple_fm.host.

lemonade_host property writable
lemonade_host: str

Deprecated: use engine.lemonade.host.
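These deprecated aliases presumably just read and write through to the nested per-engine configs. A minimal standalone sketch of that forwarding pattern, using stub classes rather than the real ones:

```python
from dataclasses import dataclass, field


@dataclass
class OllamaEngineConfig:
    host: str = ""


@dataclass
class EngineConfig:
    ollama: OllamaEngineConfig = field(default_factory=OllamaEngineConfig)

    @property
    def ollama_host(self) -> str:
        # Deprecated alias: reads through to the nested config.
        return self.ollama.host

    @ollama_host.setter
    def ollama_host(self, value: str) -> None:
        # Writes are forwarded too, so old call sites keep working.
        self.ollama.host = value


cfg = EngineConfig()
cfg.ollama_host = "http://localhost:11434"          # old spelling
assert cfg.ollama.host == "http://localhost:11434"  # new canonical location
```

Both spellings stay in sync because the property is a thin wrapper, not a second copy of the value.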

IntelligenceConfig dataclass

IntelligenceConfig(default_model: str = '', fallback_model: str = '', model_path: str = '', checkpoint_path: str = '', quantization: str = 'none', preferred_engine: str = '', provider: str = '', temperature: float = 0.7, max_tokens: int = 1024, top_p: float = 0.9, top_k: int = 40, repetition_penalty: float = 1.0, stop_sequences: str = '')

The model — identity, paths, quantization, and generation defaults.

RoutingLearningConfig dataclass

RoutingLearningConfig(policy: str = 'heuristic', min_samples: int = 5)

Routing sub-policy config within Learning.

SFTConfig dataclass

SFTConfig(model_name: str = 'Qwen/Qwen3-1.7B', max_seq_length: int = 4096, num_epochs: int = 3, batch_size: int = 8, learning_rate: float = 2e-05, weight_decay: float = 0.01, warmup_ratio: float = 0.1, max_grad_norm: float = 1.0, gradient_checkpointing: bool = True, use_lora: bool = True, lora_rank: int = 16, lora_alpha: int = 32, lora_dropout: float = 0.05, target_modules: str = 'q_proj,v_proj', use_4bit: bool = False, checkpoint_dir: str = 'checkpoints/sft', min_pairs: int = 10, agent_filter: str = '')

General-purpose SFT training config. Maps to [learning.intelligence.sft].

GRPOConfig dataclass

GRPOConfig(model_name: str = 'Qwen/Qwen3-1.7B', max_seq_length: int = 4096, max_response_length: int = 2048, num_epochs: int = 10, batch_size: int = 16, learning_rate: float = 1e-06, max_grad_norm: float = 1.0, gradient_checkpointing: bool = True, num_samples_per_prompt: int = 8, temperature: float = 1.0, kl_coef: float = 0.0001, clip_ratio: float = 0.2, use_8bit_ref: bool = True, checkpoint_dir: str = 'checkpoints/grpo', save_every_n_epochs: int = 1, keep_last_n: int = 3, min_prompts: int = 10, agent_filter: str = '')

General-purpose GRPO training config. Maps to [learning.intelligence.grpo].

DSPyOptimizerConfig dataclass

DSPyOptimizerConfig(optimizer: str = 'BootstrapFewShotWithRandomSearch', task_lm: str = '', teacher_lm: str = '', max_bootstrapped_demos: int = 4, max_labeled_demos: int = 4, num_candidate_programs: int = 10, max_rounds: int = 1, optimize_system_prompt: bool = True, optimize_few_shot: bool = True, optimize_tool_descriptions: bool = True, min_traces: int = 20, metric_threshold: float = 0.7, agent_filter: str = '', config_dir: str = '')

DSPy agent optimizer config. Maps to [learning.agent.dspy].

GEPAOptimizerConfig dataclass

GEPAOptimizerConfig(reflection_lm: str = '', max_metric_calls: int = 150, population_size: int = 10, optimize_system_prompt: bool = True, optimize_tools: bool = True, optimize_max_turns: bool = True, optimize_temperature: bool = True, min_traces: int = 20, assessment_batch_size: int = 10, agent_filter: str = '', config_dir: str = '')

GEPA agent optimizer config. Maps to [learning.agent.gepa].

IntelligenceLearningConfig dataclass

IntelligenceLearningConfig(policy: str = 'none', sft: SFTConfig = SFTConfig(), grpo: GRPOConfig = GRPOConfig())

Intelligence sub-policy config within Learning.

AgentLearningConfig dataclass

AgentLearningConfig(policy: str = 'none', dspy: DSPyOptimizerConfig = DSPyOptimizerConfig(), gepa: GEPAOptimizerConfig = GEPAOptimizerConfig())

Agent sub-policy config within Learning.

SkillsLearningConfig dataclass

SkillsLearningConfig(auto_optimize: bool = False, optimizer: str = 'dspy', min_traces_per_skill: int = 20, optimization_interval_seconds: int = 86400, overlay_dir: str = '~/.openjarvis/learning/skills/')

Configuration for the skills learning loop (Plan 2A).

MetricsConfig dataclass

MetricsConfig(accuracy_weight: float = 0.6, latency_weight: float = 0.2, cost_weight: float = 0.1, efficiency_weight: float = 0.1)

Reward / optimization metric weights.

LearningConfig dataclass

LearningConfig(enabled: bool = False, update_interval: int = 100, auto_update: bool = False, routing: RoutingLearningConfig = RoutingLearningConfig(), intelligence: IntelligenceLearningConfig = IntelligenceLearningConfig(), agent: AgentLearningConfig = AgentLearningConfig(), skills: SkillsLearningConfig = SkillsLearningConfig(), metrics: MetricsConfig = MetricsConfig(), training_enabled: bool = False, training_schedule: str = '', min_improvement: float = 0.02)

Learning system settings with per-primitive sub-policies.

Attributes
default_policy property writable
default_policy: str

Deprecated: use learning.routing.policy.

intelligence_policy property writable
intelligence_policy: str

Deprecated: use learning.intelligence.policy.

agent_policy property writable
agent_policy: str

Deprecated: use learning.agent.policy.

reward_weights property writable
reward_weights: str

Deprecated: use learning.metrics.*.

StorageConfig dataclass

StorageConfig(default_backend: str = 'sqlite', db_path: str = str(DEFAULT_CONFIG_DIR / 'memory.db'), context_top_k: int = 5, context_min_score: float = 0.1, context_max_tokens: int = 2048, chunk_size: int = 512, chunk_overlap: int = 64)

Storage (memory) backend settings.

MCPConfig dataclass

MCPConfig(enabled: bool = True, servers: str = '')

MCP (Model Context Protocol) settings.

BrowserConfig dataclass

BrowserConfig(headless: bool = True, timeout_ms: int = 30000, viewport_width: int = 1280, viewport_height: int = 720)

Browser automation settings (Playwright).

ToolsConfig dataclass

ToolsConfig(storage: StorageConfig = StorageConfig(), mcp: MCPConfig = MCPConfig(), browser: BrowserConfig = BrowserConfig(), enabled: str = '')

Tools primitive settings — wraps storage and MCP configuration.

AgentConfig dataclass

AgentConfig(default_agent: str = 'simple', max_turns: int = 10, tools: str = '', objective: str = '', system_prompt: str = '', system_prompt_path: str = '', context_from_memory: bool = True, default_system_prompt: str = "You are a helpful AI assistant running locally on the user's own hardware through OpenJarvis. You are not a cloud service. Respond helpfully, concisely, and accurately.")

Agent harness settings — orchestration, tools, system prompt.

Attributes
default_tools property writable
default_tools: str

Deprecated: use agent.tools.

ServerConfig dataclass

ServerConfig(host: str = '127.0.0.1', port: int = 8000, agent: str = 'orchestrator', model: str = '', workers: int = 1, cors_origins: list = (lambda: ['http://localhost:3000', 'http://localhost:5173', 'http://127.0.0.1:3000', 'http://127.0.0.1:5173', 'tauri://localhost'])())

API server settings.

TelemetryConfig dataclass

TelemetryConfig(enabled: bool = True, db_path: str = str(DEFAULT_CONFIG_DIR / 'telemetry.db'), gpu_metrics: bool = False, gpu_poll_interval_ms: int = 50, energy_vendor: str = '', warmup_samples: int = 0, steady_state_window: int = 5, steady_state_threshold: float = 0.05)

Telemetry persistence settings.

TracesConfig dataclass

TracesConfig(enabled: bool = True, db_path: str = str(DEFAULT_CONFIG_DIR / 'traces.db'))

Trace system settings.

TelegramChannelConfig dataclass

TelegramChannelConfig(bot_token: str = '', allowed_chat_ids: str = '', parse_mode: str = 'Markdown')

Per-channel config for Telegram.

DiscordChannelConfig dataclass

DiscordChannelConfig(bot_token: str = '')

Per-channel config for Discord.

SlackChannelConfig dataclass

SlackChannelConfig(bot_token: str = '', app_token: str = '')

Per-channel config for Slack.

WebhookChannelConfig dataclass

WebhookChannelConfig(url: str = '', secret: str = '', method: str = 'POST')

Per-channel config for generic webhooks.

EmailChannelConfig dataclass

EmailChannelConfig(smtp_host: str = '', smtp_port: int = 587, imap_host: str = '', imap_port: int = 993, username: str = '', password: str = '', use_tls: bool = True)

Per-channel config for email (SMTP/IMAP).

WhatsAppChannelConfig dataclass

WhatsAppChannelConfig(access_token: str = '', phone_number_id: str = '')

Per-channel config for WhatsApp Cloud API.

SignalChannelConfig dataclass

SignalChannelConfig(api_url: str = '', phone_number: str = '')

Per-channel config for Signal (via signal-cli REST API).

GoogleChatChannelConfig dataclass

GoogleChatChannelConfig(webhook_url: str = '')

Per-channel config for Google Chat webhooks.

IRCChannelConfig dataclass

IRCChannelConfig(server: str = '', port: int = 6667, nick: str = '', password: str = '', use_tls: bool = False)

Per-channel config for IRC.

WebChatChannelConfig dataclass

WebChatChannelConfig()

Per-channel config for in-memory webchat.

TeamsChannelConfig dataclass

TeamsChannelConfig(app_id: str = '', app_password: str = '', service_url: str = '')

Per-channel config for Microsoft Teams (Bot Framework).

MatrixChannelConfig dataclass

MatrixChannelConfig(homeserver: str = '', access_token: str = '')

Per-channel config for Matrix.

MattermostChannelConfig dataclass

MattermostChannelConfig(url: str = '', token: str = '')

Per-channel config for Mattermost.

FeishuChannelConfig dataclass

FeishuChannelConfig(app_id: str = '', app_secret: str = '')

Per-channel config for Feishu (Lark).

BlueBubblesChannelConfig dataclass

BlueBubblesChannelConfig(url: str = '', password: str = '')

Per-channel config for BlueBubbles (iMessage bridge).

WhatsAppBaileysChannelConfig dataclass

WhatsAppBaileysChannelConfig(auth_dir: str = '', assistant_name: str = 'Jarvis', assistant_has_own_number: bool = False)

Per-channel config for WhatsApp via Baileys protocol.

CapabilitiesConfig dataclass

CapabilitiesConfig(enabled: bool = False, policy_path: str = '')

RBAC capability system settings.

SecurityConfig dataclass

SecurityConfig(enabled: bool = True, scan_input: bool = True, scan_output: bool = True, mode: str = 'redact', secret_scanner: bool = True, pii_scanner: bool = True, audit_log_path: str = str(DEFAULT_CONFIG_DIR / 'audit.db'), enforce_tool_confirmation: bool = True, merkle_audit: bool = True, signing_key_path: str = '', ssrf_protection: bool = True, rate_limit_enabled: bool = True, rate_limit_rpm: int = 60, rate_limit_burst: int = 10, local_engine_bypass: bool = False, local_tool_bypass: bool = False, profile: str = '', vault_key_path: str = str(DEFAULT_CONFIG_DIR / '.vault_key'), capabilities: CapabilitiesConfig = CapabilitiesConfig())

Security guardrails settings.

SandboxConfig dataclass

SandboxConfig(enabled: bool = False, image: str = 'openjarvis-sandbox:latest', timeout: int = 300, workspace: str = '', mount_allowlist_path: str = '', max_concurrent: int = 5, runtime: str = 'docker', wasm_fuel_limit: int = 1000000, wasm_memory_limit_mb: int = 256)

Container sandbox settings.

SchedulerConfig dataclass

SchedulerConfig(enabled: bool = False, poll_interval: int = 60, db_path: str = '')

Task scheduler settings.

WorkflowConfig dataclass

WorkflowConfig(enabled: bool = False, max_parallel: int = 4, default_node_timeout: int = 300)

Workflow engine settings.

SessionConfig dataclass

SessionConfig(enabled: bool = False, max_age_hours: float = 24.0, consolidation_threshold: int = 100, db_path: str = str(DEFAULT_CONFIG_DIR / 'sessions.db'))

Cross-channel session settings.

A2AConfig dataclass

A2AConfig(enabled: bool = False)

Agent-to-Agent protocol settings.

OperatorsConfig dataclass

OperatorsConfig(enabled: bool = False, manifests_dir: str = '~/.openjarvis/operators', auto_activate: str = '')

Operator lifecycle settings.

SpeechConfig dataclass

SpeechConfig(backend: str = 'auto', model: str = 'base', language: str = '', device: str = 'auto', compute_type: str = 'float16')

Speech-to-text settings.

OptimizeConfig dataclass

OptimizeConfig(max_trials: int = 20, early_stop_patience: int = 5, optimizer_model: str = 'claude-sonnet-4-6', optimizer_provider: str = 'anthropic', benchmark: str = '', max_samples: int = 50, judge_model: str = 'gpt-5-mini-2025-08-07', db_path: str = str(DEFAULT_CONFIG_DIR / 'optimize.db'))

Configuration optimization settings.

AgentManagerConfig dataclass

AgentManagerConfig(enabled: bool = True, db_path: str = str(DEFAULT_CONFIG_DIR / 'agents.db'))

Persistent agent manager settings.

MemoryFilesConfig dataclass

MemoryFilesConfig(soul_path: str = '~/.openjarvis/SOUL.md', memory_path: str = '~/.openjarvis/MEMORY.md', user_path: str = '~/.openjarvis/USER.md', nudge_interval: int = 10)

Persistent memory-file paths and nudge settings.

SystemPromptConfig dataclass

SystemPromptConfig(soul_max_chars: int = 4000, memory_max_chars: int = 2500, user_max_chars: int = 1500, skill_desc_max_chars: int = 60, truncation_strategy: str = 'head_tail')

Limits and strategy for system-prompt assembly.

CompressionConfig dataclass

CompressionConfig(enabled: bool = True, threshold: float = 0.5, strategy: str = 'session_consolidation')

Configuration for context compression.

SkillSourceConfig dataclass

SkillSourceConfig(source: str = '', url: str = '', filter: Dict[str, Any] = dict(), auto_update: bool = False)

Configuration for a single skill source (Hermes, OpenClaw, GitHub).

SkillsConfig dataclass

SkillsConfig(enabled: bool = True, skills_dir: str = '~/.openjarvis/skills/', active: str = '*', auto_discover: bool = True, auto_sync: bool = False, nudge_interval: int = 15, index_repo: str = 'https://github.com/openjarvis/skill-index.git', index_dir: str = '~/.openjarvis/skill-index/', max_depth: int = 5, sandbox_dangerous: bool = True, sources: List[SkillSourceConfig] = list())

Configuration for agent-authored procedural skills.

DigestSectionConfig dataclass

DigestSectionConfig(sources: List[str] = list(), max_items: int = 10, priority_contacts: List[str] = list())

Configuration for a single digest section.

DigestConfig dataclass

DigestConfig(enabled: bool = False, schedule: str = '0 6 * * *', timezone: str = 'America/Los_Angeles', persona: str = 'jarvis', sections: List[str] = (lambda: ['messages', 'calendar', 'health', 'world'])(), optional_sections: List[str] = (lambda: ['github', 'financial', 'music', 'fitness'])(), honorific: str = 'sir', voice_id: str = '', voice_speed: float = 1.0, tts_backend: str = 'cartesia', messages: DigestSectionConfig = (lambda: DigestSectionConfig(sources=['gmail', 'slack', 'google_tasks']))(), calendar: DigestSectionConfig = (lambda: DigestSectionConfig(sources=['gcalendar']))(), health: DigestSectionConfig = (lambda: DigestSectionConfig(sources=['oura', 'apple_health']))(), world: DigestSectionConfig = (lambda: DigestSectionConfig(sources=[]))())

Configuration for the morning digest feature.

JarvisConfig dataclass

Top-level configuration for OpenJarvis.

Attributes
memory property writable
memory: StorageConfig

Backward-compatible accessor — canonical location is tools.storage.

Functions

detect_hardware

detect_hardware() -> HardwareInfo

Auto-detect hardware capabilities with graceful fallbacks.

Source code in src/openjarvis/core/config.py
def detect_hardware() -> HardwareInfo:
    """Auto-detect hardware capabilities with graceful fallbacks."""
    gpu = _detect_nvidia_gpu() or _detect_amd_gpu() or _detect_apple_gpu()
    return HardwareInfo(
        platform=platform.system().lower(),
        cpu_brand=_detect_cpu_brand(),
        cpu_count=os.cpu_count() or 1,
        ram_gb=_total_ram_gb(),
        gpu=gpu,
    )

recommend_engine

recommend_engine(hw: HardwareInfo) -> str

Suggest the best inference engine for the detected hardware.

Source code in src/openjarvis/core/config.py
def recommend_engine(hw: HardwareInfo) -> str:
    """Suggest the best inference engine for the detected hardware."""
    gpu = hw.gpu
    if gpu is None:
        return "llamacpp"
    if gpu.vendor == "apple":
        return "mlx"
    if gpu.vendor == "nvidia":
        # Datacenter cards (A100, H100, L40, etc.) → vllm; consumer → ollama
        datacenter_keywords = ("A100", "H100", "H200", "L40", "A10", "A30")
        if any(kw in gpu.name for kw in datacenter_keywords):
            return "vllm"
        return "ollama"
    if gpu.vendor == "amd":
        # Datacenter cards (MI300, MI325, MI350, MI355) → vllm; consumer → lemonade
        amd_datacenter_keywords = ("MI300", "MI325", "MI350", "MI355")
        if any(kw in gpu.name for kw in amd_datacenter_keywords):
            return "vllm"
        return "lemonade"
    return "llamacpp"
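The decision table can be exercised without installing the package by pairing the logic above with stub dataclasses standing in for GpuInfo and HardwareInfo (only the fields the function reads):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class GpuInfo:
    vendor: str = ""
    name: str = ""


@dataclass
class HardwareInfo:
    gpu: Optional[GpuInfo] = None


def recommend_engine(hw: HardwareInfo) -> str:
    # Same decision table as the source above.
    gpu = hw.gpu
    if gpu is None:
        return "llamacpp"
    if gpu.vendor == "apple":
        return "mlx"
    if gpu.vendor == "nvidia":
        if any(kw in gpu.name for kw in ("A100", "H100", "H200", "L40", "A10", "A30")):
            return "vllm"
        return "ollama"
    if gpu.vendor == "amd":
        if any(kw in gpu.name for kw in ("MI300", "MI325", "MI350", "MI355")):
            return "vllm"
        return "lemonade"
    return "llamacpp"


assert recommend_engine(HardwareInfo()) == "llamacpp"                       # CPU only
assert recommend_engine(HardwareInfo(GpuInfo("apple", "M3 Max"))) == "mlx"
assert recommend_engine(HardwareInfo(GpuInfo("nvidia", "NVIDIA H100"))) == "vllm"
assert recommend_engine(HardwareInfo(GpuInfo("nvidia", "RTX 4090"))) == "ollama"
```

Note the substring match: "A10" also catches "A100" and "A10G" names, which is why both appear to work with the same keyword list.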

recommend_model

recommend_model(hw: HardwareInfo, engine: str) -> str

Suggest the best Qwen3.5 model that fits the detected hardware.

Uses an explicit tier table mapping available memory to model size. Falls back to scanning the full catalog if the tiered model is not compatible with the selected engine.

Source code in src/openjarvis/core/config.py
def recommend_model(hw: HardwareInfo, engine: str) -> str:
    """Suggest the best Qwen3.5 model that fits the detected hardware.

    Uses an explicit tier table mapping available memory to model size.
    Falls back to scanning the full catalog if the tiered model is not
    compatible with the selected engine.
    """
    from openjarvis.intelligence.model_catalog import BUILTIN_MODELS

    available_gb = _available_memory_gb(hw)
    if available_gb <= 0:
        return ""

    # Build a lookup for quick engine-compatibility checks
    catalog = {spec.model_id: spec for spec in BUILTIN_MODELS}

    # Try explicit tier mapping first
    model_id = _MODEL_TIER_FALLBACK
    for max_ram, tier_model in _MODEL_TIERS:
        if available_gb <= max_ram:
            model_id = tier_model
            break

    spec = catalog.get(model_id)
    if spec and engine in spec.supported_engines:
        return model_id

    # Fallback: scan all Qwen3.5 models for engine compatibility
    candidates = [
        s
        for s in BUILTIN_MODELS
        if s.provider == "alibaba"
        and s.model_id.startswith("qwen3.5:")
        and engine in s.supported_engines
    ]
    candidates.sort(key=lambda s: s.parameter_count_b, reverse=True)
    for s in candidates:
        estimated_gb = s.parameter_count_b * 0.5 * 1.1
        if estimated_gb <= available_gb:
            return s.model_id

    return ""

estimated_download_gb

estimated_download_gb(parameter_count_b: float) -> float

Estimate download size in GB for Q4_K_M quantized model.

Source code in src/openjarvis/core/config.py
def estimated_download_gb(parameter_count_b: float) -> float:
    """Estimate download size in GB for Q4_K_M quantized model."""
    return parameter_count_b * 0.5 * 1.1
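The formula assumes roughly 0.5 GB per billion parameters at Q4_K_M quantization, plus about 10% overhead. Worked through for two common sizes:

```python
def estimated_download_gb(parameter_count_b: float) -> float:
    # ~0.5 GB per billion parameters at Q4_K_M, plus ~10% overhead
    return parameter_count_b * 0.5 * 1.1


assert round(estimated_download_gb(8.0), 3) == 4.4    # 8B model  -> ~4.4 GB
assert round(estimated_download_gb(70.0), 3) == 38.5  # 70B model -> ~38.5 GB
```

The same expression appears inline in recommend_model's fallback scan when checking whether a candidate fits in available memory.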

apply_security_profile

apply_security_profile(security_cfg: 'SecurityConfig', server_cfg: 'ServerConfig | None', *, overrides: 'set[str] | None' = None) -> None

Expand a named security profile into config fields.

Fields in overrides (explicitly set by the user in TOML) are not overwritten by the profile.

Source code in src/openjarvis/core/config.py
def apply_security_profile(
    security_cfg: "SecurityConfig",
    server_cfg: "ServerConfig | None",
    *,
    overrides: "set[str] | None" = None,
) -> None:
    """Expand a named security profile into config fields.

    Fields in *overrides* (explicitly set by the user in TOML) are
    not overwritten by the profile.
    """
    profile = security_cfg.profile
    if not profile:
        return

    if profile not in _SECURITY_PROFILES:
        raise ValueError(
            f"Unknown security profile '{profile}'. "
            f"Valid profiles: {', '.join(_SECURITY_PROFILES)}"
        )

    _overrides = overrides or set()
    pdef = _SECURITY_PROFILES[profile]

    for key, value in pdef.get("security", {}).items():
        if key not in _overrides and hasattr(security_cfg, key):
            setattr(security_cfg, key, value)

    if server_cfg is not None:
        for key, value in pdef.get("server", {}).items():
            if key not in _overrides and hasattr(server_cfg, key):
                setattr(server_cfg, key, value)
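The override precedence can be illustrated standalone with a toy profile table and config class. The real profile contents live in _SECURITY_PROFILES, which is not shown on this page, so the "strict" profile below is hypothetical:

```python
from dataclasses import dataclass

# Hypothetical profile table -- the real _SECURITY_PROFILES is not documented here.
PROFILES = {
    "strict": {"security": {"mode": "block", "rate_limit_enabled": True}},
}


@dataclass
class SecurityConfig:
    profile: str = ""
    mode: str = "redact"
    rate_limit_enabled: bool = False


def apply_profile(cfg: SecurityConfig, overrides: set) -> None:
    # Profile values fill in fields, but keys the user set explicitly win.
    for key, value in PROFILES[cfg.profile]["security"].items():
        if key not in overrides and hasattr(cfg, key):
            setattr(cfg, key, value)


cfg = SecurityConfig(profile="strict", mode="warn")
apply_profile(cfg, overrides={"mode"})    # user explicitly set mode in TOML
assert cfg.mode == "warn"                 # user value preserved
assert cfg.rate_limit_enabled is True     # profile value applied
```

This mirrors how load_config passes the set of keys found under [security] in the user's TOML as overrides.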

validate_config_key

validate_config_key(dotted_key: str) -> type

Validate a dotted config key and return the leaf field's Python type.

Raises ValueError when the key does not map to a known field. The function walks the JarvisConfig dataclass hierarchy using dataclasses.fields().

Examples:

    validate_config_key("engine.ollama.host")        # -> str
    validate_config_key("intelligence.temperature")  # -> float

Source code in src/openjarvis/core/config.py
def validate_config_key(dotted_key: str) -> type:
    """Validate a dotted config key and return the leaf field's Python type.

    Raises :class:`ValueError` when the key does not map to a known field.
    The function walks the ``JarvisConfig`` dataclass hierarchy using
    ``dataclasses.fields()``.

    Examples::

        validate_config_key("engine.ollama.host")      # -> str
        validate_config_key("intelligence.temperature") # -> float
    """
    from dataclasses import fields as dc_fields

    parts = dotted_key.split(".")
    if len(parts) < 2:
        raise ValueError(
            f"Config key must have at least two segments (e.g. engine.default), "
            f"got: {dotted_key!r}"
        )

    if parts[0] not in _SETTABLE_SECTIONS:
        raise ValueError(
            f"Unknown config key: {dotted_key!r} "
            f"(valid top-level sections: {sorted(_SETTABLE_SECTIONS)})"
        )

    # Walk the dataclass tree
    current_cls = JarvisConfig
    for i, part in enumerate(parts):
        field_map = {f.name: f for f in dc_fields(current_cls)}
        if part not in field_map:
            path_so_far = ".".join(parts[: i + 1])
            raise ValueError(
                f"Unknown config key: {dotted_key!r} "
                f"(no field {part!r} at {path_so_far}; "
                f"valid fields: {sorted(field_map.keys())})"
            )
        fld = field_map[part]
        # Resolve the type — unwrap Optional, etc.
        fld_type = fld.type
        if isinstance(fld_type, str):
            # Evaluate forward references in the config module namespace
            import openjarvis.core.config as _cfg_mod

            fld_type = eval(fld_type, vars(_cfg_mod))  # noqa: S307

        if i == len(parts) - 1:
            # Leaf — return the primitive type
            return fld_type
        else:
            # Must be a nested dataclass
            if not hasattr(fld_type, "__dataclass_fields__"):
                path_so_far = ".".join(parts[: i + 1])
                raise ValueError(
                    f"Unknown config key: {dotted_key!r} "
                    f"({path_so_far} is a leaf of type {fld_type.__name__}, "
                    f"not a section)"
                )
            current_cls = fld_type

    # Should not reach here, but satisfy type checker
    raise ValueError(f"Unknown config key: {dotted_key!r}")
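The dataclass walk can be reproduced on a toy hierarchy; the stub classes below stand in for JarvisConfig and its nested sections:

```python
from dataclasses import dataclass, field, fields


@dataclass
class OllamaEngineConfig:
    host: str = ""


@dataclass
class EngineConfig:
    default: str = ""
    ollama: OllamaEngineConfig = field(default_factory=OllamaEngineConfig)


@dataclass
class RootConfig:
    engine: EngineConfig = field(default_factory=EngineConfig)


def leaf_type(dotted_key: str, root: type = RootConfig) -> type:
    current = root
    parts = dotted_key.split(".")
    for i, part in enumerate(parts):
        fmap = {f.name: f for f in fields(current)}
        if part not in fmap:
            raise ValueError(f"Unknown config key: {dotted_key!r}")
        t = fmap[part].type
        if isinstance(t, str):  # string annotations (PEP 563)
            t = eval(t)         # the real code evaluates against the config module's namespace
        if i == len(parts) - 1:
            return t            # leaf: return the primitive type
        if not hasattr(t, "__dataclass_fields__"):
            raise ValueError(f"{'.'.join(parts[: i + 1])} is a leaf, not a section")
        current = t
    raise ValueError(dotted_key)


assert leaf_type("engine.ollama.host") is str
assert leaf_type("engine.default") is str
```

Keys that descend through a leaf ("engine.default.x") or name a missing field raise ValueError, matching the real function's error paths.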

load_config cached

load_config(path: Optional[Path] = None) -> JarvisConfig

Detect hardware, build defaults, overlay TOML overrides.

PARAMETER DESCRIPTION
path

Explicit config file path. If omitted, OPENJARVIS_CONFIG is used when set, otherwise ~/.openjarvis/config.toml.

TYPE: Optional[Path] DEFAULT: None

Source code in src/openjarvis/core/config.py
@functools.lru_cache(maxsize=1)
def load_config(path: Optional[Path] = None) -> JarvisConfig:
    """Detect hardware, build defaults, overlay TOML overrides.

    Parameters
    ----------
    path:
        Explicit config file. If not set, uses ``OPENJARVIS_CONFIG`` when set,
        otherwise ``~/.openjarvis/config.toml``.
    """
    _ensure_config_dir()
    hw = detect_hardware()
    cfg = JarvisConfig(hardware=hw)
    cfg.engine.default = recommend_engine(hw)

    if path is not None:
        config_path = Path(path)
    elif os.environ.get("OPENJARVIS_CONFIG"):
        config_path = Path(os.environ["OPENJARVIS_CONFIG"]).expanduser().resolve()
    else:
        config_path = DEFAULT_CONFIG_PATH
    if config_path.exists():
        with open(config_path, "rb") as fh:
            data = tomllib.load(fh)

        # Run backward-compat migrations before applying
        _migrate_toml_data(data, cfg)

        # All top-level sections — recursive _apply_toml_section handles
        # nested sub-configs (engine.ollama, learning.routing, channel.*, etc.)
        top_sections = (
            "engine",
            "intelligence",
            "learning",
            "agent",
            "server",
            "telemetry",
            "traces",
            "security",
            "channel",
            "tools",
            "sandbox",
            "scheduler",
            "workflow",
            "sessions",
            "a2a",
            "operators",
            "speech",
            "optimize",
            "agent_manager",
            "digest",
        )
        for section_name in top_sections:
            if section_name in data:
                _apply_toml_section(
                    getattr(cfg, section_name),
                    data[section_name],
                )

        # Memory: accept [memory] (old) → maps to tools.storage
        if "memory" in data:
            _apply_toml_section(cfg.tools.storage, data["memory"])

        # Expand security profile (user TOML overrides take precedence)
        _user_security_keys = set(data.get("security", {}).keys())
        apply_security_profile(cfg.security, cfg.server, overrides=_user_security_keys)

    # Apply profile even without a config file (in case defaults set one)
    if not config_path.exists() and cfg.security.profile:
        apply_security_profile(cfg.security, cfg.server)

    return cfg
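Because load_config is wrapped in functools.lru_cache(maxsize=1), repeated calls with the same argument return the same JarvisConfig object; editing config.toml mid-process has no effect until the cache is cleared. The caching behavior in isolation, with a stand-in loader:

```python
import functools


@functools.lru_cache(maxsize=1)
def load_settings():
    # Stand-in for load_config(): the body runs only once per cache entry.
    return {"engine": "ollama"}


a = load_settings()
b = load_settings()
assert a is b                 # same cached object, loader body not re-run

load_settings.cache_clear()   # force a reload on the next call
c = load_settings()
assert c is not a
```

Callers that change the config file at runtime (tests, `jarvis init`) would need load_config.cache_clear() before the new file is picked up.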

generate_minimal_toml

generate_minimal_toml(hw: HardwareInfo, engine: str | None = None, *, host: str | None = None) -> str

Render a minimal TOML config with only essential settings.

Source code in src/openjarvis/core/config.py
def generate_minimal_toml(
    hw: HardwareInfo, engine: str | None = None, *, host: str | None = None
) -> str:
    """Render a minimal TOML config with only essential settings."""
    engine = engine or recommend_engine(hw)
    model = recommend_model(hw, engine)
    gpu_comment = ""
    if hw.gpu:
        mem_label = "unified memory" if hw.gpu.vendor == "apple" else "VRAM"
        gpu_comment = f"\n# GPU: {hw.gpu.name} ({hw.gpu.vram_gb} GB {mem_label})"
    if host:
        engine_host_section = f'\n[engine.{engine}]\nhost = "{host}"\n'
    else:
        engine_host_section = (
            f"\n[engine.{engine}]\n"
            f'# host = "http://localhost:11434"  '
            f"# set to remote URL if engine runs elsewhere\n"
        )
    return f"""\
# OpenJarvis configuration
# Hardware: {hw.cpu_brand} ({hw.cpu_count} cores, {hw.ram_gb} GB RAM){gpu_comment}
# Full reference config: jarvis init --full

[engine]
default = "{engine}"
{engine_host_section}
[intelligence]
default_model = "{model}"

[agent]
default_agent = "simple"

[tools]
enabled = ["code_interpreter", "web_search", "file_read", "shell_exec"]
"""

generate_default_toml

generate_default_toml(hw: HardwareInfo, engine: str | None = None, *, host: str | None = None) -> str

Render a commented TOML string suitable for ~/.openjarvis/config.toml.

Source code in src/openjarvis/core/config.py
def generate_default_toml(
    hw: HardwareInfo, engine: str | None = None, *, host: str | None = None
) -> str:
    """Render a commented TOML string suitable for ``~/.openjarvis/config.toml``."""
    engine = engine or recommend_engine(hw)
    model = recommend_model(hw, engine)
    gpu_line = ""
    if hw.gpu:
        gpu_line = f"# Detected GPU: {hw.gpu.name} ({hw.gpu.vram_gb} GB VRAM)"

    model_comment = ""
    if model:
        model_comment = "  # recommended for your hardware"

    result = f"""\
# OpenJarvis configuration
# Generated by `jarvis init`
#
# Hardware: {hw.cpu_brand} ({hw.cpu_count} cores, {hw.ram_gb} GB RAM)
{gpu_line}

[engine]
default = "{engine}"

[engine.ollama]
host = "http://localhost:11434"

[engine.vllm]
host = "http://localhost:8000"

[engine.sglang]
host = "http://localhost:30000"

# [engine.llamacpp]
# host = "http://localhost:8080"
# binary_path = ""

[engine.mlx]
host = "http://localhost:8080"

# [engine.lmstudio]
# host = "http://localhost:1234"

# [engine.exo]
# host = "http://localhost:52415"

# [engine.nexa]
# host = "http://localhost:18181"
# device = ""  # cpu, gpu, npu

# [engine.uzu]
# host = "http://localhost:8080"

# [engine.apple_fm]
# host = "http://localhost:8079"

[intelligence]
default_model = "{model}"{model_comment}
fallback_model = ""
# model_path = ""              # Local weights (HF repo, GGUF file, etc.)
# checkpoint_path = ""         # Checkpoint/adapter path
# quantization = "none"        # none, fp8, int8, int4, gguf_q4, gguf_q8
# preferred_engine = ""        # Override engine for this model (e.g., "vllm")
# provider = ""                # local, openai, anthropic, google
temperature = 0.7
max_tokens = 1024
# top_p = 0.9
# top_k = 40
# repetition_penalty = 1.0
# stop_sequences = ""

[agent]
default_agent = "simple"
max_turns = 10
# tools = ""                   # Comma-separated tool names
# objective = ""               # Concise purpose string
# system_prompt = ""           # Inline system prompt
# system_prompt_path = ""      # Path to system prompt file
context_from_memory = true

[tools.storage]
default_backend = "sqlite"

[tools.mcp]
enabled = true

# [tools.browser]
# headless = true
# timeout_ms = 30000
# viewport_width = 1280
# viewport_height = 720

[server]
host = "0.0.0.0"
port = 8000
agent = "orchestrator"

[learning]
enabled = false
update_interval = 100
# auto_update = false

[learning.routing]
policy = "heuristic"
# min_samples = 5

# [learning.intelligence]
# policy = "none"              # "sft" to learn from traces

# [learning.agent]
# policy = "none"              # "agent_advisor" | "icl_updater"

# [learning.metrics]
# accuracy_weight = 0.6
# latency_weight = 0.2
# cost_weight = 0.1
# efficiency_weight = 0.1

[telemetry]
enabled = true
# gpu_metrics = false
# gpu_poll_interval_ms = 50

[traces]
enabled = false

[channel]
enabled = false
default_agent = "simple"

# [channel.telegram]
# bot_token = ""  # Or set TELEGRAM_BOT_TOKEN env var

# [channel.discord]
# bot_token = ""  # Or set DISCORD_BOT_TOKEN env var

# [channel.slack]
# bot_token = ""  # Or set SLACK_BOT_TOKEN env var

# [channel.webhook]
# url = ""

# [channel.whatsapp]
# access_token = ""      # Or set WHATSAPP_ACCESS_TOKEN env var
# phone_number_id = ""   # Or set WHATSAPP_PHONE_NUMBER_ID env var

# [channel.signal]
# api_url = ""            # signal-cli REST API URL
# phone_number = ""       # Or set SIGNAL_PHONE_NUMBER env var

# [channel.google_chat]
# webhook_url = ""        # Or set GOOGLE_CHAT_WEBHOOK_URL env var

# [channel.irc]
# server = ""
# port = 6667
# nick = ""
# use_tls = false

# [channel.teams]
# app_id = ""             # Or set TEAMS_APP_ID env var
# app_password = ""       # Or set TEAMS_APP_PASSWORD env var

# [channel.matrix]
# homeserver = ""         # Or set MATRIX_HOMESERVER env var
# access_token = ""       # Or set MATRIX_ACCESS_TOKEN env var

# [channel.mattermost]
# url = ""                # Or set MATTERMOST_URL env var
# token = ""              # Or set MATTERMOST_TOKEN env var

# [channel.feishu]
# app_id = ""             # Or set FEISHU_APP_ID env var
# app_secret = ""         # Or set FEISHU_APP_SECRET env var

# [channel.bluebubbles]
# url = ""                # Or set BLUEBUBBLES_URL env var
# password = ""           # Or set BLUEBUBBLES_PASSWORD env var

[security]
enabled = true
mode = "warn"
scan_input = true
scan_output = true
secret_scanner = true
pii_scanner = true
enforce_tool_confirmation = true
ssrf_protection = true
# rate_limit_enabled = false
# rate_limit_rpm = 60
# rate_limit_burst = 10

# [sandbox]
# enabled = false
# image = "openjarvis-sandbox:latest"
# timeout = 300
# max_concurrent = 5
# runtime = "docker"

# [scheduler]
# enabled = false
# poll_interval = 60
# db_path = ""                # Defaults to ~/.openjarvis/scheduler.db

# [channel.whatsapp_baileys]
# auth_dir = ""               # Defaults to ~/.openjarvis/whatsapp_auth
# assistant_name = "Jarvis"
# assistant_has_own_number = false
"""
    if host:
        import re as _re

        pattern = _re.escape(f"[engine.{engine}]") + r"\nhost = \"[^\"]*\""
        replacement = f'[engine.{engine}]\\nhost = "{host}"'
        result = _re.sub(pattern, replacement, result)
    return result