types

Canonical data types shared across all OpenJarvis primitives.

Classes

Role

Bases: str, Enum

Chat message roles (OpenAI-compatible).

Quantization

Bases: str, Enum

Model quantization formats.

StepType

Bases: str, Enum

Types of steps within an agent trace.

ToolCall dataclass

ToolCall(id: str, name: str, arguments: str)

A single tool invocation request embedded in an assistant message.

Message dataclass

Message(role: Role, content: str = '', name: Optional[str] = None, tool_calls: Optional[List[ToolCall]] = None, tool_call_id: Optional[str] = None, metadata: Dict[str, Any] = dict())

A single chat message (OpenAI-compatible structure).
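A minimal usage sketch of `ToolCall` and `Message`. Stand-in definitions mirroring the signatures above are inlined so the snippet runs standalone (in practice you would import from `openjarvis.core.types`); the `Role` member names here are illustrative.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Any, Dict, List, Optional


class Role(str, Enum):  # stand-in; member names are illustrative
    SYSTEM = "system"
    USER = "user"
    ASSISTANT = "assistant"
    TOOL = "tool"


@dataclass
class ToolCall:  # stand-in mirroring the signature above
    id: str
    name: str
    arguments: str  # JSON-encoded arguments, following OpenAI conventions


@dataclass
class Message:  # stand-in mirroring the signature above
    role: Role
    content: str = ""
    name: Optional[str] = None
    tool_calls: Optional[List[ToolCall]] = None
    tool_call_id: Optional[str] = None
    metadata: Dict[str, Any] = field(default_factory=dict)


# An assistant message requesting a tool invocation, plus the tool's reply,
# linked via tool_call_id as in the OpenAI chat format.
call = ToolCall(id="call_1", name="get_weather", arguments='{"city": "Oslo"}')
assistant = Message(role=Role.ASSISTANT, tool_calls=[call])
tool_reply = Message(role=Role.TOOL, content='{"temp_c": 4}', tool_call_id=call.id)

print(assistant.role.value)     # "assistant"
print(tool_reply.tool_call_id)  # "call_1"
```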

Conversation dataclass

Conversation(messages: List[Message] = list(), max_messages: Optional[int] = None)

Ordered list of messages with an optional sliding-window cap.

Functions
add
add(message: Message) -> None

Append a message, trimming oldest if max_messages is set.

Source code in src/openjarvis/core/types.py
def add(self, message: Message) -> None:
    """Append a message, trimming oldest if *max_messages* is set."""
    self.messages.append(message)
    if self.max_messages is not None and len(self.messages) > self.max_messages:
        self.messages = self.messages[-self.max_messages :]
window
window(n: int) -> List[Message]

Return the last n messages.

Source code in src/openjarvis/core/types.py
def window(self, n: int) -> List[Message]:
    """Return the last *n* messages."""
    if n <= 0:
        return []
    return self.messages[-n:]
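Together, `add` and `window` give a bounded chat buffer. A runnable sketch: the `Conversation` methods below are copied from the source listings above, and `Message` is reduced to the fields the example needs.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Message:  # reduced stand-in; see the full signature above
    role: str
    content: str = ""


@dataclass
class Conversation:  # methods copied from the source listings above
    messages: List[Message] = field(default_factory=list)
    max_messages: Optional[int] = None

    def add(self, message: Message) -> None:
        """Append a message, trimming oldest if *max_messages* is set."""
        self.messages.append(message)
        if self.max_messages is not None and len(self.messages) > self.max_messages:
            self.messages = self.messages[-self.max_messages :]

    def window(self, n: int) -> List[Message]:
        """Return the last *n* messages."""
        if n <= 0:
            return []
        return self.messages[-n:]


conv = Conversation(max_messages=3)
for i in range(5):
    conv.add(Message(role="user", content=f"msg {i}"))

print([m.content for m in conv.messages])   # ['msg 2', 'msg 3', 'msg 4']
print([m.content for m in conv.window(2)])  # ['msg 3', 'msg 4']
print(conv.window(0))                       # []
```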

ModelSpec dataclass

ModelSpec(model_id: str, name: str, parameter_count_b: float, context_length: int, active_parameter_count_b: Optional[float] = None, quantization: Quantization = NONE, min_vram_gb: float = 0.0, supported_engines: Sequence[str] = (), provider: str = '', requires_api_key: bool = False, metadata: Dict[str, Any] = dict())

Metadata describing a language model.
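One common use of `ModelSpec` is filtering candidate models by hardware constraints. A sketch under illustrative model IDs and values (the stand-in below keeps only the fields the example touches):

```python
from dataclasses import dataclass
from typing import Sequence


@dataclass
class ModelSpec:  # stand-in with the fields this example uses
    model_id: str
    name: str
    parameter_count_b: float
    context_length: int
    min_vram_gb: float = 0.0
    supported_engines: Sequence[str] = ()
    requires_api_key: bool = False


specs = [
    ModelSpec("tiny-1b", "Tiny 1B", 1.0, 8192, min_vram_gb=2.0,
              supported_engines=("llama.cpp",)),
    ModelSpec("big-70b", "Big 70B", 70.0, 32768, min_vram_gb=40.0,
              supported_engines=("vllm",)),
]

# Keep only models whose minimum VRAM requirement fits the available GPU.
available_vram_gb = 8.0
runnable = [s for s in specs if s.min_vram_gb <= available_vram_gb]
print([s.model_id for s in runnable])  # ['tiny-1b']
```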

ToolResult dataclass

ToolResult(tool_name: str, content: str, success: bool = True, usage: Dict[str, Any] = dict(), cost_usd: float = 0.0, latency_seconds: float = 0.0, metadata: Dict[str, Any] = dict())

Result returned by a tool invocation.

TelemetryRecord dataclass

TelemetryRecord(timestamp: float, model_id: str, prompt_tokens: int = 0, completion_tokens: int = 0, total_tokens: int = 0, latency_seconds: float = 0.0, ttft: float = 0.0, cost_usd: float = 0.0, energy_joules: float = 0.0, power_watts: float = 0.0, gpu_utilization_pct: float = 0.0, gpu_memory_used_gb: float = 0.0, gpu_temperature_c: float = 0.0, throughput_tok_per_sec: float = 0.0, energy_per_output_token_joules: float = 0.0, throughput_per_watt: float = 0.0, prefill_latency_seconds: float = 0.0, decode_latency_seconds: float = 0.0, prefill_energy_joules: float = 0.0, decode_energy_joules: float = 0.0, mean_itl_ms: float = 0.0, median_itl_ms: float = 0.0, p90_itl_ms: float = 0.0, p95_itl_ms: float = 0.0, p99_itl_ms: float = 0.0, std_itl_ms: float = 0.0, is_streaming: bool = False, engine: str = '', agent: str = '', energy_method: str = '', energy_vendor: str = '', batch_id: str = '', is_warmup: bool = False, cpu_energy_joules: float = 0.0, gpu_energy_joules: float = 0.0, dram_energy_joules: float = 0.0, tokens_per_joule: float = 0.0, metadata: Dict[str, Any] = dict())

Single telemetry observation recorded after an inference call.
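The record stores derived efficiency metrics (throughput, energy per token) alongside the raw measurements. The source does not state who populates the derived fields; the arithmetic below assumes the obvious definitions and is a sketch, not the library's computation. Note that `tokens_per_joule` and `throughput_per_watt` coincide numerically, since watts are joules per second.

```python
# Raw measurements from one inference call (illustrative values).
completion_tokens = 256
latency_seconds = 4.0
energy_joules = 800.0
power_watts = 200.0

# Derived metrics, assuming the obvious definitions.
throughput_tok_per_sec = completion_tokens / latency_seconds        # 64.0
energy_per_output_token_joules = energy_joules / completion_tokens  # 3.125
tokens_per_joule = completion_tokens / energy_joules                # 0.32
throughput_per_watt = throughput_tok_per_sec / power_watts          # 0.32

print(throughput_tok_per_sec, energy_per_output_token_joules, tokens_per_joule)
```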

TraceStep dataclass

TraceStep(step_type: StepType, timestamp: float, duration_seconds: float = 0.0, input: Dict[str, Any] = dict(), output: Dict[str, Any] = dict(), metadata: Dict[str, Any] = dict())

A single step within an agent trace.

Each step records what the agent did (route, retrieve, generate, tool_call, respond), its inputs and outputs, and timing.

Trace dataclass

Trace(trace_id: str = _trace_id(), query: str = '', agent: str = '', model: str = '', engine: str = '', steps: List[TraceStep] = list(), result: str = '', outcome: Optional[str] = None, feedback: Optional[float] = None, started_at: float = 0.0, ended_at: float = 0.0, total_tokens: int = 0, total_latency_seconds: float = 0.0, metadata: Dict[str, Any] = dict())

Complete trace of an agent handling a query.

A trace captures the full sequence of steps an agent took to handle a query — which model was selected, what memory was retrieved, which tools were called, and the final response. Traces are the primary input to the learning system: by analyzing which decisions led to good outcomes, the system can improve routing, tool selection, and memory strategies.

Functions
add_step
add_step(step: TraceStep) -> None

Append a step and update running totals.

Source code in src/openjarvis/core/types.py
def add_step(self, step: TraceStep) -> None:
    """Append a step and update running totals."""
    self.steps.append(step)
    self.total_latency_seconds += step.duration_seconds
    self.total_tokens += step.output.get("tokens", 0)
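A runnable sketch of building a trace step by step. The `add_step` body is copied from the source listing above; the surrounding dataclasses are stand-ins reduced to the fields this example needs, and the `StepType` member names are illustrative.

```python
import time
from dataclasses import dataclass, field
from enum import Enum
from typing import Any, Dict, List


class StepType(str, Enum):  # stand-in; member names are illustrative
    RETRIEVE = "retrieve"
    GENERATE = "generate"


@dataclass
class TraceStep:  # stand-in mirroring the signature above
    step_type: StepType
    timestamp: float
    duration_seconds: float = 0.0
    input: Dict[str, Any] = field(default_factory=dict)
    output: Dict[str, Any] = field(default_factory=dict)


@dataclass
class Trace:  # add_step copied from the source listing above
    steps: List[TraceStep] = field(default_factory=list)
    total_tokens: int = 0
    total_latency_seconds: float = 0.0

    def add_step(self, step: TraceStep) -> None:
        """Append a step and update running totals."""
        self.steps.append(step)
        self.total_latency_seconds += step.duration_seconds
        self.total_tokens += step.output.get("tokens", 0)


trace = Trace()
trace.add_step(TraceStep(StepType.RETRIEVE, time.time(), duration_seconds=0.1))
trace.add_step(TraceStep(StepType.GENERATE, time.time(), duration_seconds=1.4,
                         output={"tokens": 128}))

print(trace.total_tokens)                     # 128
print(round(trace.total_latency_seconds, 1))  # 1.5
```

A step without a `"tokens"` key (like the retrieve step above) contributes latency but no tokens, thanks to the `dict.get` default.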

RoutingContext dataclass

RoutingContext(query: str = '', query_length: int = 0, has_code: bool = False, has_math: bool = False, language: str = 'en', urgency: float = 0.5, metadata: Dict[str, Any] = dict())

Context describing a query for model routing decisions.
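Fields like `has_code` and `has_math` are presumably derived from the query before routing. A hypothetical feature-extraction sketch; the detection rules below are illustrative, not the library's actual heuristics:

```python
from dataclasses import dataclass, field
from typing import Any, Dict


@dataclass
class RoutingContext:  # stand-in mirroring the signature above
    query: str = ""
    query_length: int = 0
    has_code: bool = False
    has_math: bool = False
    language: str = "en"
    urgency: float = 0.5
    metadata: Dict[str, Any] = field(default_factory=dict)


def build_context(query: str) -> RoutingContext:
    """Hypothetical feature extraction; the real heuristics may differ."""
    return RoutingContext(
        query=query,
        query_length=len(query),
        has_code="```" in query or "def " in query,
        has_math=any(sym in query for sym in ("=", "\\int", "\\sum")),
    )


ctx = build_context("How do I fix this?\n```python\ndef f(): pass\n```")
print(ctx.has_code, ctx.has_math)  # True False
```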