
litellm

LiteLLM inference engine — unified access to 100+ LLM providers.

Classes

LiteLLMEngine

LiteLLMEngine(*, api_base: str | None = None, default_model: str | None = None)

Bases: InferenceEngine

Inference via LiteLLM — routes to any supported provider.

LiteLLM normalizes all providers (OpenAI, Anthropic, Google, DeepSeek, Groq, Together, Fireworks, OpenRouter, Mistral, Cohere, xAI, Perplexity, etc.) to OpenAI-format input/output. Model selection uses LiteLLM's provider/model convention, e.g. anthropic/claude-sonnet-4-20250514.

API keys are read from environment variables following each provider's convention (OPENAI_API_KEY, ANTHROPIC_API_KEY, GROQ_API_KEY, etc.).
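To make the two conventions above concrete, here is an illustrative sketch (not LiteLLM's actual resolution code) of how a `provider/model` string maps to the provider prefix and to the conventional environment-variable name for its API key:

```python
import os


def provider_key_env(model: str) -> str:
    """Illustrative only: derive the conventional API-key env var name
    from a LiteLLM-style "provider/model" string."""
    provider = model.split("/", 1)[0]      # e.g. "anthropic"
    return f"{provider.upper()}_API_KEY"   # e.g. "ANTHROPIC_API_KEY"


def key_is_set(model: str) -> bool:
    """Check whether the key for this model's provider is present."""
    return bool(os.environ.get(provider_key_env(model)))


provider_key_env("anthropic/claude-sonnet-4-20250514")  # "ANTHROPIC_API_KEY"
provider_key_env("groq/llama-3.3-70b-versatile")        # "GROQ_API_KEY"
```

Note that `provider_key_env` and `key_is_set` are hypothetical helpers for this page; in practice LiteLLM reads the appropriate variable itself when a request is made.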

Source code in src/openjarvis/engine/litellm.py
def __init__(
    self,
    *,
    api_base: str | None = None,
    default_model: str | None = None,
) -> None:
    self._api_base = api_base
    self._default_model = default_model

Functions