Downloads¶
OpenJarvis runs entirely on your hardware. Choose the interface that fits your workflow.
Desktop App¶
The desktop app is a native window for the OpenJarvis chat UI. All inference and backend processing happens on your local machine — the app connects to the backend you start locally.
Backend required
Start the backend before opening the desktop app. The quickstart script handles everything:
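A typical run looks like the following; the script name `quickstart.sh` and its location at the repository root are assumptions here, so check the repository for the actual entry point:

```bash
git clone https://github.com/open-jarvis/OpenJarvis.git
cd OpenJarvis
./quickstart.sh
```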
Download¶
| Platform | Download | Notes |
|---|---|---|
| macOS (Apple Silicon) | OpenJarvis.dmg | M1/M2/M3/M4 Macs |
| Windows (64-bit) | OpenJarvis-setup.exe | Windows 10+ |
| Linux (DEB) | OpenJarvis.deb | Ubuntu, Debian |
| Linux (RPM) | OpenJarvis.rpm | Fedora, RHEL |
| Linux (AppImage) | OpenJarvis.AppImage | Any distro |
All releases
Browse all versions on the GitHub Releases page.
macOS: "app is damaged" fix¶
macOS Gatekeeper quarantines apps downloaded from the internet that aren't notarized by Apple. If you see "OpenJarvis is damaged and can't be opened", run this in Terminal to clear the quarantine flag:
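Assuming the bundle is installed as `OpenJarvis.app` in `/Applications` (adjust the name if yours differs):

```bash
# Remove the com.apple.quarantine attribute recursively from the app bundle
xattr -dr com.apple.quarantine /Applications/OpenJarvis.app
```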
Then open the app normally. If you installed from the DMG but haven't moved it to
/Applications yet, point the command at wherever the .app bundle is:
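For example, if the bundle is still in your Downloads folder (the path below is illustrative):

```bash
xattr -dr com.apple.quarantine ~/Downloads/OpenJarvis.app
```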
Note
This is standard for open-source macOS apps distributed outside the App Store. The command removes the quarantine extended attribute — it does not modify the app.
What's included¶
The desktop app provides:
- Full chat UI — same interface as the browser app, in a native window
- Energy monitoring — real-time power consumption tracking
- Telemetry dashboard — token throughput, latency, and cost comparison vs. cloud models
- System tray — quick access without keeping a terminal open
The backend (Ollama, Python API server, inference) runs separately on your machine.
Build from source¶
```bash
git clone https://github.com/open-jarvis/OpenJarvis.git
cd OpenJarvis/desktop
npm install
npm run tauri build
```
The built installer will be in `desktop/src-tauri/target/release/bundle/`.
Browser App¶
Run the full chat UI in your browser. Everything stays local — the backend runs on
your machine and the frontend connects via localhost.
One-command setup¶
The script handles everything:
- Checks for Python 3.10+ and Node.js 18+
- Installs Ollama if not present and pulls a starter model
- Installs Python and frontend dependencies
- Starts the backend API server and frontend dev server
- Opens http://localhost:5173 in your browser
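The steps above are handled by the quickstart script. A typical invocation, assuming the script ships as `quickstart.sh` at the repository root (verify the actual name in the repository):

```bash
git clone https://github.com/open-jarvis/OpenJarvis.git
cd OpenJarvis
./quickstart.sh
```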
Manual setup¶
If you prefer to run each step yourself:
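A sketch of the individual steps. The `jarvis serve --port 8000` command appears elsewhere in these docs; the `frontend/` directory name and the `qwen3:8b` starter model are assumptions, so verify them against the repository:

```bash
# 1. Start the inference backend and pull a model
ollama serve &
ollama pull qwen3:8b

# 2. Install the Python package and start the API server
pip install openjarvis
jarvis serve --port 8000 &

# 3. Install and start the frontend dev server
cd frontend
npm install
npm run dev
```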
Then open http://localhost:5173.
What you get¶
- Chat interface — markdown rendering, streaming responses, conversation history
- Tool use — calculator, web search, code interpreter, file I/O
- System panel — live telemetry, energy monitoring, cost comparison vs. cloud models
- Dashboard — energy graphs, trace debugging, cost breakdown
- Settings — model selection, agent configuration, theme toggle
CLI¶
The command-line interface is the fastest way to interact with OpenJarvis programmatically. Every feature is accessible from the terminal.
Install¶
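The package name `openjarvis` below is inferred from the SDK import used later on this page; if the CLI ships as a separate package, check the project README:

```bash
pip install openjarvis
```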
Verify¶
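To confirm the install, run the diagnostics command documented below; the `--version` flag is an assumption:

```bash
jarvis --version
jarvis doctor
```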
First commands¶
```bash
# Ask a question
jarvis ask "What is the capital of France?"

# Use an agent with tools
jarvis ask --agent orchestrator --tools calculator "What is 137 * 42?"

# Start the API server
jarvis serve --port 8000

# Run diagnostics
jarvis doctor

# List available models
jarvis model list

# Interactive chat
jarvis chat
```
Inference backend required
The CLI requires a running inference backend (e.g., Ollama). See the Installation guide for setup instructions.
Python SDK¶
For programmatic access, the Jarvis class provides a high-level sync API.
Install¶
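Based on the import path used in the examples below, the package is assumed to install as `openjarvis`; check the project README for the authoritative package name:

```bash
pip install openjarvis
```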
Quick example¶
```python
from openjarvis import Jarvis

j = Jarvis()
print(j.ask("Explain quicksort in two sentences."))
j.close()
```
With agents and tools¶
```python
result = j.ask_full(
    "What is the square root of 144?",
    agent="orchestrator",
    tools=["calculator", "think"],
)
print(result["content"])       # "12"
print(result["tool_results"])  # tool invocations
print(result["turns"])         # number of agent turns
```
Composition layer¶
For full control, use the SystemBuilder:
```python
from openjarvis import SystemBuilder

system = (
    SystemBuilder()
    .engine("ollama")
    .model("qwen3:8b")
    .agent("orchestrator")
    .tools(["calculator", "web_search", "file_read"])
    .enable_telemetry()
    .enable_traces()
    .build()
)

result = system.ask("Summarize the latest AI news.")
system.close()
```
See the Python SDK guide for the full API reference.