OpenJarvis: Local-first AI agents that run entirely on-device

Stanford University researchers have released OpenJarvis, an open-source framework designed for building personal artificial intelligence agents that operate entirely on-device.

The framework aims to reduce the latency, recurring costs, and data-exposure concerns associated with cloud-based AI by prioritizing local execution. This approach positions local AI as the default, with the cloud as an optional fallback.

OpenJarvis originates from Stanford’s Scaling Intelligence Lab. It functions as both a research platform and a deployment infrastructure for local-first AI systems.

The project emphasizes the complete software stack needed for on-device agents, including usability, measurement, and long-term adaptability. The research cites prior work, “Intelligence Per Watt,” which found that local language models could handle 88.7% of chat and reasoning queries at interactive latencies; according to the team, efficiency improved 5.3x between 2023 and 2025.

OpenJarvis employs a “Five-Primitives” architecture: Intelligence, Engine, Agents, Tools & Memory, and Learning. These primitives function as composable abstractions for independent benchmarking and optimization.

The Intelligence primitive acts as the model layer, providing a unified catalog for various local model families. This abstraction allows developers to select models without manually tracking parameter counts or hardware fit.
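The idea of selecting a model by hardware fit rather than by memorized parameter counts can be sketched as follows. This is an illustrative sketch, not OpenJarvis's actual catalog API; the class, field, and model names are assumptions.

```python
# Hypothetical model catalog that picks a model by hardware fit.
# All names and numbers here are illustrative.
from dataclasses import dataclass


@dataclass
class ModelEntry:
    name: str
    params_b: float     # parameter count, in billions
    min_vram_gb: float  # rough memory needed to run it locally


CATALOG = [
    ModelEntry("small-chat", 3, 4),
    ModelEntry("mid-reasoner", 8, 10),
    ModelEntry("large-coder", 70, 48),
]


def pick_model(available_vram_gb: float) -> ModelEntry:
    """Return the largest catalog model that fits the device's memory budget."""
    fitting = [m for m in CATALOG if m.min_vram_gb <= available_vram_gb]
    if not fitting:
        raise RuntimeError("no local model fits this device")
    return max(fitting, key=lambda m: m.params_b)


print(pick_model(12.0).name)  # on a 12 GB GPU, selects "mid-reasoner"
```

The point of the abstraction is that application code asks for "the best model that fits" and never hard-codes a specific checkpoint.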

The Engine primitive serves as the inference runtime, offering a common interface for backends such as Ollama, vLLM, SGLang, llama.cpp, and cloud APIs. It includes commands like “jarvis init” to detect hardware and recommend configurations, and “jarvis doctor” for maintenance.
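A common interface over interchangeable backends typically looks like the sketch below. The class and method names are hypothetical, not OpenJarvis's real SDK; a real implementation would call each runtime's own API inside `generate`.

```python
# Illustrative backend-agnostic inference interface, in the spirit of the
# Engine primitive. Class and method names are assumptions.
from abc import ABC, abstractmethod


class InferenceBackend(ABC):
    @abstractmethod
    def generate(self, prompt: str) -> str: ...


class OllamaBackend(InferenceBackend):
    def generate(self, prompt: str) -> str:
        # A real implementation would call the local Ollama HTTP API here.
        return f"[ollama] {prompt}"


class CloudBackend(InferenceBackend):
    def generate(self, prompt: str) -> str:
        # Optional cloud path, keeping local-first as the default.
        return f"[cloud] {prompt}"


def run(backend: InferenceBackend, prompt: str) -> str:
    # Caller code is identical regardless of which backend is plugged in.
    return backend.generate(prompt)


print(run(OllamaBackend(), "hello"))
```

Swapping Ollama for vLLM, SGLang, or llama.cpp then means swapping one constructor, not rewriting agent code.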

The Agents primitive forms the behavior layer, translating model capabilities into structured actions under device constraints. It supports composable roles, including an Orchestrator for task breakdown and an Operative for personal workflows.
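The division of labor between the two roles can be sketched like this: an Orchestrator that decomposes a task and an Operative that executes each step. This is a toy illustration under assumed names; in the real system, planning would be driven by a local model rather than a fixed template.

```python
# Toy sketch of composable agent roles. In practice the Orchestrator would
# ask a local model to decompose the task; here the plan is hard-coded.
class Orchestrator:
    def plan(self, task: str) -> list[str]:
        return [f"{task}: step {i}" for i in (1, 2)]


class Operative:
    def execute(self, step: str) -> str:
        # A real Operative would invoke tools and personal context here.
        return f"done({step})"


def run_task(task: str) -> list[str]:
    steps = Orchestrator().plan(task)
    op = Operative()
    return [op.execute(s) for s in steps]


print(run_task("summarize inbox"))
```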

The Tools & Memory primitive constitutes the grounding layer. This includes support for MCP (Model Context Protocol) for tool use, Google A2A for agent-to-agent communication, and semantic indexing for local retrieval. It also connects local models to tools and persistent personal context.
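The local-retrieval idea can be illustrated with a deliberately simple bag-of-words similarity search. OpenJarvis's semantic indexing almost certainly uses learned embeddings rather than word counts; this sketch only shows the retrieval shape, with made-up documents.

```python
# Minimal local retrieval sketch using bag-of-words cosine similarity.
# A real semantic index would use embedding vectors instead of word counts.
import math
from collections import Counter


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


DOCS = [
    "meeting notes from the quarterly planning session",
    "recipe for sourdough bread with a long cold proof",
    "travel itinerary for the Tokyo conference trip",
]


def retrieve(query: str) -> str:
    """Return the stored document most similar to the query."""
    qv = Counter(query.lower().split())
    return max(DOCS, key=lambda d: cosine(qv, Counter(d.lower().split())))


print(retrieve("planning meeting"))  # matches the meeting-notes document
```

Because both the index and the queries stay on-device, personal context never leaves the machine.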

The Learning primitive provides a closed-loop improvement mechanism. It uses local interaction traces to generate training data, refine agent behavior, and enhance model selection. Optimization occurs across model weights, LM prompts, agentic logic, and the inference engine.
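The closed loop reduces to two operations: log interaction traces locally, then harvest the useful ones as training pairs. The field names and acceptance signal below are assumptions for illustration, not OpenJarvis's trace schema.

```python
# Hedged sketch of the trace-to-training-data loop. The schema (prompt,
# response, accepted) is illustrative, not the framework's actual format.
import json

TRACES: list[dict] = []


def log_trace(prompt: str, response: str, accepted: bool) -> None:
    """Record one local interaction and whether the user accepted the result."""
    TRACES.append({"prompt": prompt, "response": response, "accepted": accepted})


def export_training_data() -> list[dict]:
    # Keep only accepted interactions; these become fine-tuning pairs.
    return [{"input": t["prompt"], "target": t["response"]}
            for t in TRACES if t["accepted"]]


log_trace("draft a reply", "Sure, here is a draft...", accepted=True)
log_trace("summarize this", "unhelpful summary", accepted=False)
print(json.dumps(export_training_data()))
```

The same traces could also feed prompt optimization or model selection, which is what makes the loop span weights, prompts, agent logic, and the engine.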

OpenJarvis prioritizes efficiency, treating energy, FLOPs, latency, and cost as key constraints alongside task quality. It incorporates a hardware-agnostic telemetry system for profiling energy on NVIDIA GPUs, AMD GPUs, and Apple Silicon, with 50 ms sampling intervals. The “jarvis bench” command standardizes benchmarking for latency, throughput, and energy per query.
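The arithmetic behind energy-per-query from periodic power samples is straightforward and worth making concrete. The sketch below uses the 50 ms interval from the article; the function itself is illustrative, not the telemetry system's code.

```python
# Energy-per-query from periodic power readings: energy is approximately the
# sum of (power sample x sampling interval) over the query's duration.
SAMPLE_INTERVAL_S = 0.050  # 50 ms between power readings, per the article


def energy_joules(power_watts: list[float]) -> float:
    """Integrate sampled power over time to estimate energy in joules."""
    return sum(p * SAMPLE_INTERVAL_S for p in power_watts)


# Ten samples over 0.5 s at a steady 40 W draw comes to roughly 20 J.
print(energy_joules([40.0] * 10))
```

Reporting joules per query alongside latency and throughput is what lets `jarvis bench` compare configurations on equal footing.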

Developer interfaces for OpenJarvis include a browser application, a desktop application for macOS, Windows, and Linux, a Python SDK, and a command-line interface (CLI). All core functionality operates without a network connection.

The “jarvis serve” command starts a FastAPI server with SSE streaming, which the developers state can serve as a drop-in replacement for OpenAI clients. This feature is intended to lower the migration cost for developers prototyping with an API-shaped interface while maintaining local inference.
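"Drop-in replacement" here means the local server accepts the same request shape an OpenAI chat-completions client sends, so existing client code only needs its base URL pointed at the local endpoint. The sketch below builds that payload; the model name is a placeholder, and the exact URL and port are printed by `jarvis serve` rather than assumed here.

```python
# Sketch of the OpenAI-style chat-completions request an existing client
# would send to the local server. The model name is a placeholder.
import json


def chat_request(prompt: str, model: str = "local-model") -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,  # SSE streaming, which the local server supports
    }


payload = chat_request("hello")
print(json.dumps(payload))
```

Because the wire format is unchanged, prototypes written against a cloud API can switch to local inference by reconfiguration alone.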
