Getting Started

From zero to a running agent brain in 5 minutes.

Prerequisites

Python 3.10+ (required), Node.js 18+ (only if you run the frontend dashboard), and Docker (only for Option B).

Install

Option A: pip install (simplest)

Two commands give you the CLI, MCP server, and memory system.

$ pip install aibrain
$ aibrain setup

No Node.js or Docker needed. This is the fastest path.

Option B: Docker (full stack)

Runs the backend, frontend dashboard, and all services in containers.

$ pip install aibrain && aibrain setup
$ cp config.json.example config.json
# Edit config.json with your settings
$ docker compose up --build

Open http://localhost:5173 in your browser. Done.

Option C: Local install (full control)

Run each service manually for maximum flexibility.

$ pip install aibrain && aibrain setup
$ cp config.json.example config.json

Configure

Edit config.json — the only required field is user_name. Everything else has sensible defaults:

Field                What it does                      Default
user_name            Your name (shown in dashboard)    ""
llm_provider         Which LLM to use                  "auto"
anthropic_api_key    Claude API key                    "" (skip for Ollama)
openai_api_key       OpenAI API key                    "" (skip for Ollama)
ollama_url           Local Ollama endpoint             localhost:11434
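
A minimal config.json might look like the sketch below. The values are illustrative, not defaults you must copy; the http:// scheme on ollama_url is an assumption, since the table lists only the host and port.

```json
{
  "user_name": "Ada",
  "llm_provider": "auto",
  "anthropic_api_key": "",
  "openai_api_key": "",
  "ollama_url": "http://localhost:11434"
}
```

With empty API keys and "auto" as the provider, only a local Ollama instance would be used.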

The auto provider tries Ollama first, then Claude, then GPT. For zero-cost local inference, install Ollama and pull a model:

$ ollama pull llama3.2
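
The fallback order above can be sketched as a small selection function. This is an illustration of the documented order (Ollama, then Claude, then GPT), not AIBrain's actual implementation; the function name and signature are hypothetical.

```python
def pick_provider(ollama_up: bool, anthropic_key: str, openai_key: str) -> str:
    """Illustrative 'auto' order: Ollama first, then Claude, then GPT."""
    if ollama_up:
        return "ollama"
    if anthropic_key:
        return "anthropic"
    if openai_key:
        return "openai"
    raise RuntimeError("no LLM provider configured")
```

For example, with Ollama down and only an Anthropic key set, pick_provider(False, "sk-ant-...", "") selects "anthropic".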

Environment Variables (optional)

$ cp .env.example .env
# Edit .env with any overrides

Variable               Purpose                          Default
AGENT_ID               Agent identity name              agent
AUTH_MODE              Auth type: none, api_key, jwt    none
AIBRAIN_CORS_ORIGINS   Allowed CORS origins             localhost

Start

1. Launch the services

# Windows
> start.bat

# Unix / macOS
$ ./start.sh

Or start manually:

# Terminal 1 — Backend
$ cd backend
$ pip install -r ../requirements.txt
$ python -m uvicorn main:app --host 0.0.0.0 --port 8001 --reload

# Terminal 2 — Frontend (optional)
$ cd frontend
$ npm install && npm run dev -- --host

2. Open the dashboard

Navigate to http://localhost:5173. The onboarding tour walks you through the interface.

Connect via MCP

AIBrain includes an MCP server that gives any compatible AI agent persistent memory. Works with Claude Code, Cursor, Copilot, or any MCP-compatible tool.

Add to your MCP configuration:

{
  "mcpServers": {
    "aibrain-memory": {
      "command": "python",
      "args": ["/path/to/aibrain/mcp_server.py"]
    }
  }
}

Install MCP dependencies:

$ pip install mcp sentence-transformers sqlite-vec

All optional — the server gracefully degrades without them. Once connected, your agent gets three tools:

Tool            Description
memory_store    Save a memory (auto-enriched with embeddings)
memory_search   Search with selective routing (auto-detects query type)
memory_recall   Load top memories by importance score
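
The memory_recall behavior (top memories by importance score) amounts to a sort-and-slice. A sketch under assumed data shapes; the field names and function are hypothetical, not AIBrain's API:

```python
def recall_top(memories: list[dict], k: int = 3) -> list[dict]:
    """Return the k memories with the highest importance score (illustrative)."""
    return sorted(memories, key=lambda m: m["importance"], reverse=True)[:k]
```

For example, with importance scores 0.2, 0.9, and 0.5, recall_top with k=2 returns the 0.9 and 0.5 entries in that order.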

Embedding Modes

Mode               Config                                          Size
No ML              AIBRAIN_EMBEDDING_MODEL=none                    0 deps
Default (MiniLM)   (no config needed)                              22 MB
bge-base           AIBRAIN_EMBEDDING_MODEL=BAAI/bge-base-en-v1.5   110 MB

Brain Packs

Brain packs are domain specialization bundles. Activate a pack and your agent instantly gains a curated set of workflows for that domain.

$ aibrain packs                      # Browse all packs
$ aibrain packs activate developer   # Activate the developer pack
$ aibrain packs active               # See what's active

Available packs include Productivity (free), Developer, Content Creator, Business, Security Pro, Research, Multi-Agent Ops, and Job Hunter. See the User Guide for full details.
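
Conceptually, the three commands above operate on a registry of known packs and a set of active ones. A minimal sketch of that model (class and method names are hypothetical, not AIBrain internals):

```python
class PackRegistry:
    """Illustrative model of `aibrain packs` state: known packs vs. active packs."""

    def __init__(self, available: set[str]):
        self.available = available   # packs you can browse
        self.active: set[str] = set()  # packs currently activated

    def activate(self, name: str) -> None:
        if name not in self.available:
            raise ValueError(f"unknown pack: {name}")
        self.active.add(name)
```

Activating an unknown pack name fails loudly rather than silently adding it.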

Troubleshooting

Backend won't start? Check Python version (python --version must be 3.10+). Install deps with pip install -r requirements.txt.

Frontend won't start? Check Node version (node --version must be 18+). Run npm install in the frontend directory.

MCP not connecting? Verify the path in your MCP config is absolute. Check that the mcp package is installed.

No LLM responses? Configure at least one provider in config.json. For zero-cost local inference, install Ollama.

Next: User Guide →
Reference: Workflow Reference →