Getting Started
From zero to a running agent brain in 5 minutes.
Prerequisites
- Python 3.10+
- Node.js 18+ (optional, for the dashboard)
- Docker (optional, for containerized deployment)
Install
Option A: pip install (simplest)
Two commands give you the CLI, MCP server, and memory system.
$ pip install aibrain
$ aibrain setup
No Node.js or Docker needed. This is the fastest path.
Option B: Docker (full stack)
Runs the backend, frontend dashboard, and all services in containers.
$ pip install aibrain && aibrain setup
$ cp config.json.example config.json
# Edit config.json with your settings
$ docker compose up --build
Open http://localhost:5173 in your browser. Done.
Option C: Local install (full control)
Run each service manually for maximum flexibility.
$ pip install aibrain && aibrain setup
$ cp config.json.example config.json
Configure
Edit config.json — the only required field is user_name. Everything else has sensible defaults:
| Field | What it does | Default |
|---|---|---|
| user_name | Your name (shown in dashboard) | "" |
| llm_provider | Which LLM to use | "auto" |
| anthropic_api_key | Claude API key | "" (skip for Ollama) |
| openai_api_key | OpenAI API key | "" (skip for Ollama) |
| ollama_url | Local Ollama endpoint | localhost:11434 |
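The merge of config.json over these defaults can be sketched in a few lines. This is illustrative only — `load_config` and the `DEFAULTS` dict are hypothetical names, not the shipped API; the field names and defaults come from the table above.

```python
import json
from pathlib import Path

# Defaults mirror the table above; user_name is the only required field.
DEFAULTS = {
    "user_name": "",
    "llm_provider": "auto",
    "anthropic_api_key": "",
    "openai_api_key": "",
    "ollama_url": "localhost:11434",
}

def load_config(path="config.json"):
    """Merge config.json over the defaults and require user_name."""
    p = Path(path)
    user = json.loads(p.read_text()) if p.exists() else {}
    cfg = {**DEFAULTS, **user}
    if not cfg["user_name"]:
        raise ValueError("config.json: user_name is required")
    return cfg
```

A missing or empty user_name fails fast, which matches the "only required field" rule above.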
The auto provider tries Ollama first, then Claude, then GPT. For zero-cost local inference, install Ollama and pull a model:
$ ollama pull llama3.2
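The auto fallback order (Ollama, then Claude, then GPT) can be sketched as below. `pick_provider` and the reachability probe are illustrative names under stated assumptions, not AIBrain's actual implementation.

```python
import urllib.request

def _ollama_up(url, timeout=1.0):
    """Probe the Ollama endpoint; any connection error means 'not running'."""
    if not url.startswith("http"):
        url = "http://" + url
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except OSError:  # URLError, ConnectionRefusedError, timeouts
        return False

def pick_provider(cfg):
    """Resolve llm_provider="auto": Ollama first, then Claude, then GPT."""
    if cfg.get("llm_provider", "auto") != "auto":
        return cfg["llm_provider"]
    if _ollama_up(cfg.get("ollama_url", "localhost:11434")):
        return "ollama"
    if cfg.get("anthropic_api_key"):
        return "claude"
    if cfg.get("openai_api_key"):
        return "gpt"
    raise RuntimeError("no LLM provider available")
```

An explicit llm_provider skips the probing entirely; "auto" only pays the one-second Ollama check when no local model is running.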
Environment Variables (optional)
$ cp .env.example .env
# Edit .env with any overrides
| Variable | Purpose | Default |
|---|---|---|
| AGENT_ID | Agent identity name | agent |
| AUTH_MODE | Auth type: none, api_key, jwt | none |
| AIBRAIN_CORS_ORIGINS | Allowed CORS origins | localhost |
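Environment variables win over the defaults above. A minimal sketch of that precedence — `env_settings` is a hypothetical helper; the variable names and defaults are from the table:

```python
import os

# Defaults from the table above; environment variables override them.
ENV_DEFAULTS = {
    "AGENT_ID": "agent",
    "AUTH_MODE": "none",
    "AIBRAIN_CORS_ORIGINS": "localhost",
}

def env_settings(environ=None):
    """Return effective settings: environment values win over defaults."""
    environ = os.environ if environ is None else environ
    return {k: environ.get(k, default) for k, default in ENV_DEFAULTS.items()}
```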
Start
1. Launch the services
# Windows
> start.bat
# Unix / macOS
$ ./start.sh
Or start manually:
# Terminal 1 — Backend
$ cd backend
$ pip install -r ../requirements.txt
$ python -m uvicorn main:app --host 0.0.0.0 --port 8001 --reload
# Terminal 2 — Frontend (optional)
$ cd frontend
$ npm install && npm run dev -- --host
2. Open the dashboard
Navigate to http://localhost:5173. The onboarding tour walks you through the interface.
Connect via MCP
AIBrain includes an MCP server that gives any compatible AI agent persistent memory. Works with Claude Code, Cursor, Copilot, or any MCP-compatible tool.
Add to your MCP configuration:
{
"mcpServers": {
"aibrain-memory": {
"command": "python",
"args": ["/path/to/aibrain/mcp_server.py"]
}
}
}
Install MCP dependencies:
$ pip install mcp sentence-transformers sqlite-vec
All three are optional — the server degrades gracefully without them. Once connected, your agent gets three tools:
| Tool | Description |
|---|---|
| memory_store | Save a memory (auto-enriched with embeddings) |
| memory_search | Search with selective routing (auto-detects query type) |
| memory_recall | Load top memories by importance score |
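The graceful degradation mentioned above is the classic optional-import pattern: probe for the ML packages at import time and fall back to a dependency-free path. A sketch — `HAVE_EMBEDDINGS`, `keyword_search`, and this `memory_search` are illustrative names, not the server's actual internals:

```python
# Probe optional ML dependencies once, at import time.
try:
    from sentence_transformers import SentenceTransformer  # optional
    HAVE_EMBEDDINGS = True
except ImportError:
    HAVE_EMBEDDINGS = False

def keyword_search(query, memories):
    """Zero-dependency fallback: case-insensitive substring match."""
    return [m for m in memories if query.lower() in m.lower()]

def memory_search(query, memories):
    """Vector search when embeddings are available, keyword match otherwise."""
    if HAVE_EMBEDDINGS:
        pass  # embed the query and rank by similarity (omitted in this sketch)
    return keyword_search(query, memories)
```

The point of the pattern is that the try/except runs once and the rest of the code only branches on a boolean, so a missing package never raises at call time.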
Embedding Modes
| Mode | Config | Size |
|---|---|---|
| No ML | AIBRAIN_EMBEDDING_MODEL=none | 0 deps |
| Default (MiniLM) | (no config needed) | 22 MB |
| bge-base | AIBRAIN_EMBEDDING_MODEL=BAAI/bge-base-en-v1.5 | 110 MB |
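Resolving the mode from AIBRAIN_EMBEDDING_MODEL can be sketched as below. The exact model id behind the MiniLM default is an assumption (the table only says "MiniLM"); `embedding_model` is a hypothetical helper:

```python
import os

# Assumed id for the MiniLM default; the table above only names "MiniLM".
DEFAULT_MODEL = "sentence-transformers/all-MiniLM-L6-v2"

def embedding_model(environ=None):
    """Return None for mode "none" (zero ML deps), else the model id."""
    environ = os.environ if environ is None else environ
    value = environ.get("AIBRAIN_EMBEDDING_MODEL", DEFAULT_MODEL)
    return None if value == "none" else value
```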
Brain Packs
Brain packs are domain specialization bundles. Activate a pack and your agent instantly gains a curated set of workflows for that domain.
$ aibrain packs # Browse all packs
$ aibrain packs activate developer # Activate the developer pack
$ aibrain packs active # See what's active
Available packs include Productivity (free), Developer, Content Creator, Business, Security Pro, Research, Multi-Agent Ops, and Job Hunter. See the User Guide for full details.
What's Next
- Explore the dashboard — 17 pages covering memory, workflows, approvals, chat, costs, and more
- Enable workflows — aibrain workflows enable --recommended for the starter set
- Try the chat — built-in commands like /status, /approvals, /schedule
- Connect agents — register peers in the Agent Mesh for multi-agent communication
Troubleshooting
Backend won't start? Check Python version (python --version must be 3.10+). Install deps with pip install -r requirements.txt.
Frontend won't start? Check Node version (node --version must be 18+). Run npm install in the frontend directory.
MCP not connecting? Verify the path in your MCP config is absolute. Check that the mcp package is installed.
No LLM responses? Configure at least one provider in config.json. For zero-cost local inference, install Ollama.
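The version checks above can be automated with a small preflight script. This is a sketch, not a shipped command — `preflight` is a hypothetical name, and the thresholds (Python 3.10+, Node 18+) come from the prerequisites:

```python
import shutil
import subprocess
import sys

def preflight():
    """Return a list of problems against the prerequisites above."""
    problems = []
    if sys.version_info < (3, 10):
        problems.append(f"Python 3.10+ required, found {sys.version.split()[0]}")
    node = shutil.which("node")
    if node is None:
        problems.append("Node.js not found (only needed for the dashboard)")
    else:
        out = subprocess.run([node, "--version"], capture_output=True, text=True)
        major = int(out.stdout.lstrip("v").split(".")[0])
        if major < 18:
            problems.append(f"Node 18+ required, found {out.stdout.strip()}")
    return problems
```

An empty list means both prerequisites are satisfied; since Node is optional, you can ignore its entry if you are skipping the dashboard.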