Ranger Docker Quickstart Guide

This guide will help you get a local Ranger instance up and running quickly via Docker Compose.

Requirements

  • Docker and Docker Compose v2+
  • At least 16GB RAM recommended
  • Internet access (initial image pull only)

Clone the Repo

Clone the repository and change into its directory:

git clone https://github.com/CMU-SEI/RangerAI.git
cd RangerAI

Update Configuration

Create a .env file in the root of the repo with the following content:

BASEROW_PUBLIC_URL="" # set to your machine's IP or domain
OLLAMA_HOST="http://localhost:11434" # Ollama server URL
N8N_API_URL="http://localhost:5678" # n8n API URL
N8N_API_KEY="" # set to your n8n API key
GHOSTS_HOST="" # set to your GHOSTS API URL

If any host ports conflict with services already running on your machine, change the host side of the host:container port mappings in docker-compose.yml.
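One way to fill in BASEROW_PUBLIC_URL is to use the machine's primary IP address. A minimal sketch, assuming a Linux host where `hostname -I` is available (on macOS, `ipconfig getifaddr en0` is the usual equivalent):

```shell
# Derive a BASEROW_PUBLIC_URL line from this machine's primary IP.
# Falls back to localhost if the IP cannot be determined.
ip="$(hostname -I 2>/dev/null | awk '{print $1}')"
line="BASEROW_PUBLIC_URL=\"http://${ip:-localhost}\""
echo "$line"
```

Paste the printed line into .env, or substitute your public domain if you have one.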

Start the Stack

docker compose up -d
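After starting, it is worth confirming that the containers actually came up. A guarded sketch that degrades gracefully when Docker is not on the PATH or the stack is not running:

```shell
# Show container status for the stack; prints a hint instead of failing
# when Docker or the compose project is unavailable.
if command -v docker >/dev/null 2>&1; then
  status="$(docker compose ps 2>/dev/null || echo 'compose stack not running in this directory')"
else
  status="docker not found on PATH"
fi
echo "$status"
```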

This launches:

  • Open-WebUI (chat frontend) on port 3001
  • Ranger API on port 5076
  • n8n (workflow engine) on port 5678
  • Qdrant (vector database) on port 6333
  • Baserow (data backend) on port 80
  • Ghosts MCP (tool server) on port 8000

Start Ollama

If you haven't already, install Ollama and run it:

ollama serve
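You can confirm the server is up by querying its REST API; the /api/tags endpoint lists the models Ollama has installed. A sketch that falls back to a message when nothing is listening on the default port:

```shell
# Query the Ollama API for installed models; /api/tags is part of Ollama's REST API.
tags="$(curl -s http://localhost:11434/api/tags 2>/dev/null || echo 'Ollama not reachable on :11434')"
echo "$tags"
```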

You may need to pull models. For example, to pull the Mistral model:

ollama pull mistral:latest
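To confirm the pull succeeded, list what is installed. A guarded sketch that prints a hint instead of failing when the CLI or daemon is unavailable:

```shell
# List installed Ollama models; degrades gracefully without the CLI or daemon.
if command -v ollama >/dev/null 2>&1; then
  models="$(ollama list 2>/dev/null || echo 'ollama daemon not running')"
else
  models="ollama CLI not found on PATH"
fi
echo "$models"
```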

Configure the Workflow Interface

  1. Open a browser and navigate to http://localhost:5678. If the page does not load, run docker restart n8n and try again.
  2. Create your admin account. This is your main interface for interacting with Ranger as an admin.
  3. Click your account name, go to Settings, and create an n8n API key. Copy it into your .env file.
  4. Import any workflows and configure the credentials they require for access to second-level tools such as Qdrant.
  5. Rebuild Ranger with docker compose up -d --force-recreate ranger. Ranger will now poll n8n for active workflows.
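Once the key is in .env, you can sanity-check it against n8n's public REST API. A sketch; the /api/v1/workflows path and the X-N8N-API-KEY header follow n8n's public-API conventions, and the key value below is a placeholder:

```shell
# List workflows using the API key; a JSON body in the response means the key works.
N8N_API_KEY="${N8N_API_KEY:-paste-your-key-here}"   # placeholder, not a real key
resp="$(curl -s -H "X-N8N-API-KEY: $N8N_API_KEY" \
        http://localhost:5678/api/v1/workflows 2>/dev/null \
        || echo 'n8n not reachable on :5678')"
echo "$resp"
```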

Setting Up Client Access to Test the System

Open a browser and navigate to: http://localhost:3001

  1. Set up a new user account.
  2. Go to the admin settings and add connections to both Ollama and Ranger, using the URLs http://host.docker.internal:11434 for Ollama and http://host.docker.internal:5076 for Ranger.
  3. Return to the main page and switch the chat model to a Ranger model (e.g., ranger:mistral).

Try a prompt like:

ghosts machines list

Ranger will parse, plan, and execute using GHOSTS or other tools via MCP.
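The same prompt can be sent to the Ranger API directly. This sketch assumes Ranger speaks the Ollama chat protocol, since it is registered in Open-WebUI as an Ollama-style connection above; the /api/chat endpoint path is an assumption, so adjust it to your build:

```shell
# Send the prompt straight to the Ranger API (endpoint path assumed; see note above).
resp="$(curl -s http://localhost:5076/api/chat \
        -d '{"model": "ranger:mistral", "messages": [{"role": "user", "content": "ghosts machines list"}]}' \
        2>/dev/null || echo 'Ranger API not reachable on :5076')"
echo "$resp"
```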

Stop the Stack

docker compose down

Volumes persist between runs. To clean everything:

docker compose down -v
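Before wiping, you can list what -v would delete. A sketch; it assumes the compose-managed volume names are prefixed with the project name, which is typically the repo directory (here rangerai):

```shell
# List compose-managed volumes before removing them; degrades gracefully without Docker.
if command -v docker >/dev/null 2>&1; then
  vols="$(docker volume ls --filter name=rangerai -q 2>/dev/null || true)"
  vols="${vols:-no matching volumes (or docker daemon not running)}"
else
  vols="docker not found on PATH"
fi
echo "$vols"
```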