Docker Compose Deployment

Deploy MemoryRelay on your own infrastructure with Docker Compose. This setup includes the API server, web dashboard, PostgreSQL with pgvector, and Redis.

Prerequisites

  • Docker 20.10+ and Docker Compose v2
  • 2 GB RAM minimum (4 GB recommended for production)
  • A Linux, macOS, or Windows host with Docker installed
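Compose v2 is invoked as docker compose (a subcommand, not the legacy docker-compose binary). If you want to script the version requirement above, one way is to parse the output of docker --version; a minimal sketch, where the check_docker_version helper is illustrative and not part of MemoryRelay:

```shell
# check_docker_version: verify a `docker --version` string meets the
# 20.10 minimum. Takes the version string as $1; returns 0 if new enough.
check_docker_version() {
  local ver major minor
  # Reduce "Docker version 24.0.7, build afdd53b" to "24.0"
  ver=$(echo "$1" | sed -E 's/^Docker version ([0-9]+\.[0-9]+).*/\1/')
  major=${ver%%.*}
  minor=${ver#*.}
  if [ "$major" -gt 20 ] || { [ "$major" -eq 20 ] && [ "$minor" -ge 10 ]; }; then
    return 0
  fi
  return 1
}

# On a real host:
#   check_docker_version "$(docker --version)" && echo "Docker OK"
```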

Quick Start

1. Clone the repository

git clone https://github.com/memoryrelay/ai-memory-service.git
cd ai-memory-service

2. Configure environment variables

cp .env.example .env

Edit .env and set the required values:

# Required: generate a secure random key
SECRET_KEY=your-secret-key-at-least-32-characters

# Required: database connection (use the service name "db" as host)
DATABASE_URL=postgresql+asyncpg://memoryrelay:your-db-password@db:5432/memory_service

# Optional: Redis (defaults to the redis service)
REDIS_URL=redis://redis:6379/0

Warning: Always change the default SECRET_KEY and database password before deploying. Use a cryptographically random string of at least 32 characters for the secret key.
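One way to generate such a key, assuming OpenSSL is installed on the host (any cryptographically secure string of 32+ characters works):

```shell
# 32 random bytes, hex-encoded: a 64-character secret
openssl rand -hex 32
```

Paste the output into SECRET_KEY in your .env file.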

3. Start the services

docker compose up -d

This starts four containers:

Service   Port   Description
api       8000   FastAPI application server
web       3000   Next.js web dashboard
db        5432   PostgreSQL 16 with pgvector extension
redis     6379   Redis for caching and rate limiting
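Compose does not by default wait for PostgreSQL to accept connections before starting the API. If you see startup races, a healthcheck plus a depends_on condition can be added to docker-compose.yml; a sketch, using the service names from the table above (verify it against the compose file the repository ships):

```yaml
services:
  db:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U memoryrelay -d memory_service"]
      interval: 5s
      timeout: 3s
      retries: 10
  api:
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
```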

4. Verify the deployment

curl http://localhost:8000/v1/health

Expected response:

{
  "status": "healthy",
  "version": "1.0.0",
  "database": "connected",
  "redis": "connected"
}
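If you verify the deployment from a script (for example in CI), polling the endpoint until it reports healthy avoids racing container startup. A minimal sketch; the helper names are illustrative, not part of MemoryRelay:

```shell
# is_healthy: read a health-endpoint JSON body on stdin and succeed only
# if it reports status "healthy". A plain grep keeps it dependency-free.
is_healthy() {
  grep -q '"status": *"healthy"'
}

# wait_for_api: poll the health endpoint every 2 seconds until it is
# healthy or the timeout (seconds, default 60) elapses.
wait_for_api() {
  local url=$1 timeout=${2:-60} elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if curl -fsS "$url" 2>/dev/null | is_healthy; then
      return 0
    fi
    sleep 2
    elapsed=$((elapsed + 2))
  done
  return 1
}
```

Usage: `wait_for_api http://localhost:8000/v1/health && echo "deployment verified"`.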

5. Create your first API key

Register a user and generate an API key:

# Create a user
curl -X POST http://localhost:8000/v1/auth/register \
  -H "Content-Type: application/json" \
  -d '{"email": "admin@example.com", "password": "your-secure-password"}'

# The response includes your API key (starts with mem_)

Tip: Save your API key immediately. It is only shown once at creation time.
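To capture the key in a script rather than by hand, you can extract it from the registration response with python3 (no jq needed). This sketch assumes the response field is named "api_key"; check the actual response shape before relying on it:

```shell
# extract_api_key: read the registration JSON response on stdin and print
# the API key. The "api_key" field name is an assumption, not confirmed.
extract_api_key() {
  python3 -c 'import json,sys; print(json.load(sys.stdin).get("api_key",""))'
}

# curl -X POST http://localhost:8000/v1/auth/register ... | extract_api_key
```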

Service Management

# View logs
docker compose logs -f api

# Restart a single service
docker compose restart api

# Stop all services
docker compose down

# Stop and remove volumes (deletes all data)
docker compose down -v

Embedding Model

MemoryRelay ships with all-MiniLM-L6-v2, a lightweight sentence-transformers model that generates 384-dimensional vectors. It runs locally inside the API container, so no external API calls are needed.

To switch to OpenAI embeddings, update your .env:

EMBEDDING_PROVIDER=openai
OPENAI_API_KEY=sk-your-openai-key
EMBEDDING_MODEL=text-embedding-3-small
EMBEDDING_DIMENSION=1536

Note: Changing the embedding model or dimension after storing memories requires re-embedding all existing content. Plan this change before storing production data.

Production Considerations

SSL Termination

Use a reverse proxy like nginx to terminate SSL in front of the Docker services:

server {
    listen 443 ssl;
    server_name memory.your-domain.com;

    ssl_certificate /etc/letsencrypt/live/memory.your-domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/memory.your-domain.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Tip: Use Let's Encrypt with certbot for free, auto-renewing certificates.

Database Backups

Schedule regular PostgreSQL backups:

# Manual backup
docker compose exec db pg_dump -U memoryrelay memory_service > backup_$(date +%Y%m%d).sql

# Restore from backup
docker compose exec -T db psql -U memoryrelay memory_service < backup_20260317.sql

For production, set up automated daily backups with a cron job:

# Add to crontab: daily backup at 2 AM, retain 30 days
0 2 * * * cd /path/to/ai-memory-service && docker compose exec -T db pg_dump -U memoryrelay memory_service | gzip > /backups/memoryrelay_$(date +\%Y\%m\%d).sql.gz && find /backups -name "memoryrelay_*.sql.gz" -mtime +30 -delete
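A backup that silently fails is worse than none, so it is worth verifying the newest archive after each run. A sketch of such a check; the function name is illustrative, and the /backups layout mirrors the cron job above (adapt both to your setup):

```shell
# verify_latest_backup: confirm the newest backup in a directory exists,
# is non-empty, and is a valid gzip stream.
verify_latest_backup() {
  local dir=$1 latest
  latest=$(ls -1t "$dir"/memoryrelay_*.sql.gz 2>/dev/null | head -n 1)
  [ -n "$latest" ] || { echo "no backups found in $dir" >&2; return 1; }
  [ -s "$latest" ] || { echo "latest backup is empty: $latest" >&2; return 1; }
  gzip -t "$latest" || { echo "latest backup is corrupt: $latest" >&2; return 1; }
  echo "latest backup OK: $latest"
}

# verify_latest_backup /backups
```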

Log Management

The API writes structured logs to stdout. In production, configure Docker's logging driver to manage log rotation:

# In docker-compose.yml, add to the api service:
services:
  api:
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
        max-file: "5"

Resource Limits

Set memory and CPU limits to prevent a single service from consuming all host resources:

services:
  api:
    deploy:
      resources:
        limits:
          memory: 2G
          cpus: "2.0"
  db:
    deploy:
      resources:
        limits:
          memory: 1G

Upgrading

# Pull the latest code and rebuild the images
git pull
docker compose build

# Apply database migrations
docker compose run --rm api alembic upgrade head

# Restart with new images
docker compose up -d

Warning: Always back up your database before applying migrations. Review migration files in alembic/versions/ before running alembic upgrade head.