This document describes how to build, run, and deploy the IMAS Codex Server container.
To build, start, and stop the stack with Docker Compose:

```bash
# Build and run the container
docker-compose up -d

# View logs
docker-compose logs -f imas-codex

# Stop the container
docker-compose down
```
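The Compose commands above assume a `docker-compose.yml` roughly along these lines (a sketch inferred from the `docker run` flags shown below, not the exact file shipped with the repository):

```yaml
services:
  imas-codex:
    build: .                   # or: image: ghcr.io/iterorganization/imas-codex:latest
    container_name: imas-codex
    ports:
      - "8000:8000"            # expose the MCP server on localhost:8000
    volumes:
      - ./index:/app/index:ro  # mount the search index read-only
```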
To build and run the container manually with Docker:

```bash
# Build the image
docker build -t imas-codex .

# Run the container
docker run -d \
  --name imas-codex \
  -p 8000:8000 \
  -v ./index:/app/index:ro \
  imas-codex
```
The container is automatically built and pushed to GitHub Container Registry on tagged releases.
```bash
# Pull the latest image
docker pull ghcr.io/iterorganization/imas-codex:latest

# Pull a specific version
docker pull ghcr.io/iterorganization/imas-codex:v1.0.0

# Run the pulled image
docker run -d \
  --name imas-codex \
  -p 8000:8000 \
  ghcr.io/iterorganization/imas-codex:latest
```
The following image tags are available:

- `latest` - Latest build from the `main` branch
- `main` - Latest build from the `main` branch (same as `latest`)
- `v*` - Tagged releases (e.g., `v1.0.0`, `v1.1.0`)
- `pr-*` - Pull request builds

The container recognizes these environment variables:

| Variable | Description | Default |
|---|---|---|
| `PYTHONPATH` | Python path | `/app` |
| `PORT` | Port to run the server on | `8000` |
The following paths can be mounted as volumes:

| Path | Description |
|---|---|
| `/app/index` | Index files directory (mount as read-only) |
| `/app/logs` | Application logs (optional) |
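For example, the environment variables and volume mounts can be combined in a single run command (a sketch; the server is assumed to read `PORT` as described in the table above):

```bash
# Run on port 9000 with the index mounted read-only and a logs directory
docker run -d \
  --name imas-codex \
  -e PORT=9000 \
  -p 9000:9000 \
  -v ./index:/app/index:ro \
  -v ./logs:/app/logs \
  ghcr.io/iterorganization/imas-codex:latest
```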
The container includes a health check that verifies the server is responding correctly. The server uses the streamable-http transport by default, which exposes a dedicated health endpoint covering both server availability and search index functionality. The server runs in stateful mode to support MCP sampling. To check container health:
```bash
# Check container health status
docker ps
# Look for "healthy" status in the STATUS column

# Manual health check using the dedicated endpoint
curl -f http://localhost:8000/health
```

Example health response:

```json
{
  "status": "healthy",
  "service": "imas-codex-server",
  "version": "4.0.1.dev164",
  "index_stats": {
    "total_paths": 15420,
    "index_name": "lexicographic_4.0.1.dev164"
  },
  "transport": "streamable-http"
}
```
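When scripting a deployment, it can help to wait for the endpoint to come up before proceeding. A simple sketch using the same endpoint:

```bash
# Poll the health endpoint for up to ~60 seconds
for i in $(seq 1 12); do
  if curl -sf http://localhost:8000/health > /dev/null; then
    echo "Server is healthy"
    break
  fi
  echo "Waiting for server..."
  sleep 5
done
```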
The health check is configured in `docker-compose.yml`:

```yaml
healthcheck:
  test:
    [
      "CMD",
      "python",
      "-c",
      "import urllib.request; urllib.request.urlopen('http://localhost:8000/health')",
    ]
  interval: 30s # Check every 30 seconds
  timeout: 10s # 10 second timeout per check
  retries: 3 # Mark unhealthy after 3 consecutive failures
  start_period: 40s # Wait 40 seconds before starting checks
```
Note: The health endpoint is available when using the streamable-http transport (the default). For other transports (stdio, sse), the health check verifies port connectivity only.
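If you run one of those transports and still want a container-level check, a plain TCP connectivity test can be substituted in the healthcheck. This is a sketch, not the shipped configuration:

```yaml
healthcheck:
  # TCP-only check: passes as soon as something accepts connections on port 8000
  test:
    [
      "CMD",
      "python",
      "-c",
      "import socket; socket.create_connection(('localhost', 8000), timeout=5)"
    ]
  interval: 30s
  timeout: 10s
  retries: 3
  start_period: 40s
```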
```bash
# Use the production profile
docker-compose --profile production up -d
```
This will start both the IMAS Codex Server and an Nginx reverse proxy.
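The production profile is expected to look roughly like this in `docker-compose.yml` (an illustrative sketch only; the Nginx image, configuration path, and service wiring are assumptions, not the repository's actual file):

```yaml
services:
  nginx:
    image: nginx:alpine
    profiles: ["production"]   # started only with --profile production
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro  # proxies requests to imas-codex:8000
    depends_on:
      - imas-codex
```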
An example Kubernetes Deployment and Service:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: imas-codex
spec:
  replicas: 2
  selector:
    matchLabels:
      app: imas-codex
  template:
    metadata:
      labels:
        app: imas-codex
    spec:
      containers:
        - name: imas-codex
          image: ghcr.io/iterorganization/imas-codex:latest
          ports:
            - containerPort: 8000
          env:
            - name: PYTHONPATH
              value: "/app"
          volumeMounts:
            - name: index-data
              mountPath: /app/index
              readOnly: true
          livenessProbe:
            httpGet:
              path: /health
              port: 8000
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health
              port: 8000
            initialDelaySeconds: 5
            periodSeconds: 5
      volumes:
        - name: index-data
          persistentVolumeClaim:
            claimName: imas-index-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: imas-codex-service
spec:
  selector:
    app: imas-codex
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8000
  type: LoadBalancer
```
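To apply and verify the manifests (standard `kubectl` usage; the manifest file name is assumed here):

```bash
# Apply the Deployment and Service
kubectl apply -f imas-codex.yaml

# Wait for the rollout to complete
kubectl rollout status deployment/imas-codex

# Check pod status (readiness is driven by the /health probe)
kubectl get pods -l app=imas-codex
```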
For local development, build a dev image and mount the source tree into the container:

```bash
# Build the image
docker build -t imas-codex:dev .

# Run with development settings
docker run -it --rm \
  -p 8000:8000 \
  -v $(pwd):/app \
  -e PYTHONPATH=/app \
  imas-codex:dev

# Run with interactive shell
docker run -it --rm \
  -p 8000:8000 \
  -v $(pwd):/app \
  ghcr.io/iterorganization/imas-codex:latest \
  /bin/bash

# View logs
docker logs -f imas-codex
```
Common issues:

**Container fails to start**

Check the container logs for errors:

```bash
docker-compose logs imas-codex
```

**Index files not found**

Make sure the index directory is mounted into the container at `/app/index` (see the volume table above).

**Memory issues**

Increase the container's resource limits (e.g., `--memory=2g`):

```bash
# Run with increased memory
docker run -d \
  --name imas-codex \
  --memory=2g \
  --cpus=2 \
  -p 8000:8000 \
  ghcr.io/iterorganization/imas-codex:latest
```
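The same limits can be expressed in `docker-compose.yml` using the Compose `deploy.resources` section (a sketch; values mirror the run command above):

```yaml
services:
  imas-codex:
    deploy:
      resources:
        limits:
          memory: 2G   # cap container memory
          cpus: "2"    # cap CPU usage
```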
The project includes GitHub Actions workflows for:

- Testing (`.github/workflows/test.yml`)
- Container Build (`.github/workflows/docker-build-push.yml`)
- Releases (`.github/workflows/release.yml`)
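The Container Build workflow can forward the optional `OPENAI_API_KEY` secret (described below) to the image build. A minimal sketch of how that looks with `docker/build-push-action`; this is illustrative only and not the repository's actual workflow file:

```yaml
name: docker-build-push (sketch)
on:
  push:
    tags: ["v*"]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ghcr.io/iterorganization/imas-codex:latest
          secrets: |
            OPENAI_API_KEY=${{ secrets.OPENAI_API_KEY }}
```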
The `OPENAI_API_KEY` secret is optional for Docker builds.

**What runs locally (no API key needed):**

- Embedding generation (sentence-transformers model `all-MiniLM-L6-v2`)

**What uses the API key (optional):**

- Fresh LLM-generated cluster labels at build time

**Fallback behavior without an API key:**

- The build uses the cached cluster labels in `imas_codex/definitions/clusters/labels.json` (version-controlled)

**Configuring the secret (optional):**

- In the GitHub repository, go to Settings → Secrets and variables → Actions and add `OPENAI_API_KEY`

**Manual Docker Build:**
```bash
# Build without API key (uses cached/fallback labels)
docker build -t imas-codex .

# Build with API key for fresh LLM-generated labels
docker build --secret id=OPENAI_API_KEY,env=OPENAI_API_KEY -t imas-codex .
```
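When the secret is supplied this way, it is consumed inside the Dockerfile through a BuildKit `RUN --mount=type=secret` instruction. The step below is only a sketch: the module name is a hypothetical placeholder, and the project's actual Dockerfile step may differ.

```dockerfile
# Hypothetical build step: the secret is mounted only for this RUN instruction
# and is never written into an image layer. Without the key, the command falls
# back to the cached labels checked into the repository.
RUN --mount=type=secret,id=OPENAI_API_KEY \
    OPENAI_API_KEY="$(cat /run/secrets/OPENAI_API_KEY 2>/dev/null || true)" \
    python -m imas_codex.generate_labels  # placeholder module name
```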
For faster iteration during development, the image can also be built with a reduced set of IDS via the `IDS_FILTER` build argument:

```bash
# Build with minimal IDS for faster iteration
docker build --build-arg IDS_FILTER="equilibrium" -t imas-codex:test .
```