Docker Compose with Agent Sandbox

This guide deploys Stib with per-project agent sandbox isolation enabled. When a project opts in (Settings → Agent isolation), every spawned agent (Claude Code, etc.) runs inside its own short-lived Docker container, isolated from:

  • Other Stib projects on the same host
  • The Stib database (stib.db)
  • The host filesystem outside the project directory

TIP

Looking for the simpler setup without sandbox isolation? See the standard Docker Compose guide.

When to use this setup

Sandbox isolation is recommended when:

  • Multiple users / organizations share the same Stib instance
  • Projects contain code from sources you don't fully trust
  • You need compliance-grade per-project isolation

For single-user, single-project deployments, the standard setup is sufficient.

Prerequisites

  • Docker Engine 20+ (or Docker Desktop) with the Compose plugin
  • A Linux host OR macOS / Windows running Docker Desktop with WSL2 backend (native Windows Docker without WSL2 is not supported — see Compatibility)

The setup at a glance

Three containers run side by side:

  1. stib-server — the main API server (the one you interact with).
  2. docker-socket-proxy — a sidecar (tecnativa/docker-socket-proxy) that exposes a filtered subset of the Docker API. The Stib server talks to it instead of the host's Docker socket directly. This keeps a compromised server from running privileged containers.
  3. agent-session-<card_id> — created on demand by the Stib server when a card spawns an agent. Lives only for the duration of the session, then is auto-removed (--rm).

Configuration

Create compose.yaml in your project directory:

yaml
services:
  docker-socket-proxy:
    image: tecnativa/docker-socket-proxy:0.3.0
    restart: unless-stopped
    privileged: true
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      # Allowlist: the Stib server only needs to manage containers + read images.
      CONTAINERS: 1
      IMAGES: 1
      POST: 1
      DELETE: 1
      ALLOW_START: 1
      ALLOW_STOP: 1
      # Block everything else (networks, volumes, exec, etc.)
      NETWORKS: 0
      VOLUMES: 0
      EXEC: 0
      ALLOW_RESTARTS: 0
    networks:
      - stib-internal

  stib:
    image: enixion/stib-server:latest
    container_name: stib
    restart: unless-stopped
    depends_on:
      - docker-socket-proxy
    ports:
      - "50505:50505"
    volumes:
      - stib-data:/app/data
      # Mount the projects root at the SAME path the host uses. The Stib server
      # passes this path to spawned agent containers, which expect it to resolve
      # to the same files. Adjust /Users/me/stib-projects to your actual root.
      - /Users/me/stib-projects:/Users/me/stib-projects:rw
    environment:
      RUST_LOG: info
      DOCKER_HOST: tcp://docker-socket-proxy:2375
      # Path the agent containers reach back to this server on. host-gateway
      # resolves to the host's Docker bridge IP automatically on Linux; macOS
      # / Windows Docker Desktop already resolve host.docker.internal natively.
      STIB_API_URL: http://host.docker.internal:50505
      # Pin the agent runtime image to match this server's version.
      # For a private registry: registry.example.com:5000/stib-agent-runtime:0.3.0
      STIB_AGENT_IMAGE: enixion/stib-agent-runtime:0.3.0
    extra_hosts:
      - "host.docker.internal:host-gateway"
    networks:
      - stib-internal
      - default
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:50505/api/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s

networks:
  stib-internal:
    internal: true  # The proxy is NOT exposed publicly.

volumes:
  stib-data:

Environment variables

| Variable | Required? | Description |
|---|---|---|
| DOCKER_HOST | Yes (when using the proxy) | Points the Stib server at the socket-proxy. Set to tcp://docker-socket-proxy:2375. |
| STIB_AGENT_IMAGE | Recommended | Image used for agent containers. Default: enixion/stib-agent-runtime:&lt;server-version&gt;. Pin a specific tag in CI. |
| STIB_API_URL | Recommended | URL the agent container uses to reach the host server. Default: http://host.docker.internal:50505. |
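For ad-hoc runs outside Compose, the same variables can be exported in the shell. The values below mirror the table above; the tag and API host are examples, so adjust them to your deployment:

```shell
# Same settings as the compose file, for running the server outside Compose.
# Pin STIB_AGENT_IMAGE to whatever tag matches your server version.
export DOCKER_HOST=tcp://docker-socket-proxy:2375
export STIB_AGENT_IMAGE=enixion/stib-agent-runtime:0.3.0
export STIB_API_URL=http://host.docker.internal:50505
```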

Pull the agent image

Sandboxed projects use a separate image that ships Claude Code + the Stib runner alongside Node.js / git / build tooling.

TIP

The agent image must match your server image's protocol version. Pin to the same tag as your server (e.g. :0.4.0 on both) in production.

From a private registry (beta / self-hosted)

If you host the image on your own registry (Portainer registry, Harbor, GitLab Container Registry, GHCR private, etc.):

bash
# 1. Authenticate against the registry (password read from stdin, so it stays
#    out of your shell history)
docker login registry.example.com:5000 -u <user> --password-stdin

# 2. Pull
docker pull registry.example.com:5000/stib-agent-runtime:0.3.0

# 3. Tell the Stib server to use it (in compose.yaml, see below)

Set STIB_AGENT_IMAGE on the stib service to point at the full reference:

yaml
services:
  stib:
    environment:
      STIB_AGENT_IMAGE: registry.example.com:5000/stib-agent-runtime:0.3.0

TLS notes:

  • If your registry uses a public/Let's Encrypt cert, no extra config is needed.
  • If it uses a self-signed cert or runs on plain HTTP, add it to the Docker daemon's insecure-registries list (/etc/docker/daemon.json):
    json
    { "insecure-registries": ["registry.example.com:5000"] }
    Then restart the Docker daemon.

WARNING

The Stib server container itself does NOT need to log in to the registry. The docker pull happens on the host's Docker daemon (via the socket-proxy). The image just needs to be either pulled in advance OR pullable by the daemon (daemon-level credentials in ~/.docker/config.json).

From Docker Hub (public releases)

bash
docker pull enixion/stib-agent-runtime:latest
yaml
services:
  stib:
    environment:
      # Default value — can be omitted.
      STIB_AGENT_IMAGE: enixion/stib-agent-runtime:0.3.0

Start the stack

bash
docker compose up -d

Verify everything is wired up:

bash
curl http://localhost:50505/api/health

Expected response (note "sandboxAvailable": true):

json
{
  "data": {
    "status": "ok",
    "version": "0.4.0",
    "deploymentMode": "Docker",
    "sandboxAvailable": true
  }
}

If sandboxAvailable is false, check:

  1. The stib container has DOCKER_HOST=tcp://docker-socket-proxy:2375 set.
  2. The docker-socket-proxy service is healthy (docker compose logs docker-socket-proxy).
  3. The two services are on the same network.
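For CI or scripted smoke tests, the same health probe can be checked mechanically. This is a sketch assuming curl and python3 are available; the endpoint path and payload shape come from the example above, while the helper name is ours:

```shell
# check_sandbox: exit 0 when a health payload on stdin reports
# sandboxAvailable=true, exit 1 otherwise.
check_sandbox() {
  python3 -c 'import json, sys; d = json.load(sys.stdin); sys.exit(0 if d.get("data", {}).get("sandboxAvailable") else 1)'
}

# Against a live server:
#   curl -fsS http://localhost:50505/api/health | check_sandbox && echo "sandbox ready"
```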

Enable sandboxing per project

  1. Open the Stib UI (http://localhost:50505).
  2. Create or open a project.
  3. Settings → General → Agent isolation → toggle ON.
  4. Optionally adjust memory / CPU limits.

The toggle is disabled when sandboxAvailable is false — there's no silent downgrade. If you flip it ON and later remove the Docker socket access, agent spawns will return HTTP 422 SANDBOX_UNAVAILABLE instead of running unsandboxed.

Compatibility

| Host | Sandbox supported? | Notes |
|---|---|---|
| Linux | Yes | Native Docker; recommended setup. |
| macOS Docker Desktop | Yes | Bind-mounts go through the Docker Desktop VM (slower I/O on large repos). |
| Windows + WSL2 | Yes | Run the Stib server inside WSL2; place projects in WSL2 (/home/$USER/stib-projects). |
| Windows native | No | Bind-mount paths must be POSIX-compatible — not possible without WSL2. |
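On WSL2 it is easy to accidentally keep projects on a Windows drive mount, where the 9p translation layer makes large-repo I/O slow. A quick sanity check (the helper name is ours; the /mnt prefix is WSL2's default for Windows drives):

```shell
# wsl_path_check: warn when a project root lives on a Windows drive mount.
wsl_path_check() {
  case "$1" in
    /mnt/*) echo "slow: Windows-mounted path, move the project into the Linux filesystem" ;;
    *)      echo "ok" ;;
  esac
}

# e.g. wsl_path_check /home/me/stib-projects  prints "ok"
```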

What is NOT isolated (V1 limits)

  • Network egress — agent containers have full outbound network access. The host's firewall, an external proxy, or --network=none (configured manually) must enforce egress allowlists if needed.
  • MCP servers running on the host — agents inside containers can't reach MCP servers defined on the host's ~/.claude/settings.json. Project-local MCP definitions (under .stib/mcps/, when supported) are accessible. (V2: proxy via the host server.)
  • Resource limit changes mid-session — limits are applied at container creation. Changing them in the UI takes effect on the next spawn.

OAuth refresh carry-over

When a sandboxed Claude session lasts more than ~1 hour and the access token expires, Claude inside the container refreshes against Anthropic. Because Anthropic rotates the refresh token on every refresh, the new token would be lost when the container exits — and the host's stored token would now be revoked.

To avoid this, every sandbox container starts a small credential watcher that polls ~/.stib-claude-profiles/<id>/.credentials.json every 30 seconds. On change, it POSTs the refreshed contents back to the host server's /api/internal/credential-refresh endpoint, which updates the source profile under the same per-credential mutex used by ensure_valid_token. The next session sees the freshest token, no re-login required.

The carry-over POST is authenticated by the per-spawn nonce (the same one the agent uses to acquire its scoped Stib API token). It only succeeds for oauth_profile credentials; api_key credentials never rotate.
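The watcher's behavior can be pictured as a 30-second poll loop. The sketch below is illustrative only (the real watcher ships inside the agent image, and its nonce authentication is omitted here); the endpoint path comes from this guide, while the function name and checksum approach are assumptions:

```shell
# poll_once: if the credentials file changed since last_sum, POST it back to the
# host server and print the new checksum; otherwise print last_sum unchanged.
poll_once() {
  cred_file=$1
  last_sum=$2
  sum=$(sha256sum "$cred_file" 2>/dev/null | cut -d' ' -f1)
  if [ -n "$sum" ] && [ "$sum" != "$last_sum" ]; then
    curl -fsS -X POST "$STIB_API_URL/api/internal/credential-refresh" \
      -H "Content-Type: application/json" \
      --data-binary @"$cred_file" >/dev/null 2>&1 || true  # a real watcher would retry
    echo "$sum"
  else
    echo "$last_sum"
  fi
}

# The loop itself:  while sleep 30; do last=$(poll_once "$CRED_FILE" "$last"); done
```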

Operational notes

Cleanup of orphan containers

The Stib server tags every sandbox container with its own server_uuid and removes orphans (from a crashed previous run) at startup. Containers from a sibling Stib instance on the same Docker daemon are left alone.

To inspect or clean up manually:

bash
docker ps -a --filter "label=stib.sandbox=true"
docker rm -f $(docker ps -aq --filter "label=stib.sandbox=true")

Updating

bash
docker compose pull
docker compose up -d

Update the agent image alongside the server (they MUST match versions):

bash
docker pull enixion/stib-agent-runtime:<new-version>
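Since the two tags must match, an update script can guard against drift before pulling. A minimal sketch, with the helper name and version values being illustrative:

```shell
# image_tag: extract the tag portion of an image reference.
# (Assumes the reference carries an explicit tag; a bare name has no colon to strip.)
image_tag() { echo "${1##*:}"; }

server_image=enixion/stib-server:0.4.0
agent_image=enixion/stib-agent-runtime:0.4.0

if [ "$(image_tag "$server_image")" = "$(image_tag "$agent_image")" ]; then
  echo "tags match"
else
  echo "tag mismatch: refusing to update" >&2
fi
```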

Logs

Each agent session's stdout/stderr is forwarded to the Stib server (visible in the card's conversation thread). For container-level logs (e.g. OOM events):

bash
docker logs $(docker ps -aq --filter "label=stib.card_id=<id>") --tail 100

Troubleshooting

"SANDBOX_UNAVAILABLE" when toggling the project setting

The server returns sandboxAvailable: false. Check:

  1. DOCKER_HOST env var is set on the stib service.
  2. docker-socket-proxy is reachable (docker compose exec stib curl http://docker-socket-proxy:2375/version).
  3. The proxy has CONTAINERS=1 (the most common omission).

Agent fails to start: "agent image not found"

Pull the image:

bash
docker pull enixion/stib-agent-runtime:latest

Or set STIB_AGENT_IMAGE to an image that exists locally.

File permissions inside the project look wrong

The agent container runs as the host user (--user=$(host_uid):$(host_gid)). On macOS Docker Desktop, the UID mapping may not match the macOS user 1:1 (known Docker Desktop limitation). Files created during a session should still be readable by your normal user; if not, run chown -R $USER:staff on the project directory.
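To check ownership without eyeballing ls output, a small helper can compare a file's owner UID against yours. This is a sketch assuming GNU stat (Linux); the helper name is ours:

```shell
# owner_matches: print "ok" when a file is owned by the current user.
# GNU stat assumed; on macOS use `stat -f %u` instead of `stat -c %u`.
owner_matches() {
  [ "$(stat -c %u "$1")" = "$(id -u)" ] && echo ok || echo mismatch
}

# e.g. owner_matches /path/to/project/some-new-file
```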

Agent can't reach the Stib API

Check the agent container's env:

bash
docker inspect <container_id> | jq '.[0].Config.Env' | grep STIB_API_URL

It should be http://host.docker.internal:50505. On Linux, the extra_hosts: ["host.docker.internal:host-gateway"] line in the compose file is required.

Manual test scenarios

After enabling sandbox on a test project, verify isolation:

  1. Cross-project filesystem access — ask the agent to read a file from a different project (e.g., /path/to/other-project/.env). Should fail with "no such file" (the other path is not bind-mounted).

  2. Database access — ask the agent to dump stib.db:

    sqlite3 /app/data/stib.db ".dump"

    Should fail (the data volume is not mounted in the agent container).

  3. Process isolation — ask the agent to list other agent processes:

    ps -ef | grep claude

    Should only see its own process (separate PID namespace).

  4. Resource limits — ask the agent to allocate a lot of memory. With the default 4 GB limit, allocations beyond that get OOM-killed.

  5. File ownership — ask the agent to create a file. On the host, run ls -la /path/to/project/<file>. Owner should be your host user, not root or a random UID.

  6. Cleanup on session end — close the card mid-conversation. The container should disappear within seconds (docker ps shows nothing labeled with that card_id).

  7. Cleanup on server restart — kill the Stib server while a session is running. Restart it. Check the orphan container is reaped:

    docker ps -a --filter "label=stib.sandbox=true"

Next steps