Deploy Clash Meta With Docker Compose: Full Steps and Image Updates
If you already run Docker on a VPS, home lab, or NAS, packaging Clash Meta (the open Mihomo core) in Docker Compose gives you repeatable installs, pinned versions, and clean upgrades without adopting a desktop client. This guide walks through image choice, a practical compose file, volume layout so subscriptions and provider caches survive restarts, network modes, and a disciplined update and rollback workflow—complementary to our bare-metal systemd tutorial, not a replacement for understanding YAML and rules.
Why Docker for headless Clash Meta
Bare-metal installs excel when you want the thinnest stack and tight integration with systemd. Containers add a different value: the runtime is identical everywhere your compose file lands, you can keep multiple stacks isolated on one host, and you can snapshot or copy a directory tree to another machine without chasing distribution packages. For operators who already standardize on Compose for monitoring, reverse proxies, or media stacks, running Clash Meta beside them avoids a parallel maintenance culture.
Docker does not remove the need for solid configuration. You still author config.yaml, still decide how proxy-providers refresh, and still expose only the ports you intend. What changes is where the binary lives (inside the image), how you upgrade it (pull a new tag, recreate the container), and how you persist state (bind mounts or named volumes). Treat the container as a packaging layer, not a substitute for network design.
- Reproducibility: a pinned image digest or semver tag documents exactly which core build you run.
- Isolation: fewer host packages; rollbacks swap images instead of downgrading system-wide binaries.
- Portability: the same compose file often moves from x86 VPS to ARM NAS with an image platform tweak.
Homelab and small-office operators also appreciate that a compose project travels well in git: you can store redacted compose files, document port choices, and keep environment-specific overrides in small compose.override.yml files without touching the shared base. That discipline matters when you clone the stack to a secondary VPS for failover testing or hand the setup to a colleague who needs the same topology on different hardware.
Finally, remember compliance and policy: running a proxy on infrastructure you administer still binds you to acceptable-use rules from your provider, applicable laws, and any contracts that cover that network. Containers make deployment easier; they do not change those obligations. Document who can change the compose file, who holds subscription credentials, and how you rotate secrets when people leave the team.
Choosing an image and pinning tags
Community-maintained images vary by update cadence, default user ID, bundled files, and whether they track Meta nightlies or stable channels. Regardless of vendor naming, prefer images that:
- Publish clear tags (v1.x.y, meta, latest explained in their README).
- Document the default command, config path, and volume expectations.
- Receive security updates when base images move.
Pin a specific tag in Compose for production. Floating :latest is convenient in a lab; in production it turns every docker compose pull into an unreviewed core upgrade. Many teams pin semver and schedule intentional upgrades. Optionally record the image digest after a successful deploy so you can return to that exact layer if a later tag misbehaves.
When comparing images, skim issue trackers for recurring themes: broken default permissions, unexpected architecture builds, or stale base images. An image that tracks Meta within hours of upstream releases helps if you need bleeding-edge rule features; a slower cadence may suit you if you prefer fewer moving parts. Either way, treat the Dockerfile or build pipeline as part of the supply chain—official-looking names are not a substitute for maintenance signals.
Verifying authenticity matters on public registries. Prefer signed images where available, pin by digest in security-sensitive environments, and avoid copying compose snippets from forums without reconciling them against the maintainer’s current README. A one-line image name swap can silently point you at an abandoned fork.
Practical default
Start from a well-documented Meta-focused image, pin image: registry/namespace/clash-meta:1.19.0 (example), and only advance the tag after reading upstream release notes for breaking changes in TUN, DNS, or API behavior.
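For example, a pinned service definition might start like the sketch below; the image name is the same placeholder used throughout this guide, and the digest line is a pattern to fill in after your own deploy, not a real value:

```yaml
services:
  clash-meta:
    # Semver tag for routine, reviewed upgrades.
    image: example/clash-meta:1.19.0
    # After a validated deploy, record the digest, e.g. with:
    #   docker inspect --format '{{index .RepoDigests 0}}' example/clash-meta:1.19.0
    # and, in security-sensitive environments, pin by digest instead:
    # image: example/clash-meta@sha256:<digest-recorded-after-deploy>
```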
Host directory layout
Before the first up, create a working directory on the host—for example /opt/clash-meta or a bind path your NAS UI maps to SSD storage. A minimal layout keeps secrets and large caches predictable:
- ./config/config.yaml — your main configuration file.
- ./data/ — optional: persistent provider downloads, rule-set caches, or GeoIP files if your YAML references local paths under that tree.
- ./secrets/ — optional: subscription URLs or TLS material injected read-only if you split secrets from the main YAML.
Many images expect the config at /root/.config/mihomo/config.yaml or /etc/mihomo/config.yaml. Read the image documentation once and map your host ./config to that exact path so file references inside YAML stay portable.
Separate immutable bits from high-churn bits on purpose. Keep hand-edited YAML under version control (private repo or encrypted backup), while treating downloaded provider payloads as disposable caches that can be rebuilt—except when your workflow snapshots them for air-gapped review. That split keeps diffs readable and prevents giant binary blobs from polluting git history.
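Creating that layout is easy to script so every host starts identical; a minimal sketch, assuming a POSIX shell and /opt/clash-meta as the default base path (pass your own as the first argument):

```shell
#!/bin/sh
# Create the host directory layout for a Clash Meta compose project.
# The default base path is an assumption; adjust for your VPS or NAS.
make_layout() {
  base="${1:-/opt/clash-meta}"
  mkdir -p "$base/config" "$base/data" "$base/secrets"
  # Keep secrets readable only by the deploying user.
  chmod 700 "$base/secrets"
  chmod 755 "$base/config" "$base/data"
  echo "$base"
}

# Example: make_layout /opt/clash-meta
```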
A practical Compose service
Below is a pattern you must adapt to your chosen image and paths. It uses bridge networking, publishes the mixed inbound port, restarts unless stopped, and sets a simple health probe if the image exposes an HTTP health or metrics endpoint—remove or adjust the healthcheck if your image does not ship one.
```yaml
services:
  clash-meta:
    image: example/clash-meta:1.19.0
    container_name: clash-meta
    restart: unless-stopped
    volumes:
      - ./config:/root/.config/mihomo
      - ./data:/data
    ports:
      - "7890:7890"
      - "9090:9090"
    environment:
      - TZ=UTC
    # healthcheck: optional; depends on image
    # healthcheck:
    #   test: ["CMD", "wget", "-qO-", "http://127.0.0.1:9090/version"]
    #   interval: 30s
    #   timeout: 5s
    #   retries: 3
```
Replace volume targets with the paths required by your image. If the container runs as non-root, align host folder permissions with the documented UID and GID to avoid silent write failures in ./data.
Add explicit logging options when your host uses default json-file logging and you expect verbose access logs—unbounded log growth on small SSD root partitions has surprised more than one VPS operator. A simple logging: { driver: "json-file", options: { max-size: "10m", max-file: "3" } } pattern keeps rotation predictable. For systemd-journald or remote log collectors, follow your platform’s Docker logging driver guidance.
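Expanded into standard compose syntax, that log-rotation pattern reads as follows; the sizes are starting points to tune, not mandates:

```yaml
services:
  clash-meta:
    logging:
      driver: "json-file"
      options:
        max-size: "10m"   # rotate once a log file reaches 10 MB
        max-file: "3"     # keep at most three rotated files
```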
Consider depends_on only when you genuinely orchestrate multiple services—many single-container Clash stacks do not need it. If you front the proxy with Traefik or Nginx for TLS termination, express that relationship explicitly so startup order and health checks align.
First boot sequence
- Place a known-good config.yaml under ./config.
- Run docker compose up -d and follow logs with docker compose logs -f.
- Verify listeners from another container or the host with curl through the mixed port.
- Only then tighten firewall rules and, if genuinely needed, expose the controller port beyond localhost.
Subscriptions, proxy-providers, and persistence
Clash Meta refreshes remote proxy-providers and rule providers on intervals you set in YAML. Without writable paths, those downloads disappear on every container recreate if they land only in ephemeral storage. Mount a host directory (for example ./data) and point path: in your providers to locations under that mount.
If your subscription URL contains a long-lived token, storing it only inside the container image is wrong—keep it in config.yaml on the host volume or inject it via Docker secrets and an entrypoint script, depending on your comfort level. For most homelab setups, a permission-restricted config.yaml on disk remains the straightforward approach; just exclude that directory from backups you share publicly.
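One pattern for that split keeps tokens in an env file the container loads at startup; whether the core or a custom entrypoint actually consumes these variables depends entirely on your image, so treat the variable name and mount point below as placeholders:

```yaml
services:
  clash-meta:
    volumes:
      - ./config:/root/.config/mihomo
      - ./secrets:/run/clash-secrets:ro   # hypothetical read-only mount
    env_file:
      - ./secrets/clash.env               # e.g. CLASH_SUB_URL=...; useful only
                                          # if your entrypoint templates it into YAML
```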
Align DNS settings with how the container resolves names. If you forward everything to public resolvers from inside the bridge network, behavior matches many VPS defaults. When you need the host’s resolver or a local AdGuard instance, pass dns: in Compose or attach the service to a custom Docker network where that resolver is reachable. Misaligned DNS is a frequent cause of “provider download failed” errors that look like bad nodes but are actually name resolution.
Schedule realism: ultra-short interval values on large subscription lists hammer your provider and waste CPU on TLS handshakes. Reasonable intervals—often tens of minutes to hours for stable residential setups—reduce churn and make logs easier to read. Pair intervals with sane health-check settings inside YAML when you use provider groups so dead nodes fall out of rotation without manual babysitting.
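A provider stanza that follows that advice might look like the sketch below; the subscription URL is a placeholder, the path sits under the mounted ./data volume, and intervals are in seconds:

```yaml
proxy-providers:
  my-sub:
    type: http
    url: "https://example.com/subscription"   # placeholder subscription URL
    path: /data/providers/my-sub.yaml          # persists across container recreates
    interval: 3600                             # refresh hourly, not every few minutes
    health-check:
      enable: true
      url: "https://www.gstatic.com/generate_204"
      interval: 600                            # drop dead nodes without babysitting
```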
When you mount a directory that holds both cached providers and GeoIP or rule-set files, snapshot backup size grows. Exclude volatile caches from nightly backups if you can rebuild them, or accept longer backup windows. For compliance-sensitive environments, the opposite may apply: retain provider snapshots with timestamps to show what rule data was active on a given day.
Networking: bridge, host, and LAN access
Bridge mode (default) maps ports with ports: and keeps the container on Docker’s NAT. It is the easiest model on cloud VPS instances and matches most tutorials. Host networking removes port mapping and lets Clash bind directly to host interfaces—useful when you need transparent proxy or TUN-adjacent setups that expect to see real host routes, but it couples the service tightly to one machine and complicates running multiple Clash instances.
For sharing the proxy with other devices on the LAN, you mirror the same ideas as bare metal: set allow-lan: true, bind to the correct address, and open the host firewall. In bridge mode, published ports listen on the host; your NAS or router must still allow TCP to that host from client subnets. See the LAN proxy guide for mixed-port and bind-address details—the YAML knobs do not change in Docker, only the surrounding network path does.
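The relevant YAML knobs are identical to bare metal; a sketch with example values:

```yaml
mixed-port: 7890
allow-lan: true
bind-address: "*"                        # or a specific LAN-facing address
external-controller: "127.0.0.1:9090"    # keep the API off the LAN unless you need it
```

In bridge mode, remember that these bind inside the container's namespace; the ports: mappings and the host firewall decide what LAN clients can actually reach.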
IPv6 deserves an explicit decision. If your VPS or NAS advertises IPv6 globally but your YAML assumes IPv4-only listeners, clients may half-connect or fall back oddly. Either configure dual-stack listeners and matching firewall rules or disable IPv6 consistently at the host and document why. Mixed stacks without intention are a rich source of “works on Wi‑Fi, fails on cellular” reports.
Advanced sites sometimes attach Clash to a macvlan or ipvlan network so the container receives a real LAN IP. That pattern can simplify per-device routing on complex VLANs but raises operational overhead—only pursue it when bridge port publishing genuinely blocks your topology.
| Mode | Strengths | Trade-offs |
|---|---|---|
| Bridge + port publish | Predictable isolation, works on most hosts | Extra NAT layer; map every needed port explicitly |
| Host network | Lower surprise for some advanced transparent setups | Port conflicts; harder multi-instance layout |
external-controller and secrets
The REST API on external-controller is powerful: it can switch profiles, reload configs, and expose runtime state. Never publish 9090 to the public Internet without authentication, reverse-proxy TLS, and IP allowlists. Prefer binding the controller to 127.0.0.1 inside the container and reaching it through an SSH tunnel or a private admin VPN when you manage a remote VPS.
If you must expose the API to the LAN for a dashboard container, place both services on the same user-defined Docker network, keep the controller off the WAN-facing edge, and set a strong secret in YAML. Rotate that secret when you rotate compose files. These practices mirror the lockdown advice in the headless Linux guide; Docker adds network namespaces but not automatic safety.
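A sketch of that layout, with a placeholder dashboard image, keeps the controller unpublished on a user-defined network and binds the UI to localhost only:

```yaml
networks:
  clash-admin: {}

services:
  clash-meta:
    image: example/clash-meta:1.19.0
    networks: [clash-admin]
    # controller port stays unpublished; the dashboard reaches it
    # by service name (clash-meta:9090) over the private network
  dashboard:
    image: example/clash-dashboard:latest   # placeholder dashboard image
    networks: [clash-admin]
    ports:
      - "127.0.0.1:8080:80"                 # UI reachable from the host only
```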
Putting Caddy, Nginx, or Traefik in front of the controller for HTTPS is reasonable on internal networks when you terminate TLS and enforce client certificates or OAuth at the edge. Do not confuse “TLS on the controller” with “safe on the public Internet”—your threat model still includes credential stuffing and application-layer abuse if the port is reachable worldwide.
For secrets rotation, treat the compose file like application config: store subscription URLs in a secrets manager when possible, render a temporary env file during deploy, and avoid echoing secrets into shell history. Docker Secrets work smoothly on Swarm; on plain Compose, many teams use systemd drop-ins or small wrapper scripts that export variables for a single up invocation.
Public VPS checklist
Assume scanners hit every open port. Firewall allowlists, fail2ban, or cloud security groups should default-deny inbound proxy and controller ports unless you have a documented reason to expose them.
Image updates, rollbacks, and verification
Upgrade deliberately: read upstream release notes, adjust your pinned tag, pull, and recreate. A typical sequence:
```shell
docker compose pull
docker compose up -d
docker compose ps
docker compose logs --tail=100 clash-meta
```
If a new tag breaks DNS, TUN, or provider parsing, roll back by reverting the tag in Compose to the last known-good value, then run up -d again. Because configuration lives on the host volume, rolling the image back does not erase your YAML unless you changed it in tandem—another reason to separate config edits from image bumps.
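The revert itself can be scripted as a tag swap in the compose file; a sketch, assuming a single image: line whose only colon separates name from tag (registries with a port need a tweak):

```shell
#!/bin/sh
# Revert the pinned image tag in a compose file, then recreate by hand.
# Assumes one "image:" line and no port in the registry host.
pin_tag() {
  compose_file="$1"
  new_tag="$2"
  # Replace whatever tag follows the image name with the known-good one;
  # a .bak copy of the file is kept next to it.
  sed -i.bak -E "s|(image: [^:@]+):[^[:space:]]+|\1:${new_tag}|" "$compose_file"
}

# Example:
# pin_tag compose.yml 1.18.9
# docker compose up -d
```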
After each upgrade, run a short validation: HTTP through the mixed port, DNS resolution for a test domain, and a provider refresh cycle. If symptoms resemble “connected but no traffic,” revisit DNS and fake-ip troubleshooting before blaming the container layer.
Keep a short internal changelog: image tag from → to, date, operator, and one-line symptom check. When something regresses three months later, that row saves hours of diff archaeology. Pair it with exported docker inspect output stored alongside if you need to prove which capabilities and mounts were active during an incident review.
Blue-green style swaps are possible on a single host: run a second compose project on different ports, validate, then flip clients or reverse-proxy upstreams. The extra complexity pays off when downtime must stay near zero or when you test a new Meta major on production-like traffic without touching the primary container.
Day-two operations: logs, reloads, and backups
Once the container runs, most day-to-day work happens through logs and the controller API. Tail logs with docker compose logs -f when debugging handshake failures; grep for provider errors separately from connection errors so you do not chase the wrong subsystem. When you enable debug-level logging temporarily, remember to revert verbosity—verbose logs on busy hosts fill disks and obscure signal.
Configuration reload patterns differ slightly from bare metal. If your image ships mihomo with API reload support, you can POST to the appropriate endpoint after editing config.yaml on the host bind mount—often without restarting the whole container. Verify against your image docs because entrypoints sometimes symlink configs or preprocess files on boot.
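If your build exposes the upstream REST reload route (PUT /configs), a small helper might look like the sketch below; the host, port, and secret are assumptions, and as the paragraph above notes, you should verify the endpoint against your image's docs before relying on it:

```shell
#!/bin/sh
# Ask a running Clash Meta core to reload its config via the
# external controller. Placeholder defaults; adjust to your setup.
reload_config() {
  host="${1:-127.0.0.1:9090}"
  secret="$2"
  curl -s -X PUT "http://${host}/configs?force=true" \
    -H "Authorization: Bearer ${secret}" \
    -H "Content-Type: application/json" \
    -d '{"path": "", "payload": ""}'
}

# Example, after editing ./config/config.yaml on the host bind mount:
# reload_config 127.0.0.1:9090 "$CLASH_SECRET"
```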
Back up the small stuff that matters: config.yaml, any custom rule snippets, and documentation of which image tag you run. Provider caches can usually be regenerated, but a broken subscription URL without an offline copy turns a simple restore into an outage. Encrypt backups that contain subscription tokens and test restores quarterly—an untested backup is a wish, not a plan.
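A minimal backup helper along those lines, with encryption left as a commented step; the paths are placeholders:

```shell
#!/bin/sh
# Archive only the small, hand-edited state; provider caches are
# rebuildable and deliberately excluded by backing up ./config alone.
backup_config() {
  src="$1"     # e.g. /opt/clash-meta/config
  dest="$2"    # e.g. /backups
  stamp="$(date +%Y%m%d-%H%M%S)"
  tar czf "${dest}/clash-config-${stamp}.tar.gz" -C "$src" .
  # Encrypt before the archive leaves the host if it holds tokens, e.g.:
  # gpg --symmetric "${dest}/clash-config-${stamp}.tar.gz"
  echo "${dest}/clash-config-${stamp}.tar.gz"
}
```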
NAS and low-power hardware notes
Synology, QNAP, and similar platforms often wrap Compose in a GUI. The same principles apply: place bind mounts on fast storage, avoid spinning disks for high-churn cache directories if your rule providers update frequently, and watch CPU scheduling when TLS-heavy rule sets download on short intervals. ARM devices should use images built for linux/arm64; set platform: linux/amd64 only when you accept emulation overhead.
When the NAS already provides system-level VPN or firewall scripts, ensure they do not double-NAT or hijack DNS in ways that confuse Clash. A single coherent DNS story—whether fake-ip or redir-host—saves hours of log reading.
Container Station and similar UIs sometimes recreate networks during updates. If your compose project references a custom network name, pin that network in documentation and verify after NAS firmware upgrades—otherwise the service may come up on a default bridge with different NAT behavior. Thermal throttling on fanless boxes can also stretch TLS handshakes during mass rule downloads; stagger provider intervals after cold boot.
Troubleshooting common Compose failures
Container exits immediately: read docker compose logs first. Mis-mounted config paths, unreadable config.yaml, or wrong architecture images produce fast crashes with clear errors if you look before restarting in a loop.
Port already allocated: another service grabbed 7890 or 9090 on the host. Use ss -lntp or your OS equivalent, change the left side of the port mapping, or stop the conflicting stack. Host-network mode clashes are especially common on all-in-one media servers.
Works from host, not from other containers: check Docker networks. Containers on different user-defined networks need explicit attachment or published ports to talk to each other; localhost inside one container is not localhost in another.
Permission denied on volume: UID or GID mismatch. Align ownership on the host path or pass user namespace settings consistent with the image README. SELinux and AppArmor profiles on hardened hosts may also block bind mounts until you apply the right context labels.
FAQ
Should I use Docker or systemd on Linux?
Use systemd when you want minimal moving parts and direct binary control. Use Docker when you already operate compose stacks, need identical deployments across machines, or want image-level versioning. Many advanced users run both patterns in different environments.
Does TUN mode work in Docker?
It can, but it requires elevated capabilities, correct device nodes, and often host networking or tailored compose settings. Many server deployments stick to HTTP/SOCKS inbound without TUN inside the container. Evaluate whether you truly need kernel TUN in-container or can terminate TUN on the host with a different tool chain.
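If you do attempt TUN in-container, the usual compose ingredients are a sketch like this; exact requirements vary by image and kernel, so treat it as a starting point rather than a recipe:

```yaml
services:
  clash-meta:
    cap_add:
      - NET_ADMIN                     # required to manage the TUN interface
    devices:
      - /dev/net/tun:/dev/net/tun     # expose the host TUN device node
    # Many setups also need host networking and routing rules on the host:
    # network_mode: host
```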
Provider files never update on disk
Check volume mounts, path inside YAML, and UID or GID mapping. A read-only mount or wrong ownership is the usual cause.
Can I run two Clash containers on one host?
Yes, with different compose projects, distinct config directories, and non-overlapping host port mappings. Host networking generally prevents two instances from binding the same inbound ports—use bridge mode with explicit maps or separate IPs via advanced networking.
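A second project only needs its own directory and shifted host ports; a sketch with the same placeholder image, run from a separate path such as /opt/clash-meta-b:

```yaml
# compose.yml for a second, independent project
services:
  clash-meta:
    image: example/clash-meta:1.19.0
    volumes:
      - ./config:/root/.config/mihomo   # this project's own config tree
    ports:
      - "7891:7890"   # shifted host ports so both instances coexist
      - "9091:9090"
```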
Do I need CPU or memory limits?
Meta is usually light, but rule providers and GeoIP parsing can spike RAM during refresh. Set conservative deploy.resources limits if you co-locate with memory-sensitive databases; otherwise defaults are fine on dedicated small VPS plans.
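If you do set limits, a conservative fragment might look like this; the values are guesses to tune against your own refresh spikes, and honoring deploy.resources outside Swarm depends on your Compose version:

```yaml
services:
  clash-meta:
    deploy:
      resources:
        limits:
          memory: 256M    # headroom for provider refresh and GeoIP parsing
          cpus: "0.50"
```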
Should Watchtower auto-update this container?
Automatic image pulls can surprise you with breaking core changes. If you use Watchtower, scope it carefully or pin digests, and prefer manual upgrades for production proxy stacks where downtime is visible to users.
Deployment checklist
- Pin image tag; document digest after successful deploy.
- Map config and data volumes; verify writes inside ./data.
- Publish only required ports; keep controller private or well authenticated.
- Align DNS with provider downloads and your rule design.
- Test upgrade on a staging host or snapshot before production pull.
Get a polished desktop client too
Docker covers servers; laptops and phones still benefit from maintained clients with UI affordances. Pick the workflow that matches each device class.
Compose once, deploy anywhere
Pin Clash Meta images, persist configs and providers, and upgrade on your schedule.