Why I Started Looking Past Docker
The licensing change hit us on a Thursday afternoon: Docker Inc. quietly updated their terms, and suddenly Docker Desktop required a paid subscription for companies with more than 250 employees or $10M in revenue. We were squarely in that bracket. The per-seat cost wasn’t catastrophic, but it was enough to make engineering leadership ask: “what are we actually getting for this, and are there free alternatives we should have evaluated years ago?” That question kicked off three months of real testing on real workloads, not toy projects.
Honestly, the licensing thing was the push we needed, not the actual reason we left. The root daemon issue had been sitting in our threat model for a long time. Docker’s daemon runs as root by default, and every container you spin up is one misconfigured mount or CVE away from a full host compromise. Our security team flagged it repeatedly. We’d nod, say “we’ll look into rootless mode,” and then ship features instead. The licensing change gave us political cover to finally act on the security debt we’d been carrying.
Fair framing before we go further: Docker is still a completely reasonable choice for a lot of situations. Solo devs, small teams under the licensing threshold, teams where everyone is already fluent in Docker Compose; the switching cost is real and the benefits aren’t always worth it. Docker’s tooling is mature, StackOverflow has answers for every error message, and the ecosystem integrations (GitHub Actions, CI providers, most cloud platforms) assume Docker by default. I’m not here to tell you Docker is broken. I’m here to tell you it stopped being the only reasonable option.
What I actually tested: Podman 4.x running rootless on RHEL 9 and Ubuntu 22.04, containerd with nerdctl as a drop-in CLI replacement, and Lima for macOS where Docker Desktop was the most painful. All three ran against our actual microservices stack: a Go API, a Python ML inference service, a Postgres 16 sidecar pattern, and multi-stage builds that take 8-12 minutes on cold cache. Not hello-world. Not a single-service tutorial. Real compose files with volumes, networks, build args, and secrets.
- Podman 4.x: rootless by default, daemonless architecture, Docker CLI-compatible enough that most muscle memory transfers over
- containerd + nerdctl: the runtime that Kubernetes actually uses under the hood, with a CLI that mimics Docker’s surface area surprisingly well
- Lima: a macOS VM manager that made the Docker Desktop replacement story on Apple Silicon actually usable
The thing that caught me off guard across all three: compatibility is almost never the problem. The real friction is in the edges: compose spec gaps, volume permission behavior differences under rootless, and the fact that your team’s mental model of “how containers work” is deeply Docker-shaped. Swapping the binary is the easy part. Retraining the instinct to docker exec or check docker stats takes longer than you’d expect.
Quick Comparison Table Before You Read Further
Side-by-Side Comparison: The Honest Version
Before reading ten pages of nuance, most people just need a table. Here it is; I’ve kept the columns to things that actually matter when you’re choosing a daily driver, not marketing checkboxes. One warning: Docker Desktop’s pricing tiers change more often than I’d like, so verify the current thresholds at docker.com/pricing before making a business decision. Personal use and small businesses (under 250 employees AND under $10M revenue as of writing) are free, but confirm that yourself.
| Feature | Docker Desktop | Podman Desktop | nerdctl + containerd | Lima |
| --- | --- | --- | --- | --- |
| Daemonless | No | Yes | No (containerd) | N/A (VM layer) |
| Rootless by default | No | Yes | Optional | Yes |
| Docker CLI compat | Native | High (aliases) | High (drop-in) | Depends on guest |
| Compose support | Yes (built-in) | Yes (podman-compose) | Yes (nerdctl compose) | Via guest tool |
| Kubernetes | Yes (built-in) | Yes (podman play) | No native / use k3s | No native |
| macOS support | Yes (GUI) | Yes (GUI) | Yes (manual setup) | Yes (primary use) |
| Windows support | Yes (GUI) | Yes (preview) | Partial (WSL2 only) | No |
| License | Proprietary | Apache 2.0 | Apache 2.0 | Apache 2.0 |
| Dealbreaker | Paid for teams | No Windows GUI | No GUI, steep setup | macOS only, raw |
The “dealbreaker” row is doing a lot of work here, so let me explain each one. Docker Desktop’s licensing is genuinely fine for personal use, but the moment your company crosses the revenue or headcount threshold, you’re looking at $21/user/month (Pro) or $35/user/month (Business), which adds up fast across an engineering team. Podman Desktop is solid on macOS and Linux but the Windows GUI is still marked as a technology preview; I wouldn’t hand it to a Windows-heavy team and expect a smooth onboarding. nerdctl with containerd gives you the most faithful Docker CLI behavior outside of Docker itself, but there’s zero GUI; if your team includes people who aren’t comfortable editing CNI configs by hand, expect support tickets. Lima is almost exclusively a macOS tool for running Linux VMs with containerd; it’s powerful and free, but nobody should be setting it up on Windows.
Reading the Compose Support Row Correctly
Every tool in this table claims Compose support, but they’re not all equal. Docker Desktop ships with the docker compose plugin and it just works. Podman’s compose story used to require podman-compose (a community Python reimplementation), but newer Podman 4.x+ versions have native podman compose support that delegates to docker-compose v2 under the hood if it’s installed. nerdctl ships its own nerdctl compose that covers maybe 80% of common Compose syntax; if your docker-compose.yml uses obscure options like profiles or extends, test it before committing. Lima’s Compose support depends entirely on what you’ve installed in the guest VM.
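If you want a quick read on how far your own file strays from the portable subset, a crude but effective check is to run the identical file through each tool before migrating anything. Treat this as a sketch, with docker-compose.yml standing in for your real file:
# docker compose config is the reference parse; the up/down round-trips
# surface unsupported keys and networking differences early
docker compose -f docker-compose.yml config > /dev/null && echo "docker: parses cleanly"
podman compose -f docker-compose.yml up -d && podman compose -f docker-compose.yml down
nerdctl compose -f docker-compose.yml up -d && nerdctl compose -f docker-compose.yml down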
Kubernetes Integration Is Not All the Same Either
Docker Desktop’s Kubernetes is a single-click toggle that spins up a real single-node cluster using kubeadm. It’s slow to start and occasionally gets into broken states that require a full reset, but it’s dead simple. Podman’s equivalent is podman play kube, which reads Kubernetes YAML and runs pods, but that’s not a real cluster; it’s closer to a Kubernetes-flavored runtime. If you need actual kubectl with API server, RBAC, and CRDs, Podman’s built-in option isn’t it. For nerdctl users who need local Kubernetes, the practical path is running k3s alongside containerd, which nerdctl already talks to natively since both use containerd as the runtime.
# nerdctl talking to k3s's containerd socket directly
export CONTAINERD_ADDRESS=/run/k3s/containerd/containerd.sock
export CONTAINERD_NAMESPACE=k8s.io
nerdctl --namespace k8s.io images # lists images k3s actually sees
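On the Podman side, the equivalent motion for existing Kubernetes YAML looks like this; deployment.yaml is a placeholder name, and remember there is no API server, RBAC, or CRDs behind it:
# Run existing Kubernetes YAML as local Podman pods
podman play kube deployment.yaml
# Tear the same workload back down
podman play kube --down deployment.yaml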
The rootless column matters more than it sounds. Running rootless by default means container breakouts don’t give an attacker root on the host, which is a non-trivial security boundary in shared environments or CI machines. Podman is the only tool here that ships rootless as the default with no configuration needed. Docker Desktop on Mac runs in a VM so the risk is partially mitigated anyway, but on Linux where people sometimes install Docker Engine directly, rootful Docker is still the path of least resistance. That gap is real and worth factoring in if you’re in a regulated environment.
Podman: The Drop-In Replacement That Almost Works
The thing that surprised me most about Podman wasn’t the rootless default; it was that alias docker=podman genuinely holds up for most daily work. Pull images, build from Dockerfiles, run containers, manage volumes. I ran that alias for two weeks before hitting anything that broke. Then I hit three things in the same afternoon.
Installation story varies wildly by OS. On Fedora or RHEL, it’s a single command:
# Fedora / RHEL 8+
sudo dnf install podman
# Verify: note it ships rootless by default
podman info | grep -E "rootless|cgroupVersion"
On macOS, you’re not getting a native binary; Podman spins up a lightweight QEMU VM under the hood. The setup is painless but adds ~30 seconds to your first start:
# macOS via Homebrew
brew install podman
podman machine init # creates the VM (~900MB)
podman machine start # boots it, sets up socket
# Point your DOCKER_HOST at Podman's socket if needed
export DOCKER_HOST="unix://${HOME}/.local/share/containers/podman/machine/qemu/podman.sock"
That DOCKER_HOST export matters if you have tooling that reads the socket directly instead of shelling out to docker CLI. Testcontainers, for example, needs this set explicitly or it falls back to Docker Desktop detection and fails.
The 10% Where the Alias Breaks
Here’s where the alias falls apart in practice. First, Docker Compose networking: Podman Compose exists but treats each service as an independent container rather than putting them on a shared bridge by default. Service discovery by container name (db:5432 from your app container) works in Docker Compose out of the box but silently fails in some Podman Compose versions unless you explicitly configure a pod or set --network. Second, BuildKit features: anything using --mount=type=cache or advanced BuildKit syntax needs Buildah under the hood, which Podman calls automatically for basic builds but not always for cache mount syntax on older Podman versions (pre-4.0). Third, multi-platform builds via docker buildx: Podman has podman build --platform, but the emulation layer and manifest handling work differently enough that CI pipelines using buildx-specific flags will choke.
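If cache mounts matter to you, a two-minute probe on the exact Podman version you plan to standardize on saves guessing; the file name and base image here are arbitrary:
# Minimal probe for BuildKit-style cache mounts under Podman/Buildah;
# pre-4.0 releases handled this syntax inconsistently
cat > Dockerfile.cachetest <<'EOF'
FROM golang:1.22
RUN --mount=type=cache,target=/go/pkg/mod go env GOMODCACHE
EOF
podman build -f Dockerfile.cachetest -t cache-probe .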
Rootless in Practice: Bind Mounts and Port Binding
Rootless sounds great until you try to bind-mount a host directory and the container can’t write to it. What’s happening: Podman maps your UID into a user namespace, so inside the container you might appear as UID 0, but on the host you’re still your regular user. If the directory is owned by root on the host, writes fail. Fix the SELinux side with :Z relabeling, or run as your actual UID so ownership lines up:
# Relabel for SELinux β works on Fedora/RHEL
podman run -v /host/data:/data:Z myimage
# Or run as your actual UID inside the container
podman run --user $(id -u):$(id -g) -v /host/data:/data myimage
Port binding under 1024 is the other rootless wall. You can’t bind port 80 or 443 as a non-root user by default. The practical fix isn’t running as root; it’s either using ports 1024 and above and fronting with a reverse proxy, or tweaking the kernel parameter:
# Lower the unprivileged port minimum (persists across reboots)
echo "net.ipv4.ip_unprivileged_port_start=80" | sudo tee /etc/sysctl.d/99-podman-ports.conf
sudo sysctl --system
The crun Mount Error
The specific error I hit, Error: crun: mount `/proc/sys`: permission denied, shows up when a container tries to write to /proc/sys from a rootless context. It bit me running a Redis container that was trying to set vm.overcommit_memory on startup. The container was logging a warning and then dying.
# The error looks like this in podman logs:
# ERRO[0001] crun: mount `/proc/sys`: Operation not permitted: OCI permission denied
# Fix: add the specific sysctls you need as container parameters
podman run --sysctl net.core.somaxconn=1024 redis:7
# Or suppress the check entirely if the app handles the failure gracefully
podman run -e REDIS_DISABLE_WARNINGS=yes redis:7
The root cause is that rootless containers can’t modify host kernel parameters; they’re not privileged enough to touch /proc/sys. If the app requires those settings to actually be applied (not just attempted), you either need to set them on the host before running the container, or run with --privileged, which defeats the rootless security model anyway.
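When the setting genuinely has to take effect, the host-side fix mirrors the port-range tweak above; for the Redis overcommit case that bit me, something like this (the file name is just a convention):
# Set the kernel parameter on the host, where rootless containers can't reach
echo "vm.overcommit_memory = 1" | sudo tee /etc/sysctl.d/99-redis-overcommit.conf
sudo sysctl --system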
Podman Pods and the Kubernetes Workflow
This is where Podman earns genuine respect. A Podman pod groups containers with a shared network namespace, exactly what Kubernetes pods do. I use this when I’m prototyping multi-container workloads that will eventually land on K8s:
# Create a pod, add containers to it
podman pod create --name myapp -p 8080:80
podman run -d --pod myapp --name app nginx:alpine
podman run -d --pod myapp --name sidecar busybox sh -c "while true; do sleep 10; done"
# Generate production-ready Kubernetes YAML from the running pod
podman generate kube myapp > myapp-deployment.yaml
The generated YAML isn’t always clean enough to ship without edits (resource limits, liveness probes, and persistent volume claims don’t transfer automatically), but it’s a massive head start compared to writing K8s manifests from scratch. The networking model inside a pod (localhost between containers) matches what you’d get on Kubernetes exactly, so you catch config issues locally rather than in a staging cluster.
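One habit that catches most of those gaps before they reach staging: pipe the generated manifest straight into a dry-run apply. This assumes kubectl is already pointed at a dev cluster you’re allowed to poke:
# Validate the generated manifest against a real API server without creating anything
podman generate kube myapp | kubectl apply --dry-run=server -f -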
Podman wins clearly in three situations: rootless CI environments where you want to run containers without giving the CI runner elevated privileges; RHEL and Fedora shops where it’s the system default and has full enterprise support from Red Hat; and teams on the path to Kubernetes who want the pod abstraction locally without running a full K8s cluster. For pure Docker Compose workflows with complex multi-service networking, the friction is real enough that I’d stay on Docker or move to Compose-native tooling instead.
containerd + nerdctl: The Kubernetes-Native Path
The thing that caught me off guard when I first looked at this setup: containerd isn’t something you install to try out an alternative; it’s already running on your Kubernetes nodes right now. Every kubeadm cluster, every EKS worker node, every GKE node pool since Kubernetes 1.24 dropped dockershim has containerd as the CRI. So when you run nerdctl locally, you’re not adopting a new runtime. You’re just removing the wrapper you were using to talk to the one that was already there.
Getting nerdctl running on a Mac is a two-step thing because containerd doesn’t run natively on Darwin; you need a Linux VM. Lima is the least painful path:
# On macOS β Lima gives you a lightweight Linux VM
brew install lima
# Start the default VM; Lima's default template ships containerd + nerdctl (not Docker)
limactl start default
# Shell into the VM
limactl shell default
# Now you're running inside Linux with containerd available
nerdctl run -it --rm alpine sh
Lima’s default template pre-configures the containerd socket and namespaces so you don’t have to fiddle with /etc/containerd/config.toml manually. Once you’re in, the CLI feels deliberately familiar: nerdctl build -t myapp ., nerdctl compose up -d, and nerdctl ps all work. The compatibility is high enough that I’ve swapped it into shell scripts without changing a single line. The gap shows up in the plugin ecosystem: Docker Desktop has volume plugins, credential helpers, and extensions built up over years. With nerdctl, you’re on your own for anything beyond the basics.
The snapshotter story is where you actually get tangible performance wins over Docker. By default, nerdctl uses overlayfs, same as Docker, no difference there. But flip to stargz (Seekable tar.gz) and you get lazy-pulling: the container starts running while the image is still downloading, pulling only the layers it actually needs at that moment. For a CI job that pulls a 4GB ML image on every run, this is not a minor tweak; I’ve seen cold-start times drop significantly because the job starts executing before the full image lands. Enable it like this:
# /etc/containerd/config.toml (on the Lima VM or your Linux host)
# Requires the containerd-stargz-grpc snapshotter daemon to be installed
# and running; restart containerd after editing the config
[proxy_plugins]
  [proxy_plugins.stargz]
    type = "snapshot"
    address = "/run/containerd-stargz-grpc/containerd-stargz-grpc.sock"

# Then run with the stargz snapshotter explicitly
nerdctl run --snapshotter=stargz \
  ghcr.io/stargz-containers/alpine:3.15-esgz \
  sh
The namespace concept is the thing that trips people up most. Docker has one implicit namespace: your containers and images all live in the same flat space. containerd namespaces are explicit, and they matter. Kubernetes uses the k8s.io namespace. If you run nerdctl images and see nothing, but you know images exist on the node, that’s what’s happening; run nerdctl -n k8s.io images instead. Your default nerdctl namespace is just called default. Think of it like separate Docker contexts that don’t share image caches. It’s a clean design, but if you’re used to Docker’s one-big-pool model, the first time you can’t find an image you definitely just pulled, it’s confusing.
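A quick way to make that concrete, assuming you’re on a node or VM where both namespaces exist:
# List the containerd namespaces present on this host
nerdctl namespace ls
# The same containerd instance, two different views of its images
nerdctl --namespace default images
nerdctl --namespace k8s.io images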
The real friction hit me with Testcontainers. By default, Testcontainers looks for a Docker socket at /var/run/docker.sock. With containerd, that socket doesn’t exist. Your tests fail immediately and the error message isn’t particularly helpful. The fix is either running a Docker socket shim (dockerd-rootless or nerdctl serve in experimental mode), or setting the TESTCONTAINERS_DOCKER_SOCKET_OVERRIDE and DOCKER_HOST environment variables to point at the containerd socket via a compatibility layer. It’s solvable, but it’s a half-day of yak-shaving you should budget for if your test suite depends on it.
I’d push nerdctl specifically when your team is already operating Kubernetes and debugging prod issues that involve container runtime behavior: layer caching mismatches, OCI spec edge cases, image pull policies. Running the exact same runtime locally as what’s in your cluster closes an entire class of “works on my machine” issues that come from Docker’s layer handling differing subtly from containerd’s. If you’re a solo developer building a web app, the friction isn’t worth it. If you’re maintaining a platform team and your devs are debugging why a container behaves differently in the cluster than locally, this is the right move.
Lima: The macOS-Specific Option Worth Knowing
The thing that surprised me about Lima is how little setup it actually needs. Most VM-based container tools require you to manage the VM lifecycle manually, configure networking, set up mounts. Lima does all of that automatically; you run limactl start and within a minute or two you have a Linux VM with bi-directional file sharing and port forwarding already configured. I kept waiting for something to break. It mostly didn’t.
# Install and spin up the default VM (uses containerd, not Docker)
brew install lima
limactl start
# Shell into the VM
lima
# If you want Docker specifically, use the Docker template
limactl start --name=docker template://docker
# Point your local Docker CLI at Lima's socket
export DOCKER_HOST=unix://$HOME/.lima/docker/sock/docker.sock
# Verify it works
docker ps
That DOCKER_HOST export is the key step most tutorials gloss over. Your local docker binary doesn’t know about Lima’s socket by default. I’d add that export to your .zshrc so it persists. Once it’s set, every Docker command you already know works as-is: no aliasing, no wrapper scripts, no shim. The Docker CLI just talks to Lima’s daemon.
The VirtioFS vs SSHFS question matters more than you’d think. Lima’s default file sharing uses SSHFS, which is noticeably slow for workloads that do heavy file I/O: think node_modules, large build caches, anything touching hundreds of small files. VirtioFS cuts that overhead significantly, but it only works on macOS 13 Ventura and up. To enable it, edit your Lima config:
# ~/.lima/docker/lima.yaml (or generate a new config with these settings)
vmType: "vz"          # virtiofs needs the Apple Virtualization backend (macOS 13+)
mountType: "virtiofs"
# Also make sure you're on a recent Lima version; virtiofs support
# improved substantially in Lima 0.18+
limactl --version
Even with VirtioFS on, Lima’s mount performance still doesn’t quite match Docker Desktop’s implementation. Docker Desktop has tighter integration with the Apple Virtualization framework and the macOS I/O stack. I ran a simple benchmark mounting a React project and running npm install inside the container; Docker Desktop finished a few seconds faster with warm cache. Not a dealbreaker for CI-style workflows, but if you’re doing active development with hot-reloading against mounted volumes, you’ll feel it on bigger projects.
Here’s my honest take: Lima is the best free Docker Desktop replacement on macOS if you live in the terminal. The free part is real: no license nags, no subscription tiers, no “personal use only” asterisks. It runs clean, uses fewer background resources than Docker Desktop, and integrates with nerdctl natively if you want to use containerd directly instead of Docker. The tradeoff is purely UX. There’s no GUI for inspecting containers, no dashboard for resource usage, no drag-and-drop volume management. If your team includes people who rely on Docker Desktop’s visual interface, Lima isn’t a drop-in for them.
- Use Lima when: you’re a solo developer or team of terminal-comfortable engineers, you need a zero-cost alternative after Docker Desktop’s licensing changes, or you’re on Apple Silicon and want tight ARM64 support without paying for Orbstack.
- Skip Lima when: your workflow depends on volume-mounted hot-reload with sub-second latency, you need a GUI that non-engineers can operate, or you’re running Windows (Lima is macOS-only, full stop).
- Watch out for: Lima VMs don’t auto-start after reboot. You need to run limactl start docker each time, or set up a launchd service yourself, which Lima doesn’t configure for you out of the box; a minimal sketch follows below.
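If you want the auto-start anyway, a LaunchAgent does the job. This is a sketch, not something Lima ships: the label, plist filename, and Homebrew binary path are assumptions to adapt to your machine:
# Minimal LaunchAgent that starts the Lima "docker" instance at login
cat > ~/Library/LaunchAgents/local.lima.docker.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key><string>local.lima.docker</string>
  <key>ProgramArguments</key>
  <array>
    <string>/opt/homebrew/bin/limactl</string>
    <string>start</string>
    <string>docker</string>
  </array>
  <key>RunAtLoad</key><true/>
</dict>
</plist>
EOF
launchctl load ~/Library/LaunchAgents/local.lima.docker.plist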
The Compose Problem: Where All Alternatives Still Struggle
The thing that keeps pulling teams back to Docker isn’t the runtime; it’s Compose. Almost every alternative handles docker run equivalents just fine. But once you’ve got a compose.yml with health checks, conditional dependencies, and profiles, the gaps become real and painful.
Podman Compose is a separate Python project maintained independently from the Podman team. I want to be clear: it’s genuinely useful and handles the majority of real-world files. But “majority” is the problem word. It lags behind the official Compose spec, and you’ll discover this at the worst possible time: when a teammate adds a depends_on condition or a develop.watch block and your CI pipeline silently does the wrong thing. The project tracks issues with spec compliance openly on GitHub, which is honest, but it doesn’t make the gaps less frustrating on a deadline.
nerdctl compose is closer to the real thing because it actually uses the compose-go library that Docker Compose v2 itself is built on. That alignment matters. But I hit a specific edge case that cost me two hours: depends_on with condition: service_healthy doesn’t always behave identically when the healthcheck polling interval interacts with containerd’s lower-level event system. The container starts, the health probe runs, but the dependent service occasionally races ahead before healthy status is confirmed. With docker compose this just works. With nerdctl compose you end up adding a manual sleep or a retry wrapper, which defeats the whole point.
# This works perfectly in docker compose v2
# nerdctl compose handles it, but you may see race conditions
# under fast-starting services with tight healthcheck intervals
services:
  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 2s
      timeout: 2s
      retries: 5
      start_period: 5s
  app:
    image: myapp:latest
    depends_on:
      db:
        condition: service_healthy
My honest recommendation if Compose is central to your workflow: don’t fight it. Run Docker Engine (not Docker Desktop) on Linux servers and CI; it’s free, it’s the reference implementation, and there’s no license complexity for server-side use. The licensing concerns people have are specifically about Docker Desktop’s subscription terms, not the engine. On macOS dev machines, Lima with the Docker template gives you Docker Engine in a VM with near-native performance, no subscription required.
# Install Lima and start a Docker-compatible VM on macOS
brew install lima
# Pull the Docker template; this gives you full Docker Engine, not a shim
limactl start --name=docker template://docker
# Point your local docker CLI at it
export DOCKER_HOST=unix://${HOME}/.lima/docker/sock/docker.sock
# Now docker compose behaves exactly like Linux Docker Engine
docker compose up --wait
The --wait flag in Compose v2 (not available in v1 or Podman Compose) waits for all services to be healthy before returning. It’s a small thing but it’s the kind of first-class feature that makes scripting deployments clean. Until the alternatives catch up on the full Compose spec (especially depends_on conditions, include, and develop.watch), Docker Engine on Linux plus Lima on macOS is the pragmatic middle ground that gives you escape from Desktop without giving up Compose reliability.
CI/CD: Where the Alternatives Actually Shine
The Docker socket problem in CI is something that burned me the first time I set up a self-hosted runner. Mounting /var/run/docker.sock into a container gives that container root-level access to the entire host. It’s not a theoretical risk: any compromised build step can read secrets from other containers, kill sibling processes, or escape to the host entirely. Switching to Podman or containerd in your CI runners eliminates this attack surface completely because neither requires a daemon socket at all.
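For context, this is the pattern being described, reduced to its simplest form; anything running inside that container is effectively root on the host:
# Mounting the host's Docker socket into a CI step: the "inner" CLI now
# drives the host daemon and can list, kill, or exec into any container on it
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker:cli docker ps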
Here’s a GitHub Actions workflow using Red Hat’s official actions that I actually run in production. The key difference is buildah bud instead of docker build; it reads the same Dockerfile syntax, outputs a standard OCI image, and runs entirely rootless:
jobs:
  build:
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v4
      - name: Log in to registry
        uses: redhat-actions/podman-login@v1
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build image with Buildah
        run: |
          # buildah bud is a drop-in for docker build: same Dockerfile, no daemon
          buildah bud \
            --layers \
            --cache-from ghcr.io/${{ github.repository }}:cache \
            --cache-to ghcr.io/${{ github.repository }}:cache \
            -t ghcr.io/${{ github.repository }}:${{ github.sha }} \
            .
      - name: Push image
        run: buildah push ghcr.io/${{ github.repository }}:${{ github.sha }}
Buildah’s design philosophy is worth understanding: it separates the build tool from the run tool. You build with Buildah, run with Podman, and push with Skopeo. In Docker’s model those are all one binary sharing one daemon. That separation means a compromised build step in CI can’t reach running containers, because there are no running containers in Buildah’s world. The --layers flag enables caching similar to Docker’s layer cache; without it, every build is cold.
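Skopeo rounds out the trio by moving and inspecting images with no daemon and no local container state; the destination registry below is a made-up example:
# Inspect a remote image's manifest without pulling it
skopeo inspect docker://ghcr.io/myorg/myapp:latest
# Copy registry-to-registry: no daemon, no local image store in the middle
skopeo copy docker://ghcr.io/myorg/myapp:latest docker://registry.example.com/myorg/myapp:latest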
For containerd-native pipelines, Kaniko and standalone BuildKit are the two real options. Kaniko runs as a container itself and doesn’t need any host privileges; Google uses it in Cloud Build. BuildKit standalone is what I’d recommend if you need speed, because it’s the same engine Docker has shipped as its default builder since Engine 23.0, just without the Docker wrapper around it:
# Kaniko: runs as a container, no daemon needed
# Mount your context and credentials, it does the rest
docker run \
  -v $(pwd):/workspace \
  -v ~/.docker/config.json:/kaniko/.docker/config.json:ro \
  gcr.io/kaniko-project/executor:latest \
  --context /workspace \
  --dockerfile /workspace/Dockerfile \
  --destination ghcr.io/myorg/myapp:latest
# BuildKit standalone: start the daemon once per runner, reuse for all builds
# (publish the gRPC port and tell buildkitd to listen on TCP so buildctl can reach it)
docker run -d --name buildkitd --privileged \
  -p 1234:1234 \
  moby/buildkit:latest --addr tcp://0.0.0.0:1234
buildctl --addr tcp://localhost:1234 build \
  --frontend dockerfile.v0 \
  --local context=. \
  --local dockerfile=. \
  --output type=image,name=ghcr.io/myorg/myapp:latest,push=true
The performance difference between Kaniko and BuildKit is real: BuildKit’s parallel stage execution and aggressive cache reuse make it noticeably faster on multi-stage Dockerfiles. Kaniko is simpler to set up in Kubernetes (just a Pod spec, no privileged daemon) but it serializes layers. I use Kaniko when I need the simplest possible security story inside a k8s cluster, and BuildKit when build time matters. One thing that tripped me up with BuildKit standalone: the --privileged flag on the buildkitd container is still required unless you’re on a kernel that supports user namespaces with overlay; that’s available on Linux kernel 5.11+ with the right settings, but most CI hosts don’t configure it out of the box.
When to Stick With Docker
The honest take: I switched most of my personal infrastructure away from Docker, but I kept recommending Docker Desktop to two of my clients last year. The reason wasn’t sentiment; it was that their specific setups would’ve gotten objectively worse with anything else. There are real situations where Docker is still the right call, and pretending otherwise just wastes your team’s time.
Solo dev or small team under the free tier limit? Docker Desktop is free for personal use, open-source projects, and companies with fewer than 250 employees and under $10M in annual revenue. If you’re nowhere near those thresholds, the cost argument for switching evaporates. The tooling is mature, the Stack Overflow answers actually match your version, and your time is better spent shipping. Don’t fix what isn’t broken.
Docker Extensions and Docker Scout are genuinely hard to replace. If your team has built workflows around Docker Scout for image vulnerability scanning (the CVE diffing between image versions is particularly good), there’s no drop-in replacement in Podman Desktop or nerdctl land. Same with Extensions: things like the Disk Usage extension or the Portainer integration hook into Docker’s API surface in ways that don’t translate. I’ve seen teams underestimate this, rip out Docker, and spend two weeks reimplementing tooling they already had for free.
Complex Docker Compose networking is where the alternatives show real cracks. Simple stuff (web app plus database, maybe a Redis sidecar) works fine in Podman Compose or the nerdctl compose path. But once you start doing multi-network setups, custom bridge configurations, or anything with network_mode: service: for sharing network namespaces between containers, you’re going to hit edge cases. I spent an afternoon debugging this exact scenario with Podman Compose 1.1.0 and a setup that Compose v2 handled without a single config change. The bug was known, the fix was pending. That’s the cost of being early.
# This works fine in Docker Compose but has known issues in Podman Compose
# as of early 2025 with network_mode: service: sharing
services:
  app:
    image: myapp:latest
    networks:
      - frontend
      - backend
  sidecar:
    image: envoy:v1.29
    network_mode: "service:app" # shares app's network namespace
    # Podman Compose handles this inconsistently depending on version
networks:
  frontend:
  backend:
    internal: true
Windows containers (not WSL2 Linux containers) are a different story entirely. If you’re actually running Windows Server containers (legacy .NET Framework apps, COM-dependent services, anything that needs a real Windows kernel), Podman can’t run them at all; it only runs Linux containers, even on Windows. Docker Desktop with the Windows Containers backend is the only production-grade option here. I’ve seen this catch people off guard: they assume “Windows support” means Windows containers, but every alternative is really shipping Linux containers running on Hyper-V or WSL2. That’s fine for most workloads, but not if your app calls Win32 APIs.
The GUI argument is real when your colleagues aren’t terminal-native. Docker Desktop’s UI is genuinely useful for people who need to restart a container, tail logs, or check port bindings without remembering CLI flags. Rancher Desktop exists and is improving, but the UX gap is still noticeable, especially for the “I just need to run this thing for local testing” crowd. If you have designers, PMs, or QA engineers who occasionally need to poke at containers, keeping Docker Desktop removes a whole category of support tickets. That’s worth something.
My Current Setup and What I’d Recommend
The honest answer I give every team that asks me this: stop looking for a universal replacement and start asking “what is this machine actually doing?” My setup is fragmented on purpose, and it hasn’t caused me any grief once I accepted that different environments deserve different tooling.
On Linux servers, which is where most of my containers actually run, I stick with Docker Engine, not Docker Desktop. The licensing concern that sent everyone scrambling in 2022 only applies to Docker Desktop. Docker Engine on a Linux server is still Apache 2.0, free, no seat count, no org size threshold. Compose v2 is baked in as a plugin (not the old Python binary), and it behaves identically whether I’m on Ubuntu 22.04 or Debian 12. The setup is three commands:
# Official install script; don't use the distro's package, it's always stale
curl -fsSL https://get.docker.com | sh
# Verify Compose v2 is present (not the old `docker-compose` binary)
docker compose version
# Docker Compose version v2.27.1
# Add your user to the docker group so you're not sudo'ing constantly
sudo usermod -aG docker $USER
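# The group change only applies to new sessions; log out and back in,
# or pick it up immediately in the current shell:
newgrp docker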
On macOS dev machines, Lima with the Docker template gives me CLI parity without the Docker Desktop overhead. The first time I ran limactl start --name=docker template://docker and got a fully functional docker CLI pointing at a Lima VM, I deleted Docker Desktop that same afternoon. Cold start is slower than Desktop but memory usage is controllable; I cap it at 4GB in the Lima config versus Desktop’s habit of eating whatever it finds. When I specifically need rootless containers for testing security-sensitive builds, I switch to Podman Desktop. Podman’s daemonless model is genuinely useful there, not a gimmick.
# lima config (~/.lima/docker/lima.yaml): the parts I actually change
memory: "4GiB"
cpus: 4
disk: "60GiB"
# Then expose the socket so your local docker CLI picks it up
export DOCKER_HOST="unix://$HOME/.lima/docker/sock/docker.sock"
For Kubernetes-focused CI pipelines, I dropped Docker entirely and run containerd + nerdctl + Buildah. The reason is boring and practical: my production clusters run containerd as the CRI. Building with Docker in CI and running with containerd in prod means you’re testing a slightly different stack than what ships. Nerdctl’s CLI flags are close enough to Docker’s that my team didn’t need a retraining session, and Buildah handles multi-stage builds without a daemon, which matters in constrained CI runners where privileged mode isn’t available. The combination matches production runtime exactly, which has eliminated an entire category of “works on my machine, fails on cluster” bugs.
One thing worth layering on top of your container runtime decision: if your team is exploring AI-assisted development, the runtimes above all have different integration stories with AI coding tools. The Best AI Coding Tools in 2026 guide breaks down which tools have first-class Dockerfile generation, devcontainer awareness, and nerdctl/Podman support; worth reading before you commit to a toolchain, because some of the AI assistants still assume Docker Desktop is running and generate context that breaks in a Lima or Podman environment.
FAQ
Can I really just alias docker=podman and forget about it?
Mostly yes, but “mostly” is doing real work in that sentence. The alias works for 90% of day-to-day commands: podman run, podman build, podman pull, podman exec. The thing that caught me off guard was anything involving docker-compose. You need podman-compose or the official podman compose subcommand (available since Podman 4.x), and the behavior isn’t identical. Specifically, networking between containers in compose stacks behaves differently because Podman uses a different network driver by default. On rootless Podman, CNI was replaced with Netavark as of Podman 4.0, and if your compose file assumes bridge networking semantics from Docker, you’ll hit subtle DNS resolution issues inside the stack.
The other gotcha: Docker contexts and Docker socket paths. Any tool that talks to /var/run/docker.sock (think CI runners, monitoring agents, local dev tools like Tilt or Skaffold) will need to be pointed at Podman’s socket instead. You can start it with:
# Start the Podman socket service (rootless)
systemctl --user enable --now podman.socket
# Verify socket path
podman info | grep -i sock
# Output: /run/user/1000/podman/podman.sock
# Then for Docker-socket-dependent tools:
export DOCKER_HOST=unix:///run/user/1000/podman/podman.sock
On macOS, Podman Desktop handles this for you, but you still need to explicitly enable the Docker compatibility socket in its settings; it’s off by default. So the alias is fine for your personal workflow, but any shared tooling or CI pipeline needs deliberate migration, not just a symlink.
Does Podman work with Testcontainers?
Yes, and this is one of those “finally” moments if you care about rootless containers in CI. Testcontainers officially supports Podman through its DOCKER_HOST mechanism. The setup requires two things: the Podman socket running (see above), and the ryuk resource reaper either disabled or running with elevated privileges. Ryuk is Testcontainers’ cleanup mechanism and it needs to bind-mount the Docker socket internally, which gets weird in rootless environments.
The practical approach I use is to disable Ryuk entirely in CI where container cleanup is handled by the ephemeral runner anyway:
# In your test environment or CI config
export TESTCONTAINERS_RYUK_DISABLED=true
export DOCKER_HOST=unix:///run/user/1000/podman/podman.sock
export TESTCONTAINERS_DOCKER_SOCKET_OVERRIDE=/run/user/1000/podman/podman.sock
For local dev where you actually want cleanup, Podman 4.5+ introduced a rootless socket that Ryuk can use if you set TESTCONTAINERS_RYUK_CONTAINER_PRIVILEGED=true. It’s not pretty but it works. The Testcontainers for Go library has been more reliable in my experience than the Java one here; the Java library has had more edge cases around image pull authentication with Podman’s credential store format.
Is containerd the same thing as Docker under the hood?
Docker has used containerd as its low-level container runtime since Docker 1.11, so in that sense yes: when you run Docker, containerd is doing the actual container lifecycle management. But containerd itself is a stripped-down daemon with no build system, no CLI for end users, no image registry auth management, and no compose tooling. Docker is the full product stack built on top of it; containerd is the engine block with no chassis around it.
Where this distinction matters practically: Kubernetes dropped the Docker shim in version 1.24 and now talks to containerd directly via CRI (Container Runtime Interface). So your nodes might already be running containerd with zero Docker involved. You can interact with containerd directly using nerdctl, which gives you a Docker-compatible CLI experience without running dockerd at all. If you’re managing nodes or want to understand what’s actually running your pods, crictl is the debug tool you want; it talks CRI directly:
# List running containers via CRI (on a k8s node)
crictl ps
# Pull and inspect an image through containerd directly
nerdctl pull nginx:alpine
nerdctl image inspect nginx:alpine
The confusion usually comes from conflating “container runtime” with “the whole Docker experience.” containerd is intentionally minimal; it does what Kubernetes needs and nothing more. If you want the full developer workflow without Docker Desktop, you’re adding nerdctl + BuildKit on top of containerd, which is basically reassembling Docker’s functionality piecemeal.
What about Rancher Desktop β is it worth considering?
Rancher Desktop is genuinely underrated for teams that want a free Docker Desktop replacement on macOS and Windows without giving up Kubernetes. It runs either containerd or Moby (dockerd) as the backend (you switch in the UI) and bundles a local single-node k3s cluster that actually works for testing Helm charts and k8s manifests locally. The Docker Desktop alternative framing is accurate: it provides the VM layer, the socket, and the CLI tools, all free under the Apache 2.0 license.
The honest trade-offs: startup time is slower than Docker Desktop on Apple Silicon, and I’ve hit occasional issues with volume mount performance on macOS that required tweaking the virtiofs settings. Resource usage is comparable β you’re still spinning up a Linux VM. The UI is less polished than Docker Desktop, but “less polished” here means “functional but sparse,” not broken. Where Rancher Desktop wins decisively is the built-in k3s cluster. You can deploy to it with kubectl and your existing kubeconfig gets automatically updated. For teams already invested in Kubernetes workflows, the ability to test against a real cluster locally without separate tooling like minikube or kind is worth the occasional rough edge.
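The kubeconfig handoff in practice looks like this; the context name rancher-desktop is what recent versions register, but verify it on your install:
# Rancher Desktop adds a context to your existing kubeconfig when it starts
kubectl config get-contexts
kubectl config use-context rancher-desktop
kubectl get nodes   # should show the single-node k3s cluster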
- Use Rancher Desktop if: you need a free Docker Desktop drop-in, you want local k8s included, or your org has a policy against Docker Desktop’s subscription pricing.
- Stick with something else if: you only need container builds and runs with no k8s, in which case Podman Desktop or plain Podman CLI is less overhead.
- Watch out for: the default memory allocation (2GB) being too low for anything serious; bump it to at least 4-6GB in Preferences > Virtual Machine before you wonder why builds are dying.