Universal Packages vs Docker: When Does a Package Become a Container? (part 5)

Posted by Andrew Denner on March 24, 2026 · 23 mins read

Linux Namespaces, cgroups, OverlayFS, and the OCI Spec Explained

Part 5 of the Linux Universal Packages series


CIALUG did a Docker internals deep-dive last month. This part is written assuming you came out of that session knowing what cgroups and namespaces are. If you didn’t — or if you want a refresher — the first half covers the kernel primitives. Then we do the comparison. If you want to skip to “how these relate to Flatpak/Snap” jump to the comparison section.


The Kernel Primitives: A Refresher

Linux containers — and by extension, Snap and Flatpak sandboxes — are built from a small set of kernel primitives. Not kernel modules, not hypervisors, not virtual machines. Primitives that have been part of the mainline kernel for over a decade.

Linux Namespaces

A namespace wraps a global resource and makes it appear private to processes in that namespace. There are currently eight namespace types in Linux:

Namespace   Flag              Wraps
Mount       CLONE_NEWNS       Filesystem mount points
PID         CLONE_NEWPID      Process IDs
Network     CLONE_NEWNET      Network stack (interfaces, routing, sockets)
UTS         CLONE_NEWUTS      Hostname and domain name
IPC         CLONE_NEWIPC      System V IPC, POSIX message queues
User        CLONE_NEWUSER     User/group IDs
Cgroup      CLONE_NEWCGROUP   cgroup root view
Time        CLONE_NEWTIME     System clocks (added in kernel 5.6)

These are not new. CLONE_NEWNS (mount namespaces) has been in the kernel since 2.4.19 (2002). PID and network namespaces arrived in 2.6.24 (2008). User namespaces landed incrementally and became usable by unprivileged users in 3.8 (2013).

Creating a namespace is just a flag to clone() or unshare():

// Create a child process in new namespaces (SIGCHLD so the parent can wait()):
pid_t child = clone(child_function, stack_top,
    CLONE_NEWPID | CLONE_NEWNS | CLONE_NEWUSER | SIGCHLD, NULL);

// Or: move the calling process into a new namespace:
unshare(CLONE_NEWNS);   // Unshare mount namespace from parent

You can experiment without any container tooling:

# Enter a new mount+PID namespace (as root; the unprivileged version is below):
unshare --mount --pid --fork bash

# Inside: a new shell with its own mount namespace
# Mounts created here are invisible outside
mount --bind /tmp/mydir /mnt/test  # Only visible in this shell
exit  # Mount gone

# Enter a new user+mount namespace and see what you can do:
unshare --user --map-root-user --mount --pid --fork bash
whoami  # Shows "root" inside the namespace
id      # But actual UID is still yours
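A quick way to see namespace membership directly, assuming nothing beyond /proc:

```shell
# Each namespace a process belongs to is a symlink under /proc/<pid>/ns/.
# The bracketed inode identifies the namespace: two processes are in the
# same namespace exactly when these inode numbers match.
readlink /proc/self/ns/mnt   # e.g. mnt:[4026531841]
readlink /proc/self/ns/net   # e.g. net:[4026531840]
# Comparing against PID 1 reveals whether you're already inside a container:
readlink /proc/1/ns/net 2>/dev/null || echo "need root to inspect PID 1"
```

Run the same readlink inside an `unshare --mount` shell and the mnt inode changes while the others stay the same.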

cgroups: Resource Accounting and Limits

cgroups (control groups) attach resource accounting and limits to groups of processes. Where namespaces control visibility, cgroups control resource consumption.

cgroups v1 (legacy, still widely used) and cgroups v2 (unified hierarchy, the current standard) both expose themselves through a virtual filesystem at /sys/fs/cgroup/.

# See cgroup v2 hierarchy:
ls /sys/fs/cgroup/
# cgroup.controllers  cgroup.max.depth  cgroup.procs  cpu.stat  memory.stat ...

# Your user session has a cgroup slice:
cat /proc/self/cgroup
# 0::/user.slice/user-1000.slice/session-2.scope

# The cgroup files for memory limits:
ls /sys/fs/cgroup/user.slice/user-1000.slice/
# memory.current     # Current memory usage
# memory.max         # Hard limit (kill if exceeded)
# memory.high        # Soft limit (throttle before OOM)
# cpu.max            # CPU bandwidth limit: "100000 100000" = 100% of one core

What cgroups let you do:

# Create a cgroup and limit its memory (as root, cgroup v2):
mkdir /sys/fs/cgroup/myapp
# Controllers must be enabled in the parent's subtree for the
# child's limit files to appear:
echo "+memory +cpu" > /sys/fs/cgroup/cgroup.subtree_control
echo $PID > /sys/fs/cgroup/myapp/cgroup.procs  # Add process to cgroup
echo "100M" > /sys/fs/cgroup/myapp/memory.max  # 100 MB memory limit
echo "50000 100000" > /sys/fs/cgroup/myapp/cpu.max  # 50% of one core

cgroups v2 reorganizes this into a unified tree where every cgroup inherits from its parent. A process can only be in one leaf cgroup (vs v1 where it could be in different cgroups in different hierarchies). This unified model is cleaner but required updates in systemd, docker, and container runtimes — the migration took years.
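Under the unified hierarchy, your position in the tree is a single path, read straight out of /proc — a sketch assuming cgroup v2 is mounted at /sys/fs/cgroup:

```shell
# Find this shell's cgroup (the "0::" line) and read its live memory accounting:
cg=$(sed -n 's/^0:://p' /proc/self/cgroup)
cat "/sys/fs/cgroup$cg/memory.current" 2>/dev/null \
  || echo "memory controller not enabled for this cgroup"
```

On a v1-only or hybrid system the path or the controller file may not exist, hence the fallback.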

cgroups in the context of packaging:

  • Docker uses cgroups extensively: every container gets its own cgroup with configurable resource limits
  • Snap uses systemd slices to assign each snap to a cgroup (for accounting, not typically hard limits on desktop)
  • Flatpak creates systemd scopes for tracked apps but doesn’t typically apply resource limits for desktop apps
  • AppImage: no cgroup involvement at all

OverlayFS: Layered Filesystems

OverlayFS merges multiple directories into a single unified view. This is the mechanism that makes Docker image layers work, and it’s the same mechanism used in various forms across all container technologies.

# OverlayFS basic structure:
# lowerdir: Read-only base layer(s)
# upperdir: Read-write layer (receives all writes)
# workdir:  Working directory (must be same filesystem as upperdir)
# merged:   The unified view presented to the process

mkdir /tmp/lower /tmp/upper /tmp/work /tmp/merged
echo "from lower" > /tmp/lower/existing-file.txt

mount -t overlay overlay -o \
  lowerdir=/tmp/lower,\
  upperdir=/tmp/upper,\
  workdir=/tmp/work \
  /tmp/merged

# Inside /tmp/merged:
ls /tmp/merged        # Shows existing-file.txt from lower
cat /tmp/merged/existing-file.txt   # "from lower"

# Write a new file:
echo "new content" > /tmp/merged/new-file.txt
# Goes to upper layer: /tmp/upper/new-file.txt
# Lower layer is unchanged

# Modify an existing file:
echo "modified" > /tmp/merged/existing-file.txt
# Creates a copy in upper layer (copy-on-write)
# /tmp/upper/existing-file.txt now exists with new content
# /tmp/lower/existing-file.txt is unchanged

Copy-on-write (CoW): When you modify a file that exists in the lower layer, OverlayFS creates a copy in the upper layer. The lower layer is never modified. This is how Docker containers have writable filesystems without copying the entire base image on creation.

Deletion (whiteout files): Deleting a file through the merged view that exists only in the lower layer creates a whiteout entry in the upper layer — a character device with major/minor 0/0 at the same path. The overlay driver hides the lower layer’s file when it sees a whiteout; the lower layer itself is never touched.

# After deleting /tmp/merged/existing-file.txt:
ls -la /tmp/upper/existing-file.txt
# crw------- 1 root root 0, 0 ... existing-file.txt
# That's a whiteout device node — not a regular file

How Docker Puts It Together

Docker’s container model uses all these primitives together. Understanding the stack:

Container Process
      │
      │ runs in
      ▼
Linux Namespaces (mnt, pid, net, user, uts, ipc)
      │
      │ isolated filesystem via
      ▼
OverlayFS (image layers + writable upper layer)
      │
      │ resource-limited by
      ▼
cgroups (memory, CPU, block I/O limits)
      │
      │ defined by
      ▼
OCI Runtime (runc/crun)
      │
      │ images stored as
      ▼
OCI Image Spec (layers: tarballs with content-addressable storage)

The OCI Specifications

The Open Container Initiative (OCI) defined two specs:

OCI Image Spec: Defines how container images are structured and stored. An OCI image is:

  • A manifest (JSON listing layers and config)
  • An image config (environment, entrypoint, exposed ports, etc.)
  • Layer blobs (gzipped tarballs of filesystem deltas)

// Simplified OCI manifest:
{
  "schemaVersion": 2,
  "config": { "digest": "sha256:abc...", "mediaType": "application/vnd.oci.image.config.v1+json" },
  "layers": [
    { "digest": "sha256:def...", "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip" },
    { "digest": "sha256:ghi...", "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip" }
  ]
}
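Content-addressable here means the digest is literally the SHA-256 of the blob’s bytes — the file name proves the content. A sketch with a made-up blob (paths hypothetical):

```shell
# In an OCI layout, layer blobs live at blobs/sha256/<digest>, where the
# name is the SHA-256 of the file itself. Verification is mechanical:
mkdir -p /tmp/oci-demo/blobs/sha256
printf 'pretend this is a gzipped layer tarball' > /tmp/oci-demo/layer
digest=$(sha256sum /tmp/oci-demo/layer | cut -d' ' -f1)
mv /tmp/oci-demo/layer "/tmp/oci-demo/blobs/sha256/$digest"
# Recompute and compare — any bit flip would change the required name:
echo "$digest  /tmp/oci-demo/blobs/sha256/$digest" | sha256sum -c -
# prints: /tmp/oci-demo/blobs/sha256/<digest>: OK
```

This is also why layers deduplicate across images: identical bytes always produce the same blob name.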

OCI Runtime Spec: Defines what a container runtime must do with an unpacked image. Specifies the config.json format that describes namespaces to create, mounts to set up, entrypoint to execute, cgroup configuration, and security constraints (AppArmor, seccomp).
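A trimmed config.json sketch — field values illustrative, not a complete spec-valid file — showing where namespaces and cgroup limits are declared:

```json
{
  "ociVersion": "1.0.2",
  "process": { "args": ["sh"], "cwd": "/" },
  "root": { "path": "rootfs", "readonly": true },
  "linux": {
    "namespaces": [
      { "type": "pid" },
      { "type": "mount" },
      { "type": "network" },
      { "type": "ipc" },
      { "type": "uts" }
    ],
    "resources": { "memory": { "limit": 104857600 } }
  }
}
```

runc spec generates a fuller template of this file in the current bundle directory.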

runc is the reference OCI runtime — it reads config.json and produces a running container. Docker, Podman, containerd, and CRI-O all use runc (or crun, a compatible alternative) under the hood.

What Docker Adds on Top

runc gives you a running container from a config.json. Docker adds:

  • An image build system (Dockerfile / docker build)
  • An image registry protocol (Docker Hub and compatible registries)
  • A daemon (dockerd) that manages container lifecycle
  • Networking (bridge networks, port mapping, overlay networks for multi-host)
  • Volume management
  • Docker Compose for multi-container applications
  • A container registry client (pull/push)

Most of what developers think of as “Docker” is the tooling layer, not the kernel primitives. Those primitives are also available through Podman (daemonless), containerd, or direct runc usage.


The Comparison: Universal Packages vs Containers

Now the interesting part. How do AppImage, Snap, and Flatpak compare to Docker containers when examined through the same lens?

Namespace Usage

Technology   Mount NS   PID NS   Network NS   User NS    cgroup NS
AppImage     None       None     None         None       None
Snap         Yes        No       No           No         No
Flatpak      Yes        Yes      No           Yes        No
Docker       Yes        Yes      Yes          Optional   Yes

AppImage: No namespace isolation of any kind. The app runs in your full user session. Complete visibility of all processes, all network interfaces, all mounts. The “isolation” is purely by convention — it can’t touch the squashfs (read-only), but that’s not kernel-enforced isolation.

Snap: Mount namespace isolation (the app has its own view of the filesystem). No PID, network, or user namespace. The app can see all processes (ps aux from a Snap process shows the full system), can use all network interfaces, and runs as your real UID. Snap’s security comes from AppArmor and seccomp, not namespace isolation.

Flatpak: Mount + PID + User namespaces. The app has a private PID space (can’t directly signal processes outside), filesystem isolation via bind mounts, and a remapped UID. No network namespace — Flatpak apps with network permission access your real network stack. The --share=network flag doesn’t create a new namespace, it allows access to the host network. No network isolation between Flatpak apps.

Docker: All namespaces except user (user namespaces in Docker are supported but opt-in due to compatibility concerns with bind mounts and file permissions). Full network isolation by default — each container gets its own network stack, own loopback, own IP address.

What This Means in Practice

Flatpak and Snap provide what security researchers call reduced attack surface isolation, not the kind of isolation you’d rely on for running untrusted code. If a Flatpak app is compromised, it might not be able to read your files (if filesystem access isn’t granted), but it can still:

  • Make arbitrary network connections (if --share=network)
  • See other running processes by PID (via /proc in Snap, slightly restricted in Flatpak)
  • Communicate via D-Bus with other services

Docker provides genuine network isolation, PID isolation, and resource limits. A compromised Docker container can’t directly reach your host network or see host processes (absent specific configurations or kernel vulnerabilities).

The intended use cases differ: Flatpak/Snap are for applications that need to run as your user with your data. Containers are for running services that shouldn’t have any access to the host.

Storage Layer Comparison

Technology   Storage Format                Deduplication                         Write Layer
AppImage     squashfs (single file)        None                                  N/A (read-only)
Snap         squashfs (loop-mounted)       None between apps                     /var/snap/<app>/
Flatpak      OSTree (hardlinked objects)   Yes (between runtimes/versions)       ~/.var/app/<app>/
Docker       OverlayFS layers              Yes (between images sharing layers)   Container layer (tmpfs or volume)

Flatpak and Docker both use content-addressable storage with deduplication. AppImage and Snap don’t deduplicate between app instances. If you have 10 snaps that all bundle GTK4, you have 10 copies of GTK4. If you have 10 Flatpaks that all use GNOME Platform 45, you have one shared copy of GNOME Platform 45 (via OSTree hardlinks).
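OSTree’s deduplication is ordinary hardlinking: one content-addressed object on disk, linked into every checkout that needs it. The effect is visible with plain files — a simplified sketch, nothing OSTree-specific:

```shell
# Two "installed runtimes" sharing one object via hardlinks:
mkdir -p /tmp/dedup-demo/objects /tmp/dedup-demo/app1 /tmp/dedup-demo/app2
printf 'shared runtime library bytes' > /tmp/dedup-demo/objects/abc123
ln /tmp/dedup-demo/objects/abc123 /tmp/dedup-demo/app1/libshared.so
ln /tmp/dedup-demo/objects/abc123 /tmp/dedup-demo/app2/libshared.so
stat -c '%h' /tmp/dedup-demo/objects/abc123   # link count: 3 — one copy on disk
```

Deleting either "install" just drops a link; the object survives until nothing references it.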

Docker goes further — if two images share the same base layers (e.g., both use the same Ubuntu 22.04 base), those layers are stored once. A 100-container deployment might have only a few GB of actual storage if the images share base layers.

Security Model Comparison

Technology   Isolation Mechanism                   Strength      Notes
AppImage     None                                  None          Trust-based; runs as your user
Snap         AppArmor + seccomp                    Medium-High   Strong MAC, no namespace isolation
Flatpak      Namespaces + portals                  Medium        Good isolation, weaker than Docker
Docker       Namespaces + seccomp + capabilities   High          Near-VM isolation for most purposes

“Stronger” doesn’t mean “better for all use cases.” AppImage’s lack of isolation is a feature when you want the app to have the same access as a native binary. Snap’s AppArmor model is actually more expressive than Docker’s isolation in some ways — you can define per-interface permissions at the D-Bus method level.

The cgroups Difference

This is where containers diverge most clearly from universal packages.

Docker (and Kubernetes, Podman, etc.) use cgroups to provide hard resource limits: a container can be configured to use no more than 2 GB of RAM and one CPU core (docker run --memory=2g --cpus=1), regardless of what the code inside tries to do. The kernel enforces this.

Universal packages don’t typically set hard resource limits. A misbehaving Flatpak app can consume all available RAM. A runaway Snap process can spin a CPU core to 100%. The cgroup integration that Snap and Flatpak have (via systemd slices) is for accounting and throttling, not hard limits.

If you need hard resource limits on a desktop app — and sometimes you do, for background sync services, document processors, anything that could go rogue — containers are the right tool.


When These Technologies Actually Overlap

The comparison so far treats them as different categories, but there are real overlaps:

Flatpak on the Server?

You can run Flatpak apps headless — flatpak run doesn’t require a display. For build environments, reproducible testing, or distributing server-side GUI tools to heterogeneous Linux systems, Flatpak works. It doesn’t have network isolation, so it’s not a drop-in for containers, but for “reproducible builds that run on any distro” it’s a reasonable choice.

Snaps as Service Containers

On Ubuntu Core, snap daemons are effectively lightweight service containers. They have filesystem isolation, AppArmor confinement, automatic updates, and rollback. For IoT devices where you want managed services without the overhead of Docker, Snap fills a real niche.

Containers for Desktop?

Distrobox and Toolbox (Fedora’s tool) combine Docker/Podman containers with Flatpak-style home-directory sharing to give you a mutable, different-distro environment on your desktop. Your apps run in a container that has full access to ~ and the display server — deliberately collapsing the container’s isolation to get cross-distro compatibility. This is the same tradeoff as Flatpak’s --filesystem=home.

The OCI/Flatpak Convergence Experiments

There have been experiments combining the OCI image format with Flatpak’s portal/permission system. The flatpak-oci bundle format is one — it uses OCI image layers for distribution but Flatpak’s portal infrastructure for runtime. This hasn’t become mainstream but points to the technologies converging at the edges.


The Mental Model: A Decision Tree

When should you use which technology?

Is this a desktop GUI app?
  Yes → Flatpak (sandboxed) or apt/dnf (native)
  No → continue

Does it need to run on distros without knowing the package manager?
  Yes → AppImage (portable binary) or Docker (if services)
  No → continue

Is it a long-running service?
  Yes → Docker/Podman/Snap (depending on platform and isolation needs)
  No → continue

Does it need hard resource limits?
  Yes → Docker/Podman container
  No → Universal package or native

Does it run on Ubuntu and need auto-patching on a server?
  Yes → Snap
  No → Docker

Is it on a headless embedded system (RPi, IoT)?
  Yes → Snap (Ubuntu Core) or Docker
  No → Use your distro's native package manager first

The answer is almost always “native package manager first.” These tools exist for the cases where native doesn’t work.


The Shared Ancestry: It’s All clone() and mount()

The insight that ties this together: AppImage, Snap, Flatpak, Docker, LXC, systemd-nspawn — they’re all wrappers around the same small set of Linux syscalls.

Every isolation mechanism here ultimately comes down to:

  • clone() with namespace flags
  • mount() with bind/overlay options
  • write() to /proc/self/uid_map for user namespace remapping
  • prctl(PR_SET_SECCOMP, ...) for syscall filtering
  • aa_change_profile() — under the hood, a write to /proc/<pid>/attr/ — for AppArmor profile transitions
  • Writes to /sys/fs/cgroup/ for resource limits

The differences are which of these syscalls get called, in what combination, with what policies. Docker calls all of them. Flatpak calls most of them. Snap calls some of them plus AppArmor extensively. AppImage calls none of them.
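The whole recipe fits in one command. Here unshare(1) performs the unshare() calls and the /proc/self/uid_map write; mount() supplies the private filesystem — a sketch that degrades gracefully where unprivileged user namespaces are disabled:

```shell
# New user + mount namespace: uid 0 inside (via a uid_map write),
# plus a tmpfs mount that no other process on the system can see.
unshare --user --map-root-user --mount sh -c \
    'id -u && mount -t tmpfs none /mnt && grep " /mnt " /proc/mounts' \
  || echo "unprivileged user namespaces unavailable on this kernel"
```

Everything Docker, Flatpak, and Snap do with these primitives is this, repeated with more flags and more policy.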

Understanding this shared foundation explains why:

  • Container vulnerabilities (like user namespace escapes) affect multiple technologies simultaneously
  • Security improvements in the kernel (improved cgroup v2 support, better seccomp audit) benefit all these tools
  • The tools can interoperate (Distrobox, Flatpak OCI experiments)

The packaging format and the security model are separable concerns. squashfs is just a filesystem. The security happens in the wrapping layer.


Practical Summary

                    AppImage                   Snap                    Flatpak                  Docker
Isolation           None                       AppArmor+seccomp        Namespaces+portals       Full namespaces
Network isolation   No                         No                      No                       Yes
Resource limits     No                         Soft (systemd)          Soft (systemd)           Hard (cgroups)
Auto-update         Optional                   Yes (can hold)          Yes (can hold)           You manage
Root required       No                         Install: yes            No                       Daemon: yes
Storage dedup       No                         No                      Yes (OSTree)             Yes (OverlayFS)
Startup overhead    Low (FUSE)                 Medium (loop mounts)    Low (bind mounts)        Low (namespace setup)
Best for            Portable single binaries   Ubuntu server daemons   Desktop GUI apps         Services, CI, dev envs

The most important row: isolation. Choose based on how much isolation you actually need for your threat model, not based on what’s fashionable.


This concludes the Linux Universal Packages series. Full series index at denner.co/blog. Talk materials at denner.co/talks.

Andrew Denner — denner.co — @adenner