A project I’m putting together (more details to come!) has been outgrowing my budget lately, needing bigger server infrastructure to run than my wallet can comfortably afford. Luckily, Fosshost was able to save the day, providing me with a couple of reasonably beefy VMs at no cost so I could keep developing my project.
I chose to take advantage of their AARCH64 platform, since more resources were available there and the platform’s limitations (IPv6-only networking and, shocker, the aarch64 architecture) weren’t a big deal for my project, which doesn’t depend on anything AMD64-specific. While this was fine in the end, these also happen to be two scenarios Docker doesn’t really love, so it took a while to get everything working.
Docker and IPv6: The Worst Love Story since Twilight
Docker… does not really cooperate well with IPv6. By default, it simply refuses to acknowledge its existence, assuming the entire world to be IPv4-only. It’s possible to change this, though, with a bit of voodoo in the daemon’s configuration file, /etc/docker/daemon.json:
{
  "experimental": true,
  "ipv6": true,
  "ip6tables": true,
  "fixed-cidr-v6": "fd00::/64"
}
Breaking this down: this enables Docker’s “experimental” features, turns on IPv6, enables ip6tables support, and assigns a CIDR range of IPv6 addresses for containers to use. All of this is covered somewhere in Docker’s documentation, but good luck finding exactly the right spots.
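If you want to confirm the change actually took effect, here’s a rough sanity check after restarting the daemon. This is a sketch, not gospel: it assumes a systemd host, the default bridge network, and uses a throwaway alpine container purely as an example.

```shell
# Restart the daemon so the new settings load.
sudo systemctl restart docker

# The default bridge network should now list the fd00::/64 subnet.
docker network inspect bridge \
  --format '{{range .IPAM.Config}}{{.Subnet}}{{"\n"}}{{end}}'

# A throwaway container should pick up an address from that range.
docker run --rm alpine ip -6 addr show eth0
```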
After enabling this, things mostly worked. In one Dockerfile, I still had to force curl to use IPv6 instead of letting it try IPv4 first, but for the most part, things worked out.
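For the record, the flag that does this is curl’s -6 (alias --ipv6), which forces both name resolution and the connection onto IPv6. In a Dockerfile it’s just part of the RUN step; the URL below is a placeholder, not the real one from my project.

```shell
# Resolve and connect over IPv6 only; without -6, curl may try the
# A record first and stall or fail on an IPv6-only host.
curl -6 -fsSL https://example.com/artifact.tar.gz -o /tmp/artifact.tar.gz
```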
Docker and ARM: A Containerization Manifesto
Building My Own Images
With IPv6 figured out, my next problem arose. Most of my custom images were built for AMD64. I could have just recompiled them locally, but then I’d have to deal with different Compose files for AMD64 machines vs ARM64 machines, etc. I really didn’t want to do that, which started my dive into learning how multiarch builds work. The answer being, “questionably.”
Docker provides two mechanisms for setting up multiarch images: docker buildx, which just never worked for me, and docker manifest, which basically involves compiling a bunch of images across multiple platforms, then telling Docker to make a list of them. Since Buildx kept failing due to QEMU being really unhappy with me, I went with the manifest route.
Docker manifests are actually pretty neat. The idea is nice and simple: “take a bunch of images and tell your container registry that they’re all the same thing for different environments.” The downside is that this basically triples the work; instead of a single docker build -t $tag . && docker push $tag, it turns into docker build -t $tag-$arch . && docker push $tag-$arch across multiple builders, one for each platform you need to support. After the individual builds are done, putting it all together is simple enough: docker manifest create $tag --amend $tag-amd64 --amend $tag-arm64v8 and so on, with an --amend $tag-$arch for each arch you’ve built an image for, followed by a final docker manifest push $tag.
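The whole flow can be sketched as a small script. Everything here is illustrative: the tag, registry, and arch list are placeholders, and the run wrapper defaults to dry-run (printing the docker commands rather than executing them) since each build would normally happen on a machine of the matching architecture.

```shell
#!/bin/sh
# Sketch of the per-arch build + manifest flow described above.
# TAG and ARCHES are placeholders; DRY_RUN=1 (the default) just prints
# the docker commands instead of running them.
TAG="registry.example.com/myimage:latest"
ARCHES="amd64 arm64v8"
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

# One build + push per architecture, each normally run on a builder
# of that architecture.
for arch in $ARCHES; do
  run docker build -t "$TAG-$arch" .
  run docker push "$TAG-$arch"
done

# Stitch the per-arch tags into a single multiarch manifest.
amends=""
for arch in $ARCHES; do
  amends="$amends --amend $TAG-$arch"
done
run docker manifest create "$TAG" $amends
run docker manifest push "$TAG"
```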
Once you’ve wrapped your head around it, it works pretty well. And it’s supported by at least Docker Hub and GitHub Container Registry, which means the common options for open source projects are covered.
Using Existing Images
Building images is the biggest part of the problem with Docker on ARM (or other, more niche architectures), but it’s not the whole story. The next issue showed its face after running docker-compose up -d and noticing, “Huh, MariaDB keeps restarting. What’s up with that?”
It turns out that the MariaDB image I used, bitnami/mariadb, doesn’t actually support ARM64 either! While recompiling my own images was doable enough, rebuilding the entire (slightly convoluted) stack Bitnami’s images use was way too much of a pain. Luckily, the standard mariadb image has full ARM64 support and was similar enough that swapping from one to the other was almost seamless, but it highlights an important issue: on niche setups, you can’t just assume third-party images even exist for your architecture. Many of them will “start” and fail somewhat silently, throwing an exec format error on launch but never actually running, just restarting or dying in the background until you notice and poke at the logs a bit.
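You can at least check for surprises like this before deploying. docker manifest inspect shows the platforms a published tag covers, and docker pull --platform fails fast if a variant doesn’t exist; the image names below are just the ones from this story, used as examples.

```shell
# List the architectures a published tag actually supports.
docker manifest inspect mariadb:latest | grep '"architecture"'

# Or ask for the ARM64 variant explicitly and let the pull fail fast
# if the image doesn't provide one.
docker pull --platform linux/arm64 bitnami/mariadb:latest
```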
Where We’re At Now
Luckily, that’s the full list of issues. After some casual abuse of Docker’s settings for IPv6 and an impressive abuse of tagging and retagging images, I’ve managed to fully run my fleet of containers on ARM64. So far, everything’s smooth, but I’m sure more surprises are waiting around the corner for this venture into an ever so slightly niche Docker setup.