With the release of Apple's M1 chip, ARM became accessible to a broad audience. AWS announced plans to run clusters with ARM chips at their heart (better known as AWS Graviton3), promising 40% better price-performance compared to current x86-based instances. We therefore expect a significant audience to be attracted by the better price and higher performance that ARM-based systems have to offer.
As a start-up we want to gain traction for our products and try to support as many development systems and technologies on the market as possible. Although ARM-based systems were announced a while ago, we decided to go for multi-arch now because developers showed interest in Apple's M1-based systems. To provide this freedom of choice to our team, we used that interest as a kick-off for enabling multi-arch images for our products and operators.
Please note that this article is written by somebody who chose an ARM64-based system.
Outline and separation
What do we do with Docker, and why do we cross-compile with Cargo? Cross-platform builds come with prerequisites in terms of code, libraries and development environment. For clarity, we would like to separate the techniques necessary to achieve multi-platform support.
Docker is the go-to tool for building images. Those images usually consist of a base image (e.g. nginx, ubi8-minimal, …) plus additional libraries or tools. Normally you go for something like:
docker build -f <Dockerfile> -t <tag-of-your-choice> .
If you push such an image to a repository, a manifest is created storing all metadata such as digests, architecture, labels and so on. When Docker pulls an image, it checks the manifest and pulls the image whose metadata matches the request (such as tag and version). For multi-arch, Docker comes with some features to enable both the build and the storage in your repository. The following outlines the most important ones.
Docker can leverage QEMU, an emulator that simulates foreign architectures and thus provides an architecture-independent foundation to run software built for different platforms. Furthermore, Docker provides a command called buildx which can be used to build multi-arch Docker images. How to leverage buildx is shown in the section “Docker buildx”.
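If the emulation layer is not already present (Docker Desktop ships it, a plain Linux host may not), the QEMU handlers can be registered with the commonly used binfmt helper image. A sketch, assuming a recent Docker with buildx available:

```shell
# Register QEMU interpreters for foreign architectures via binfmt_misc.
# This uses the tonistiigi/binfmt helper image; --privileged is required
# because the registration touches the host kernel.
docker run --privileged --rm tonistiigi/binfmt --install all

# Verify which platforms buildx now advertises for emulated builds.
docker buildx ls
```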
Manifest lists are basically the meta version of a manifest. As the name suggests, a manifest list is a list of manifests for different architectures. Each manifest list has a digest of its own and stores the digests of the corresponding images for each architecture. Those lists can be created in several ways. In our case we directly exported a manifest list to our repository, since we build both architectures simultaneously via a builder instance. You may also build each architecture separately on a native machine and leverage manifest create:
docker manifest create <MANIFEST_LIST> --amend <MANIFEST_Platform1> --amend <MANIFEST_Platform2>
This slightly larger command enables you to build all platforms locally, test them and bundle them up for pushing to your repository. The downside is that the images bundled into the new list have to live in the same repository within the same organization, e.g. stackable/operatorA-arm and stackable/operatorA-amd bundled into stackable/operatorA.
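Put together, the per-architecture workflow could look like the following sketch. The repository names are illustrative (modelled on the stackable/operatorA example above), and each docker build is assumed to run on a machine native to that architecture:

```shell
# Build and push each architecture separately, each on a native machine.
docker build -t stackable/operatorA-amd .   # on an x86_64 machine
docker push stackable/operatorA-amd

docker build -t stackable/operatorA-arm .   # on an ARM64 machine
docker push stackable/operatorA-arm

# Bundle both single-arch images into one manifest list and publish it.
docker manifest create stackable/operatorA \
  --amend stackable/operatorA-amd \
  --amend stackable/operatorA-arm
docker manifest push stackable/operatorA
```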
Cargo, Rust's build tool, comes by default with an option to define a target platform:
cargo build --target=ARCH
Thus you can compile x86 binaries from an ARM64-based system. It is important to note that Cargo cannot produce multi-arch binaries. In the full context of QEMU and multi-arch builds, you might ask why we cross-compile with Cargo at all, since the architecture can be emulated. Sadly we do not have a satisfying answer to this question; we can only tell you that it does not work without it.
If you want to build for a certain architecture, it is necessary to add the architecture-specific toolchain. For multi-arch builds this means running rustup again for the architecture you are already on (the QEMU-emulated architecture). We believe this is because QEMU does not come with all the regular libraries provided by a native OS, or has some infrastructure problems.
Why combine Cargo and Docker
As outlined before, Docker is required to build images. Since operators are shipped as images, Cargo and Docker have to interplay. At image build time, we compile our operator within a Docker container. As base image, we use ubi8-minimal hosted by Red Hat. On top of that we have a builder (also an image) which executes everything necessary to compile the operator. This leads to a rather complicated interplay: a base image emulated via QEMU, inside a Docker build, in which Cargo builds a binary for a specific target.
Please note that Cargo's cross-compiling is necessary if and only if you want to compile code within Docker's QEMU emulation.
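The interplay described above can be sketched as a two-stage Dockerfile. This is a hypothetical, heavily condensed illustration (the real ubi8-rust-builder does considerably more); the binary name `operator` and the toolchain installation steps are assumptions. `TARGETARCH` is supplied automatically by buildx for every requested platform:

```Dockerfile
# Stage 1: the builder image compiles the operator.
FROM registry.access.redhat.com/ubi8/ubi-minimal AS builder
ARG TARGETARCH
RUN microdnf install -y gcc curl && \
    curl --proto '=https' -sSf https://sh.rustup.rs | sh -s -- -y
COPY . /src
WORKDIR /src
# Map Docker's architecture name to a Rust target triple, then compile
# explicitly for that target (as discussed, plain `cargo build` inside
# QEMU does not work for us).
RUN case "$TARGETARCH" in \
      amd64) TARGET=x86_64-unknown-linux-gnu ;; \
      arm64) TARGET=aarch64-unknown-linux-gnu ;; \
    esac && \
    $HOME/.cargo/bin/rustup target add "$TARGET" && \
    $HOME/.cargo/bin/cargo build --release --target "$TARGET" && \
    cp "target/$TARGET/release/operator" /operator

# Stage 2: the slim runtime image only carries the compiled binary.
FROM registry.access.redhat.com/ubi8/ubi-minimal
COPY --from=builder /operator /usr/local/bin/operator
ENTRYPOINT ["/usr/local/bin/operator"]
```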
Since images are built in layers, you have at least one dependency for your custom-built image, such as a base OS. Note that every dependency in the sense of an image has to be available for the architecture you target. If you tell Docker to build for different platforms, it will try to retrieve each image for the specified architecture. If it fails to do so, it will fall back to any available architecture. This way it builds mixed images without making it obvious to you. There are methods to reveal the architecture of your image; you can read more in “Testing and checking multi-architecture images”.
Gathered knowledge (Steps to success without further explanation)
This section is dedicated to the technical implementation of multi-architecture images, along with the problems and limitations we found. We would like to give you a hand with the commands and methods we used with Docker and Cargo to build and publish multi-arch images of our products and operators.
Docker comes with buildx, which is intended to provide a foundation for multi-arch images. It is an extension to the well-known build command. Under the hood, buildx uses BuildKit, which is empowered by QEMU to produce non-native images on a single architecture. There are certain flags and options you can use; those are shown in the following.
docker buildx build -f <Dockerfile> --platform <platform1>,<platform2> --push .
This is the basic command to build an image for two different architectures. In principle there is no further limitation on platform1 or platform2; it is not necessary that one of them is your native one. You might have stumbled over --push. This flag publishes your manifest list to the repository, because Docker does not support manifest lists in your local image store (they cannot be exported from the build cache of the moby builder instance). If you compile exclusively for one non-native architecture, you can use --load to load the image from the cache into the local Docker image store. As a prerequisite, you need a builder for non-native architectures. Those builders run on the moby BuildKit, which basically provides every architecture you might want to target. Native builds (for your current system) do not require emulation and therefore do not need a builder.
To evoke a builder, you can run:
docker buildx create --name <builder-name> --use
This will produce an instance of moby and tell Docker to use it as an additional builder. If you want more than one node, pass the --append <builder-name> flag (the instance must already exist). With this you can create an arbitrary number of instances.
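The full buildx flow might then look like this sketch; the builder name and image tag are placeholders of our choosing:

```shell
# Create a named builder backed by the moby BuildKit and make it active.
docker buildx create --name multiarch-builder --use

# Start the builder and print the platforms it supports.
docker buildx inspect --bootstrap

# Build both architectures in one go and push the resulting manifest list.
docker buildx build -f Dockerfile \
  --platform linux/amd64,linux/arm64 \
  -t <tag-of-your-choice> \
  --push .
```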
Cargo build --target
Cargo provides a --target option with which you can instruct Cargo to compile the binary for a given target platform. Run the following steps:
rustup target add <target architecture> # for ARM64 e.g. aarch64-unknown-linux-gnu
With the Cargo toolchain ready, you have to set the CC, CXX and linker flags in your environment variables. Have a look at the following example for ARM64:
CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER=aarch64-linux-gnu-gcc
CC_aarch64_unknown_linux_gnu=aarch64-linux-gnu-gcc
CXX_aarch64_unknown_linux_gnu=aarch64-linux-gnu-g++
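As an alternative to exporting the linker variable on every invocation, Cargo also reads a per-target linker from its configuration file (the CC/CXX variables for C dependencies still come from the environment). A minimal fragment:

```toml
# .cargo/config.toml — per-target linker configuration for ARM64.
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"
```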
Of course the compiler and linker have to be present on your machine; with that, you are ready to cross-compile for ARM64 on a non-native machine. After reading this, you may ask why we have to go the route of cargo --target plus the specification of linker and C compiler, since the base OS is pulled for the architecture of your machine or of your emulation. That is exactly the point: QEMU has a flaw when it comes to compiling certain libraries such as unicode-bidi. If we compile within QEMU we end up with a segmentation fault, which is a current and known issue.
Testing and checking multi-architecture images
This section is dedicated to testing the architecture of your image. There are multiple ways to do this; we outline the ones we consider most useful here.
Once your manifest list is in your repository, you might want to check whether all your architectures were built. To do so you can leverage Docker's inspect command:
docker manifest inspect <Tag>
This will output the manifest list with all the architectures referenced within. Note that this only checks which architectures were built, compared to what was supposed to be built. If you build in parallel, the whole build process is supposed to fail as soon as one architecture fails to build.
A better overview is gained by pulling each specific architecture from your repository one by one:
docker pull <Tag> --platform <Platform>
From there you can run:
docker image inspect <IMAGE-ID>
This will output some metadata. The interesting parts are the fields ‘Architecture’ and ‘architecture’. If both show the expected values, you should have a correct image. Before you progress from this point and pull the next architecture of your image, don't forget to delete the old one to avoid overlaps or problems.
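The pull-inspect-delete cycle can be condensed into a small loop; the platform list and image tag are placeholders:

```shell
# Pull and inspect each architecture one after the other. Removing the
# image in between avoids the tag pointing at a stale architecture.
for platform in linux/amd64 linux/arm64; do
  docker pull --platform "$platform" <Tag>
  docker image inspect --format '{{.Os}}/{{.Architecture}}' <Tag>
  docker rmi <Tag>
done
```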
Another way to ensure that things are working as expected is to pull the image under test and run it with:
docker run -it --entrypoint bash --user root <Image-Tag>
You may need root access here, since we want to install binutils via apt-get, microdnf or yum, depending on what is available in your image. Once the package is installed you can run:
objdump -f /usr/lib/libssl.so
objdump -f path-to-compiled-binary
objdump gives you information about the architecture of the OS (via its libraries) as well as of the compiled binary. This way you can ensure that binary and operating system have the desired architecture.
The following limitations have been encountered:
OpenSSL:: We fixed a compile error with OpenSSL by vendoring it (see the issue on Stack Overflow).
Local manifest lists:: Currently it is not possible to export manifest lists to a local repository. This is a limitation of Docker.
In the following, we show our current state, with useful links to the issues and branches we were forced to adapt. Afterwards we outline our target state and how it should look one day.
Currently, we have introduced a stackable-experimental repository. To this repository we upload multi-arch images for testing purposes and for our developers to work with.
For products like Hive, Hadoop, NiFi, Kafka and so on, we have already realized a multi-arch CI build. You can find the build script for products here. Single-architecture images are still built in docker.stackable.tech/stackable/, while the same images are regularly published to stackable-experimental as multi-arch. This means that up-to-date products are already available there, just like in the main repository.
For operators, a proof of concept can be found in the experimental ubi8-rust-builder. We are now working on letting operators roll out custom product images, since currently only the Stackable repository is supported. We call this ticket Product image selection, which was proposed and accepted in ADR023.
Note: we use Architecture Decision Records (also known as AD or ADR) to document major tech decisions around Stackable and our products. All of them are public, if you’re curious!
However, we already tested those changes within the Kafka Operator. We plan to support all other products in the future as well.
The final goal is to merge the stackable-experimental repository into the stackable repository. From there on, the standard will be to provide multi-arch images for all products and operators. This means all merged PRs will be available for (currently) both platforms.
As shown above, our products are overall in a desirable state. The only thing left is to test them in a broader fashion and merge things into the stable stackable repository to make them accessible for everyone.
Operators should have a flexible way of setting the repository of a product image. Additionally, we would like the multi-arch images for operators to be built automatically via our CI pipeline.
We want to set up a test infrastructure with native ARM nodes and test both sides, the product and the operator images. We currently doubt that it is safe to assume that if one side is working, the other is fine too.