I have the privilege of having my desk located around a bunch of really intelligent people from the Mozilla Services team. They've been talking a lot about all the new technologies around server provisioning. One that interested me is Docker.
Docker is a pretty nifty piece of software. It's essentially a glorified wrapper around Linux Containers. But calling it that does it an injustice.
Docker interests me because it allows simple environment isolation and repeatability. I can create a run-time environment once, package it up, then run it again on any other machine. Furthermore, everything that runs in that environment is isolated from the underlying host (much like a virtual machine). And best of all, everything is fast and simple.
For my initial experimentation with Docker, I decided to create an environment for building Firefox.
Building Firefox with Docker
To build Firefox with Docker, you'll first need to install Docker. That's pretty simple.
Then, it's just a matter of creating a new container with our build environment:
curl https://gist.github.com/indygreg/5608534/raw/30704c59364ce7a8c69a02ee7f1cfb23d1ffcb2c/Dockerfile | docker build
The output will look something like:
FROM ubuntu:12.10
MAINTAINER Gregory Szorc "firstname.lastname@example.org"
RUN apt-get update
===> d2f4faba3834
RUN dpkg-divert --local --rename --add /sbin/initctl && ln -s /bin/true /sbin/initctl
===> aff37cc837d8
RUN apt-get install -y autoconf2.13 build-essential unzip yasm zip
===> d0fc534feeee
RUN apt-get install -y libasound2-dev libcurl4-openssl-dev libdbus-1-dev libdbus-glib-1-dev libgtk2.0-dev libiw-dev libnotify-dev libxt-dev mesa-common-dev uuid-dev
===> 7c14cf7af304
RUN apt-get install -y binutils-gold
===> 772002841449
RUN apt-get install -y bash-completion curl emacs git man-db python-dev python-pip vim
===> 213b117b0ff2
RUN pip install mercurial
===> d3987051be44
RUN useradd -m firefox
===> ce05a44dc17e
Build finished. image id: ce05a44dc17e
ce05a44dc17e
As you can see, it is essentially bootstrapping an environment to build Firefox.
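For reference, the Dockerfile fetched by the curl command corresponds to the steps echoed in the build output. Reconstructed from that output (so treat it as a sketch rather than the canonical gist contents), it looks roughly like this:

```
FROM ubuntu:12.10
MAINTAINER Gregory Szorc "firstname.lastname@example.org"

RUN apt-get update

# Prevent upstart from trying to start services inside the container.
RUN dpkg-divert --local --rename --add /sbin/initctl && ln -s /bin/true /sbin/initctl

# Core Firefox build dependencies.
RUN apt-get install -y autoconf2.13 build-essential unzip yasm zip
RUN apt-get install -y libasound2-dev libcurl4-openssl-dev libdbus-1-dev libdbus-glib-1-dev libgtk2.0-dev libiw-dev libnotify-dev libxt-dev mesa-common-dev uuid-dev
RUN apt-get install -y binutils-gold

# General development conveniences.
RUN apt-get install -y bash-completion curl emacs git man-db python-dev python-pip vim
RUN pip install mercurial

# Unprivileged user to perform the build.
RUN useradd -m firefox
```

Each RUN instruction produces a new image layer, which is why the build output prints an image id after every step.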
When this has completed, you can get a shell inside the container by running the image id printed at the end:
docker run -i -t ce05a44dc17e /bin/bash
# You should now be inside the container as root.
su - firefox
hg clone https://hg.mozilla.org/mozilla-central
cd mozilla-central
./mach build
If you want to package up this container for distribution, you just find its ID and export it to a tar archive:
docker ps -a
# Find the ID of the container you wish to export.
docker export 2f6e0edf64e8 > image.tar
# Distribute that file somewhere.
docker import - < image.tar
Simple, isn't it?
Future use at Mozilla
I think it would be rad if Release Engineering used Docker for managing their Linux builder configurations. Want to develop against the exact system configuration that Mozilla uses in its automation? You could do that. No need to worry about custom apt repositories, downloading custom toolchains, keeping everything isolated from the rest of your system, etc.: Docker handles all of that automatically. Mozilla would simply need to publish Docker images on the Internet, and anybody could come along and reproduce the official environment with minimal effort. Once we do that, there would be few excuses for someone breaking Linux builds because of an environment discrepancy.
Release Engineering could also use Docker to manage isolation of environments between builds. For example, it could spin up a new container for each build or test job. It could even save images from the results of these jobs. Have a weird build failure like a segmentation fault in the compiler? Publish the Docker image and have someone take a look! No need to take the builder offline while someone SSHes into it. No need to worry about the investigation changing machine state, because you can always revert to the state at the time of the failure! And builds would likely start faster. As it stands, our automation spends minutes managing packages before builds begin. This lag would largely be eliminated with Docker. If nothing else, executing automation jobs inside a container would allow us to extract accurate resource usage info (CPU, memory, I/O), since the Linux kernel effectively gives containers their own namespace independent of the global system's.
I might also explore publishing Docker images that construct an ideal development environment (since getting recommended tools in the hands of everybody is a hard problem).
Maybe I'll even consider hooking up build system glue to automatically run builds inside containers.
Lots of potential here.
I encourage Linux users to play around with Docker. It enables some new and exciting workflows and is a really powerful tool despite its simplicity. So far, the only major faults I have with it are that the docs say it should not be used in production (yet) and it only works on Linux.