Visualizing Mozilla's release infrastructure machine efficiency

August 30, 2013 at 12:30 PM | categories: Mozilla

Have you ever wondered what the machines in Mozilla's automation infrastructure are doing all the time? I have. So, I decided to create a visualization of this data. You can find it at http://automation-dashboard.paas.allizom.org/.

When you start looking at the visualizations, you notice something: there's a lot of time when machines aren't doing anything! All that white space between jobs is time machines are not processing jobs. This is capacity Mozilla is failing to utilize.

While some may say Mozilla's automation infrastructure has a load or capacity problem, I say it has an efficiency problem. The average machine in our automation infrastructure is doing work less than 50% of the time during weekdays. Now, some of this might be VMs that are powered off (due to low demand). But considering physical machines are also under-utilized, I'd say it's a global problem.

Oh, and don't get too hung up on a machine-job efficiency metric. While important, it's only part of the problem. When jobs are running, they are typically only using a fraction of the CPU available to them. From data now available in mozharness, we know that many test suites only use 10-15% CPU. If you combine this with sub-50% machine utilization in terms of jobs, I estimate we're only utilizing somewhere between 5-10% of available CPU cycles in our automation infrastructure. We have an order of magnitude more capacity in the machines we already have. We don't have a capacity problem, we have an efficiency problem. In my opinion, we should throw less time and money at new hardware and invest in maximizing the return from what we already have.
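
For the curious, here is the back-of-envelope arithmetic behind that estimate, sketched as a few lines of Python. The numbers are illustrative placeholders, not measurements from any specific machine pool:

# Rough estimate of effective CPU utilization across the automation pool.
# These numbers are illustrative; actual values vary by pool and test suite.
machine_busy_fraction = 0.50   # machines run jobs less than ~50% of the time on weekdays
job_cpu_fraction = 0.125       # many test suites only use 10-15% of available CPU

effective_utilization = machine_busy_fraction * job_cpu_fraction
print('Effective CPU utilization: ~{:.0%}'.format(effective_utilization))
# Prints ~6%, consistent with the 5-10% estimate above.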

Edit 2013-09-03

I started a thread about this data in another forum and others have pointed out that the data in this post is incomplete. Notably absent from this data is when on-demand EC2 instances are shut down, when there are or aren't jobs scheduled (if jobs aren't scheduled, poor machine utilization isn't such a big deal), and when some maintenance tasks are performed (e.g. Panda boards are checked for consistency between jobs). While we could probably hook job scheduling data up to the graph and numbers easily, I don't believe data on slave uptime or background tasks is publicly available. Perhaps we should publish this data to facilitate deeper analysis.

A goal of this post was to shed light on how little we utilize some of the machines in our automation infrastructure in order to inspire a conversation and ultimately to address the perceived problem. It was not a goal to point fingers and cast blame. If I inadvertently performed the latter or seemed to jump to conclusions based on incomplete data, I apologize.


Mercurial setup wizard for Firefox development

July 29, 2013 at 05:45 PM | categories: Mercurial, Mozilla

I'm a big fan of tools that encourage and/or enforce the following of best practices and that help people become more productive.

One of the tools that Firefox developers interact with nearly daily is Mercurial. As I've observed from coworkers and from community contributors, many don't have Mercurial configured for optimal development. For first-time contributors, this can manifest in patch rejection - an experience that can be embarrassing and demotivating. This is frustrating to me because most issues are easily identifiable and correctable. And, when addressed, everyone wins.

Anyway, I'm pleased to announce that there is now a configuration wizard in the Firefox source tree to help with configuring Mercurial. To run it, just type:

./mach mercurial-setup

Currently, it's aimed at first-time contributors. So, it's missing things that more seasoned developers rely on. But you need to start somewhere, right?

Currently, the tool isn't advertised anywhere other than mach help. Please run it and report issues in bug 794580 or file a new report. Once things have baked in, I'd like to add some kind of notification/tips system to mach where it will encourage you to do things like automatically run mach mercurial-setup. Until then, I recommend trying to remember to run mach mercurial-setup every few weeks to ensure your Mercurial environment is up to date and properly configured.

I'd like to thank Nick Alexander for sharing my enthusiasm for helping contributors and for taking the time to review this work.


Track pushes and train riding with Mercurial

July 25, 2013 at 01:10 PM | categories: Mercurial, Mozilla

My Mercurial extension for Firefox development now has an initial implementation of pushlog aggregation and searching.

You first start by synchronizing the pushlog data on Mozilla's servers with the local client:

hg pushlogsync

This takes a while the first time you run it because it has to download all the data. On subsequent runs, it only downloads new data, so it should be much faster.
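
To give a flavor of how an incremental sync like this can work, here is a minimal sketch against the json-pushes endpoint on hg.mozilla.org. This is not the extension's actual implementation; treat the startID parameter and the response handling as illustrative assumptions about the public pushlog API:

# Minimal sketch of an incremental pushlog sync (not mozext's real code).
import json
from urllib.request import urlopen

def fetch_new_pushes(repo_url, last_push_id):
    """Return pushes newer than last_push_id, keyed by push ID."""
    url = '{}/json-pushes?startID={}'.format(repo_url, last_push_id)
    with urlopen(url) as response:
        pushes = json.load(response)
    return {int(push_id): info for push_id, info in pushes.items()}

# Usage: fetch pushes we haven't seen yet; remember the highest push ID for next time.
new_pushes = fetch_new_pushes('https://hg.mozilla.org/mozilla-central', 20000)
for push_id in sorted(new_pushes):
    info = new_pushes[push_id]
    print(push_id, info['user'], len(info['changesets']), 'changesets')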

Then, you can search for the push history of a changeset:

$ hg changesetpushes -a b968708558b9
133967:b968708558b9 Bug 839809:  Make counter-increments and list counting that would go past our internal (int32_t) limit keep the counter at its current value rather than wrapping.  r=dholbert

Per CSS WG resolution regarding counter-styles-3, afternoon of 2013-02-05:
http://krijnhoetmer.nl/irc-logs/css/20130205#l-1590
http://lists.w3.org/Archives/Public/www-style/2013Feb/0392.html

Note that this patch depends on signed integer overflow behavior in C++,
which I believe is portable despite being unspecified.
Tree      Date                Username              Build Info
inbound   2013-02-21T18:12:57 dbaron@mozilla.com    https://tbpl.mozilla.org/?tree=Mozilla-Inbound&rev=85b91048c1cd
central   2013-02-22T09:43:12 ryanvm@gmail.com      https://tbpl.mozilla.org/?tree=Mozilla-Central&rev=3a7d4085787e
build     2013-02-22T14:31:42 gszorc@mozilla.com    https://tbpl.mozilla.org/?tree=Build-System&rev=3a7d4085787e
fx-team   2013-02-25T01:04:44 ttaubert@mozilla.com  https://tbpl.mozilla.org/?tree=Fx-Team&rev=31466fd86eb7
graphics  2013-02-25T20:02:57 mwoodrow@mozilla.com  https://tbpl.mozilla.org/?tree=Graphics&rev=dcf53b7140cd
ash       2013-02-26T13:50:41 armenzg@mozilla.com   https://tbpl.mozilla.org/?tree=Ash&rev=201b64ad48d8
services  2013-02-28T09:42:45 Ms2ger@gmail.com      https://tbpl.mozilla.org/?tree=Services-Central&rev=31466fd86eb7
aurora    2013-04-01T13:50:56 bbajaj@mozilla.com    https://tbpl.mozilla.org/?tree=Mozilla-Aurora&rev=60a3f369ccf0
beta      2013-05-13T09:59:38 lsblakk@mozilla.com   https://tbpl.mozilla.org/?tree=Mozilla-Beta&rev=60a3f369ccf0
release   2013-06-17T15:53:19 akeybl@mozilla.com    https://tbpl.mozilla.org/?tree=Mozilla-Release&rev=c54e3363712e

(The -a argument prints all trees instead of just the release trees).

I'd like to integrate bug tracking into the mix to facilitate answering questions like "when did bug 123456 ride the trains?"

I'd also like to integrate release versions and build IDs into the mix. For example, when I look up a changeset, I want to know the first build on the Nightly, Aurora, Beta, and Release channels that change was included in.


Mercurial Extension for Gecko Development

July 22, 2013 at 10:27 AM | categories: Mercurial, Mozilla

My weekend was spent hacking on Mercurial extensions. First, I worked on porting the pushlog extension off SQLite. This will eventually enable Mozilla to move Mercurial hosting off NFS and should make hg.mozilla.org much faster as a result!

But the main purpose of this blog post is to introduce a new Mercurial extension I wrote this weekend!

Gecko developers perform a number of common tasks with Mercurial, so I thought it would be handy to package them up in an extension.

To install the extension:

hg clone https://hg.mozilla.org/hgcustom/version-control-tools

Then add this extension to your hgrc file (either the global or per-repository will suffice):

[extensions]
mozext = /path/to/version-control-tools/hgext/mozext

Since I believe tools should be self-documenting, run the following for usage info:

$ hg help mozext

Here are some examples:

# Clone mozilla-central into the mc directory.
hg clone central mc
hg clone mc mc

# Create a unified Mercurial repository containing changesets
# from all the release repositories.
hg cloneunified gecko

# Pull changes from the central and inbound repositories.
hg pull central
hg pull inbound

# Update the working tree to the tip of inbound.
hg up inbound/default

# View the tree open/closed status.
hg treestatus

# Show a list of all known trees and their aliases.
hg moztrees

# Open TBPL for the push containing a changeset.
hg tbpl inbound 821e984ef423
hg tbpl inbound inbound/default

# Push the tip of inbound to mozilla-central
hg pushtree -r inbound/default central

I've only tested this extension with Mercurial 2.6 (which every Mozilla developer should be running). I'm not willing to support older versions. Upgrade already!

There are a number of features I'd like to implement:

  • hg importtry - Automatically import changesets for a Try push into the repository.
  • hg land - Automatically land patches on an integration tree (like inbound). Will handle rebasing automatically.
  • hg critic - Perform style checking and other analysis on a changeset or group of changesets.
  • Ability to integrate build status into changeset info. This will allow things such as pull only the last green changeset. I'd also like a build status field to appear in the log output. Unfortunately, I believe the latency of the build lookup API is prohibitively high to perform the kind of tight integration I'd like.
  • Move mozautomation Python package into a standalone package or integrate already existing code (did I reinvent the wheel?).
  • Log fetching. Specify a changeset and fetch build/test logs.
  • Possibly move code into mozilla-central.
  • Possibly add mach commands for some of this functionality.

There's no bug component for this extension (yet). If you find any issues or wish to add a feature, just email a patch to me at gps@mozilla.com.

Please let me know if you find this useful or if you have any questions.


Analysis of Firefox's Build Automation

July 16, 2013 at 06:15 PM | categories: Mozilla

Mozilla operates thousands of machines whose sole role is to build Firefox (and related applications), run tests, perform static analysis, etc. This is collectively referred to as the Firefox/Mozilla build automation or just automation. The output of all this automation can be seen at tbpl.mozilla.org.

In this post, I'll give an overview of how all this automation works followed by a critical analysis identifying what I like and what I feel should be improved.

How Firefox automation works

Let's take a journey through what happens when you push a new revision of mozilla-central (the main Firefox repository) to Mozilla's canonical Mercurial server. (For Mozilla people, this journey is roughly the same regardless of which automation-enabled project branch you push to.) While Firefox's automation infrastructure kicks off builds on several platforms and operating systems, for simplicity I'm going to limit low-level technical details to our 64-bit Linux builds.

Before I begin, a disclaimer: I'm not a subject expert in much of what I'm about to say. There are people who spend an order of magnitude more time than I do touching the systems I'm about to describe. If I get something wrong, please contact me and I'll update this post.

Let's begin.

Buildbot

The heart of Firefox's build automation is a piece of software called Buildbot. Buildbot is essentially a glorified job scheduling system. I find the buildbot basics covers, well, the basics pretty well. What you need to know is that Mozilla maintains a buildbot repository that appears to contain the buildbot core plus basic customization for Mozilla.

There are buildbot masters and slaves. Masters do all the coordination and scheduling; slaves do the real work (such as compiling Firefox). Mozilla operates a handful of masters and a few thousand slaves.

When you push code to a project branch (like mozilla-central), a buildbot master sees the push and figures out what needs to happen. For mozilla-central, the push gets translated to a request to build on several different platforms. These requests go to a scheduler (possibly getting collapsed into a single request) and are then turned into jobs that run on slaves.

This logic mostly lives in the buildbot-configs repository. Of particular interest is the config.py file, which pretty much defines how buildbot is configured at Mozilla - at a high-level anyway.

When a scheduled job executes, the high-level job request is converted into low-level actions (or steps in buildbot parlance) that get executed on slaves. For example, a request to build might clone the source repository, run client.mk, package the results, etc. This logic lives in the buildbotcustom repository. It's worth highlighting the factory.py file. This file contains the beef of the logic for converting high-level jobs into actions on slaves. Start at the MozillaBuildFactory class to see exactly what goes into performing a build. Then move on to addDoBuildSteps(), which contains the command for invoking the actual build system. As you can see, there's a lot that goes into building besides just invoking the build system (like most developers do)!
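
To make the factory concept more concrete, here is a heavily simplified, generic buildbot factory. This is not Mozilla's actual factory.py - the steps and arguments below are plain buildbot 0.8-era APIs used purely for illustration:

# A generic, simplified buildbot BuildFactory (buildbot 0.8.x era), NOT Mozilla's factory.py.
# Each addStep() call becomes one "========= Started ... Finished" section in the job log.
from buildbot.process.factory import BuildFactory
from buildbot.steps.source.mercurial import Mercurial
from buildbot.steps.shell import ShellCommand

factory = BuildFactory()
factory.addStep(Mercurial(repourl='https://hg.mozilla.org/mozilla-central',
                          mode='full', method='clobber'))
factory.addStep(ShellCommand(name='compile',
                             command=['make', '-f', 'client.mk', 'build'],
                             description='compiling'))
factory.addStep(ShellCommand(name='package',
                             command=['make', 'package'],
                             description='packaging'))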

For many automation jobs, there is an additional component that comes into play: mozharness. mozharness is relatively new to the Firefox build automation landscape so you may not be familiar with it. A goal of mozharness is to largely migrate the low-level logic out of buildbotcustom - the logic that converts a high-level job request into low-level buildbot steps (typically command invocations) - into a separate, standalone entity that doesn't depend on buildbot. Another goal is to enable developers to run mozharness locally and run automation jobs just like the official automation infrastructure does. If you have time, I encourage you to read the mozharness FAQ to learn more. My understanding is mozharness will eventually power all of the jobs currently defined in buildbotcustom, so I recommend getting acquainted with mozharness.

In the mozharness world, automation jobs are defined as scripts. Here's the marionette script. You just execute a script (with ideally as few arguments as possible) and mozharness does the rest. Instead of buildbot defining a job with, say, 12 steps and housing the logic for configuring those steps, buildbot just says "run the marionette mozharness script." Since very little business logic now lives in buildbot, this essentially reduces buildbot's role to just job scheduling.
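
To illustrate the shape of this model without depending on mozharness itself, here is a tiny self-contained sketch of an action-based script: the job is a named list of actions, each mapped to a method, and the caller (buildbot) only needs to invoke the script. Real mozharness scripts subclass BaseScript, read config files, handle logging, and much more, so treat this strictly as a conceptual sketch with made-up commands:

# Conceptual sketch of an action-based automation script, loosely inspired by
# the mozharness model. This is NOT mozharness's actual API.
import subprocess
import sys

class ActionScript:
    """Runs a fixed sequence of named actions; the caller just runs the script."""
    actions = ['clobber', 'checkout', 'run-tests']

    def run(self):
        for action in self.actions:
            print('##### Running action: {}'.format(action))
            getattr(self, action.replace('-', '_'))()

    def run_command(self, cmd):
        print('Executing: {}'.format(' '.join(cmd)))
        subprocess.check_call(cmd)

    def clobber(self):
        self.run_command(['rm', '-rf', 'build'])

    def checkout(self):
        self.run_command(['hg', 'clone', 'https://hg.mozilla.org/mozilla-central', 'build'])

    def run_tests(self):
        # A real script would formulate harness arguments from its config here;
        # this path is a placeholder.
        self.run_command(['python', 'build/run_tests.py'])

if __name__ == '__main__':
    sys.exit(ActionScript().run())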

And that is essentially how the automation determines what to run. Now let's talk about the machines automation runs on.

Machine provisioning

Earlier, I said Mozilla operates thousands of buildbot slaves. Let's talk about how those slaves come into existence.

A slave is just a fancy name for a machine, either physical or virtual. These machines are owned or operated by Mozilla. Mozilla either buys a physical machine or rents one from a cloud provider, like Amazon EC2.

For hopefully obvious reasons, it is important for the configuration of these machines to be consistent. Let's talk about how that is done. Keep in mind I'm talking about Linux machines. OS X and Windows machines go through a different procedure.

When a new machine is acquired, it needs an operating system. There is a kickstart config file that installs CentOS 6.2. At the end of the base OS flash/install, it configures Puppet to talk to a central Puppet master. This Puppet infrastructure is called PuppetAgain and its files are stored in the puppet repository.

Puppet is let loose on the fresh OS install and eventually the machine is configured so it is homogenous with other similar machines in the automation infrastructure. Presumably, Puppet continually polls the central Puppet master and applies the latest configuration.

Part of the puppetization of this machine involves installing the buildbot client. The client eventually registers with the buildbot master and waits for jobs to process.

So, we've described how machines are provisioned as buildbot slaves and how buildbot jobs are converted to actions/steps/commands to be performed on slaves. Let's examine a job in more detail.

Running a build job

Before I talk about the details of a build job, it's worth mentioning that nearly everything I described up until this point is largely hidden from the view of most Firefox developers. As far as I know, things like Puppet logs are hidden from public view. And there shouldn't be anything terribly wrong with that: the Puppet configs are public, after all. Unless you are affiliated with Release Engineering or the Automation and Tools Team (A*Team) or hack on a component that warrants its own piece of automation, you probably aren't too concerned with how all of this works.

Anyway, it's finally time to start talking about something almost every Firefox developer has done: build Firefox from source.

As I mentioned above, code in buildbotcustom (to be replaced by mozharness someday) is responsible for turning a Firefox build job into a series of actions/steps/commands to run on a slave. And, lucky for us, the activity of a slave is captured and saved to text log files! If you've ever used TBPL, you've almost certainly clicked a link to view one of these logs.

In this section, I'll describe the steps performed in this log from a recent mozilla-central build on a 64-bit Linux machine. I will be paying particular attention to steps that affect the build environment (for reasons that will be revealed in my critique below).

If you load our log of interest next to factory.py from buildbotcustom, you can start to see how they are related. You may notice self.addStep() calls in factory.py correspond to ========= Started ... ========= Finished sections in the log. That's no accident: every addStep() call produces a section like that in the log.

Now let's look at some of those steps in detail.

The job/log contains a number of set property steps. Search for set props and you'll find them in the log. These steps define named properties in a hash map buildbot uses to represent the current configuration/state. Think of these properties as a way for buildbot to communicate metadata between masters and slaves.

One of the first interesting steps we see is the cloning of the tools repository. Search for Started clone build tools and you'll find it. This repository contains a lot of support tools and scripts used by all parts of automation. There are lots of useful tools in there!

Skipping over the steps that check whether to clobber the builder or purge old content from disk, the next build steps relevant to our interests involve the population of a mock environment. Search for Started mock-tgt mozilla-centos6-x86_64 to find it in the logs.

Mock is a piece of software that manages chroots. It was written by the Fedora project for creating isolated build environments for software packages. For reasons unknown to me, Mozilla runs a forked version of mock called mock_mozilla.

The build job creates a fresh mock environment on every build. (This is clearly indicated by the INFO: chroot (/builds/mock_mozilla/mozilla-centos6-x86_64) unlocked and deleted line in the log.) Later on, builds are performed inside this mock environment. This means that every build job is mostly isolated from both the underlying operating system and all build jobs that came before.

You can see the creation of the new mock environment by looking for the mock_mozilla -r mozilla-centos6-x86_64 --init command in the log. This is using the mozilla-centos6-x86_64 configuration file to create the new environment. This file is managed by Puppet, so you can see it in the puppet repository. The setup command on line 11 is the most important line in this file: it defines what commands to run to initialize the new mock environment.

Populating the mock environment takes a number of buildbot steps. After copying a bunch of files into the mock/chroot, the mock environment is further initialized by installing a number of packages. Search for Started mock-install in the log. Yum is being used to install a number of packages required to build Firefox. This package list appears to come from config.py in the buildbot-configs repository. These packages are downloaded from a Yum repository hosted by Mozilla. Altogether, 249 new packages consuming 821 MB are installed during this step!

After the mock environment is created, the mozharness repository is removed, re-cloned, and updated to the production branch. After that, we pull changes from mozilla-central and update the local checkout to the revision we've been told to build.

If you search for Started got mozconfig, you'll find where the mozconfig file (the build system configuration file) is acquired.

After that, the tooltool configuration is consulted. Tooltool is essentially a content-addressable file store: files are stored and retrieved by the hash value of their content. A manifest file inside mozilla-central defines the set of files to fetch from tooltool at build time. At this step of the build, that file is consulted and listed files are downloaded. While the Linux manifest is currently empty, the OS X tooltool manifest defines the digest of the Clang archive to use to build the tree.

Since tooltool files are addressed by content (not merely by name), the same file will be fetched no matter when the build runs. In other words, behavior stays constant even as the contents of the tooltool repository change (or at least it should).
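
Here is a tiny sketch of the content-addressable idea: a manifest pins each file by the digest of its content, and a fetched file is only accepted if its hash matches. This is not tooltool's actual code, and the manifest fields are simplified assumptions:

# Simplified illustration of content-addressable fetching (not tooltool's real code).
# Because the manifest pins a digest, the same bytes are used no matter when the
# build runs or what has since been added to the file store.
import hashlib

def verify_artifact(path, expected_digest, algorithm='sha512'):
    """Return True if the file's content hash matches the pinned digest."""
    h = hashlib.new(algorithm)
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(1 << 20), b''):
            h.update(chunk)
    return h.hexdigest() == expected_digest

# Hypothetical manifest entry; filename and digest are placeholders.
manifest = [{'filename': 'clang.tar.bz2', 'algorithm': 'sha512', 'digest': 'deadbeef...'}]
for entry in manifest:
    ok = verify_artifact(entry['filename'], entry['digest'], entry['algorithm'])
    print(entry['filename'], 'OK' if ok else 'DIGEST MISMATCH')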

After tooltool contents sync, we finally arrive at the actual build step. Search for Started compile in the log. The important detail here is that mock_mozilla is used to build firefox with client.mk inside the fresh mock/chroot environment.

After building, we move on to other tasks, such as creating a distributable package, running tests, etc. I'm just going to glance over them because there's a lot going on and it would take a long time to explain it all! I encourage you to look at the steps in the log and learn.

While these steps are going on, the buildbot master is notified that the build and packaging aspect of the job has completed (sometime before make check is executed). Upon receiving this success notification, the buildbot master schedules derived jobs, notably all the test suites (reftests, mochitests, xpcshell tests, etc). This scheduling occurs before make check has completed so overall turnaround time is reduced as much as possible.

Finally, at the end of the log, the slave/machine is rebooted. At this point, the job has finished. TBPL colors the B next to the job green to signal successful completion. The slave waits for its next job from the buildbot master.

And that is how Firefox is built! I could go into details for all of the derived test jobs, but I won't, as that would take a lot of effort! I leave that as an exercise for the reader.

Analysis

The Firefox build automation is complex and composed of many pieces. It's a testament to a lot of people's hard work that it works as well as it does!

From my experience at a previous job managing large numbers of servers in a production datacenter, I commend Release Engineering for deploying Puppet to help ensure the machines performing Firefox's build automation are in a consistent state. I also like how mock is used to mostly isolate build jobs from one another. These are both very important to ensure Firefox is built consistently over time.

The on-going migration of automation logic from buildbotcustom to mozharness is a fantastic project and I hope it is completed soon. I hold hope that one day we can integrate mozharness into the local development workflow (likely transparently through tools like mach) and local developers can invoke actions using the same code path as the official automation infrastructure.

Areas for improvement

The Firefox build automation largely works well. And I don't mean to take away from that. However, there are a number of areas where things could be improved. In this section, I'll talk about some of them.

Before I get into details, let me share an experience I had about a year ago.

A tale of modifying the xpcshell test harness

In April 2012, I was writing a lot of JavaScript testing code and was frustrated at how difficult it was to share test helper code between tests. I filed bug 748490 to request a new feature in the JavaScript test harnesses: the ability to create testing-only JavaScript modules that weren't shipped as part of Firefox. This would encourage code reuse among tests and make writing tests easier. I thought it would be a relatively simple feature to implement!

And, it was. At first. The initial implementation landed two weeks after I filed the bug (it didn't require too much effort, but I was busy with other tasks). It landed without incident, although nothing was using it and there was no test coverage. However, my local development started relying on it and things were working just fine.

When I attempted to actually land a test that made use of this new feature, it failed because the new directory for these shared modules wasn't being archived. Simple enough to fix, right? Wrong. Start reading bug 755339 from comment #4. It is a trail of agony. I had to uplift my change to all the major branches. Then, once that was done, buildbotcustom could be updated. Uplifts were performed on May 24. buildbotcustom was pushed out to production a week later on May 31. It immediately broke the world. At the time, philor said it was the worst tree bustage he'd ever seen. Literally every tree was red. Achievement unlocked. A workaround was quickly devised on May 31 (after the buildbotcustom change was backed out to restore working automation) and landed. Again, it had to be uplifted to all the trees. This patch conflicted when applied on older trees and I accidentally committed a typo and broke beta, release, and esr in the process. I got an earful for breaking these trees and lost a lot of street cred with the sheriffs (who are charged with keeping law and order in the land of the source trees). The buildbotcustom change finally landed on June 7 and stuck.

Finally, my initial idea of adding testing-only JavaScript modules had been implemented and was available for all to use. It only required uplifting a patch to all project branches, having Release Engineering reconfigure buildbotcustom, and breaking all the trees in the process. This experience was truly WTFOMGBBQ. And, the agony above was from adding a seemingly innocuous feature to a single test harness. Can you imagine what it's like to add a new test harness to automation?!

In-tree automation configs different from Release Engineering's

The above example demonstrates many shortcomings in the current automation infrastructure. The first one I will talk about is that the in-tree automation configs (how to perform a specific automation job, such as running an individual test harness) are almost completely different from what Release Engineering runs on the official infrastructure!

A year ago, mozilla-central had testsuite-targets.mk - a make file defining targets like xpcshell-tests and mochitest-plain that allowed developers to run these test suites locally. Release Engineering had buildbotcustom, which didn't use testsuite-targets.mk at all. While invoking most test harnesses is simply a matter of formulating arguments for a Python script, that logic was duplicated between testsuite-targets and buildbotcustom.

Today, Release Engineering has been porting job invocation code to mozharness. And, I have been encouraging people to locally run tests with mach. Initially, mach commands executed make. However, commands are now bypassing make and calling into the Python test harnesses natively. This allows more advanced behavior since mach has more control and insight into the underlying test harness. The downside is that mach is now reinventing logic.

So, we now have 3 or 4 separate implementations for performing many of our automation jobs. Medium term, you figure Release Engineering consolidates all the buildbotcustom logic into mozharness, eliminating one of them. I also think it is inevitable that testsuite-targets is refactored to invoke mach commands or is just removed altogether. But that still leaves us with 2 independent implementations!

I believe we should work towards consolidating the logic for job invocation to inside mozilla-central. The commands developers run locally should be as similar as possible to what runs on official automation infrastructure. Any differences introduce potential for different results. Differences also increase the burden to roll out changes. I would argue that an in-tree change should be all that is necessary to change the behavior of official automation infrastructure.

How this is accomplished, I'm not entirely certain. In my ideal world, I think mozharness would live in tree. Although, I understand that may not be practical because much of what mozharness does involves downloading just-built packages, uploading results to a server, etc. I think a middle ground that accomplishes most of what we seek is for the Release Engineering configs (mozharness) to contain as little logic as possible for actual job invocation and let something in-tree do the rest. For example, the mozharness job for the xpcshell test harness would simply be "execute the run-xpcshell-tests script from the tests archive." mozilla-central could then have full domain over what exactly is done. If mozilla-central changes, everyone picks up those changes immediately.
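
As a rough sketch of that split, the Release Engineering side would reduce to little more than "run an entry point shipped in the tests archive," while everything harness-specific lives in the tree. Both the run-xpcshell-tests entry point and the paths below are hypothetical:

# Hypothetical sketch of the proposed split. The automation side knows almost nothing
# about the harness; it just runs a script shipped in the tests archive, and
# mozilla-central owns what that script does.
import subprocess

def run_in_tree_job(tests_dir, entry_point='run-xpcshell-tests'):
    """Changing harness behavior only requires changing the tree, not the server configs."""
    return subprocess.call(['./{}'.format(entry_point)], cwd=tests_dir)

# The automation-side job definition collapses to a single call (path is made up).
exit_code = run_in_tree_job('/builds/slave/test/build/tests')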

Local run-time environments different from official ones

In a similar vein to the previous section, there are discrepancies between local run-time environments and official ones. By run-time environment, I mean the state of the operating system (installed packages, configuration settings, etc). This plays an important role in determining how an automation job executes. An obvious example is the compiler toolchain: GCC 4.5 is obviously going to have different behavior from GCC 4.7.

While Release Engineering has taken steps to ensure consistency in the configuration of the machines powering the official automation infrastructure, local machine configurations are living in the wild west. Developers are effectively unable to recreate the official automation environment. This lowers the likelihood that a local build will have the same outcome when performed on official automation infrastructure.

While supporting diverse run-time environments will result in greater compatibility and is thus a good thing, I think it is important that local developers have the ability to recreate the official run-time environment as closely as possible. This will raise confidence that local results can be trusted and should cut down on development cost by reducing the number of Try pushes and shortening development cycles.

Recreating the official run-time environment varies in difficulty depending on the operating system. For Linux, it should actually be pretty easy! For the build environment, Mozilla simply needs to package the mock environment used to build or at least could publish a script used to create said environment. In bug 886226, I made an initial stab at this by creating a Vagrantfile that will kinda/sorta recreate the official 64-bit Linux build environment. I even published an archive of the mock environment (which can be imported into Docker). As pointed out on the bug, my work isn't perfect. But it's a start. And it's better than what developers have today to reproduce the official environments.

If we decide publishing archives of chroot environments is the way to go, I believe we could extend the Linux solution to OS X as well. OS X has chroot. It also has a sandboxing facility (man sandbox). There are also tools like jailkit. Using chroot environments will likely make our automation faster, too, since using archives will almost surely be faster than recreating the environment on every job run. Bug 851294 has been filed to track this.

What about Windows? Well, I don't know. We sort of have an environment with MozillaBuild. But, it's not isolated from the rest of the operating system like chroot environments are. Modulo Windows, two out of three tier one platforms isn't too bad!

Now, even if the software environment is similar, hardware differences can also affect results. Unfortunately, there's little that can be done on this front aside from having everybody use the same hardware models. That's not going to happen. So let's pretend we don't have that problem.

Configurations change over time

I recently wrote a post on the importance of time on machine configurations. The gist is that configurations that always pull from the tip of version control or that rely on external resources often vary with time and aren't truly idempotent. This makes it difficult to reproduce specific configurations at future points in time.

Unfortunately, Mozilla's build automation is highly susceptible to configuration variance over time. There are many examples.

The buildbot configuration is periodically pushed to the infrastructure. A job will inherit the config that is currently deployed. If you make a backwards incompatible change to the buildbot configs and push an old revision of mozilla-central, things will blow up.

There are two time-dependent aspects of Puppet bootstrapping. First, the Puppet configs (like buildbot configs) are periodically pushed to a central master server. Whichever version is deployed at a given time is the version picked up by the automation job. Second, we don't do a good job of pinning versions inside the Puppet config. Here are the Mercurial, mock, and Python manifests using the latest package version. This means if a new version of a package is added to Mozilla's Yum repositories that new package version will be deployed on the next Puppet sync. This is definitely not time independent. Fortunately, the base operating system configuration for Linux isn't terribly important because of the mock environment. But, it still varies.

Like the base operating system, the mock environment is full of dependence on time. First, the config file itself is managed by Puppet and thus susceptible to its time-dependent behavior. Second, the packages installed in the mock environment aren't version pinned. This means that whatever packages are deployed in Mozilla's Yum repositories will be used. So, every time the Yum package repository is updated, we potentially switch the configuration of our build environment. On the surface, this terrifies me. Maybe we have a strict update policy in place to prevent excessive package updates. The modified times of packages on the server seem to indicate that. Even if we didn't update the package repository, it still seems a bit fragile: I'd much prefer to pin package versions everywhere than rely on the content of a Yum server.

The default branch of the tools repository is always checked out at build time. As new revisions are added to this repository, builds will pick them up immediately. If you make a backwards incompatible change to the tools repository, old revisions may no longer build.

The production branch of the mozharness repository is always used during jobs. Again, if mozharness changes, every future job sees these changes.

The takeaway here is that many of the tightly-related systems/repositories aren't linked in any way. The automation configuration simply pulls the tip of each. We run into problems when we wish to make backwards incompatible changes. If you make one, you first have to update all repositories to be compatible. This is a huge pain (see my experience above). Furthermore, it makes it impossible to have successful automation runs from old revisions of those repositories!

The solution to this mess is to pin versions and configs everywhere. The repositories being put through automation (namely mozilla-central) should contain the revisions of the configs to use. The tree should say "I want to use revision X of mozharness, revision Y of the build machine configuration," etc. This will enable much more consistent automation output over time. It cuts down on surprises (did behavior change on July 14 because of a change to mozilla-central or because of a new automation config being rolled out to the server?). It allows people to build very old packages. This means Mozilla wouldn't need to store terabytes of old builds around (we could just trigger the build again and get the same output).
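
To make the idea concrete, here is a hypothetical sketch of what in-tree pinning could look like: the tree carries a small manifest of automation repository revisions, and the infrastructure checks out exactly those revisions. Nothing like this exists today; the names, revisions, and URLs below are invented:

# Hypothetical in-tree manifest pinning the automation configs a given revision of
# mozilla-central wants. All names, revisions, and URLs are made up for illustration.
AUTOMATION_PINS = {
    'mozharness': 'abc123def456',
    'buildbot-configs': '789fedcba321',
    'tools': '0011223344aa',
}

def checkout_commands(pins, base_url='https://hg.mozilla.org'):
    """Yield the hg commands the infrastructure would run to get exactly these revisions."""
    for repo, revision in sorted(pins.items()):
        yield ['hg', 'clone', '-u', revision, '{}/{}'.format(base_url, repo), repo]

for cmd in checkout_commands(AUTOMATION_PINS):
    print(' '.join(cmd))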

I recognize that deterministic automation configuration is likely not completely achievable. But, that doesn't mean we shouldn't work towards it. Having something better than today enables so many more useful scenarios and flexibility in our automation. Let me explain.

Say you want to add or remove a test harness. Or, maybe you want to add a required argument or remove an obsolete argument from a test harness. The procedure for doing this today is far from trivial. It's not enough to simply land your change in mozilla-central and be done with it. Instead, you need to land support in buildbotcustom or mozharness, prepare to land in mozilla-central, then coordinate with Release Engineering to have your changes landed around the time of a server deployment. And since changes likely affect multiple trees, you're likely also landing things in inbound, fx-team, services-central, possibly aurora, beta, and release, etc. If you aren't landing in aurora, beta, release, esr, b2g, etc, you need to remember to have your change merged into these server configurations when those trees inherit your code (although Release Engineering is typically pretty good about tracking this and doing it for you).

Contrast this with simply making a backwards incompatible change by checking in support in mozharness and then making the change in mozilla-central, along with a bump of the mozharness revision to use. When that changeset is pushed to the infrastructure, a compatible version of mozharness is used. When that change gets merged into other trees, the appropriate mozharness revision is used. No extra work needed. Utopia. You just made the A*Team and Release Engineering much more productive by eliminating a lot of overhead.

Conclusion

The Mozilla automation infrastructure is a complex beast. There are many moving parts and separate systems. Any newcomer to Mozilla should simply stand in awe that so many systems work together so seamlessly and so well.

There are improvements that can be made, sure (especially in the area of deterministic behavior over time). But, I think with the mozharness work and the Puppetization of the server infrastructure that Mozilla is on the right course. I look forward to a future where the source code in mozilla-central and the automation infrastructure are more tightly integrated. We're trending there. Give it time.

