Dropping Explicit Support for Mercurial 3.0

May 07, 2015 at 04:05 PM | categories: Mercurial, Mozilla

As of a few minutes ago, we explicitly dropped support for Mercurial 3.0 across all the Mercurial code in the version-control-tools repository. File issues in bug 1162304.

Code may still work against Mercurial 3.0. But it isn't supported and could break hard at any time.

Supporting multiple versions of any software carries some cost. The people writing tooling around Mercurial are busy, and it is a waste of our time to bend over backwards to support old versions of software that all users should have upgraded from months ago. If you are still running an older Mercurial version, you aren't getting the best performance and may encounter bugs that have since been fixed.

See the Mozilla tailored Mercurial installation instructions for info on how to upgrade to the latest/greatest Mercurial version.

Reporting Mercurial Issues

May 04, 2015 at 01:45 PM | categories: Mercurial, Mozilla

I semi-frequently stumble upon conversations in hallways and on irc.mozilla.org about issues people are having with Mercurial. These conversations periodically involve a legitimate bug with Mercurial. Unfortunately, they frequently end without an actionable result: unless someone files a bug, pings me, etc., the complaints disappear into the ether. That's not good for anyone and only results in bugs living longer than they should.

There are posters around Mozilla offices that say if you see something, file something. This advice does not just apply to Mozilla projects!

If you encounter an issue in Mercurial, please take the time to report it somewhere meaningful. The Reporting Issues with Mercurial page from the Mercurial for Mozillians guide tells you how to do this.

It is OK to complain about something. But if you don't inform someone empowered to do something about it, you are part of the problem without being part of the solution. Please make the incremental effort to be part of the solution.

Mercurial 3.4 Released

May 04, 2015 at 12:40 PM | categories: Mercurial, Mozilla

Mercurial 3.4 was released on May 1 (following Mercurial's time-based schedule of releasing a new version every 3 months).

3.4 is a significant release for a few reasons.

First, the next version of the wire protocol (bundle2) has been marked as non-experimental on servers. This version of the protocol paves over a number of deficiencies in the classic protocol. I won't go into low-level details. But I will say that the protocol enables some rich end-user experiences: the server can hand out URLs for pre-generated bundles (e.g. offloading clones to S3), pushes can be atomic, and advanced workflows such as having the server rebase automatically on push become possible. Of course, you'll need a server running 3.4 to realize the benefits of the new protocol. hg.mozilla.org won't be updated until at least June 1.

Second, Mercurial 3.4 contains improvements to the tags cache to make performance concerns a thing of the past. Due to the structure of the Firefox repositories, the previous implementation of the tags cache could result in pauses of dozens of seconds during certain workflows. The problem should go away with Mercurial 3.4. Please note that on first use of Mercurial 3.4, your repository may perform a one-time upgrade of the tags cache. This will spin a full CPU core and will take up to a few minutes to complete on Firefox repos. Let it run to completion and performance should not be an issue again. I wrote the patches to change the tags cache (with lots of help from Pierre-Yves David, a Mercurial core contributor). So if you find anything wrong, I'm the one to complain to.
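
If you would rather pay that one-time cost at a moment of your choosing instead of having it surprise you mid-workflow, you can warm the cache explicitly. A minimal sketch, assuming a local clone of a Firefox repository:

$ cd mozilla-central
$ time hg tags > /dev/null

The first invocation after upgrading rebuilds the cache; subsequent runs should return nearly instantly.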

Third, the HTTP interface to Mercurial (hgweb) now has JSON output for nearly every endpoint. The implementation isn't yet complete, but it should be good enough for services to start consuming it. Again, this won't be available on hg.mozilla.org until the server is upgraded on June 1 at the earliest. This is a feature I added to core Mercurial, so if you have feature requests, send them my way.
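
If you want to poke at the JSON output, the new templates are selected by appending a style=json query parameter to hgweb URLs. A quick sketch; the host and repository name here are hypothetical, and the exact set of JSON-enabled endpoints depends on the server's Mercurial version:

$ curl -s 'https://hg.example.com/my-repo/shortlog?style=json' | python -m json.tool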

Fourth, a number of performance regressions introduced in Mercurial 3.3 were addressed. These performance issues frequently manifested during hg blame operations. Many Mozillians noticed them on hg.mozilla.org when looking at blame through the web interface.

For a more comprehensive list of changes, see my post about the 3.4 RC and the official release notes.

To reiterate, 3.4 is a significant release, and there are compelling reasons to upgrade. That said, it also contains a lot of changes. If you want to wait until 3.4.1 is released (scheduled for June 1) so you don't run into any regressions, nobody can fault you for that.

If you want to upgrade, I recommend reading the Mercurial for Mozillians Installation Page.

Automatically Redirecting Mercurial Pushes

April 30, 2015 at 12:30 PM | categories: Mercurial, Mozilla

Managing URLs in distributed version control tools can be a pain, especially when multiple repositories are involved. For example, with Mozilla's repository-based code review workflow (you push to a special review repository to initiate code review; this is conceptually similar to GitHub pull requests), a separate code review repository exists for each logical repository. Figuring out how repositories map to each other and setting up remote paths for each new clone can be a pain and a time sink.

As of today, we can now do something better.

If you push to ssh://reviewboard-hg.mozilla.org/autoreview, Mercurial will figure out the appropriate review repository and redirect your push automatically. In other words, if we have MozReview set up to review whatever repository you are working on, your push and review request will just go through. No need to figure out what the appropriate review repo is or to configure repository URLs!

Here's what it looks like:

$ hg push review
pushing to ssh://reviewboard-hg.mozilla.org/autoreview
searching for appropriate review repository
redirecting push to ssh://reviewboard-hg.mozilla.org/version-control-tools/
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files
remote: Trying to insert into pushlog.
remote: Inserted into the pushlog db successfully.
submitting 1 changesets for review

changeset:  11043:b65b087a81be
summary:    mozreview: create per-commit identifiers (bug 1160266)
review:     https://reviewboard.mozilla.org/r/7953 (draft)

review id:  bz://1160266/gps
review url: https://reviewboard.mozilla.org/r/7951 (draft)
(visit review url to publish this review request so others can see it)

Read the full instructions for more details.

This requires an updated version-control-tools repository, which you can get by running mach mercurial-setup from a Firefox repository.

For those who are curious: the autoreview repo/server advertises a list of repository URLs along with their root commit SHA-1, and the client automatically sends the push to the URL sharing the same root commit. The code is quite simple.
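
If you want to see which root commit your clone will be matched on, you can print it yourself (this is standard Mercurial template syntax; revision 0 is the repository's root):

$ hg log -r 0 --template '{node}\n'

The autoreview server compares that SHA-1 against its advertised list and redirects the push to the repository whose root matches.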

While this is only implemented for MozReview, I could envision us doing something similar for other centralized repository-centric services, such as Try and Autoland. Stay tuned.

My Current Thoughts on System Administration

April 17, 2015 at 03:35 PM | categories: sysadmin, Mozilla

I attended PyCon last week. It's a great conference. You should attend. While I should write up a detailed trip report, I wanted to quickly share one of my takeaways.

Ansible was talked about a lot at PyCon. Sitting through a few presentations and talking with others helped me articulate why I've been drawn to Ansible (over, say, Puppet, Chef, Salt, etc.) lately.

First, Ansible doesn't require a central server. Administration is done remotely: Ansible establishes an SSH connection to a remote machine and does its work there. Having Ruby, Python, support libraries, etc. installed on production systems just for system administration never really jibed with me. I love Ansible's default hands-off approach. (Yes, you can use a central server for Ansible, but that's not the default behavior. While tools like Puppet could be used without a central server, it felt like they were optimized for central server use, and thus local mode felt awkward.)
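
The agentless model is easy to see in action. Assuming you can SSH into a machine, you can run an ad-hoc Ansible module against it without installing an agent on the remote end (Ansible does need a Python interpreter there, but most systems ship one). The hostname is hypothetical; the trailing comma tells Ansible to treat the -i argument as an inline host list rather than an inventory file:

$ ansible all -i 'server1.example.com,' -m ping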

Related to central servers, I never liked how that model consists of clients periodically polling for and applying updates. I like the idea of immutable server images, and periodic updates work against that goal. The central model also has a major bazooka pointed at you: at any time, you are only one mistake away from completely hosing every machine doing continuous polling. For example, if you accidentally update firewall configs and lock out the central server and SSH connectivity, every machine will pick up these changes during periodic polling, and by the time anyone realizes what's happened, your machines are all effectively bricked. (Yes, I've seen this happen.) I like having humans control exactly when my systems apply changes, thank you. I concede that periodic updates and central control have some benefits.

Choosing not to use a central server by default means that hosts are modeled as a set of applied Ansible playbooks rather than as a host with a set of Ansible playbooks attached (although Ansible does support both models). I can easily apply a playbook to a host in a one-off manner, as shown below. This means I can have playbooks represent common, one-off tasks and can easily run those tasks without having to muck around with the host-to-playbook configuration. More on this later.
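
For example, a common one-off task can live in its own playbook and be pointed at an arbitrary host on demand. A hypothetical sketch (rotate-logs.yml would declare hosts: all so it applies to whatever inventory it is given):

$ ansible-playbook -i 'server1.example.com,' rotate-logs.yml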

I love the simplicity of Ansible's configuration. It is just YAML files. Not some Ruby-inspired DSL that takes hours to learn. With Ansible, I'm learning what modules are available and how they work, not complicated syntax. Yes, there is complexity in Ansible's configuration. But at least I'm not trying to figure out the file syntax as part of learning it.
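
To illustrate, here is what a minimal playbook looks like. The contents are a hypothetical sketch using the stock apt, copy, and service modules:

$ cat webserver.yml
---
- hosts: webservers
  tasks:
    - name: ensure nginx is installed
      apt: name=nginx state=present
    - name: install our nginx config
      copy: src=files/nginx.conf dest=/etc/nginx/nginx.conf
    - name: ensure nginx is running
      service: name=nginx state=started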

In that vein, I appreciate the readability of Ansible playbooks. They are simple, linear lists of tasks. Conceptually, I love the promise of full dependency graphs and concurrent execution. But I've spent enough hours debugging race conditions and cyclic dependencies in Puppet that I'm left unconvinced the complexity and power are worth it. I do wish Ansible could run faster by running things concurrently. But I think they made the right decision by following KISS.

I enjoy how Ansible playbooks are effectively high-level scripts. If I have a shell script or block of code, I can usually port it to Ansible pretty easily. One pass to do the conversion 1:1. Another pass to Ansibilize it. Simple.
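
As a sketch of what that porting looks like, take a trivial provisioning script and its (hypothetical) Ansible equivalent, where each imperative command becomes an idempotent module invocation:

$ cat provision.sh
#!/bin/sh
useradd --system appuser
mkdir -p /var/lib/app

$ cat provision.yml
---
- hosts: appservers
  tasks:
    - name: create service user
      user: name=appuser system=yes
    - name: create data directory
      file: path=/var/lib/app state=directory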

I love how Ansible playbooks can be checked into source control and live next to the code and applications they manage. I frequently see people maintain separate source control repositories for configuration management and for the code it is managing. This has always bothered me. When I write a service, I want the code for deploying and managing that service to live next to it in version control. That way, I get the configuration management and the code versioned on the same timeline. If I check out a release from 2 years ago, I should still be able to use its exact configuration management code. This becomes difficult, if not impossible, when your organization maintains configuration management code in a separate repository where a central server is required to do deployments (see Puppet).

Before PyCon, I was having an internal monologue about adopting the policy that all changes to remote servers be implemented with Ansible playbooks. I'm pleased to report that a fellow contributor to the Mercurial project has adopted this workflow himself and has only great things to say! So, starting today, I'm going to try to enforce that every change I make to a remote server is performed via Ansible and that the Ansible playbooks are checked into version control. The Ansible playbooks will become implicit documentation of every process involved in maintaining a server.

I've already applied this principle to deploying MozReview. Before, there was some internal Mozilla wiki documenting commands to execute in a terminal to deploy MozReview. I have replaced that documentation with a one-liner that invokes Ansible. And, the Ansible files are now in a public repository.

If you poke around that repository, you'll see that I have Ansible playbooks referencing Docker. I have Ansible provisioning Docker images used by the test and development environments. That same Ansible code is used to configure our production systems (or is at least in the process of being used that way). Having dev, test, and prod use the same configuration management has been a pipe dream of mine, and I finally achieved it! I attempted this before with Puppet but was unable to make it work just right. The flexibility enabled by Ansible's design decisions has finally made this possible.

Ansible is my go-to system management tool right now. And I still feel like I have a lot to learn about its hidden powers.

If you are still using Puppet, Chef, or other tools invented in previous generations, I urge you to check out Ansible. I think you'll be pleasantly surprised.
