Dropping Explicit Support for Mercurial 3.0

May 07, 2015 at 04:05 PM | categories: Mercurial, Mozilla

As of a few minutes ago, we explicitly dropped support for Mercurial 3.0 for all the Mercurial code in the version-control-tools repository. File issues in bug 1162304.

Code may still work against Mercurial 3.0. But it isn't supported and could break hard at any time.

Supporting multiple versions of any software carries some cost. The people writing tooling around Mercurial are busy. It is a waste of our time to bend over backwards to support old versions of software that all users should have upgraded from months ago. If you are still running an older Mercurial version, you aren't getting the best performance and may be hitting bugs that have since been fixed.

See the Mozilla tailored Mercurial installation instructions for info on how to upgrade to the latest/greatest Mercurial version.


Reporting Mercurial Issues

May 04, 2015 at 01:45 PM | categories: Mercurial, Mozilla

I semi-frequently stumble upon conversations in hallways and on irc.mozilla.org about issues people are having with Mercurial. These conversations periodically involve a legitimate bug with Mercurial. Unfortunately, these conversations frequently end without an actionable result. Unless someone files a bug, pings me, etc., the complaints disappear into the ether. That's not good for anyone and only results in bugs living longer than they should.

There are posters around Mozilla offices that say if you see something, file something. This advice does not just apply to Mozilla projects!

If you encounter an issue in Mercurial, please take the time to report it somewhere meaningful. The Reporting Issues with Mercurial page from the Mercurial for Mozillians guide tells you how to do this.

It is OK to complain about something. But if you don't inform someone empowered to do something about it, you are part of the problem without being part of the solution. Please make the incremental effort to be part of the solution.


Mercurial 3.4 Released

May 04, 2015 at 12:40 PM | categories: Mercurial, Mozilla

Mercurial 3.4 was released on May 1 (following Mercurial's time-based schedule of releasing a new version every 3 months).

3.4 is a significant release for a few reasons.

First, the next version of the wire protocol (bundle2) has been marked as non-experimental on servers. This version of the protocol paves over a number of deficiencies in the classic protocol. I won't go into low-level details. But I will say that the protocol enables some rich end-user experiences, such as having the server hand out URLs for pre-generated bundles (e.g. offload clones to S3), atomic push operations, and advanced workflows, such as having the server rebase automatically on push. Of course, you'll need a server running 3.4 to realize the benefits of the new protocol. hg.mozilla.org won't be updated until at least June 1.
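
For the curious, you can check which protocol features a server advertises without upgrading anything, because the wire protocol's capabilities command is reachable over plain HTTP. A rough sketch (the repository URL is just an example, the exact bundle2 capability name has varied between releases, and hg.mozilla.org won't advertise it until its servers are upgraded):

# List the capabilities a Mercurial server advertises, one per line.
# A server speaking the new protocol will include a bundle2 entry in the list.
$ curl -s 'https://hg.mozilla.org/mozilla-central?cmd=capabilities' | tr ' ' '\n'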

Second, Mercurial 3.4 contains improvements to the tags cache to make performance concerns a thing of the past. Due to the structure of the Firefox repositories, the previous implementation of the tags cache could result in pauses of dozens of seconds during certain workflows. The problem should go away with Mercurial 3.4. Please note that on first use of Mercurial 3.4, your repository may perform a one-time upgrade of the tags cache. This will spin a full CPU core and will take up to a few minutes to complete on Firefox repos. Let it run to completion and performance should not be an issue again. I wrote the patches to change the tags cache (with lots of help from Pierre-Yves David, a Mercurial core contributor). So if you find anything wrong, I'm the one to complain to.
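
If you want to watch the one-time upgrade happen (and confirm it only happens once), any command that reads tags will trigger the rebuild. Something like this against a Firefox clone should do:

# The first tags read after upgrading to 3.4 may rebuild the cache and take a few minutes.
# Subsequent reads use the new cache and should return almost instantly.
$ cd mozilla-central
$ time hg tags > /dev/null
$ time hg tags > /dev/null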

Third, the HTTP interface to Mercurial (hgweb) now has JSON output for nearly every endpoint. The implementation isn't yet complete, but what's there should be good enough for services to start consuming it. Again, this won't be available on hg.mozilla.org until the server is upgraded on June 1 at the earliest. This is a feature I added to core Mercurial. If you have feature requests, send them my way.
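
If you want to play with the JSON output before hg.mozilla.org is upgraded, you can point hgweb at a local clone. A quick sketch, assuming the json-log view (endpoint names may still shift while the implementation is completed):

# Serve a local repository with hgweb, then request the changelog as JSON
# from another shell.
$ hg serve -R mozilla-central -p 8000
$ curl http://localhost:8000/json-log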

Fourth, a number of performance regressions introduced in Mercurial 3.3 were addressed. These performance issues frequently manifested during hg blame operations. Many Mozillians noticed them on hg.mozilla.org when looking at blame through the web interface.

For a more comprehensive list of changes, see my post about the 3.4 RC and the official release notes.

3.4 was a significant release. There are compelling reasons to upgrade. That being said, there were a lot of changes in 3.4. If you want to wait until 3.4.1 is released (scheduled for June 1) so you don't run into any regressions, nobody can fault you for that.

If you want to upgrade, I recommend reading the Mercurial for Mozillians Installation Page.


Automatically Redirecting Mercurial Pushes

April 30, 2015 at 12:30 PM | categories: Mercurial, Mozilla

Managing URLs in distributed version control tools can be a pain, especially if multiple repositories are involved. For example, with Mozilla's repository-based code review workflow (you push to a special review repository to initiate code review - this is conceptually similar to GitHub pull requests), there exist separate code review repositories for each logical repository. Figuring out how repositories map to each other and setting up remote paths for each new clone can be a pain and a time sink.

As of today, we can now do something better.

If you push to ssh://reviewboard-hg.mozilla.org/autoreview, Mercurial will figure out the appropriate review repository and redirect your push to it automatically. In other words, if MozReview is set up to review whatever repository you are working on, your push and review request will just go through. No need to figure out what the appropriate review repo is or to configure repository URLs!
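
In the example below, review is just a standard Mercurial path alias pointing at the autoreview repository. Something like this in your repository's .hg/hgrc does the trick (the alias name is your choice):

# Pushing to this alias lets the server pick the real review repository.
[paths]
review = ssh://reviewboard-hg.mozilla.org/autoreview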

Here's what it looks like:

$ hg push review
pushing to ssh://reviewboard-hg.mozilla.org/autoreview
searching for appropriate review repository
redirecting push to ssh://reviewboard-hg.mozilla.org/version-control-tools/
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files
remote: Trying to insert into pushlog.
remote: Inserted into the pushlog db successfully.
submitting 1 changesets for review

changeset:  11043:b65b087a81be
summary:    mozreview: create per-commit identifiers (bug 1160266)
review:     https://reviewboard.mozilla.org/r/7953 (draft)

review id:  bz://1160266/gps
review url: https://reviewboard.mozilla.org/r/7951 (draft)
(visit review url to publish this review request so others can see it)

Read the full instructions for more details.

This requires an updated version-control-tools repository, which you can get by running mach mercurial-setup from a Firefox repository.

For those who are curious, the autoreview repo/server advertises a list of repository URLs and their root commit SHA-1s. The client automatically sends the push to the URL sharing the same root commit. The code is quite simple.
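
You can see the value your own clone would be matched on, since the root commit of a repository is simply revision 0 (assuming a single root, which holds for the Firefox repositories):

# Print the 40-character SHA-1 of the root changeset. This is the value the
# autoreview server compares against its advertised list.
$ hg log -r 0 --template '{node}\n'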

While this is only implemented for MozReview, I could envision us doing something similar for other centralized repository-centric services, such as Try and Autoland. Stay tuned.


New High Scores for hg.mozilla.org

March 19, 2015 at 08:20 PM | categories: Mercurial, Mozilla

It's been a rough week.

The very short summary of events this week is that both the Firefox and Firefox OS release automation have been performing a denial of service attack against hg.mozilla.org.

On the face of it, this is nothing new. The release automation is by far the top consumer of hg.mozilla.org data, requesting several terabytes per day via several million HTTP requests from thousands of machines in multiple data centers. The very nature of their existence makes them a significant denial of service threat.

Lots of things went wrong this week. While a post mortem will shed light on them, many fall under the umbrella of "release automation was making more requests than it should have, and was doing so in a way that increased both the chances of an outage occurring and the chances of a prolonged outage." This resulted in the hg.mozilla.org servers working harder than they ever have. As a result, we have some new high scores to share.

  • On UTC day March 19, hg.mozilla.org transferred 7.4 TB of data. This is a significant increase from the ~4 TB we expect on a typical weekday. (Even more significant when you consider that most load is generated during peak hours.)

  • During the 1300 UTC hour of March 17, the cluster received 1,363,628 HTTP requests. No HTTP 503 Service Unavailable errors were encountered in that window! 300,000 to 400,000 requests per hour is typical.

  • During the 0800 UTC hour of March 19, the cluster transferred 776 GB of repository data. That comes out to at least 1.725 Gbps on average (I didn't calculate TCP and other overhead). Anything greater than 250 GB per hour is not very common. No HTTP 503 errors were served from the origin servers during this hour!

We encountered many periods where hg.mozilla.org was operating at more than twice its normal and expected operating capacity, and it was able to handle the load just fine. As a server operator, I'm proud of this. The servers were provisioned beyond what is normally needed of them and it took a truly exceptional event (or two) to bring the service down. This is generally a good way to run hosted services (you rarely want to be barely provisioned, because you fall over at the slightest change, and you don't want to be grossly over-provisioned, because you are wasting money on idle resources).

Unfortunately, the hg.mozilla.org service did fall over. Multiple times, in fact. There is room to improve. As proud as I am that the service operated well beyond its expected limits, I can't help but feel ashamed that it did eventually cave in under the extreme load and that people are probably making under-informed general assumptions like Mercurial can't scale. The simple fact of the matter is that clients cumulatively generated an exceptional amount of traffic to hg.mozilla.org this week. All servers have capacity limits. And this week we encountered the limit for the current configuration of hg.mozilla.org. Cause and effect.

