mach -- the generic command line interface framework that is behind the mach tool used to build Firefox -- now has its canonical home in mozilla-central, the canonical repository for Firefox. The previous home has been updated to reflect the change.
mach will continue to be released on PyPI and installable via pip install mach.
I made the change because keeping multiple repositories in sync wasn't something I wanted to spend time doing. Furthermore, Mozillians have been contributing a steady stream of improvements to the mach core recently and it makes sense to leverage Mozilla's familiar infrastructure for patch contribution.
This decision may be revisited in the future. Time will tell.
I see a number of open source projects supporting old versions of Python. Mercurial supports 2.4, for example. I have to ask: why do projects continue to support old Python releases?
- Python 2.4 was last released on December 19, 2008 and there will be no more releases of Python 2.4.
- Python 2.5 was last released on May 26, 2011 and there will be no more releases of Python 2.5.
- Python 2.6 was last released on October 29, 2013 and there will be no more releases of Python 2.6.
- Everything before Python 2.7 is end-of-lifed.
- Python 2.7 continues to see periodic releases, but mostly for bug fixes.
- Practically all of the work on CPython is happening in the 3.3 and 3.4 branches. Other implementations continue to support 2.7.
- Python 2.7 has been available since July 2010.
- Python 2.7 provides some very compelling language features over earlier releases that developers want to use.
- It's much easier to write dual-compatible 2/3 Python when 2.7 is the only 2.x release considered.
- Python 2.7 can be installed in userland relatively easily (see projects like pyenv).
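To make the dual-compatibility point concrete, here is a minimal sketch (standard library only) of code that runs unchanged on Python 2.7 and 3.x. The `__future__` imports require Python 2.6+, and the comprehension and set-literal syntax requires 2.7, which is exactly why dropping 2.5 and earlier makes dual-version code so much easier:

```python
# Runs identically on Python 2.7 and 3.x.
from __future__ import absolute_import, print_function, unicode_literals

squares = {n: n * n for n in range(5)}        # dict comprehension (2.7+)
evens = {n for n in range(10) if n % 2 == 0}  # set comprehension (2.7+)

print(squares[4])  # print() is a function on both 2.7 and 3.x
```

None of this is possible on 2.5, so supporting 2.5 means giving up the syntax that makes a single 2/3 codebase practical.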
Given these facts, I'm not sure why projects insist on supporting old and end-of-lifed Python releases.
I think maintainers of Python projects should seriously consider dropping support for Python 2.6 and below. Are there really that many people on systems that don't have Python 2.7 easily available? Why are we Python developers inflicting so much pain on ourselves to support antiquated Python releases?
As a data point, I successfully transitioned Firefox's build system from requiring Python 2.5+ to 2.7.3+ and it was relatively pain-free. Sure, a few people complained. But as far as I know, not very many new developers are coming in and complaining about the requirement. If we can do it with a few thousand developers, I'm guessing your project can as well.
Update 2014-01-09 16:05:00 PST: This post is being discussed on Slashdot. A lot of the comments talk about Python 3. Python 3 is its own set of considerations. The intended focus of this post is strictly about dropping support for Python 2.6 and below. Python 3 is related in that porting Python 2.x to Python 3 is much easier the higher the Python 2.x version. This especially holds true when you want to write Python that works simultaneously in both 2.x and 3.x.
There is a common practice at Mozilla for developing patches with multiple parts. Nothing wrong with that. In fact, I think it's a best practice:
- Smaller, self-contained patches are much easier to grok and review than larger patches.
- Smaller patches can land as soon as they are reviewed. Larger patches tend to linger and get bit-rotted.
- Smaller patches contribute to a culture of being fast and nimble, not slow and lethargic. This helps with developer confidence, community contributions, etc.
There are some downsides to multiple, smaller patches:
- The bigger picture is harder to understand until all parts of a logical patch series are shared. (This can be alleviated through commit messages or reviewer notes documenting future intentions. And of course reviewers can delay review until they are comfortable.)
- There is more overhead to maintain the patches (rebasing, etc). IMO the solutions provided by Mercurial and Git are sufficient.
- The process overhead for dealing with multiple patches and/or bugs can be non-trivial. (I would like to think good tooling coupled with continuous revisiting of policy decisions is sufficient to counteract this.)
Anyway, the prevailing practice at Mozilla seems to be that multiple patches related to the same logical change are attached to the same bug. I would like to challenge the effectiveness of this practice.
- An individual commit to Firefox should be standalone and should not rely on future commits to unbust it (i.e. bisects to any commit should be safe).
- Bugzilla has no good mechanism to isolate review comments from multiple attachments on the same bug, making deciphering simultaneous reviews on multiple attachments difficult and frustrating. This leads to review comments inevitably falling through the cracks and the quality of code suffering.
- Reiterating the last point because it's important.
I therefore argue that attaching multiple patches for review to a single Bugzilla bug is not a best practice and should be avoided where possible. If that means filing separate bugs for each patch, so be it. That process can be automated, and tools like bzexport already do it. Alternatively (and even better, in my opinion), we could ditch Bugzilla's code review interface (Splinter) and integrate something like ReviewBoard instead: limit Bugzilla to tracking, high-level discussion, and metadata aggregation, and let code review happen elsewhere, without all the clutter and chaos that Bugzilla brings to the table.
I have a number of Python projects and tools that interact with various Mozilla services. I had authored clients for all these services as standalone Python modules so they could be reused across projects.
$ pip install mozautomation
Currently included in the Python package are:
- A client for treestatus.mozilla.org
- Module for extracting cookies from Firefox profiles (useful for programmatically getting Bugzilla auth credentials).
- A client for reading and interpreting the JSON dumps of automation jobs
- An interface to a SQLite database to manage associations between Mercurial changesets, bugs, and pushes.
- Rudimentary parsing of commit messages to extract bugs and reviewers.
- A client to obtain information about Firefox releases via the releases API
- A module defining common Firefox source repositories, aliases, logical groups (e.g. twigs and integration trees), and APIs for fetching pushlog data.
- A client for the self-serve API
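As a sketch of the commit message parsing mentioned above (hypothetical code, not mozautomation's actual API), Mozilla commit messages conventionally carry "Bug NNNNNN" and "r=reviewer" markers, which a couple of regular expressions can extract:

```python
import re

# Conventional Mozilla markers: "Bug 123456" and "; r=reviewer".
BUG_RE = re.compile(r'\bbug\s+(\d+)', re.IGNORECASE)
REVIEWER_RE = re.compile(r'\br=(\w+)')

def parse_commit_message(message):
    """Return ([bug numbers], [reviewer names]) found in a commit message."""
    bugs = [int(b) for b in BUG_RE.findall(message)]
    reviewers = REVIEWER_RE.findall(message)
    return bugs, reviewers

bugs, reviewers = parse_commit_message(
    'Bug 784841 - Move virtualenv handling into its own module; r=gps')
```

Real-world parsing has more edge cases (multiple reviewers, "a=" approvals, backout messages), but this is the core idea.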
Documentation and testing are currently sparse, so things aren't up to my usual quality standards. But something is better than nothing.
If you are interested in contributing, drop me a line or send pull requests my way!
The subject of where to host version control repositories comes up a lot at Mozilla. It takes many forms:
- We should move the Firefox repository to GitHub
- I should be allowed to commit to GitHub
- I want the canonical repository to be hosted by Bitbucket
Where Firefox development is concerned, Release Engineering puts its foot down and insists the canonical repository be hosted by Mozilla, under a Mozilla hostname. When that's not possible, they set up a mirror on Mozilla infrastructure.
I think a recent issue with the Jenkins project demonstrates why hosting your own version control server is important. The gist is that someone force pushed to a number of repositories hosted on GitHub, and recovering required involving GitHub support. While it appears they largely recovered (and GitHub support deserves kudos; I don't want to take away from their excellence), this problem would have been avoided, or the recovery time significantly reduced, if the Jenkins people had direct control over the Git server. They could have installed a custom hook that rejected the force pushes outright, or they could have consulted the reflog to find the last pushed revision and simply force pushed back to it. GitHub has no mechanism for defining pre-* hooks, doesn't allow custom hooks at all (a security and performance issue for them), and doesn't expose reflog data.
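The hook-based protection described above is simple to sketch. This is a hypothetical pre-receive hook (any executable works; Python here for illustration) that rejects non-fast-forward pushes: git feeds the hook lines of `<old-sha> <new-sha> <refname>` on stdin, and a non-zero exit rejects the whole push. GitHub offers no way to install such a hook on a hosted repository.

```python
import subprocess
import sys

ZERO = '0' * 40  # git's sentinel SHA for "this ref did not exist before"

def is_fast_forward(old, new):
    """Return True if `new` contains `old` in its history (no rewrite)."""
    if old == ZERO:  # ref creation: nothing to rewrite
        return True
    # `git merge-base --is-ancestor A B` exits 0 iff A is an ancestor of B.
    return subprocess.call(
        ['git', 'merge-base', '--is-ancestor', old, new]) == 0

def main():
    # git passes one "<old> <new> <refname>" line per updated ref on stdin.
    for line in sys.stdin:
        old, new, ref = line.split()
        if not is_fast_forward(old, new):
            sys.stderr.write('rejecting non-fast-forward push to %s\n' % ref)
            sys.exit(1)

if __name__ == '__main__':
    main()
```

Installed as `hooks/pre-receive` on a self-hosted server, this stops accidental force pushes before they land; even without it, server-side reflog access makes recovery a single force push back to the last good revision.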
Until repository hosting services expose full repository data (such as reflogs) and allow you to define custom hooks, accidents like these will happen and the recovery time will be longer than if you hosted the repo yourself.
It's possible repository hosting services like GitHub and Bitbucket will expose these features or provide a means to recover quickly. If so, kudos to them. But larger, more advanced projects will likely require custom hooks, and since custom hooks are a massive security and performance liability for any hosted service provider, I'm not holding my breath for that feature to roll out any time soon. This is unfortunate, as it seemingly forces projects to choose between the lower risk but lower convenience of self-hosting and GitHub's vibrant developer community.