Announcing the 0.9 Release of PyOxidizer

October 18, 2020 at 10:00 PM | categories: Python, PyOxidizer

I have decided to make up for the 6 month lull between PyOxidizer's 0.7 and 0.8 releases by releasing PyOxidizer 0.9 just 1 week after 0.8!

The full 0.9 changelog is found in the docs. First time user? See the Getting Started documentation.

While the 0.9 release is far smaller in terms of features compared to 0.8, it is an important release because of progress closing compatibility gaps.

Build a python Executable

PyOxidizer 0.8 quietly shipped the ability to build executables that behave like python executables via enhancements to the configurability of embedded Python interpreters.

PyOxidizer 0.9 made some minor changes to make this scenario work better and there is even official documentation on how to achieve this. So now you can emit a python executable next to your application's executable. Or you could use PyOxidizer to build a highly portable, self-contained python executable and ship your Python scripts next to it, using PyOxidizer's python in your #!.

Support Packaging Files as Files for Maximum Compatibility

There is a long tail of Python packages that don't just work with PyOxidizer. A subset of these packages don't work because of bugs in how PyOxidizer attempts to classify files as specific types of Python resources.

The way that normal Python works is you materialize a bunch of files on the filesystem and at run-time the filesystem-based importer stat()s a bunch of paths until it finds a candidate file satisfying the import request. This works of course. But it is inefficient. Since PyOxidizer has awareness of every resource being packaged at build time, it attempts to index all known resources and serialize them to an efficient data structure so finding and loading a resource can be extremely quick (effectively just a hashmap lookup in Rust code to resolve the memory address of data).
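
To illustrate the difference in approach, here is a toy Python meta path finder backed by a pre-built in-memory index. This is purely illustrative of the indexing idea and is not PyOxidizer's actual implementation (which lives in Rust):

    import importlib.abc
    import importlib.util
    import sys


    class IndexedFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
        """Toy finder/loader that resolves modules from a pre-built index."""

        def __init__(self, index):
            # Maps fully-qualified module name -> module source code.
            self._index = index

        def find_spec(self, fullname, path=None, target=None):
            if fullname not in self._index:
                return None  # fall through to the next finder on sys.meta_path
            return importlib.util.spec_from_loader(fullname, self)

        def create_module(self, spec):
            return None  # use Python's default module creation

        def exec_module(self, module):
            source = self._index[module.__name__]
            exec(compile(source, module.__name__, "exec"), module.__dict__)


    # A dict lookup replaces the stat() probing done by path-based importers.
    sys.meta_path.insert(0, IndexedFinder({"hello": "GREETING = 'hi'"}))
    import hello
    print(hello.GREETING)  # hi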

PyOxidizer's approach does work in the majority of cases. But there are edge cases. For example, NumPy's binary wheels have installed file paths like numpy.libs/libopenblasp-r0-ae94cfde.3.9.dev.so. The numpy.libs directory is not a valid Python package directory, since its name contains a . and it doesn't have an __init__.py[c] file. This is a case where PyOxidizer's code for turning files into resources is currently confused.

It is tempting to argue that file layouts like NumPy's are wrong. But there doesn't seem to be any formal specification preventing the use of such layouts. The arbiter of truth here is what Python packaging tools accept and the current code for installing wheels gladly accepts file layouts like these. So I've accepted that PyOxidizer is just going to have to support edge cases like this. (I've captured more details about this particular issue in the docs).

Anyway, PyOxidizer 0.9 ships a new, simpler mode for handling files: files mode. In files mode, PyOxidizer disables its code for classifying files as typed Python resources (like module sources and extension modules) and instead treats a file as... a file.

When in files mode, actions that invoke Python packaging tools return files objects instead of classified resources. If you then add these files for packaging, those files are materialized on the filesystem next to your built executable. You can then use Python's standard filesystem importer to load these files at run-time.
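
As a generic illustration of that last point (this is plain Python, not PyOxidizer's run-time code), an application can point the standard filesystem importer at a directory of materialized files next to the executable along these lines:

    import os
    import sys

    # Assume packaged files were materialized into a "lib" directory next to
    # the executable; the directory name is an arbitrary choice for this sketch.
    lib_dir = os.path.join(os.path.dirname(sys.executable), "lib")
    if lib_dir not in sys.path:
        sys.path.insert(0, lib_dir)

    # From here, ordinary import statements for those files are serviced by
    # Python's standard path-based importer, so __file__ and friends behave
    # exactly as they would in a conventional installation.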

This allows you to use PyOxidizer with packages like NumPy that were previously incompatible due to bugs with file/resource classification. In fact, getting NumPy working with PyOxidizer is now in the official documentation!

Files mode is still in its infancy. There exists code for embedding files data in the produced executable. I plan to eventually teach PyOxidizer's run-time code to extract these embedded files to a temporary directory, SquashFS FUSE filesystem, etc. This is the approach that other Python packaging tools like PyInstaller and XAR use. While it is less efficient, this approach is highly compatible with Python code in the wild since you sidestep issues with __file__ and other assumptions about installed file layouts. So it makes sense for PyOxidizer to provide support for this so you can still achieve the friendliness of a self-contained executable without worrying about compatibility. Look for improvements to files mode in future releases.

And to help debug issues with PyOxidizer's file handling and resource classification, the new pyoxidizer find-resources command can be used to invoke PyOxidizer's code for scanning and classifying files. Hopefully this makes it easier to diagnose bugs in this critical component of PyOxidizer!

Some Important Bug Fixes

PyOxidizer 0.8 shipped with some pretty annoying bugs and behavior quirks.

The ability to set custom sys.path values via Starlark was broken. How I managed to ship that, I'm not sure. But it is fixed in 0.9.

Another bug I can't believe I shipped was the PythonExecutable.read_virtualenv() Starlark method being broken due to a typo. You can read from virtualenvs again in PyOxidizer 0.9.

Another important improvement is in the default Python interpreter configuration. We now automatically initialize Python's locales configuration by default. Without this, the encoding of filesystem paths and sys.argv may not have been correct. If someone passed a non-ASCII argument, the Python str value was likely mangled. PyOxidizer-built binaries should behave reasonably by default now. The issue is a good read if the subtle behaviors of how encodings work in Python and on different operating systems are interesting to you.
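
If you want to check what your binary is doing here, a few generic Python introspection calls (nothing PyOxidizer-specific) reveal the encoding configuration in effect:

    import locale
    import sys

    # Useful data points when non-ASCII arguments or paths come out mangled.
    print("filesystem encoding:", sys.getfilesystemencoding())
    print("preferred locale encoding:", locale.getpreferredencoding(False))
    print("stdout encoding:", sys.stdout.encoding)
    print("argv:", sys.argv)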

Better Binary Portability Documentation

The documentation on binary portability has been overhauled. Hopefully it is now much clearer about PyOxidizer's ability to produce a binary that just works on other machines.

I eventually want to get PyOxidizer to a point where users don't have to think about binary portability. But until PyOxidizer starts generating installers and providing the ability to run builds in deterministic and reproducible environments, it is sadly a problem that is being externalized to end users.

In Conclusion

PyOxidizer 0.9 is a small release representing just 1 week of work. But it contains some notable features that I wanted to get out the door.

As always, please report any issues or feedback in the GitHub issue tracker or the users mailing list.


Announcing the 0.8 Release of PyOxidizer

October 12, 2020 at 12:45 AM | categories: Python, PyOxidizer

I am very excited to announce the 0.8 release of PyOxidizer, a modern Python application packaging tool. You can find the full changelog in the docs. First time user? See the Getting Started documentation.

Foremost, I apologize that this release took so long to publish (0.7 was released on 2020-04-09). I fervently believe that frequent releases are a healthy software development practice. And 6 months between PyOxidizer releases was way too long. Part of the delay was due to world events (it has proven difficult to focus on... anything given a global pandemic, social unrest, and wildfires further undermining any semblance of lifestyle normalcy in California). Another contributing factor was that I was waiting on a few 3rd party Rust crates to have new versions published to crates.io (you can't release a crate to crates.io unless all your dependencies are also published there).

Release delay and general life hardships aside, the 0.8 release is here and it is full of notable improvements!

Python 3.8 and 3.9 Support

PyOxidizer 0.8 now targets Python 3.8 by default, and support for Python 3.9 is available by tweaking configuration files. Previously, only Python 3.7 was supported; this release drops Python 3.7 support. I feel a bit bad for dropping compatibility. But Python 3.8 introduced a new C API for initializing Python interpreters (thank you Victor Stinner!) and this makes PyOxidizer's run-time code for interfacing with Python interpreters vastly simpler. I decided that given the beta nature of PyOxidizer, it wasn't worth maintaining the extra complexity to continue supporting Python 3.7. I'm optimistic that I'll be able to support Python 3.8 as a baseline for a while.

Better Default Packaging Settings

PyOxidizer started as a science experiment of sorts to see if I could achieve the elusive goal of producing a single file executable providing a Python application. I was successful in proving this hypothesis. But the cost to achieving this outcome was rather high in terms of end-user experience: in order to produce single file executables, you had to break a lot of assumptions about how Python typically works and this in turn broke a lot of Python code and packages in the wild.

In other words, PyOxidizer's opinionated defaults of producing a single file executable were externalizing hardship on end-users and preventing them from using PyOxidizer.

PyOxidizer 0.8 contains a handful of changes to defaults that should hopefully lessen the friction.

On Windows, the default Python distribution now has a more traditional build configuration (using .pyd extension modules and a pythonXY.dll file). This means that PyOxidizer can consume pre-built extension modules without having to recompile them from source. If you publish a Windows binary wheel on PyPI, in many cases it will just work with PyOxidizer 0.8! (There are some notable exceptions to this, such as numpy, which is doing wonky things with the location of shared libraries in wheels - but I aim to fix this soon.)

Also on Windows, we no longer attempt to embed Python extension modules (.pyd files) and their shared library dependencies in the produced binary and load them from memory by default. This is because PyOxidizer's from-memory library loader didn't work in all cases. For example, some OpenSSL functionality used by the _ssl module in the standard library didn't work, preventing Python from establishing TLS connections. The old mode enabling you to produce a single file executable on Windows is still available. But you have to opt in to it (at the likely cost of more packaging and compatibility pain).

Starlark Configuration Overhaul

PyOxidizer 0.8 contains a ton of changes to its Starlark configuration files. There are so many changes that you may find it easier to port to PyOxidizer 0.8 by creating a new configuration file rather than attempting to port an existing one.

I apologize for this churn and recognize it will be disruptive. However, this churn needed to happen for various reasons.

Much of the old Starlark configuration semantics was rooted in the days when configuration files were static TOML files. Now that configuration files provide the power of a (Python-inspired) programming language, we are free to expose much more flexibility. But that flexibility requires refactoring things so the experience feels more native.

Many changes to Starlark were rooted in necessity. For example, the methods for invoking setup.py or pip install used to live on a Python distribution type and have been moved to a type representing executables. This is because the binary we are targeting influences how packaging actions behave. For example, if the binary only supports loading resources from memory (as opposed to standalone files), we need to know that when invoking the packaging tool so we can produce files (notably Python extension modules) compatible with the destination.

A major change to Starlark in 0.8 is around resource location handling. Before, you could define a static string denoting the resources policy for where things should be placed. And there were 10+ methods for adding different resource types (source, bytecode, extensions, package data) to different load locations (memory, filesystem). This mechanism is vastly simplified and more powerful in PyOxidizer 0.8!

In PyOxidizer 0.8, there is a single add_python_resource() method for adding a resource to a binary and the Starlark objects you add can denote where they should be added by defining attributes on those objects.

Furthermore, you can define a Starlark function that is called when resource objects are created to apply custom packaging rules using custom Starlark code defined in your PyOxidizer config file. So rather than having everyone try to abide by a few pre-canned policies for packaging resources, you can define a proper function in your config file that can be as complex as you want/need it to be! I feel this is vastly simpler and more powerful than implementing a custom DSL in static configuration files (like TOML, JSON, YAML, etc).
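
To give a rough sense of the new shape, here is a heavily simplified configuration sketch. Starlark is Python-inspired, so the syntax below is Python-like. Aside from add_python_resource(), which is described above, treat the specific function names and the add_location attribute as assumptions to verify against the 0.8 configuration docs rather than a verbatim copy of the API:

    # Hypothetical sketch of a 0.8-style configuration file.
    def make_exe():
        dist = default_python_distribution()
        exe = dist.to_python_executable(name = "myapp")

        # Packaging actions now live on the executable, so their output is
        # tailored to what this particular binary is able to load.
        for resource in exe.pip_install(["requests"]):
            # Resource objects carry attributes describing where they should
            # be placed; the attribute name here is illustrative.
            resource.add_location = "in-memory"
            exe.add_python_resource(resource)

        return exe

    register_target("exe", make_exe, default = True)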

While the ability to implement your own arbitrarily complex packaging policies is useful, there is a new PythonPackagingPolicy Starlark type with enough flexibility to suit most needs.

Shipping oxidized_importer

During the development of PyOxidizer 0.8, I broke out the custom Rust-based Python meta-path importer used by PyOxidizer's run-time code into a standalone Python package. This sub-project is called oxidized_importer and I previously blogged about it.

PyOxidizer 0.8 ships oxidized_importer and makes all of its useful APIs available to Python code. Read more in the official docs. The new Python APIs should make debugging issues with PyOxidizer-packaged applications vastly simpler: I found them invaluable when tracking down user-reported bugs!

Tons of New Tests and Refactored Code

PyOxidizer was my first non-toy Rust project. And the quality of the Rust code I produced in early versions of PyOxidizer clearly showed it. And when I was in the rapid-prototyping phase of PyOxidizer, I eschewed writing tests in favor of short-term progress.

PyOxidizer 0.8 pays down a ton of technical debt in the code base. Lots of Rust code has been refactored and is using somewhat reasonable practices. I'm not yet a Rust guru. But I'm at the point where I cringe when I look at some of the early code I wrote, which is a good sign. I do have to say that Rust has been a dream to work with during this transition. Despite being a low-level language, my early misuse of Rust did not result in crashes like you would see in languages like C/C++. And Rust's seemingly omniscient compiler and IDE tools facilitating refactoring have ensured that code changes aren't accompanied by subtle random bugs that would occur in dynamic programming languages. I really need to write a dedicated post espousing the virtues of Rust...

There are a ton of new tests in PyOxidizer 0.8 and I now feel somewhat confident that the main branch of PyOxidizer should be considered production-ready at any time assuming the tests pass. This will hopefully lead to more rapid releases in the future.

There are now tests for the pyembed Rust crate, which provides the run-time code for PyOxidizer-built binaries. We even have Python-based unit tests for validating the Python-exposed APIs behave as expected. These tests have been invaluable for ensuring that the run-time code works as expected. So now when someone files a bug I can easily write a test to capture it and keep the code working as intended through various refactors.

The packaging-time Rust code has also gained its fair share of tests. We now have fairly comprehensive test coverage around how resources are added/packaged. Python extension modules have proved to be highly nuanced in how they are handled. Helping tremendously with testing extension modules is that we're able to run tests against extensions for non-native platforms! While not yet exposed/supported by Starlark configuration files, I've taught PyOxidizer's core Rust code to be cross-compiling aware so that we can e.g. test Windows or macOS behavior from Linux. Before, I'd have to test Windows wheel handling on Windows. But after writing a wheel parser in Rust and teaching PyOxidizer to use a different Python distribution for the host architecture than for the target architecture, I'm now able to write tests for platform-specific functionality that run on any platform PyOxidizer can run on. This may eventually lead to proper cross-compiling support (at least in some configurations). Time will tell. But the foundation is definitely there!

New Rust Crates

As part of the aforementioned refactoring of PyOxidizer's Rust code, I've been extracting some useful/generic functionality built as part of developing PyOxidizer into standalone Rust crates.

As part of this release, I'm publishing the initial 0.1 release of the python-packaging crate (docs). This crate provides pure Rust code for various Python packaging related functionality. This includes:

  • Rust types representing Python resource types (source modules, bytecode modules, extension modules, package resources, etc).
  • Scanning the filesystem for Python resource files.
  • Configuring an embedded Python interpreter.
  • Parsing PKG-INFO and related files.
  • Parsing wheel files.
  • Collecting Python resources and serializing them to a data structure.

The crate is somewhat PyOxidizer centric. But if others are interested in improving its utility, I'll happily accept pull requests!

PyOxidizer's crates footprint now includes:

Major Documentation Updates

I strongly believe that software should be documented thoroughly and I strive for PyOxidizer's documentation to be useful and comprehensive.

There have been a lot of changes to PyOxidizer's documentation since the 0.7 release.

All configuration file documentation has been consolidated.

Likewise, I've attempted to consolidate a lot of the paved road documentation for how to use PyOxidizer in the Packaging User Guide section of the docs.

I'll be honest, since I have so much of PyOxidizer's workings internalized, it can be difficult for me to empathize with PyOxidizer's users. So if you have difficulty with the readability of the documentation, please file an issue and report what is confusing so the documentation can be improved!

Mercurial Shipping With PyOxidizer 0.8

PyOxidizer is arguably an epic yak shave of mine to help the Mercurial version control tool transition to Python 3 and Rust.

I'm pleased to report that Mercurial is now shipping PyOxidizer-built distributions on Windows as of the 5.5.2 release a few days ago! If a complex Python application like Mercurial can be configured to work with PyOxidizer, chances are your Python application will work as well.

What's Next

I view PyOxidizer 0.8 as a pivotal release where PyOxidizer is turning the corner from a prototyping science experiment to something more generally usable. The investments in test coverage and refactoring of the Rust internals are paving the way towards future features and bug fixes.

In upcoming releases, I'd like to close remaining known compatibility gaps with popular Python packages (such as numpy and other packages in the scientific/data space). I have a general idea of what work needs to be done and I've been laying the groundwork via various refactorings to execute here.

I want a general theme of future releases to be eliminating reasons why people can't use PyOxidizer. PyOxidizer's historical origin was as a science experiment to see if single file Python applications were possible. It is clear that achieving this fundamentally breaks compatibility with tons of Python packages in the wild. I'd like to find a way for PyOxidizer to achieve 99% package compatibility by default so new users don't get discouraged when using PyOxidizer. And the subset of users who want single file executables can spend the additional effort required to achieve that.

At some point, I also want to make a pivot towards focusing on producing distributable artifacts (Debian/RPM packages, MSI installers, macOS DMG files, etc). I'm slightly bummed that I haven't made much progress here. But I have a vision in my mind of where I want to go (I'll be making a standalone Rust crate + Starlark dialect to facilitate producing distributable artifacts for any application) and I'm anticipating starting this work in the next few months. In the meantime, PyOxidizer 0.8 should be able to give people a directory tree that they can coerce into distributable artifacts using existing packaging tooling. That's not as turnkey as I would like it to be. But the technical problems around building a distributable Python application binary still need some work and I view that as the most pressing need for the Python ecosystem. So I'll continue to focus there so there is a solid foundation to build upon.

In conclusion, I hope you enjoy the new release! Please report any issues or feedback in the GitHub issue tracker.


Using Rust to Power Python Importing With oxidized_importer

May 10, 2020 at 01:15 PM | categories: Python, PyOxidizer

I'm pleased to announce the availability of the oxidized_importer Python package, a standalone version of the custom Python module importer used by PyOxidizer. oxidized_importer - a Python extension module implemented in Rust - enables Python applications to start and run quicker by providing an alternate, more efficient mechanism for loading Python resources (such as source and bytecode modules).

Installation instructions and detailed usage information are available in the official documentation. The rest of this post hopefully answers the questions of why are you doing this and why should I care.

In a traditional Python process, Python's module importer inspects the filesystem at run-time to find and load resources like Python source and bytecode modules. It is highly dynamic in nature and relies on the filesystem as a point-in-time source of truth for resource availability.

oxidized_importer takes a different approach to resource loading that is more static in nature and more suitable to application environments (where Python resources aren't changing). Instead of dynamically probing the filesystem for available resources, resources are instead indexed ahead of time. When Python goes to resolve a resource (say it is looking to import a module), oxidized_importer simply needs to perform a lookup in an in-memory data structure to locate said resource. This means oxidized_importer only has marginal reliance on the filesystem, which can make it much faster than Python's traditional importer. (Performance benefits of binaries built with PyOxidizer have already been clearly demonstrated.)

The oxidized_importer Python extension module exposes parts of PyOxidizer's packaging and run-time functionality to Python code, without requiring the full use of PyOxidizer for application packaging. Specifically, oxidized_importer allows you to:

  • Install a custom, high-performance module importer (OxidizedFinder) to service Python import statements and resource loading (potentially from memory, using zero-copy).
  • Scan the filesystem for Python resources (source modules, bytecode files, package resources, distribution metadata, etc) and turn them into Python objects, which can be loaded into OxidizedFinder instances.
  • Serialize Python resource data into an efficient binary data structure for loading into an OxidizedFinder instance. This facilitates producing a standalone resources blob that can be distributed with a Python application which contains all the Python modules, bytecode, etc required to power that application. See the docs on freezing an application with oxidized_importer.

oxidized_importer can be thought of as PyOxidizer-lite: it provides just enough functionality to allow Python application maintainers to leverage some of the technical advancements of PyOxidizer (such as in-memory module imports) without using PyOxidizer for application packaging. oxidized_importer can work with the Python distribution already installed on your system. You just pip install it like any other Python package.
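
Installing the finder looks roughly like this; the no-argument OxidizedFinder() constructor reflects my reading of the docs at the time, so double-check it against the official documentation:

    import sys

    import oxidized_importer

    # Create a finder. Resources scanned from the filesystem or loaded from a
    # serialized resources blob can be registered with it (see the
    # oxidized_importer docs for those APIs).
    finder = oxidized_importer.OxidizedFinder()

    # Install it at the front of sys.meta_path so it services imports before
    # Python's default path-based machinery.
    sys.meta_path.insert(0, finder)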

By releasing oxidized_importer as a standalone Python package, my hope is to allow more people to leverage some of the technical achievements and performance benefits coming out of PyOxidizer. I also hope that having more users of PyOxidizer's underlying code will help uncover bugs and conformance issues, raising the quality and viability of the projects.

I would also like to use oxidized_importer as an opportunity to advance the discourse around Python's resource loading mechanism. Filesystem I/O can be extremely slow, especially in mobile and embedded environments. Dynamically probing the filesystem to service module imports can therefore be slow. (The Python standard library has the zipimport module for importing Python resources from a zip file. But in my opinion, we can do much better.) I would like to see Python move towards leveraging immutable, serialized data structures for loading resources as efficiently as possible. After all, Python resources like the Python standard library are likely not changing between Python process invocations. The performance zealot in me cringes thinking of all the overhead that Python's filesystem probing approach incurs - all of the excessive stat() and other filesystem I/O calls that must be performed to answer questions about state that is easily indexed and often doesn't change. oxidized_importer represents my vision for what a high-performance Python resource loader should look like. I hope it can be successful in steering Python towards a better approach for resource loading.

I plan to release oxidized_importer independently from PyOxidizer. While the projects will continue to be developed in the same repository and will leverage the same underlying Rust code, I view them as somewhat independent and serving different audiences.

While oxidized_importer evolved from facilitating PyOxidizer's run-time use cases, I'm not opposed to taking it in new directions. For example, I would entertain implementing Python's dynamic filesystem probing logic in oxidized_importer, allowing it to serve as a functional stand-in for the official importer shipped with the Python standard library. I have little doubt an importer implemented in 100% Rust would outperform the official importer, which is implemented in Python. There's all kinds of possibilities here, such as using a background thread to index sys.path outside the constraints of the GIL. But I don't want to get ahead of myself...

If you are a Python application maintainer and want to make your Python processes execute a bit faster by leveraging a pre-built index of available Python resources and/or taking advantage of in-memory module importing, I highly encourage you to take a look at oxidized_importer!


PyOxidizer 0.7

April 09, 2020 at 09:00 PM | categories: Python, PyOxidizer

I am very pleased to announce the 0.7 release of PyOxidizer, a modern Python application packaging tool.

There are a host of notable new features in this release. You can read all about them in the project history.

I want to use this blog post to call out the more meaningful ones.

I started PyOxidizer as a science experiment of sorts: I set out to prove the hypothesis that it was possible to produce high performance single file executables embedding Python and all of its resources (Python modules, non-module resource files, compiled extensions, etc). PyOxidizer has achieved this on Windows, Linux, and macOS since its very earliest releases. Hypothesis confirmed!

In order to actually achieve single file executables, you have to fundamentally change aspects of Python's behavior. Some of these changes invalidate deeply rooted assumptions about how Python works, such as the existence of __file__ in modules. As you can imagine, these broken assumptions translated to numerous compatibility issues and PyOxidizer didn't work with many popular Python packages.

With the science experiment phase of PyOxidizer out of the way, I have been making a concerted effort to broaden the user base of PyOxidizer. While a single file executable can be an amazing property, it isn't critical for many use cases, and the issues it was causing were preventing people from exploring PyOxidizer.

This brings us to what I think are the major new features in PyOxidizer 0.7.

Better Support for Loading Extension Modules

Earlier versions of PyOxidizer insisted that you compile Python (C) extension modules from source and statically link them into a produced binary. This requirement prevented the use of pre-built extension modules (commonly found in Python binary wheels available on PyPI) with PyOxidizer, forcing people to compile them locally. While this often just worked for many extension modules, it frequently failed on complex extension modules and it frequently failed on Windows.

PyOxidizer now supports loading compiled extension modules from standalone files (typically .so or .pyd files, which are actually shared libraries). There are still some sharp edges and known deficiencies. But in many cases, if you tell PyOxidizer to run pip install and package the result, pre-built wheels can be installed and PyOxidizer will pick up the standalone files.

On Windows, PyOxidizer even supports embedding the shared library data into the produced .exe and loading the .pyd/DLL directly from memory.

Loading Resources from the Filesystem

Binaries built with PyOxidizer contain a blob holding an index of available Python resources along with their data.

Earlier versions of PyOxidizer only allowed you to define resources as in-memory. If the resource was defined in this blob, it was imported from memory. Otherwise it wasn't known to PyOxidizer. You could still install files next to the produced binary and tell PyOxidizer to enable Python's default filesystem-based importer. But PyOxidizer didn't explicitly know about these files on the filesystem.

In PyOxidizer 0.7, the blob index of Python resources is able to express different locations for that resource. Currently, a resource can have its data made available in-memory or filesystem-relative. in-memory works as before: the raw data is embedded in the produced binary and loaded from memory (using 0-copy). filesystem-relative encodes a filesystem path to the resource. During packaging, PyOxidizer will place the resource next to the executable (using a typical Python file layout scheme) and store the relative path to that resource in the resources index.

The filesystem-relative resource indexing feature has a few implications for PyOxidizer.

First, it is more standard. When PyOxidizer loads a Python module from the filesystem, it sets __file__, __path__, etc and the module semantics should behave as if the file were imported by Python's standard importer. This means that if a package is having issues with in-memory importing, you can simply fall back to filesystem-relative to get standard Python behavior and everything should just work.

Second, PyOxidizer's filesystem resource loading is faster than Python's! When Python's standard importer goes to import a module, it needs to stat() various paths to first locate the file. It then performs some sanity checking and other minor actions before actually importing the module. All of this has overhead. Since the goal of PyOxidizer is to produce standalone applications and applications should be immutable, PyOxidizer can avoid most of this overhead. PyOxidizer simply tries to open() and read() the relative path baked into the resource index at build time. If that works, the resource is loaded. Else there is a failure. The code path in PyOxidizer to locate a Python resource is effectively a lookup in a Rust HashMap<&str, T>.

I thought it would be interesting to isolate the performance benefits of this new feature. I ran Mercurial's test harness with different variants of hg on Linux on my Ryzen 3950X.

  • traditional - A hg script with a #!/path/to/python3.7 shebang.
  • oxidized - A hg executable built with PyOxidizer, without PyOxidizer's custom module importer.
  • filesystem - A hg executable built with PyOxidizer using the new filesystem-relative resource index.
  • in-memory - A hg executable built with PyOxidizer with all resources loaded from memory (how PyOxidizer has traditionally worked).

The results are quite clear:

  Variant        CPU Time (s)    Delta (s)     % Orig
  traditional    11,287                        100
  oxidized       10,735          -552          95.1
  filesystem     10,186          -1,101        90.2
  in-memory      9,883           -1,404        87.6

We see a nice win just from using a native executable built with PyOxidizer (traditional to oxidized).

Then from oxidized to filesystem we see another jump of ~5%. This difference is attributed to using PyOxidizer's Rust-powered importer with an index of resources available on the filesystem. In other words, all that work that Python's standard importer is doing to discover files and then operate on them is non-trivial!

Finally, the smaller jump from filesystem to in-memory isolates the benefits of importing resource data from memory instead of involving filesystem I/O. (Filesystems are generally slow.) While I haven't measured explicitly, I hypothesize that macOS and Windows will see a bigger jump between these two variants, as the filesystem performance on these platforms generally isn't as good as it is on Linux.

PyOxidizer's Future

With PyOxidizer now shipping a couple of much-needed features that support a broader set of users, I'm hoping that future releases continue to broaden its utility.

The over-arching goal of PyOxidizer is to solve large aspects of the Python application packaging and distribution problem. So far a lot of focus has been spent on the former. PyOxidizer in its current form can materialize files on the filesystem that you can copy or package up manually and distribute. But I want these processes to be part of PyOxidizer: I want it to be possible for PyOxidizer to emit a Windows MSI installer, a macOS dmg, a Debian package, etc for a Python application.

In order to support the aforementioned marquee features of this PyOxidizer release, I had to pay down a lot of technical debt in the code base left over from the science experiment phase of PyOxidizer's inception.

In the short term, I plan to continue shoring up the code base and rounding out support for features requested in the issue tracker on GitHub. The next release of PyOxidizer will also likely require Python 3.8, as this will improve run-time control over the embedded Python interpreter and enable PyOxidizer to better support package metadata (importlib.metadata), enabling support for features like entry points.

I've also been thinking about extracting PyOxidizer's custom module importer to be usable as a standalone Python extension module. I think there's some value in publishing a pyoxidizer_importer package on PyPI that you can easily add to your installed packages to speed up Python's standard filesystem importer by a few percent. If nothing else, this may drum up interest in the larger Python community for standardizing a format for serializing Python resources in a single file. Perhaps we can get other Python packaging tools producing the same packed resources data blob that PyOxidizer uses so we can all standardize on a more efficient mechanism for loading Python modules. Time will tell.

Enjoy the new release. File issues at https://github.com/indygreg/PyOxidizer as you encounter them.


Mercurial's Journey to and Reflections on Python 3

January 13, 2020 at 08:45 AM | categories: Python, Mercurial

Mercurial 5.2 was released on November 5, 2019. It is the first version of Mercurial that supports Python 3. This milestone comes nearly 11 years after Python 3.0 was first released on December 3, 2008.

Speaking as a maintainer of Mercurial and an avid user of Python, I feel like the experience of making Mercurial work with Python 3 is worth sharing because there are a number of lessons to be learned.

This post is logically divided into two sections: a mostly factual recount of Mercurial's Python 3 porting effort and a more opinionated commentary of the transition to Python 3 and the Python language ecosystem as a whole. Those who don't care about the mechanics of porting a large Python project to Python 3 may want to skip the next section or two.

Porting Mercurial to Python 3

Let's start with a brief history lesson of Mercurial's support for Python 3 as told by its own commit history.

The Mercurial version control tool was first released in April 2005 (the same month that Git was initially released). Version 1.0 came out in March 2008. The first reference to Python 3 I found in the code base was in September 2008. Then not much happened for a while until June 2010, when someone authored a bunch of changes to make the Python C extensions start to recognize Python 3. Then things were again quiet for a while until January 2013, when a handful of changes landed to remove the 2-argument raise statement. There were a handful of commits in 2014 but nothing worth calling out.

Mercurial's meaningful journey to Python 3 started in 2015. In code, the work started in April 2015, with effort to make Mercurial's test harness run with Python 3. Part of this was a decision that Python 3.5 (to be released several months later in September 2015) would be the minimum Python 3 version that Mercurial would support.

Once the Mercurial Project decided it wanted to port to Python 3 (as opposed to another language), one of the earliest decisions was how to perform that port. Mercurial's code base was too large to attempt a flag day conversion where there would be a Python 2 version and a Python 3 version and one day everyone would switch from Python 2 to 3. Mercurial needed a way to run the same code (or as much of the same code) on both Python 2 and 3. We would maintain a single code base and users would gradually switch from running with Python 2 to Python 3.

In May 2015, Mercurial dropped support for Python 2.4 and 2.5. Dropping support for these older Python versions was critical, as it was effectively impossible to write Python code that ran on this wide gamut of versions because of incompatibilities in syntax and language features. For example, you needed Python 2.6 to get print() via from __future__ import print_function. The project's late start at a Python 3 port can be significantly attributed to Python 2.4 and 2.5 compatibility holding us back.

The main goal with Mercurial's early porting work was just getting the code base to a point where import mercurial would work. There were a myriad of places where Mercurial used syntax that was invalid on Python 3 and Python 3 couldn't even parse the source code, let alone compile it to bytecode and execute it.

This effort began in earnest in June 2015 with global source code rewrites like using modern octal syntax, modern exception catching syntax (except Exception as e instead of except Exception, e), print() instead of print, and a modern import convention along with the use of from __future__ import absolute_import.

In the early days of the port, our first goal was to get all source code parsing as valid Python 3. The next step was to get all the modules importing cleanly. This entailed fixing code that ran at import time to work on Python 3. Our thinking was that we would need the code base to be import clean on Python 3 before seriously thinking about run-time behavior. In reality, we quickly ported a lot of modules to import cleanly and then moved on to higher-level porting, leaving a long-tail of modules with import failures.

This initial porting effort played out over months. There weren't many people working on it in the early days: a few people would basically hack on Python 3 as a form of itch scratching and most of the project's energy was focused on improving the existing Python 2 based product. You can get a rough idea of the timeline and participation in the early porting effort through the history of test-check-py3-compat.t. We see the test being added in December 2015. By June 2016, most of the code base had been ported to our modern import convention and we were ready to move on to more meaningful porting.

One of the biggest early hurdles in our porting effort was how to overcome the string literals type mismatch between Python 2 and 3. In Python 2, a '' string literal is a sequence of bytes. In Python 3, a '' string literal is a sequence of Unicode code points. These are fundamentally different types. And in Mercurial's code base, most of our string types are binary by design: use of a Unicode based str for representing data is flat out wrong for our use case. We knew that Mercurial would need to eventually switch many string literals from '' to b'' to preserve type compatibility. But doing so would be problematic.

In the early days of Mercurial's Python 3 port in 2015, Mercurial's project maintainer (Matt Mackall) set a ground rule that the Python 3 port shouldn't overly disrupt others: he wanted the Python 3 port to more or less happen in the background and not require every developer to be aware of Python 3's low-level behavior in order to get work done on the existing Python 2 code base. This may seem like a questionable decision (and I probably disagreed with him to some extent at the time because I was doing Python 3 porting work and the decision constrained this work). But it was the correct decision. Matt knew that it would be years before the Python 3 port was either necessary or resulted in a meaningful return on investment (the value proposition of Python 3 has always been weak to Mercurial because Python 3 doesn't demonstrate a compelling advantage over Python 2 for our use case). What Matt was trying to do was minimize the externalized costs that a Python 3 port would inflict on the project. He correctly recognized that maintaining the existing product and supporting existing users was more important than a long-term bet in its infancy.

This ground rule meant that a mass insertion of b'' prefixes everywhere was not desirable, as that would require developers to think about whether a type was a bytes or str, a distinction they didn't have to worry about on Python 2 because we practically never used the Unicode-based string type in Mercurial.

In addition, there were some other practical issues with doing a bulk b'' prefix insertion. One was that the added b characters would cause a lot of lines to grow beyond our length limits and we'd have to reformat code. That would require manual intervention and would significantly slow down porting. And a sub-issue of adding all the b prefixes and reformatting code is that it would break annotate/blame more than was tolerable. The latter issue was addressed by teaching Mercurial's annotate/blame feature to skip revisions. The project now has a convention of annotating commit messages with # skip-blame <reason> so structural only changes can easily be ignored when performing an annotate/blame.

A stop-gap solution to the b'' everywhere issue came in July 2016, when I introduced a custom Python module importer that rewrote source code as part of import when running on Python 3. (I have previously blogged about this hack.) What this did was transparently add b'' prefixes to all un-prefixed string literals as well as modify how a few common functions were called so that we wouldn't need to modify source code so things would run natively on Python 3. The source transformer allowed us to have the benefits of progressing in our Python 3 port without having to rewrite tens of thousands of lines of source code. The solution was hacky. But it enabled us to make significant progress on the Python 3 port without externalizing a lot of cost onto others.
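
The core trick can be sketched in a few lines of Python. This is a drastically simplified illustration of the idea, not Mercurial's actual transformer, which handled many more cases (including rewriting how certain functions were called):

    import io
    import token
    import tokenize


    def byteify_strings(source_bytes):
        """Rewrite un-prefixed string literals in Python source to b'' literals."""
        out = []
        for tok in tokenize.tokenize(io.BytesIO(source_bytes).readline):
            if tok.type == token.STRING and tok.string[0] in "'\"":
                tok = tok._replace(string="b" + tok.string)
            out.append(tok)
        return tokenize.untokenize(out)


    # A custom loader runs module source through a transform like this at
    # import time (for example by overriding SourceFileLoader.source_to_code()),
    # so the rewritten source is what actually gets compiled to bytecode.
    print(byteify_strings(b"greeting = 'hello'").decode())  # greeting = b'hello'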

I thought the source transformer would be relatively short-lived and would be removed shortly after the project inevitably decided to go all in on Python 3. To my surprise, others built additional transforms over the years and the source transformer persisted all the way until October 2019, when I removed it just before the first non-alpha Python 3 compatible version of Mercurial was released.

A common problem Mercurial faced with making the code base dual Python 2/3 native was dealing with standard library differences. Most of the problems stemmed from changes between Python 2.7 and 3.5+. But there are changes within the versions of Python 3 that we had to wallpaper over as well. In April 2016, the mercurial.pycompat module was introduced to export aliases or wrappers around standard library functionality to abstract the differences between Python versions. This file grew over time and eventually became Mercurial's version of six. To be honest, I'm not sure if we should have used six from the beginning. six probably would have saved some work. But we had to eventually write a lot of shims for converting between str and bytes and would have needed to invent a pycompat layer in some form anyway. So I'm not sure six would have saved enough effort to justify the baggage of integrating a 3rd party package into Mercurial. (When Mercurial accepts a 3rd party package, downstream packagers like Debian get all hot and bothered and end up making questionable patches to our source code. So we prefer to minimize the surface area for problems by minimizing dependencies on 3rd party packages.)
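
Conceptually, a pycompat-style module is just a collection of aliases and small wrappers, along these lines (an illustrative sketch, not Mercurial's actual code):

    import os
    import sys

    ispy3 = sys.version_info[0] >= 3

    if ispy3:
        import queue                    # renamed from Queue in Python 3
        # Code written against bytes wants a bytes argv; os.fsencode round-trips
        # the original bytes on POSIX systems.
        sysargv = [os.fsencode(arg) for arg in sys.argv]
    else:
        import Queue as queue
        sysargv = sys.argv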

Once we had a source transforming module importer and the pycompat compatibility shim, we started to focus in earnest on making core functionality actually work on Python 3. We established a convention of annotating changesets needed for Python 3 with py3, so a commit message search yields a lot of the history. (But it isn't a full history since not every Python 3 oriented change used this convention). We see from that history that after the source importer landed, a lot of porting effort was spent on things very early in the hg process lifetime. This included handling environment variables, loading config files, and argument parsing. We introduced a test-check-py3-commands.t test to track the progress of hg commands working in Python 3. The very early history of that file shows the various error messages changing, as underlying early process functionality was slowly ported to work on Python 3. By December 2016, we had hg version working on Python 3!

With basic hg command dispatch ported to Python 3 at the end of 2016, 2017 represented an inflection point in the Python 3 porting effort. With the early process functionality working, different people could pick up different commands and code paths and start making code work with Python 3. By March 2017, basic repository opening and hg files worked. Shortly thereafter, hg init started working as well. And hg status and hg commit did as well.

Within a few months, enough of Mercurial's functionality was working with Python 3 that we started to track which tests passed on Python 3. The evolution of this file shows a reasonable history of the porting velocity.

In May 2017, we dropped support for Python 2.6. This significantly reduced the complexity of supporting Python 3, as there was tons of functionality in Python 2.7 that made it easier to target both Python 2 and 3 and now our hands were untied to utilize it.

In November 2017, I landed a test harness feature to report exceptions seen during test runs. I later refined the output so the most frequent failures were reported more prominently. This feature greatly enabled our ability to target the most common exceptions, allowing us to write patches to fix the most prevalent issues on Python 3 and uncover previously unknown failures.

By the end of 2017, we had most of the structural pieces in place to complete the port. Essentially all that was required at that point was time and labor. We didn't have a formal mechanism in place to target porting efforts. Instead, people would pick up a component or test that they wanted to hack on and then make incremental changes towards making that work. All the while, we didn't have a strict policy on not regressing Python 3 and regressions in Python 3 porting progress were semi-frequent. Although we did tend to correct regressions quickly. And over time, developers saw a flurry of Python 3 patches and slowly grew awareness of how to accommodate Python 3, and the number of Python 3 regressions became less frequent.

As useful as the source-transforming module importer was, it incurred some additional burden for the porting effort. The source transformer effectively converted all un-prefixed string literals ('') to bytes literals (b'') to preserve string type behavior with Python 2. But various aspects of Python 3 didn't like the existence of bytes. Various standard library functionality now wanted unicode str and didn't accept bytes, even though the Python 2 implementation used the equivalent of bytes. So our pycompat layer grew pretty large to accommodate calling into various standard library functionality. Another side-effect which we didn't initially anticipate was the **kwargs calling convention. Python allows you to use ** with a dict with string keys to turn those keys into named arguments in a function call. But Python 3 requires these dict keys to be str and outright rejects bytes keys, even if the bytes instance is ASCII safe and has the same underlying byte representation of the string data as the str instance would. So we had to invent a support function that would convert dict keys from bytes to str for use with **kwargs and another to convert a **kwargs dict from str keys back to bytes keys so we could use '' syntax to access keys in our source code! Also on the string type front, we had to sprinkle the codebase with raw string literals (r'') to force the use of str regardless of which Python version you were running on (our source transformer only changed unprefixed string literals, so existing r'' strings would be preserved as str).
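
Those support functions are conceptually simple. Here is an illustrative sketch of the idea (not Mercurial's exact implementation), shown with Python 3 semantics:

    def strkwargs(opts):
        """Convert bytes keys to str keys so a dict is usable with **kwargs."""
        return {k.decode('latin-1') if isinstance(k, bytes) else k: v
                for k, v in opts.items()}


    def byteskwargs(opts):
        """Convert str keys back to bytes keys so b'' lookups keep working."""
        return {k.encode('latin-1') if isinstance(k, str) else k: v
                for k, v in opts.items()}


    def greet(**kwargs):
        opts = byteskwargs(kwargs)       # restore bytes keys inside the function
        return b'hello ' + opts[b'name']


    options = {b'name': b'world'}        # bytes keys, as most of the code base uses
    print(greet(**strkwargs(options)))   # b'hello world'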

Blind transformation of all string literals to bytes was less than ideal and it did impose some unwanted side-effects. But, again, most strings in Mercurial are bytes by design, so we thought it would be easier to byteify all strings and then selectively undo that where native strings were actually warranted (like keys in most dicts) than to take the up-front cost to examine every string and make an intelligent determination as to what type it should be. I go back and forth as to whether this was the correct call. But when you factor in that the source transforming module importer unblocked Python 3 porting at a time in the project's history when there was so much focus on improving the core product, and it did so without externalizing many costs onto the people doing the critical core product work, I think it was the right call.

By mid 2019, the number of test failures in Python 3 had been whittled down to a reasonable, less daunting number. It felt like victory was within grasp and inevitable. But a few significant issues lingered.

One remaining question was around addressing differences between Python 3 versions. At the time, Python 3.5, 3.6, and 3.7 had been released and 3.8 was scheduled for release by the end of the year. We had a surprising number of issues with differences between Python 3 versions. Many of us were running Python 3.7, so it had the fewest failures. We had to spend extra effort to get Python 3.5 and 3.6 working as well as 3.7. Same for 3.8.

Another task we deferred until the second half of 2019 was standing up robust CI for Python 3. We had some coverage, but it was minimal. Wanting a distraction from PyOxidizer for a bit and wanting to overhaul Mercurial's CI system (which is officially built on Buildbot), I cobbled together a serverless CI system built on top of AWS DynamoDB and S3 for storage, Lambda functions and CloudWatch events for all business logic, and EC2 spot instances for job execution. This CI system executed Python 3.5, 3.6, 3.7, and 3.8 variants of our test harness on Linux and Python 3.7 on Windows. This gave developers insight into version-specific failures. More importantly, it also gave insight into Windows failures, which was previously not well tested. It was discovered that Python 3 on Windows was lagging significantly behind POSIX.

By the time of the Mercurial developer meetup in October 2019, nearly all tests were passing on POSIX platforms and we were confident that we could declare Python 3 support as at least beta quality for the Mercurial 5.2 release, planned for early November.

One of our blockers for ripping off the alpha label on Python 3 support was removing our source-transforming module importer. It had performance implications and it wasn't something we wanted to ship because it felt too hacky. A blocker for this was we wanted to automatically format our source tree with black because if we removed the source transformer, we'd have to rewrite a lot of source code to apply changes the transformer was performing, which would necessitate wrapping a lot of lines, which would involve a lot of manual effort. We wanted to blacken our code base first so that mass rewriting source code wouldn't involve a lot of tedious reformatting since black would handle that for us automatically. And rewriting the source tree with black was blocked on a specific feature landing in black! (We did not agree with black's behavior of unwrapping comma-delimited lists of items if they could fit on a single line. So one of our core contributors wrote a patch to black that changed its behavior so a trailing , in a list of items will force items to be formatted on multiple lines. I personally find the multiple line formatting much easier to read. And the behavior is arguably better for code review and annotation, which is line based.) Once this feature landed in black, we reformatted our source tree and started ripping out the source transformations, starting by inserting b'' literals everywhere. By late October, the source transformer was no more and we were ready to release beta quality support for Python 3 (at least on UNIX-like platforms).

Having described a mostly factual overview of Mercurial's port to Python 3, it is now time to shift gears to the speculative and opinionated parts of this post. I want to underscore that the opinions reflected here are my own and do not reflect the overall Mercurial Project or even a consensus within it.

The Future of Python 3 and Mercurial

Mercurial's port to Python 3 is still ongoing. While we've shipped Python 3 support and the test harness is clean on Python 3, I view shipping as only a milestone - arguably the most important one - in a longer journey. There's still a lot of work to do.

It is now 2020 and Python 2 support is officially dead from the perspective of the Python language maintainers. Linux distributions are starting to rip out Python 2. Packages are dropping Python 2 support in new versions. The world is moving to Python 3 only. But Mercurial still officially supports Python 2. And it is yet to be determined how long we will retain support for Python 2 in the code base. We've only had one release supporting Python 3. Our users still need to port their extensions (implemented in Python). Our users still need to start widely using Mercurial with Python 3. Even our own developers need to switch to Python 3 (old habits are hard to break).

I anticipate a long tail of random bugs in Mercurial on Python 3. While the tests may pass, our code coverage is not 100%. And even if it were, Python is a dynamic language and there are tons of invariants that aren't caught at compile time and can only be discovered at run time. These invariants cannot all be detected by tests, no matter how good your test coverage is. This is a feature/limitation of dynamic languages. Our users will likely be finding a long tail of miscellaneous bugs on Python 3 for years.

At present, our code base is littered with tons of random hacks to bridge the gap between Python 2 and 3. Once Python 2 support is dropped, we'll need to remove these hacks and make the source tree Python 3 native, with minimal shims to wallpaper over differences in Python 3 versions. Removing this Python version bridge code will likely require hundreds of commits and will be a non-trivial effort. It's likely to be deemed a low priority (it is glorified busy work after all), and code for the express purpose of supporting Python 2 will likely linger for years.

We are also still shoring up our packaging and distribution story on Python 3. This is easier on some platforms than others. I created PyOxidizer partially because of the poor experience I had with Python application packaging and distribution through the Mercurial Project. The Mercurial Project has already signed off on using PyOxidizer for distributing Mercurial in the future. So look for an oxidized Mercurial distribution in the near future! (You could argue PyOxidizer is an epic yak shave to better support Mercurial. But that's for another post.)

Then there's Windows support. A Python 3 powered Mercurial on Windows still has a handful of known issues. It may require a few more releases before we consider Python 3 on Windows to be stable.

Because we're still on a code base that must support Python 2, our adoption of Python 3 features is very limited. The only Python 3 feature that Mercurial developers seem to almost universally get excited about is type annotations. We already have some people playing around with pytype using comment-based annotations, and pytype has already caught a few bugs. We're eager to go all in on type annotations and uncover lots of dynamic typing bugs and poorly implemented APIs. Beyond type annotations, I can't name any feature that people are screaming to adopt and which makes a lot of sense for Mercurial. There's a long tail of minor features I'm sure will get utilized. But none of the marquee features that define major language releases seem that interesting to us. Time will tell.
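For the curious, the comment-based annotations mentioned above follow PEP 484's type comment syntax, which still parses as Python 2 and which pytype can check. A minimal sketch (hypothetical code, not from Mercurial):

    from typing import Dict, List  # importable on Python 2 via the 'typing' backport

    def count_branches(names):
        # type: (List[bytes]) -> Dict[bytes, int]
        """Count how many times each branch name appears."""
        counts = {}  # type: Dict[bytes, int]
        for name in names:
            counts[name] = counts.get(name, 0) + 1
        return counts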

Commentary on Python 3

Having described Mercurial's ongoing journey to Python 3, I now want to focus more on Python itself. Again, the opinions here are my own and don't reflect those of the Mercurial Project.

Succinctly, my experience porting Mercurial and other projects to Python 3 has significantly soured my perceptions of Python. As much as I have historically loved Python - from the language to the welcoming community - I am still struggling to understand how Python could manage to inflict so much hardship on the community by choosing the transition plan that it did. I believe Python's choices represent a terrific example of what not to do when managing a large project or ecosystem. Maintainers of other widely deployed systems would benefit from taking the time to understand and reflect on Python's missteps.

Python 3.0 was released on December 3, 2008. And it took the better part of a decade for the community to embrace it. This should be universally recognized as a failure. While hindsight is 20/20, many of the issues with Python 3 were obvious at the time and could have been mitigated had the language maintainers been more accommodating - and dare I say empathetic - to its users.

Initially, Python 3 had a rather cavalier attitude towards backwards and forwards compatibility. In the early years of Python 3, the attitude of Python's maintainers was that Python 3 is a new, better language: you should target it explicitly. There were some tools and methods to ease the transition. But nothing super polished, especially in the early years. Adoption of Python 3 in the overall community was slow. Python developers in the wild justifiably complained that the value proposition of Python 3 was too weak to justify the porting effort. Not helping was that the early advice for targeting Python 3 was to rewrite the source code to become Python 3 native, in contrast with using the same source to run on both Python 2 and 3. For library and application maintainers, this potentially meant maintaining separate versions of your code or forcing end-users to make a giant leap, which would realistically orphan users on an old version and fragment your user base. Neither of those was a great alternative, so you can understand why many projects didn't bite.

For many projects of non-trivial size, flag day transitions from Python 2 to 3 were simply not viable: the pathway to Python 3 was to make code dual Python 2/3 compatible and gradually switch over the runtime to Python 3. But initial versions of Python 3 made this effectively impossible! Let me give a few specific examples.

In Python 2, a string literal '' is effectively an array of bytes. In Python 3, it is a series of Unicode code points - a fundamentally different type! In Python 2, you could write b'' to be explicit that a string literal was bytes or you could write u'' to indicate a Unicode literal, mimicking Python 3's behavior. In Python 3, you could write b'' to create a bytes instance. But for whatever reason, Python 3 initially removed the u'' syntax, meaning there wasn't an easy way to explicitly denote the type of each string literal so that it was consistent between Python 2 and 3! Python 3.3 (released September 2012) restored u'' support, making it more viable to write Python source code that worked on both Python 2 and 3. For nearly 4 years, Python 3 took away the consistent syntax for denoting bytes/Unicode string literals.
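A small example of why the prefixes matter for code that must parse and behave identically on both interpreters:

    # Explicit prefixes give literals the same type on Python 2 and 3:
    data = b'\x00\x01'     # bytes on both Python 2 and Python 3
    label = u'caf\xe9'     # text (unicode/str) on both Python 2 and Python 3

    # A bare literal changes type between versions: '' is bytes on Python 2
    # but text on Python 3. And on Python 3.0-3.2, the u'' prefix above was a
    # syntax error, so there was no literal syntax for text that parsed on
    # both 2.x and early 3.x.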

Another example is % formatting of strings. Python 2 allowed use of the % formatting operator on both its string types. But Python 3 initially removed the implementation of % from bytes. Why, I have no clue. It is perfectly reasonable to splice byte sequences into a buffer via a formatting string. But the Python language maintainers insisted otherwise. And it wasn't until the community complained loudly enough about its absence that this feature was restored in Python 3.5, released in September 2015. Fun fact: the lack of this feature was once considered a blocker for Mercurial moving to Python 3, because Mercurial uses bytes almost universally, which meant that nearly every use of % would have to be changed to something else. And to this day, Python 3's bytes still doesn't have a format() method, so the alternative was effectively string concatenation, which is a massive step backwards from the expressiveness of % formatting.
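Concretely, the following works on Python 2 (where b'' is just str) and on Python 3.5+, but raises TypeError on Python 3.0 through 3.4:

    # bytes %-formatting, valid on Python 2 and on Python 3.5+:
    header = b'%s: %d\r\n' % (b'Content-Length', 1024)

    # On Python 3.0-3.4 the line above fails, and bytes has no .format()
    # method, so code was reduced to concatenation:
    header = b'Content-Length: ' + b'1024' + b'\r\n'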

The initial approach of Python 3 mirrors a folly that many developers and projects commit: attempting a rewrite instead of performing incremental evolution. For established projects, large scale rewrites often go poorly. And Python 3 is no exception. Yes, at the code level, CPython (and likely other Python implementations) was an incremental change over Python 2 using the same code base. But at the language and standard library level, the differences in Python 3 were significant enough that I - and even Python's core maintainers - considered it a new language, and therefore a rewrite. When your random project attempts a rewrite and fails, the blast radius is often contained to that project. Maybe you don't publish a new release as soon as you otherwise would. But when you are powering an ecosystem, the ripple effects from a failed rewrite percolate throughout that ecosystem, last for years, and have many second-order effects. We see this with Python 3, where poor choices made in the late 2000s are still inflicting significant hardship in 2020.

From the restrained initial adoption of Python 3, it is obvious that the Python ecosystem overwhelmingly rejected the boil the oceans approach of Python 3. Python's maintainers eventually got the message and started restoring features like u'' and bytes % formatting to placate the community. All the while, Python 3 had been accumulating new features, and the cumulative weight of those features eventually became compelling enough to win over users.

For many projects (including Mercurial), Python 3.4/3.5 was the first viable porting target for Python 3. Python 3.5 was released in September 2015, almost 7 years after Python 3.0 was released in December 2008. Seven. Years. An ecosystem that falters for that long is generally not healthy. What may have saved Python from total collapse here is that Python 2 was still going strong and people were generally happy with it. I really do think Python dodged a bullet here, because there was a massive window where the language could have hemorrhaged a critical amount of its user base and been relegated to an afterthought. One could draw an analogy to Perl, which lost out to PHP, Python, and Ruby, and whose fall from grace aligned with a lengthy transition from Perl 5 to 6.

If you look back at the early history of Python 3, I think you are forced to conclude that Python effectively kneecapped itself for 5-7 years through questionable implementation choices that prevented users from making incremental transitions between the major language versions. 2008 to 2013-2015 should be known as the lost years of Python because so much opportunity and energy was squandered. Yes, Python is still healthy today and Python 3 is (finally) being adopted at scale. But had earlier versions of Python 3 been more empathetic towards Python 2 users porting to it, Python and Python 3 in 2020 would be even stronger than they are. The community was artificially hindered for years. And we won't know until 2023-2025 what things could have looked like in 2020 had the Python core language team spent more time paving a smoother road between the major language versions.

To be clear, I do think Python 3 is generally a better language than Python 2. It has fewer warts, more compelling features, and better performance (except for startup time, which is still slower than Python 2). I am ecstatic the community is finally rallying around Python 3! For my Python coding, it has reached the point where I curse under my breath when I need to support Python 2 or even older versions of Python 3, like 3.5 or 3.6: I just wish the world would move on and adopt the future already!

But I would be remiss if I failed to mention some of my gripes with Python 3 beyond the transition shenanigans.

Perhaps my least favorite feature of Python 3 is its insistence that the world is Unicode. In Python 2, the default string type was backed by bytes. In Python 3, the default string type is backed by Unicode code points. As part of that transition, large parts of the standard library now operate in the Unicode space instead of the domain of bytes. I understand why Python does this: its maintainers want strings to be Unicode and don't want users to have to spend much energy thinking about when to use str versus bytes. This approach is admirable and somewhat defensible because it takes a stand on a solution that is arguably good enough for most users. However, the approach of assuming the world is Unicode is flat-out wrong and has significant implications for systems level applications (like version control tools).

There are myriad places in the standard library where Python insists on using the Unicode-backed str type and rejects bytes. For example, various networking modules refuse to accept bytes for hostnames or URLs. HTTP libraries won't accept bytes for HTTP header names or values. Functions that are proxies to POSIX-defined functions won't accept bytes even though the POSIX functions they call into use char * and aren't Unicode aware. Then there's filename handling, where Python assumes the existence of a global encoding for filenames and uses this encoding to convert between str and bytes. And it does this despite POSIX filesystem paths being a bag of bytes where the only rules are that \0 terminates the filename and / is special.
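To make the filename point concrete, here is a small sketch of the behavior using standard CPython APIs (nothing Mercurial-specific):

    import os

    # On POSIX, the kernel hands Python raw bytes for filenames. Python's
    # str-based APIs decode them using sys.getfilesystemencoding() with the
    # surrogateescape error handler. Passing bytes paths skips the decoding:
    entries = os.listdir(b'.')      # bytes in -> bytes out

    # os.fsencode()/os.fsdecode() expose the same conversion the standard
    # library performs implicitly; undecodable bytes round-trip through
    # surrogate escapes rather than raising:
    raw = os.fsencode('caf\udce9')  # -> b'caf\xe9' on a UTF-8 system
    text = os.fsdecode(raw)         # -> 'caf\udce9'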

In cases like Python refusing to accept bytes for things like HTTP header names (which will just be spit out over the wire as bytes), Python's pendulum has swung too far towards Unicode only. In my opinion, Python needs to be more accommodating and allow bytes when it makes sense. I hope the pendulum knocks some sense into people when it swings back towards a more reasonable solution that better acknowledges the realities of the world we live in.

For areas like filename handling, the world is more complicated. Python is effectively an abstraction layer over the operating system APIs exposing this functionality. And there is often an impedance mismatch between operating systems. For example, POSIX (Linux) tends to use char * for everything and doesn't care about encoding, while Windows tends to use 16-bit character types whose encoding is... a can of worms.

The reality here is that it is impossible to abstract over differences between operating system behavior without compromises that can result in data loss, outright wrong behavior, or loss of functionality. But Python 3 attempts to do it anyway, making Python 3 unsuitable (or at least highly undesirable) for certain systems level applications that rely on it (like a version control tool).

In fairness to Python, it isn't the only programming language that gets this wrong. The only language I've seen properly implement higher-order abstractions on top of operating system facilities is Rust, whose approach can be generalized as: use Python 3's solution of normalizing to Unicode/UTF-8 by default, but expose escape hatches that allow access to the raw underlying types and APIs used by the operating system for the advanced consumers who require them. For example, Rust's Path type, which represents a filesystem path, allows access to the raw OsStr value used by the operating system rather than a normalization of it to bytes or Unicode, which may be lossy. This allows consumers to e.g. create and retrieve OS-native filesystem paths without data loss. This functionality is critical in some domains. Python 3's awareness/insistence that the world is Unicode (which it isn't, universally) reduces Python's applicability in these domains.

Speaking of Rust, at the Mercurial developer meetup in October 2019, we were discussing the use of Rust in Mercurial and one of the core maintainers blurted out something along the lines of if Rust were at its current state 5 years ago, Mercurial would have likely ported from Python 2 to Rust instead of Python 3. As crazy as it initially sounded, I think I agree with that assessment. With the benefit of hindsight, having been a key player in the Python 3 porting effort, seeing all the complications and headaches Python 3 is introducing, and having learned Rust and witnessed its benefits for performance, control, and correctness firsthand, porting to Rust would likely have been the correct move for the project at that point in time. 2020 is not 2014, however, and I'm not sure if I would opt for a rewrite in Rust today. (Most rewrites are follies after all.) But I know one thing: I certainly wouldn't implement a new version control tool in Python 3 and I would probably choose Rust as an implementation language for most new projects in the systems level space or with an expected shelf life of 10+ years. (I really should blog about how awesome Rust is.)

Back to the topic of Python itself, I'm really soured on Python at this point in time. The effort required to port to Python 3 was staggering. For Mercurial, Python 3 introduces a ton of problems and doesn't really solve many. We effectively slogged through mud for several years only to wind up in a state that feels strictly worse than where we started. I'm sure it will be strictly better in a few years. But at that point, we're talking about a 5+ year transition. To call the Python 3 transition disruptive and distracting for the project would be an understatement. As a project maintainer, it's natural to ask what we could have accomplished if we weren't forced to carry out this sideshow.

I can't shake the feeling that a lot of the pain inflicted by the Python 3 transition could have been avoided had Python's language leadership made a different set of decisions and more highly prioritized the transition experience. (Like not initially removing features like u'' and bytes %, and not introducing gratuitous backwards compatibility breaks, like with items()/iteritems(). I would have also liked to see a feature like from __future__ - maybe from __past__ - that would make it easier for Python 3 code to target semantics in earlier versions in order to provide a more turnkey on-ramp onto new versions.) I simultaneously see Python 3 losing its position as a justifiable tool in some domains (like systems level tooling) due to ongoing design decisions and poor implementation (like startup overhead problems). (In contrast, I see Rust excelling where Python is faltering, and I find Rust code surprisingly expressive to write and maintain given how low-level it is, so I feel that Rust is a compelling alternative to Python in a surprisingly large number of domains.)
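For reference, the existing mechanism only works in one direction: a Python 2 module can opt into selected Python 3 semantics, but Python 3 code has no equivalent way to opt back. A sketch of the real feature (the from __past__ idea above is hypothetical and does not exist):

    # Valid in Python 2.7: opt this module into several Python 3 behaviors.
    from __future__ import absolute_import, division, print_function, unicode_literals

    print(1 / 2)    # 0.5 (true division) instead of Python 2's 0
    print('text')   # bare literals are unicode because of unicode_literals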

Look, I know it is easy for me to armchair quarterback and critique with the benefit of hindsight/ignorance. I'm sure there is a lot of nuance here. I'm sure there was disagreement within the Python community over a lot of these issues. Maintaining a large and successful programming language and community like Python's is hard and you aren't going to please all the people all the time. And speaking as a maintainer, I have mad respect for the people leading such a large community. But niceties aside, everyone knows the Python 3 transition was rough and could have gone better. It should not have taken 11 years to get to where we are today.

I'd like to encourage the Python Project to conduct a thorough postmortem on the transition to Python 3. Identify what went well, what could have gone better, and what should be done the next time such a large language change is wanted. Speaking as a Python user, a maintainer of a Python project, and someone in industry who is now skeptical about the use of Python at work due to the risk of potentially company-crippling, high-effort migrations in the future, a postmortem would help restore my confidence that Python's maintainers learned from the various missteps on the road to Python 3 and that these potentially ecosystem-crippling mistakes won't be made again.

Python had a wildly successful past few decades. And it can continue to thrive for several more. But the Python 3 migration was painful for all involved. And as much as we need to move on and leave Python 2 behind us, there are some important lessons to be learned. I hope the Python community takes the opportunity to reflect and am confident it will grow stronger by taking the time to do so.

