Building Standalone Python Applications with PyOxidizer

June 24, 2019 at 09:00 AM | categories: Python, PyOxidizer, Rust

Python application distribution is generally considered an unsolved problem. In their PyCon 2019 keynote talk, Russell Keith-Magee identified code distribution as a potential black swan - an existential threat to longevity - for Python. In their words, Python hasn't ever had a consistent story for how I give my code to someone else, especially if that someone else isn't a developer and just wants to use my application. I completely agree. And I want to add my opinion that unless your target user is a Python developer, they shouldn't need to know anything about Python packaging, Python itself, or even the existence of Python in order to use your application. (And you can replace Python in the previous sentence with any programming language or software technology: most end-users don't care about the technical implementation, they just want to get stuff done.)

Today, I'm excited to announce the first release of PyOxidizer (project, documentation), an open source utility that aims to solve the Python application distribution problem! (The installation instructions are in the docs.)

Standalone Single File, No Dependencies Executable Python Applications

PyOxidizer's marquee feature is that it can produce a single file executable containing a fully-featured Python interpreter, its extensions, standard library, and your application's modules and resources. In other words, you can have a single .exe providing your application. And unlike other tools in this space, which tend to be operating system specific, PyOxidizer works across platforms (currently Windows, macOS, and Linux - the most popular platforms for Python today). Executables built with PyOxidizer have minimal dependencies on the host environment and don't do anything complicated at run-time. I believe PyOxidizer is the only open source tool to have all these attributes.

On Linux, it is possible to build a fully statically linked executable. You can drop this executable into a chroot or container where it is the only file and it will just work. On macOS and Windows, the only library dependencies are on always-present or extremely common libraries. More details are in the docs.

At execution time, binaries built with PyOxidizer do not do anything special to run the Python interpreter. (Other tools in this space do things like create a temporary directory or SquashFS filesystem and extract Python to it.) PyOxidizer loads everything from memory and there is no explicit I/O being performed. When you import a Python module, the bytecode for that module is being loaded from a memory address in the executable using zero-copy. This makes PyOxidizer executables faster to start and import - faster than a python executable itself!

Current Release and Future Roadmap

Today's release of PyOxidizer is just the first release milestone in what I envision as a long and successful project history. While my over-arching goal with PyOxidizer is to solve vast swaths of the Python application distribution problem, I want to be clear that this first release comes nowhere close to doing so. I wrestled with which features must be in the initial release. I ultimately decided that PyOxidizer's current functionality is extremely valuable to some audiences and that the project has matured to the point where more eyeballs and users would substantially help its development. (I could definitely use some help prioritizing which features to work on and for that I need users and user feedback.)

In today's release, PyOxidizer is good at producing executables embedding Python. It doesn't yet venture too far into the distribution part of the problem (I want it to be trivial to produce MSI installers, DMG images, deb/rpm packages, etc). But on Linux, this is already a huge step forward because PyOxidizer makes it easy (hopefully!) to produce binaries that should just work on other machines. (Anyone who has attempted to distribute Linux applications will tell you how painful this problem can be.)

Despite its limitations, I believe today's release of PyOxidizer to be a viable tool for some applications. And I believe PyOxidizer can start to replace existing tools in this space. (See the Comparisons to Other Tools document for how PyOxidizer compares to other Python packaging and distribution tools.)

Using today's release of PyOxidizer, larger user-facing applications built with Python (like Dropbox, Kodi, MusicBrainz Picard, etc.) could produce self-contained executables. This would likely cut down on installer size, decrease install/update time (fewer files means faster operations), and hopefully make packaging simpler for application maintainers. Maintainers of Python utilities could produce self-contained executables, making their utilities faster to start and easier to package and distribute.

New Possibilities and Reliability for Python

By enabling support for self-contained, single file Python applications, PyOxidizer opens exciting new doors for Python. Because Python has historically required an explicit, separate runtime not part of the executable, Python was not viable (or was a hindrance) in many domains. For example, if you wanted to use Python to bootstrap a fresh server or empty container environment, you had a chicken-and-egg problem because you needed to install Python before you could use it.

Let's take Ansible for example. One of Ansible's features is that it remotes into a machine and runs things. It does this by dynamically generating Python scripts locally, uploading them to the remote machine, and telling the remote to execute them. Those Python scripts require the existence of a Python interpreter on the remote machine. This means you need to install Python on a machine before you can control it with Ansible. Furthermore, because the remote's Python isn't under Ansible's control, you can assume very little about its behavior and capabilities, making interaction a bit brittle.

Using PyOxidizer, projects like Ansible could produce a self-contained executable containing a Python interpreter. They could transfer that single binary to the remote machine and execute it, instantly giving the remote machine access to a fully-featured and modern Python interpreter. From there, the sky is the limit. In Ansible's case, the executable could contain the full Ansible runtime, along with any 3rd party Python packages they wanted to leverage. This would allow execution to occur (possibly mostly independently) on the remote machine. This architecture is simpler, scales better, would likely result in faster operations, and would probably improve the quality of life for everyone involved, from application developers to its end users.

Self-contained Python applications built with PyOxidizer essentially solve the Python interpreter bootstrapping and reliability problems. By providing a Python interpreter and a known set of Python modules, you provide a highly deterministic and reliable execution environment for your application. You don't need to fret about which version of Python is installed: you know which version of Python you are using. You don't need to worry about which Python packages are installed: you control explicitly which packages are available. You don't need to worry about whether you are running in a virtualenv, what sys.path is set to, whether .pth files come into play, whether various PYTHON* environment variables can mess up your application, whether some Linux distribution packaged Python differently, what to put in your script's shebang, etc: executables built with PyOxidizer behave as you have instructed them to because they are compiled that way.

All of the concerns in the previous paragraph contribute to a larger problem in the eyes of application maintainers that can be summarized as Python isn't reliable. And because Python isn't reliable, many people reach the conclusion that Python shouldn't be used (this is the black swan that was referred to earlier). With PyOxidizer, the Python environment is isolated and highly deterministic making the reliability problem largely go away. This makes Python a more viable technology choice. And it enables application maintainers to aggressively adopt modern Python versions, utilize third party packages fearlessly, and spend far less time chasing an extremely long tail of issues related to Python environment variance. Succinctly, application developers can focus on building great applications instead of toiling with Python environment problems.

Project Status

PyOxidizer is still in its relative infancy. While it is far from feature complete, I'm mentally committed to working on the remaining major functionality. The Status document lists major missing functionality, lesser missing functionality, and potential future value-add functionality.

I want PyOxidizer to provide a Python application packaging and distribution experience that just works with minimal cognitive effort from Python application maintainers. I have spent a lot of effort documenting PyOxidizer. I care passionately about user experience and want everything about PyOxidizer to be simple and frustration free. I know things aren't there yet. The problems that PyOxidizer is attempting to solve are hard (that's a reason nobody has solved them well yet). I know there are details floating around in my head that haven't been added to the documentation yet. I know there are missing features and bugs in PyOxidizer. I know there are Packaging Pitfalls yet to be discovered.

This is where you come in.

I need your help to make PyOxidizer great. I encourage Python application maintainers reading this to head over to Getting Started and the Packaging User Guide and try to package your applications with PyOxidizer. If things don't work, let me know by filing an issue. If you are confused by missing or unclear documentation, file an issue. If something frustrates you, file an issue. If you want to suggest I work on a certain feature or fix a bug, file an issue! Tweet to @indygreg to engage with me there. Join the pyoxidizer-users mailing list. While I feel PyOxidizer is usable today (that's why I'm announcing it), I need your feedback to help guide future prioritization.

Finally, I know PyOxidizer has significant implications for some companies and projects that use Python. While I'm not looking to enrich myself or make my livelihood from PyOxidizer, if PyOxidizer is useful to you and you'd like to send money my way as appreciation, you can do so on Patreon or PayPal. If not, that's totally fine: I wouldn't be making PyOxidizer open source if I didn't want to share it with the world for free! And I am financially well off as well. I just feel there should be more financial contribution to open source because it would improve the health of the ecosystem, and I can help achieve that end by advocating for it and by contributing myself.

Leveraging Rust

The oxidize part of PyOxidizer comes from Rust (see the Wikipedia article on rust - the chemical, not the programming language - to understand where oxidize comes from). The build time packaging and building functionality is implemented in Rust. And the binary that embeds and controls the Python interpreter in built applications is Rust code. The rationale for these decisions is explained in the FAQ.

This is my first non-toy project using Rust and I have to say that Rust is... incredible! I may have to author a dedicated blog post extolling the virtues of Rust. In short, Rust is now my go-to language for systems level projects. Unless you need the target platform versatility, I don't think C or C++ are defensible choices in 2019 given their security deficiencies. Languages like Go, Java, and various JVM or CLR languages are acceptable if you can tolerate having a garbage collector and/or a larger runtime. But what makes Rust superior in my mind is the ability for the compiler to prevent large classes of software bugs (especially those that turn into CVEs) and inefficiencies that have plagued our industry for decades. Rust is the first programming language I've used where I feel like the language itself, the compiler, the tools around it (cargo, rustfmt, clippy, rustup, etc.), and the community surrounding it all actually care about and assist me with writing high quality software. Nothing else I've used comes even close.

What has surprised me most about Rust is how high level it feels for a systems level language that isn't garbage collected. When you program in lower-level languages like C or C++, compared to a higher level language like Python, you have to type a lot more and be more explicit in nearly everything you do. While Rust is certainly not as expressive or compact as, say, Python, it is far, far closer to Python than I was expecting it to be. Yes, you do have to type more and think more about your code to appease the Rust compiler's constraints. But the return on that investment is the compiler preventing entire classes of bugs and C/C++ levels of performance.

When I started PyOxidizer, the build time logic was implemented in Python and only the run-time pieces were in Rust. After learning a bit more Rust and realizing the obvious code quality benefits, I ditched Python and adopted Rust for the build time logic. And as the code base has grown and gone through various refactorings, I am so glad I did so! The Rust compiler has caught dozens of would-be bugs that Python would have let through. Granted, many of these can be attributed to strong typing and compile time type checking, and Rust is little different from, say, Java on this front. But a significant number of prevented bugs covered invariants in the code because of the way Rust's type system often intersects with control flow. For example, match arms must be exhaustive, so you can't have unhandled values/types, and unchecked Result instances result in a compiler warning. And clippy has been just fantastic at helping guide me towards writing more acceptable code following community accepted best practices.

Even though PyOxidizer is implemented in Rust, most end-users shouldn't have to care (beyond having to install a Rust compiler and build PyOxidizer from source). The existence of Rust should be abstracted away from Python packagers. I did this on purpose because I believe that users of an application shouldn't have to care about the technical implementation of that application. It is a bit unfortunate that I force users to install Rust before using PyOxidizer, but in my defense the target audience is technically savvy developers, bootstrapping Rust is easy, and PyOxidizer is young, so I think it is acceptable for now. If people get hung up on it, I can provide pre-compiled pyoxidizer executables.

But if you do know Rust, PyOxidizer being implemented in Rust opens up some exciting possibilities!

One exciting possibility with PyOxidizer is the ability to add Rust code to your Python application. PyOxidizer works by generating a default Rust application (main.rs) that simply instantiates and runs an embedded Python interpreter then exits. It essentially does what python or a Python script would do. The key takeaway here is your Python application is technically a Rust application (in the same way that python is technically a C application). And being a Rust application means you can add Rust code to that application. You can modify the autogenerated main.rs to do things before, during, and after the embedded Python interpreter runs. It's a regular Rust program and can do anything that Rust programs can do!

Another possibility - a variant of the above - is embedding Python in existing Rust projects. PyOxidizer's mechanism for embedding a Python interpreter is implemented as a standalone Rust crate. One can add the pyembed crate to an existing Rust project and, with a little build system magic, that Rust project can now embed and run a Python interpreter!

There's a lot of potential for hybrid Rust + Python programs. And I am very excited about the possibilities.

If you are a Rust programmer, PyOxidizer allows you to easily embed Python in your Rust application. If you are a Python programmer, PyOxidizer allows you to easily leverage Rust in your Python application. In short, the package ecosystem of the other becomes available to you. And if you aren't familiar with Rust, there are some potentially crazy possibilities. For example, Alacritty is a GPU accelerated terminal emulator written in Rust and Servo is an entire web browser engine written in Rust. With PyOxidizer, you could integrate a terminal emulator or browser engine as part of your Python application if you really wanted to. And, yes, Rust's packaging tools are so good that stuff like this tends to just work. As a concrete example, the pyoxidizer CLI tool contains libgit2 for performing in-process interactions with Git repositories. Adding this required a single line change to a Cargo.toml file and it just worked on Linux, macOS, and Windows. Stuff like this often takes hours to days to integrate in C/C++. It is quite ridiculous how easy it is to add (complex) components to Rust projects!

For years, Python projects have implemented extensions in C to realize performance wins. If your Python application is a Rust executable, then implementing this functionality in Rust (rather than C) seems rational. So we may see oxidized Python applications have their performance critical pieces slowly rewritten in Rust. (Honestly, the Rust crates for interfacing between Rust and the CPython API still leave a bit to be desired, so the experience of writing this Rust code still isn't great. But things will certainly improve over time.)

This type of inside-out split language work has been practiced in Python for years. What PyOxidizer brings to the table is the ability to more easily port code outside-in. For example, you could implement performance-critical, early application logic such as config file parsing and command line argument parsing in Rust. You could then have Rust service some application functionality without Python. Why would you want this? Performance is a valid reason. Starting a Python interpreter, importing modules, and running code can consume several dozen or even hundreds of milliseconds. If you are writing performance sensitive applications, the existence of any Python can add enough latency that people no longer perceive the interaction as instantaneous. This added latency can make Python totally inappropriate for some contexts, such as for programs that run as part of populating your shell's prompt. Writing such code in Rust instead of Python dramatically increases the probability that the code is fast and likely delivers stronger correctness guarantees courtesy of Rust's compile time validation as well!

An extreme practice of outside-in porting of Python to Rust would be to incrementally rewrite an entire Python application in Rust. Rust's ergonomics are exceptional and I do think we'll see people choose Rust where they previously would have chosen Python. I've done this myself with PyOxidizer and feel it is a very defensible decision! I feel a bit conflicted releasing a tool which may undermine Python's popularity by encouraging use of Rust over Python. But at the end of the day, PyOxidizer increases the utility of both Python and Rust by giving each readier access to the other, and it improves the overall utility of Python by improving the application distribution story. I have no doubt PyOxidizer is a net benefit for the Python ecosystem, even if it does help usher in more people choosing Rust over Python. If I have an ulterior motive in developing PyOxidizer, it is to enable Mercurial's official distribution to be a Rust executable and for some functionality (like hg status) to be runnable without Python (for performance reasons).

Another possible use of PyOxidizer is as a library. All the build time functionality of PyOxidizer exists in a Rust crate. So, you can add the pyoxidizer crate to your own Rust project and use its code to do things like build a library containing Python, compile Python source modules to bytecode, or walk a directory tree and find Python resources within. The code is still heavily geared towards PyOxidizer and there's no promise of API stability. But this potential for library usage exists and if others want to experiment with building custom Python binaries not using the pyoxidizer CLI tool, using PyOxidizer as a library might save you a lot of time.

Standalone Python Distributions

One of the most time consuming parts of building PyOxidizer was figuring out how to build self-contained Python distributions. Typically, a Python build consists of a library, shared libraries for various extension modules, shared libraries required by the prior items, and a hodgepodge of other files, such as .py files implementing the Python standard library. The python-build-standalone project was created to automate creating special builds of Python which are self-contained and distributable. This requires doing dirty things with build systems. But I don't want to inflict the details on you here. What I do think is worth mentioning is how those Python distributions are distributed. The output of the build is a tarball containing the Python installation, build artifacts that can be used to link a custom libpython, and a PYTHON.json file describing the contents of the distribution. PyOxidizer reads the PYTHON.json file and learns how it should interact with that distribution. If you produce a Python distribution conforming to the format that python-build-standalone defines, you can use that Python with PyOxidizer.

While I have no urgency to do so at this time, I could see a future where this Python distribution format is standardized. Then maintainers of various Python distributions (CPython, PyPy, etc) would independently produce their own distributable artifacts conforming to this standard, in turn allowing machine consumers of Python distributions (such as PyOxidizer) to easily consume different Python distributions and do interesting things with them. You could even imagine these Python distribution archives being readily available as packages in your system's package manager and their locations exposed via the sysconfig Python module, making it easy for tools (like PyOxidizer) to find and use them.

Over time, I could see PyOxidizer's functionality rolling up into official packaging tools like pip, which would know how to consume the distribution archives and produce an executable containing a Python interpreter, required Python modules, etc.

Getting PyOxidizer's functionality rolled into official Python packaging tools is likely years away (if it ever happens). But I think standardizing a format that describes a Python distribution and (optionally) contains build artifacts that can be used to repackage it is a prerequisite and would be a good place to start this journey. I would certainly love for Python distributions (like CPython) to be in charge of producing official repackageable distributions because this is not something I want to be in the business of doing long term (I'm lazy, less equipped to make the correct decisions, and there are various trust and security concerns). And while I'm here, I am definitely interested in upstreaming some of the python-build-standalone functionality into the existing CPython build system because coercing CPython's build system to produce distributable binaries is currently a major pain and I'd love to enable others to do this. I just haven't had time, nor do I know if the patches would be well received. If a CPython maintainer wants to get in touch, I'd love to have a conversation!

Conclusion

I started hacking on PyOxidizer in November 2018. After months of chipping away at it, I think I finally have a useful utility for some audiences. There are still a lot of missing features and some rough edges. But the core functionality is there and I'm convinced that PyOxidizer or its underlying technology could be an integral part of solving Python's application distribution black swan problem. I'm particularly proud of the hacks I concocted to coerce Python into importing module bytecode from memory using zero-copy. Those are documented in this blog post and in the pyembed crate docs.

So what are you waiting for? Head on over to the documentation, install PyOxidizer, and let me know how it goes by filing issues!

I hope you enjoy oxidizing your Python applications!


On Algorithms and Interviewing

January 17, 2019 at 10:45 AM | categories: Personal

As I write this, I'm hours away from starting to interview for full-time jobs in the software field. I've spoken with a number of recruiters and hiring managers and have received interview preparation materials from a handful of companies, many of which you've probably heard of.

I was hoping things would have changed since I last seriously underwent this endeavor ~7.5 years ago (I did interview periodically when I was at Mozilla in order to test the waters, keep my interview skills sharp, etc.). But it appears the industry is still generally fixated on algorithms and data structures in interviews. The way algorithms and similar coding tricks are emphasized in the preparation materials I've received, you'd think people in software spend a major part of their work days thinking about and implementing algorithms. But from my experience, this is very far from the case! So why are so many companies and interviewers fixated on algorithms? And is this a good thing?

When they matter, efficient algorithms, data structures, and other tricks are important and useful skills to have. But from my experience, they matter far less than you would think. If I were to make a list of important job skills and traits for software and programming, memorized knowledge of algorithms and data structures is so far down the list that I don't think I would even ask about algorithms fundamentals for most job candidates! (In fact I don't.) I think it is vastly more important to focus on behavioral qualities and potential to actually think and apply knowledge rather than regurgitate it. Algorithms and data structures, after all, are learned knowledge. All other things being equal, I'd rather have someone who knows when to ask for help with an algorithms issue or can pick up the skill than a curmudgeonly algorithms genius who has an abrasive personality and clings to old habits.

In the spirit of full disclosure, I should state that my algorithms skills are relatively weak. You can accuse me of writing this post to fulfill my own selfish interests. You wouldn't be wrong. But I know there are others like me who are good at programming yet struggle with algorithms and question the utility of algorithms in interviews. I'm attempting to write this post for all of us.

I have failed job interviews because the interviewer assessed my algorithms abilities as weak. I'm able to work through this deficiency with interviewers who care more about the behavioral traits I exhibit when in such a situation (I try to be quick about admitting my technical weaknesses and to ask for help when needed). But some interviewers aren't as interested in the behavioral traits or insist on a baseline level of memorized algorithms knowledge beyond my own. I feel like my relative algorithms weakness hasn't hurt me on the job, as I hardly find myself caring about algorithms in the work I do. In the majority of cases, the choice of an algorithm just doesn't matter for the size of the data set. Or a standard algorithm or data structure available in the standard library of the language I'm using is good enough. In the cases where I realize algorithms and data structures would matter, I run my technical questions past someone with more knowledge in the domain than me. Or if I don't do that, it often comes up during code review. Without strong algorithms and data structures knowledge, I've been able to maintain the Firefox build system, become a core contributor to a version control tool (something you'd think would require a lot of heavy algorithms knowledge), maintain various open source projects, and diagnose and address low-level performance issues in complex software and systems. About the only impact that being weak in algorithms and data structures has had on my career is that some companies passed on hiring me because they perceived strength in this area to be important.

Albert Einstein once said, I never commit to memory anything that can easily be looked up in a book. A modern adaptation of that quote may go something like, never memorize how to implement an algorithm or data structure when you can just Google it or use a software library implementing it. If you have knowledge of how to implement various algorithms in your head, that's good for you, I suppose. But I think the more important knowledge to possess is knowing when algorithms matter and, to a lesser extent, what types of algorithms are appropriate for particular problems. Answering those questions requires critical thinking. Actually implementing algorithms, by contrast, merely requires knowledge that can easily be looked up in a book (the algorithm or data structure itself) coupled with some programming knowledge for how to apply it. A capable programmer will be able to do both these things and pick up algorithms and data structure knowledge on the job, if necessary.

Some would say that algorithms are a good way to suss out coding ability. And coding ability is important to assess as part of interviewing a job candidate for a programming position! They aren't wrong. But there are much better ways to get strong signals about an interviewee's suitability! On the coding front, there are countless ways to assess programming capability without involving algorithms. So why involve algorithms as part of the interview?

One way I approach interviewing people is to imagine what the typical work day of that role will be like. How much time do they spend coding, investigating bugs, debugging, attending meetings, writing proposals, politicking with managers, etc.? This produces a conceptual pie chart of that role's activities. I then try to structure the interview such that the topics covered correlate with - and are somewhat in proportion to - activities in that job role. Is the role a heads-down junior coder? A team lead or manager? When you start trying to map the time in various areas of the role to time spent in the interview, you realize that the common technical interview overly emphasizes some areas and often completely ignores others! One of the areas that is over-emphasized is algorithms. Again, your typical programmer is going to be spending most of their typical day doing things unrelated to algorithms. So why are you spending precious interview time asking about algorithms when you could be probing an area that actually correlates to typical job activities? When viewed through this lens, the prevalence of algorithms in interviews just doesn't make much sense to me.

Perhaps algorithms should be considered basic knowledge that every programmer possesses. If so, then asking about algorithms is fair game during an interview, I suppose. But I'm not comfortable with this line of thought.

I've always found it fascinating how people with different backgrounds and degrees approach problems differently. From my experience, some of the best ideas and perspectives come from people with backgrounds and degrees which are minorities in the field. I've worked with programmers with degrees in philosophy and history who were some of the best programmers and overall minds in the room. One of the great things about software and programming is that it is accessible to anyone, regardless of background. If you can code, you can land a (usually high-paying) job. Yes, the field is highly technical. But you don't need formal education or a degree to enter it like you do similar high-end professions, such as medicine or law. You can argue whether this is a good thing or not. But I think the accessibility of the software profession - the lack of formal gatekeeping - is something to marvel at, something that we as an industry should embrace and be proud of. Do arbitrary hurdles to joining the industry help or hinder it?

A problem with emphasizing algorithms in interviews is that algorithms are fairly specialized and academic. There are entire areas of programming and software where detailed knowledge of algorithms just isn't that important. The bar for so much software is that it works, and it quite frankly doesn't matter if you have a quadratic algorithm instead of something better.

Most people I know are exposed to algorithms fundamentals during their university education as part of pursuing a degree in computer science or engineering. You almost certainly aren't going to have academic exposure to algorithms if you are, say, a liberal arts major - never mind someone who doesn't attend university at all (I also know plenty of terrific programmers who don't have degrees). From my own experience, my degree is in computer engineering. Not computer science or software engineering: computer engineering. I remember from my university days that my computer science friends seemed to have a much better grasp of algorithms and the theory of software and programming than I did. When I was taking classes about how hardware and electronics work, they were learning all about the mathematical concepts underpinning the field, different approaches to programming language design, etc. I received very little of that. And on top of that, I struggled with my single algorithms course at university. So I entered the workforce without as good a grasp of the computer science fundamentals as others I knew. (But I still probably knew more than someone in an unrelated field.) The point I'm trying to make is that because algorithms are fairly specialized and academic in nature, requiring knowledge of algorithms will effectively bias your hiring towards people with strong computer science backgrounds. Stated another way, screening on algorithms knowledge undermines diversity and inclusion initiatives by excluding viable candidates who don't have strong backgrounds in computer science. Sure, if someone wants to enter the industry they can take the time to study up on algorithms. But why force them to do that? It feels like arbitrary gatekeeping, given how little algorithms figure into the typical activities of the typical programmer. So why do it?

I suspect the major contributing reasons why algorithms are so prevalent in interviews are cargo culting, laziness, and a lack of formal interview training and of genuine concern for diversity. As an industry, the software field is pretty bad at applying best practices and learning from our mistakes. I suspect this will change once the relatively young industry catches up to more-established industries and we're forced to cope with the realities of legal and monetary liabilities the way practically every other industry is. (We're starting to see this with monetary damages for security breaches.) Anyway, we as an industry are pretty bad at self-regulating and adopting practices with proven benefits. We like to settle for what is known. Laziness and the comfort associated with it are easy. Seeking out and implementing change is harder. This is human nature. We see this with well-known people in industry rejecting the ideas of continuous testing (years ago) or fuzzing (more recently). We see it in C/C++ programmers who are delusional about their abilities to write secure code and decry e.g. Rust's safety guarantees as superfluous. The industry is disproportionately white and male (at least in the United States). And this brings with it certain personality tendencies. One is a macho attitude, which can manifest in interviews via the interviewer embarking on an ego trip, proving they know some esoteric algorithm or data structure the candidate does not.

As a clear example of this, Google was known for asking brainteaser interview questions. (The practice may have been prevalent at Microsoft before Google was the darling of Silicon Valley, but that was before I entered industry.) This trend caught on and soon companies all over were asking brainteasers! The problem was that these questions didn't correlate to actual job performance! From a 2013 NYTimes interview with Google's VP of People Operations:

On the hiring side, we found that brainteasers are a complete waste of
time. How many golf balls can you fit into an airplane? How many gas
stations in Manhattan? A complete waste of time. They don’t predict
anything. They serve primarily to make the interviewer feel smart.

But the damage was done. I still heard these kinds of questions when interviewing in the wild long after Google realized they were bad questions and instructed interviewers not to ask them. I even believe I got a brainteaser when interviewing at Google after the supposed banning of these types of questions! And I won't be shocked if I'm asked a brainteaser in 2019 as part of the several interviews I'll be doing in the days ahead.

Asking questions with no correlation to job performance because a popular company asked that type of question for a while: that's textbook cargo culting. Failing to change your ways despite evidence saying you should: laziness. Insisting that your way is correct and others need to be like you: gatekeeping.

I'm not saying algorithms and data structures during interviews are intrinsically bad and that we should stop asking about them. What I am saying is that we as an industry need to examine how we interview. We need to invest in scientifically proven techniques. (Research shows that behavioral interview questions are better. Tell me about a time when, etc.) And after more than ten years in industry, my experience tells me that interviews place a disproportionate emphasis on algorithms and data structures compared to the daily activities of the typical programmer. And on top of that, due to their academic nature, I worry that screening for algorithms and data structures knowledge is undermining the diversity and inclusivity of our field by biasing towards people with strong computer science backgrounds. I think it is time we examine the role of algorithms and data structures in interviews and consider focusing on other areas instead.


What I've Learned About Optimizing Python

January 10, 2019 at 03:00 PM | categories: Python

I've used Python more than any other programming language in the past 4-5 years. Python is the lingua franca for Firefox's build, test, and CI tooling. Mercurial is written mostly in Python. Many of my side-projects are in Python.

Along the way, I've accrued a bit of knowledge about Python performance and how to optimize Python. This post is about sharing that knowledge with the larger community.

My experience with Python is mostly with the CPython interpreter, specifically CPython 2.7. Not all observations apply to all Python distributions or have the same characteristics across Python versions. I'll try to call this out when relevant. And this post is in no way a thorough survey of the Python performance landscape. I mainly want to highlight areas that have particularly plagued me.

Startup and Module Importing Overhead

Starting a Python interpreter and importing Python modules is relatively slow if you care about milliseconds.

If you need to start hundreds or thousands of Python processes as part of a workload, this overhead will add up to several seconds of wall time.

If you use Python to provide CLI tools, the overhead can cause enough lag to be noticeable by people. If you want instantaneous CLI tools, launching a Python interpreter on every invocation will make it very difficult to achieve that with a sufficiently complex tool.

I've written about this problem extensively. My 2014 post on python-dev outlines the problem. Posts in May 2018 and October 2018 restate and refine it.

There's not much you can do to alleviate interpreter startup overhead: fixing this mostly resides with the maintainers of the Python interpreter because they control the code that is taking precious milliseconds to complete. About the best you can do is disable the site import in your shebangs and invocations to avoid some extra Python code running at startup. However, many applications rely on functionality provided by site.py, so use at your own risk.
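
To get a rough sense of what interpreter startup costs on your machine - and what disabling the site import (the -S flag) buys you - you can time a no-op interpreter with and without it. The following is just a rough measurement sketch (it assumes Python 3.5+ for subprocess.run, and the helper name is made up for illustration):

import subprocess
import sys
import time

def startup_seconds(extra_args, runs=20):
    # Launch a no-op interpreter repeatedly and average the wall time.
    start = time.perf_counter()
    for _ in range(runs):
        subprocess.run([sys.executable] + extra_args + ["-c", "pass"], check=True)
    return (time.perf_counter() - start) / runs

print("default startup:          ", startup_seconds([]))
print("with -S (no site import): ", startup_seconds(["-S"]))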

Related to this is the problem of module importing. What good is a Python interpreter if it doesn't have code to run! And the way code is made available to the interpreter is often through importing modules.

There are multiple steps to importing modules. And there are sources of overhead in each one.

There is overhead in finding modules and reading their data. As I've demonstrated with PyOxidizer, replacing the default find-and-load-a-module-from-the-filesystem behavior with an architecturally simpler read-the-module-data-from-an-in-memory-data-structure approach makes importing the Python standard library take 70-80% of its original time! Having a single module per filesystem file introduces filesystem overhead and can slow down Python applications in the critical first milliseconds of execution. Solutions like PyOxidizer can mitigate this. And hopefully the Python community sees the overhead in the current approach and considers moving towards module distribution mechanisms that don't rely so much on separate files per module.

Another source of module importing overhead is executing code in that module at import time. Some modules have code in the module scope outside of functions and classes that runs when the module is imported. This code execution can add overhead to importing. A mitigation for this is to not run as much code at import time: only run code as needed. Python 3.7 supports a module __getattr__ that will be called when a module attribute is not found. This can be used to lazily populate module attributes on first access.
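
As a minimal sketch of what that can look like (assuming Python 3.7+; the module and attribute names here are hypothetical), a module can defer work until an attribute is actually requested:

# mypackage.py - hypothetical module using PEP 562's module-level __getattr__
# to defer an expensive import until the attribute is first used.

def __getattr__(name):
    if name == "expensive":
        import json  # stand-in for a costly import or computation
        return json
    raise AttributeError("module %r has no attribute %r" % (__name__, name))

With this in place, import mypackage stays cheap; the cost is only paid on the first access of mypackage.expensive.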

Another workaround for module importing slowness is lazy module importing. Instead of actually loading a module when it is imported, you register a custom module importer that returns a stub for that module instead. When that stub is first accessed, it will load the actual module and mutate itself to be that module.

By avoiding the filesystem and module running overhead for unused modules (modules are typically imported globally and then only used by certain functions in a module), you can easily shave dozens of milliseconds from applications importing several dozens of modules.
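
I won't reproduce Mercurial's lazy importer here, but the standard library ships the building blocks for a basic version. This sketch follows the recipe in the importlib documentation using importlib.util.LazyLoader (Python 3.5+); note that find_spec() still pays the filesystem search cost up front - only module execution is deferred:

import importlib.util
import sys

def lazy_import(name):
    # Resolve the module spec now (the filesystem search still happens here),
    # but defer executing the module until an attribute is first accessed.
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)
    return module

json = lazy_import("json")      # no module-level code has run yet
json.dumps({"lazy": True})      # the real import happens on this access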

But lazy module importers are a bit fragile. Lots of modules have a pattern where they try: import foo; except ImportError:. A lazy module importer may never raise ImportError here, because doing so would require searching the filesystem to know whether the module exists, and searching the filesystem would add overhead, so lazy importers don't do it! The workaround is to access an attribute on the imported module. This forces the ImportError to be raised if the module doesn't exist, but it undermines the laziness of the module import! This problem is quite nasty. Mercurial's lazy module importer has to maintain a list of modules that are known to not be lazy importable to work around it. Another issue is the from foo import x, y syntax, which also undermines lazy module importing in cases where foo is a module (as opposed to a package), because in order to return a reference to x and y, the module has to be imported.

PyOxidizer, having a fixed set of modules frozen into the binary, can be efficient about raising ImportError. And Python 3.7's module __getattr__ provides additional flexibility for lazy module importers. I hope to integrate a robust lazy module importer into PyOxidizer so these gains are realized automatically.

The best solution to avoiding the interpreter startup and module import overhead problem is to run a persistent Python process. If you run Python in a daemon process (say for a web server), you pretty much get this for free. Mercurial's solution is to run a persistent Python process in the background which exposes a command server protocol. hg is aliased to a C (or now Rust) executable which connects to that persistent process and dispatches a command. The command server approach is a lot of work, can be a bit fragile, and has security concerns. I'm exploring the idea of shipping a command server with PyOxidizer so executables can easily gain its benefits and the cost of solving the problem only needs to be paid in one central place: the PyOxidizer project.
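
To make the shape of this concrete, here is a toy sketch of the persistent-process model over a Unix domain socket (Python 3.6+). The socket path and one-line protocol are made up for illustration - this is not Mercurial's actual command server protocol:

import socket
import socketserver

SOCKET_PATH = "/tmp/cmdserver.sock"  # hypothetical location

class CommandHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # One newline-terminated command per connection; interpreter startup
        # and module imports were paid once, when the daemon started.
        command = self.rfile.readline().decode().strip()
        self.wfile.write(("ran: %s\n" % command).encode())

def run_server():
    with socketserver.UnixStreamServer(SOCKET_PATH, CommandHandler) as server:
        server.serve_forever()

def send_command(command):
    # A thin client like this is what a fast C or Rust wrapper would do.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCKET_PATH)
        sock.sendall(command.encode() + b"\n")
        return sock.makefile().readline()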

Function Call Overhead

Function calls in Python are relatively slow. (This observation applies less to PyPy, which can JIT code execution.)

I've seen literally dozens of patches to Mercurial where we inline code or combine Python functions in order to avoid function call overhead. In the current development cycle, some effort was made to reduce the number of functions called when updating progress bars. (We try to use progress bars for any operation that could take a while so users know what is going on.) The old progress bar update code would dispatch to a handful of functions. Caching function call results and avoiding simple lookups via functions shaves dozens to hundreds of milliseconds off execution when we're talking about 1 million executions.

If you have tight loops or recursive functions in Python where hundreds of thousands or more function calls could be in play, you need to be aware of the overhead of calling an individual function, as it can add up quickly! Consider in-lining simple functions and combining functions to avoid the overhead.
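
As a contrived micro-benchmark of the effect (the helper names are made up, and absolute numbers will vary by machine and interpreter version):

import timeit

def double(x):
    return x * 2

def with_calls(values):
    total = 0
    for v in values:
        total += double(v)   # one Python function call per iteration
    return total

def inlined(values):
    total = 0
    for v in values:
        total += v * 2       # same work with the call inlined
    return total

values = list(range(1000000))
print("with calls:", timeit.timeit(lambda: with_calls(values), number=10))
print("inlined:   ", timeit.timeit(lambda: inlined(values), number=10))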

Attribute Lookup Overhead

This problem is similar to function call overhead because it can actually be the same problem!

Resolving an attribute in Python can be relatively slow. (Again, this observation applies less to PyPy.)

Again, working around this issue is something we do a lot in Mercurial.

Say you have the following code:

obj = MyObject()
total = 0

for i in range(len(obj.member)):
    total += obj.member[i]

Ignoring that there are better ways to write this example (total = sum(obj.member) should work), as written, the loop here will need to resolve obj.member on every iteration. Python has a relatively complex mechanism for resolving attributes. For simple types, it can be quite fast. But for complex types, that attribute access can silently be invoking __getattr__, __getattribute__, various other dunder methods, and even custom @property functions. What looks like it should be a fast attribute lookup can silently be several function calls, leading to function call overhead! And this overhead can compound if you are doing things like obj.member1.member2.member3 etc.
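
As a toy illustration (not code from Mercurial), a @property turns what reads like a plain field access into a function call:

class Point:
    def __init__(self, x):
        self._x = x

    @property
    def x(self):
        # Looks like an attribute from the caller's perspective,
        # but every access runs this function.
        return self._x

p = Point(42)
value = p.x   # a function call in disguise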

Each attribute lookup adds overhead. And since nearly everything in Python is a dictionary, it is somewhat accurate to equate each attribute lookup with a dictionary lookup. And we know from basic data structures that dictionary lookups are intrinsically not as fast as, say, dereferencing a pointer. Yes, there are some tricks in CPython to avoid the dictionary lookup overhead. But the general theme I want to get across is that each attribute lookup is a potential performance sink.

For tight loops - especially those over potentially hundreds of thousands of iterations - you can avoid this measurable attribute lookup overhead by aliasing the value to a local. We would write the example above as:

obj = MyObject()
total = 0

member = obj.member
for i in range(len(member)):
    total += member[i]

Of course, this is only safe when the aliased item isn't replaced inside the loop! If that happens, your alias will still hold a reference to the old item and things may blow up.

The same trick can be used when calling a method of an object. Instead of:

obj = MyObject()

for i in range(1000000):
    obj.process(i)

Do the following:

obj = MyObject()
fn = obj.process

for i in range(1000000):
    fn(i)

It's also worth noting that in cases where the attribute lookup is used to call a method (such as the previous example), Python 3.7 is significantly faster than previous releases. But I'm pretty sure this is due to dispatch overhead to the method function itself, not attribute lookup overhead. So things will be faster yet by avoiding the attribute lookup.

Finally, unless attribute lookup is calling functions to resolve the attribute, attribute lookup is generally less of a problem than function call overhead. And it generally requires eliminating a lot of attribute lookups for you to notice a meaningful improvement. That being said, once you add up all attribute accesses inside a loop, you may be talking about 10 or 20 attributes in the loop alone - before function calls. And loops with only thousands or low tens of thousands of iterations can quickly provide hundreds of thousands or millions of attribute lookups. So be on the lookout!

Object Overhead

From the perspective of the Python interpreter, every value is an object. In CPython, each value is a PyObject struct. Each object managed by the interpreter is on the heap and needs to have its own memory holding its reference count, its type, and other state. Every object is garbage collected. This means that each new object introduces overhead for the reference counting / garbage collection mechanism to process. (Again, PyPy can avoid some of this overhead by being more intelligent about the lifetimes of short-lived values.)

As a general rule of thumb, the more unique Python values/objects you create, the slower things are.

For example, say you are iterating over a collection of 1 million objects. You call a function to process that object into a tuple:

for x in my_collection:
    a, b, c, d, e, f, g, h = process(x)

In this example, process() returns an 8-tuple. It doesn't matter whether we destructure the return value or not: this tuple requires the creation of at least 9 Python values: 1 for the tuple itself and 8 for its inner members. OK, in reality there could be fewer values if process() returns a reference to an existing value. Or there could be more if the types aren't simple types and require multiple PyObjects to represent. My point is that under the hood the interpreter is having to juggle multiple objects to represent things.

From my experience, this overhead is only relevant for operations that benefit from speedups when implemented in a native language like C or Rust. The reason is that the CPython interpreter is just unable to execute bytecode fast enough for object overhead itself to matter. Instead, you will likely hit performance issues with function call overhead, processing overhead, etc. long before object overhead. But there are some exceptions to this, such as constructing tuples or dicts with several members.

As a concrete example of this overhead, Mercurial has C code for parsing some of the lower-level data structures. In terms of raw parsing speed, the C code runs about two orders of magnitude faster than CPython. But once we have that C code create PyObject instances to represent the result, the speedup drops to just a few times faster, if that. In other words, the overhead is coming from creating and managing Python values so they can be used by Python code.

A workaround for this is to produce fewer Python values. If you only need to access a single value, have a function return that single value instead of say a tuple or dict with N values. However, watch out for function call overhead!

When you have a lot of performance-sensitive code using the CPython C API and values need to be shared across different modules, pass around Python types that expose their data as C structs and have the compiled code access those C structs directly instead of going through the CPython C API. By avoiding the Python C API for data access, you will be avoiding most of its overhead.

Treating values as data (instead of having functions for accessing everything) is more Pythonic. So another workaround for compiled code is to lazily create PyObject instances. If you create a custom Python type (PyTypeObject) to represent your complex values, you can define the tp_members and/or tp_getset fields to register custom C functions that resolve the value for an attribute. If you are, say, writing a parser and you know that consumers will only access a subset of the parsed fields, you can quickly construct a type holding the raw data, return that type, and have the Python attribute lookup call a C function which resolves the PyObject. You can even defer parsing until this function is called, saving additional overhead if a parse is never required! This technique is quite rare (because it requires writing a non-trivial amount of code against the Python C API). But it can result in substantial wins.

Pre-Sizing Collections

This one applies to the CPython C API.

When creating collections like lists or dicts, use e.g. PyList_New() + PyList_SET_ITEM() to populate new collections when their size is known at collection creation time. This will pre-size the collection to have capacity to hold the final number of elements. And it skips checks when inserting elements that the collection is large enough to hold them. When creating collections of thousands of elements, this can save a bit of overhead!

Using Zero-copy in the C API

The CPython C API really likes to make copies of things rather than return references. For example, PyBytes_FromStringAndSize() copies a char* to memory owned by Python. If you are doing this for a large number of values or sufficiently large data, we could be talking about gigabytes of memory I/O and associated allocator overhead.

If writing high-performance code against the C API, you'll want to become familiar with the buffer protocol and related types, like memoryview.

The buffer protocol is implemented by Python types and allows the Python interpreter to cast a type to/from bytes. It essentially allows the interpreter's C code to get a handle on a void* of a certain size representing the object. This allows you to associate any address in memory with a PyObject. Many functions operating on binary data transparently accept any object implementing the buffer protocol. And if you are coding against the C API and want to accept any object that can be treated as bytes, you should be using the s*, y*, or w* format units when parsing function arguments.

By using the buffer protocol, you give the interpreter the best opportunity possible to be using zero-copy operations and avoiding having to copy bytes around in memory.

By using Python types like memoryview, you are also allowing Python to reference slices of memory by reference instead of by copy.
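
A small illustration using only built-in types (the sizes here are arbitrary):

data = bytes(range(256)) * 4096        # ~1 MiB of raw bytes

copied = data[1024:2048]               # slicing bytes allocates and copies 1 KiB
view = memoryview(data)
zero_copy = view[1024:2048]            # references the same underlying buffer

assert copied == bytes(zero_copy)      # same contents; the memoryview slice itself made no copy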

When you have gigabytes of data flowing through your Python program, astute use of Python types that support zero-copy can make a world of difference on performance. I once measured that python-zstandard was faster than some Python LZ4 bindings (LZ4 should be faster than zstandard) because I made heavy use of the buffer protocol and avoided excessive memory I/O in python-zstandard!

Conclusion

This post has outlined some of the things I've learned optimizing Python programs over the years. This post is by no means a comprehensive overview of Python performance techniques and gotchas. I recognize that my use of Python is probably more demanding than most and that the recommendations I made are not applicable to many Python programs. You should not mass update your Python code to e.g. inline functions and remove attribute lookups after reading this post. As always, when it comes to performance optimization, measure first and optimize where things are observed to be slow. I highly recommend py-spy for profiling Python applications. That being said, it's hard to attach a time value to low-level activity in the Python interpreter such as calling functions and looking up attributes. So if you e.g. have a loop that you know is tight, experiment with suggestions in this post, and see if you can measure an improvement!

Finally, this post should not be interpreted as a dig against Python or its performance properties. Yes, you can make arguments that Python should or shouldn't be used in particular areas because of performance properties. But Python is extremely versatile - especially with PyPy delivering exceptional performance for a dynamic programming language. The performance of Python is probably good enough for most people. For better or worse, I have used Python for use cases that often feel like outliers across all users. And I wanted to share my experiences so that others know what life at the frontier is like. And maybe, just maybe, I can cause the smart people who actually maintain Python distributions to think about the issues I've had in more detail and provide improvements to mitigate them.


Seeking Employment

January 07, 2019 at 03:25 PM | categories: Personal, Mozilla

After almost seven and a half years as an employee of Mozilla Corporation, I'm moving on. I have already worked my final day as an employee.

This post is the first time that I've publicly acknowledged my departure. To any Mozillians reading this, I regret that I did not send out a farewell email before I left. But the circumstances of my departure weren't conducive to doing so. I've been drafting a proper farewell blog post. But it has been very challenging to compose. Furthermore, each passing day brings with it new insights into my time at Mozilla and a new wrinkle to integrate into the reflective story I want to tell in that post. I vow to eventually publish a proper goodbye that serves as the bookend to my employment at Mozilla. Until then, just let me say that I'm already missing working with many of you. I've connected with several people since I left and still owe responses or messages to many more. If you want to get in touch, my contact info is in my résumé.

I left Mozilla without new employment lined up. That leads me to the subject line of this post: I'm seeking employment. The remainder of this post is thus tailored to potential employers.

My résumé has been updated. But that two-page summary only scratches the surface of my experience and skill set. The Body of Work page of my website is a more detailed record of the work I've done. But even it is not complete!

Perusing my posts on this blog will reveal even more about the work I've done and how I go about it. My résumé links to a few posts that I think are great examples of the level of understanding and detail that I'm capable of harnessing.

As far as the kind of work I want to do or the type of company I want to work for, I'm trying to keep an open mind. But I do have some biases.

I prefer established companies to early start-ups for various reasons. Dan Luu's "Big companies v. startups" aligns pretty well with my thinking.

One of the reasons I worked for Mozilla was because of my personal alignment with the Mozilla Manifesto. So I gravitate towards employers that share those principles and am somewhat turned off by those that counteract them. But I recognize that the world is complex and that competing perspectives aren't intrinsically evil. In other words, I try to maintain an open mind.

I'm attracted to employers that align their business with improving the well-being of the planet, especially the people on it. The link between the business and well-being can be tenuous: a B2B business, for example, is presumably selling something that helps people, and that helping is what matters to me. The tighter the link between the business and improving the world, the more attracted I will be to the employer.

I started my university education as a biomedical engineer because I liked the idea of being at the intersection of technology and medicine. And part of me really wants to return to this space because there are few things more noble than helping a fellow human being in need.

As for the kind of role or technical work I want to do, I could go in any number of directions. I still enjoy doing individual contributor type work and believe I could be an asset to an employer doing that work. But I also crave working on a team, performing technical mentorship, and being a leader of technical components. I enjoy participating in high-level planning as well as implementing the low-level aspects. I recognize that while my individual output can be substantial (I can provide data showing that I was one of the most prolific technical contributors at Mozilla during my time there) I can be more valuable to an employer when I bestow skills and knowledge unto others through teaching, mentorship, setting an example, etc.

I have what I would consider expertise in a few domains that may be attractive to employers.

I was a technical maintainer of Firefox's build system and initiated a transition away from an architecture that had been in place since the Netscape days. I definitely geek out way too much on build systems.

I am a contributor to the Mercurial version control tool. I know way too much about the internals of Mercurial, Git, and other version control tools. I am intimately aware of scaling problems with these tools. Some of the scaling work I did for Mercurial saved Mozilla tens of thousands of dollars in direct operational costs and probably hundreds of thousands of dollars in saved people time due to fewer service disruptions and faster operations.

I have exposure to both client and server side work and the problems encountered within each domain. I've dabbled in lots of technologies, operating systems, and tools. I'm not afraid to learn something new. Although as my experience increases, so does my skepticism of shiny new things (I've been burned by technical fads too many times).

I have a keen fascination with optimization and scaling, whether it be on a technical level or in terms of workflows and human behavior. I like to ask "and then what?" so that I'm thinking a few steps out and am prepared for the next problem or consequence of an immediate action.

I seem to have a knack for caring about user experience and interfaces. (Although my own visual design skills aren't the greatest - see my website design for proof.) I'm pretty passionate that tools people use should be simple and usable. Cognitive dissonance, latency, and distractions are real, and as an industry we don't do a great job of minimizing these disruptions so that focus and productivity can be maximized. I'm not saying I would be a good product manager or UI designer. But it's something I've thought about, because not many engineers seem to exhibit the passion for good user experience that I do, and that intersection of skills could be valuable.

My favorite time at Mozilla was when I was working on a unified engineering productivity team. The team controlled most of the tools and infrastructure that Firefox developers interacted with in order to do their jobs. I absolutely loved taking a whole-world view of that problem space and identifying the high-level problems - and low-hanging fruit - to improve the overall Firefox development experience. I derived a lot of satisfaction from identifying pain points, equating them to a dollar cost by extrapolating people time wasted due to them, justifying working on them, and finally celebrating - along with the overall engineering team - when improvements were made. I think I would be a tremendous asset to a company working in this space. And if my experience at Mozilla is any indicator, I would more than offset my total employment cost by doing this kind of work.

I've been entertaining the idea of contracting for a while before I resume full-time employment with a single employer. However, I've never contracted before and need to do some homework before I commit to that. (Please leave a comment or email me if you have recommendations on reading material.)

My dream contract gig would likely be to finish the Mercurial wire protocol and storage work I started last year. I would need to type up a formal proposal, but the gist of it is that the work I started has the potential to leapfrog Git in terms of both client-side and server-side performance and scalability.

Mercurial would be able to open Git repositories on the local filesystem as well as consume them via the Git wire protocol. Transparent Git interoperability would enable Mercurial to be used as a drop-in replacement for Git, which would benefit users who don't have control over the server (such as projects that live on GitHub).

Mercurial's new wire protocol is designed with global scalability and distribution in mind. The goal is to enable server operators to deploy scalable VCS servers in a turn-key manner by relying on scalable key-value stores and content distribution networks as much as possible. (Mercurial and Git today require servers to perform way too much work and aren't designed with modern distributed systems best practices in mind, which is why scaling them is hard.) The new protocol is being designed such that a Mercurial server could expose Git data. It would then be possible to teach a Git client to speak the Mercurial wire protocol, which would result in Mercurial being a more scalable Git server than Git is today. If my vision is achieved, this would make server-side VCS scaling problems go away and would eliminate the religious debate between Git and Mercurial (the answer would be: deploy a Mercurial server, allow data to be exposed to Git, and let consumers choose).

I conservatively estimate that the benefits to industry would be in the millions of dollars. How I would structure a contract to deliver aspects of this, I'm not sure. But if you are willing to invest six figures towards this bet, let's talk. A good foundation of this work is already implemented in Mercurial, and the Mercurial core development team is already on board with many aspects of the vision, so I'm not spewing vapor.

Another potential contract opportunity would be funding PyOxidizer. I started the project a few months back as a side project - an excuse to learn Rust while solving a fun problem that I thought needed solving. I was hoping the project would be useful for Mercurial and Mozilla one day. But if social media activity is any indication, there seems to be somewhat widespread interest in this project. I have no doubt that once complete, companies will be using PyOxidizer to ship products that generate revenue and that PyOxidizer will save them engineering resources. I'd very much like to recapture some of that value into my pockets, if possible. Again, I'm somewhat naive about how to write contracts since I've never contracted, but I imagine "deliver a tool that allows me to ship product X as a standalone binary to platforms Y and Z" is definitely something that could be structured as a contract.

As for the timeline, I was at Mozilla for what feels like an eternity in Silicon Valley. And Mozilla's way of working is substantially different from many companies. I need some time to decompress and unlearn some Mozilla habits. My future employer will inherit a happier and more productive employee by allowing me to take some much-needed time off.

I'm looking to resume full-time work no sooner than March 1. I'd like to figure out what the next step in my career is by the end of January. Then I can sign some papers, pack up my skis, and become a ski bum for the month of February: if I don't use this unemployment opportunity to have at least 20 days on the slopes this season and visit some new mountains, I will be very disappointed in myself!

If you want to get in touch, my contact info is in my résumé. I tend not to answer incoming calls from unknown numbers, so email is preferred. But if you leave a voicemail, I'll try to get back to you.

I look forward to working for a great engineering organization in the near future!


PyOxidizer Support for Windows

January 06, 2019 at 10:00 AM | categories: Python, PyOxidizer, Rust

A few weeks ago I introduced PyOxidizer, a project that aims to make it easier to produce completely self-contained executables embedding a Python interpreter (using Rust). A few days later I observed some PyOxidizer performance benefits.

After a few more hacking sessions, I'm very pleased to report that PyOxidizer is now working on Windows!

I am able to produce a standalone Windows .exe containing a fully featured CPython interpreter, all its library dependencies (OpenSSL, SQLite, liblzma, etc), and a copy of the Python standard library (both source and bytecode data). The binary weighs in at around 25 MB. (It could be smaller if we didn't embed .py source files or stripped some dependencies.) The only DLL dependencies of the exe are vcruntime140.dll and various system DLLs that are always present on Windows.

Like I did for Linux and macOS, I produced a Python script that performs ~500 import statements for the near entirety of the Python standard library. I then ran this script with both the official 64-bit Python distribution and an executable produced with PyOxidizer:

# Official CPython 3.7.2 Windows distribution.
$ time python.exe < import_stdlib.py
real    0m0.475s

# PyOxidizer with non-PGO CPython 3.7.2
$ time target/release/pyapp.exe < import_stdlib.py
real    0m0.347s

Compared to the official CPython distribution, a PyOxidizer executable can import almost the entirety of the Python standard library ~125ms faster - or in ~73% of the original time. In terms of percentage speedup, the gains are similar to Linux and macOS. However, there is substantial additional process startup overhead on Windows compared to POSIX platforms: on the same machine, a hello world Python process will execute in ~10ms on Linux and ~40ms on Windows. If we subtract that startup overhead, importing the Python standard library runs in ~70% of its original time ((347 - 40) ms / (475 - 40) ms ≈ 0.71), making the relative speedup on par with that seen on macOS + APFS.

Windows support is a major milestone for PyOxidizer. And it was the hardest platform to make work. CPython's build system on Windows uses Visual Studio project files, and coercing the build system to produce static libraries was a real pain. Lots of CPython's build tooling assumes Python is built in a very specific manner, and multiple changes I made completely break those assumptions. On top of that, it's very easy to encounter problems with symbol name mismatches due to the use of __declspec(dllexport) and __declspec(dllimport). I spent several hours going down a rabbit hole learning how Rust generates symbols on Windows for extern {} items. Unfortunately, we currently have to use a Rust Nightly feature (the static-nobundle linkage kind) to get things to work. But I think there are options to remove that requirement.

Up to this point, my work on PyOxidizer has focused on prototyping the concept. With Windows out of the way and PyOxidizer working on Linux, macOS, and Windows, I have achieved confidence that my vision of a single executable embedding a full-featured Python interpreter is technically viable on major desktop platforms! (BSD people, I care about you too. The solution for Linux should be portable to BSD.) This means I can start focusing on features, usability, and optimization. In other words, I can start building a tool that others will want to use.

As always, you can follow my work on this blog and by following the python-build-standalone and PyOxidizer projects on GitHub.

