My Shifting Open Source Priorities

March 17, 2024 at 09:00 PM | categories: Personal, PyOxidizer

I'm a maintainer of a handful of open source projects, some of which have millions of downloads and/or are used in important workloads, including in production.

I have a full time job as a software engineer and my open source work is effectively a side job. (Albeit one I try very hard to not let intersect with my day job.)

Historically, my biggest contributions to my open source projects have come when I'm not working full time:

  • python-zstandard was started when I was on medical leave, recovering from a surgery.
  • python-build-standalone and PyOxidizer were mainly built when I was between jobs, after leaving Mozilla.
  • apple-codesign was built in the height of COVID when I took a voluntary leave of absence from work to reconstitute my mental and physical health.

When working full time, my time to contribute to open source has been carved out of weekday nights and weekends, especially in the winter months. I believe that code is an art form and programming a form of creative expression. My open source contributions provide a relaxing avenue for me to express my artistic creativity, when able.

My open source contributions reflect my personal priorities of where and what to spend my free time on.

The only constant in life is change.

In the middle of 2022, I switched job roles and found myself reinvigorated by my new role - Infrastructure Performance - which is at the intersection of some of my strongest technical and professional skills. I found myself willingly pouring more energy and time into my day job. That had the side effect of reducing my open source contributions.

In 2023Q1 I got married. In the months leading up to and after, I chose to prioritize spending time with my now wife and all the time commitments that entails. This also reduced the amount of time available for open source contributions.

In 2023Q4 I became a father to a beautiful baby girl. While on my employer's generous-for-the-United-States fourteen week paternity leave, I somehow found some time to contribute to open source. As refreshing as that was, it didn't last. My man cave where my desktop computer resides has been converted into a nursery. And for the past few months it has been occupied by my mother-in-law, who has been generously effectively serving as a live-in nanny. Even when I'm able to sit down at my desktop, it's hard to get into a state of flow due to the added entropy from the additional three people now living with me.

After recognizing the new normal in 2024Q1, I purchased a Wahoo KICKR MOVE bicycle trainer and now spend considerable time doing virtual bicycle rides on Zwift because it's one of the few leisure activities I can do at home without drawing scrutiny from my wife and mother-in-law (but 98% my mother-in-law because I've observed that my wife is effectively infallible). I now get excited about virtually summiting famous climbs instead of contributing to open source. (Today's was Mont Ventoux - an absolute beast of a climb that reminded me a lot of my real world ride up Pikes Peak in 2020.)

Various changes in the past eighteen or so months have created additional time constraints and prioritization changes that have resulted in my open source contributions withering.

In addition, my technical interests have been shifting.

I've always gravitated to more systems-level areas of computers. My degree is in Computer Engineering and I have a stereotypical engineer mindset: I have an insatiable curiosity about how things work and interact and I want to always be tinkering. I prefer to be closer to hardware instead of abstracted far away from it. I enjoy interacting with the building blocks of software ecosystems: operating systems, filesystems, runtimes, file formats, compilers, etc.

Historically, my open source contributions to my preferred areas of computing were limited. Again, to me open source is an enjoyable form of creative expression. That means I do it for fun. Historically, the systems-level programming space was limited to languages like C and C++, which I consider frustrating and painful to use. If I'm going to subject myself to misery when programming, you are going to have to pay me well to do it.

As part of creating PyOxidizer, I learned Rust.

When I became proficient in Rust, I realized that Rust unlocks all kinds of systems-level problems that were effectively off-limits for my open source contributions. Would I implement Debian packaging primitives in Python? Or a tool to bulk analyze Linux packages and peek inside ELF binaries for insights about what compiler/linker features are used in the wild in Python/C/C++? Not unless you pay me to do it!

As I learned Rust, I also found myself being drawn away from Python, my prior go-to language. As I wrote in Rust is for Professionals, Rust feels surprisingly high level. It isn't as terse as Python but it is a lot closer than I thought it would be. And Rust gives you vastly stronger compile-time guarantees and run-time performance than Python. I felt like Rust's tooling ecosystem was supporting me instead of standing in my way. I felt that when you consider the overall software development lifecycle - not just the edit-build-run loop that people tend to fixate on, likely because it is the easiest to measure - Rust was vastly more productive and a joy to work with than Python. All those countless hours debugging, fixing, and authoring tests for TypeError and ValueError Python exceptions you see in production just don't happen with Rust and that time can be better spent iterating on core functionality, which is what actually matters.

On top of the Rust undercurrents, I've also become somewhat disenchanted with the Python ecosystem. As I wrote in 2020's Mercurial's Journey to and Reflections on Python 3, the Python 3 transition was bungled and resulted in years - if not a full decade - of lost opportunity. As I wrote in 2023's My User Experience Porting Off setup.py, the Python packaging story feels as discombobulated and frustrating as ever. PyOxidizer additionally brushed up against several limitations in how Python is designed and implemented, many of which are not trivially fixable. As a systems-level guy, I frequently find myself questioning aspects of the Python ecosystem where my opinions differ, including the importance placed on correctness and performance.

Starting in 2021, I gravitated towards writing more Rust code and solving problems in the systems domain that were previously off-limits to me, like Apple code signing. Initially the work was in support of PyOxidizer: I was going to implement all these packaging primitives in pure Rust and enable people to distribute Python applications without requiring access to a Windows or macOS machine! Over time, this work consumed me. Apple code signing turned into a major time sink because of its complexity and the fact I was having to reverse engineer a lot of its internals. But I was having a ton of fun doing it: more fun than swimming upstream against decades of encrusted technical debt in the Python ecosystem.

By late 2021, I realized I made a series of mistakes with PyOxidizer.

I started PyOxidizer as a science experiment to see if it was possible to achieve a single file executable Python application without requiring a temporary filesystem at run-time. I succeeded. But the cost was compatibility with the larger pre-built Python package ecosystem. I built all this complexity into PyOxidizer to allow people to tweak how Python resources are packaged so they could choose to build a single file application if they wanted. This ballooned into a hot mess and was obviously not user-friendly. It violated various personal principles about optimizing for end-user experience.

Armed with knowledge of all the pitfalls, I realized that there was a 90% use case for Python application packaging that was simple for end users and technically achievable using all the code primitives - like the pyembed Rust crate - that I built out for PyOxidizer.

Thus the PyOxy project was born and released in May 2022.

While I believe PyOxy is already a generally useful primitive to have in the Python ecosystem, I had bigger goals in mind.

My intent with PyOxy was to build in a simplified and opinionated PyOxidizer lite mode. The pyoxy executable is already a chameleon: if you rename it to python it behaves like a python executable. I wanted to extend this so you could do something like pyoxy build-app and it would collect all dependencies, assemble a Python packed resources blob, and embed that in a copy of the pyoxy binary as an ELF, Mach-O, or PE segment. Then at run-time, the variant executable binary would load the application configuration and Python resources metadata from its own binary and execute the application. Essentially, PyOxy would evolve into a self-packaging Python application. I just needed to evolve the Python packed resources format, implement a very crude ELF, Mach-O, and PE linker to append resources data to an executable, and teach pyembed to read resources data from an ELF, Mach-O, or PE segment. All within my sphere of technical competency. And I was excited to build it and forever alter people's perceptions of how easy it could be to produce a distributable Python application.
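
To make that concrete: the first command below reflects how pyoxy already behaves today, per the description above; the second shows the hypothetical interface I had in mind, which was never implemented, so the subcommand is purely illustrative.

# pyoxy is already a chameleon: renamed to python, it behaves like a python executable.
$ cp pyoxy python
$ ./python -c 'import sys; print(sys.version)'

# The unbuilt "PyOxidizer lite" flow I envisioned (hypothetical; never shipped).
$ pyoxy build-app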

Then the roller coaster of my personal life took over. I felt newly invigorated with a new job role. I got engaged and married. I became a father.

By early 2023, it was clear my ability to contribute to open source would be vastly diminished for the foreseeable future. PyOxidizer and PyOxy fell into a state of neglect. Weeks went by without me even tinkering on my local computer, much less pushing commits or publishing a release. Weeks turned into months. Months into quarters. At this point, I haven't pushed a commit to indygreg/PyOxidizer since January 2023. And I'm not sure when I next will, if ever.

In my limited open source contribution time, I've prioritized other projects over PyOxidizer.

python-build-standalone has gained a life outside PyOxidizer. It is now used by rye, Bazel's rules_python, briefcase, and a myriad of other consumers. The release assets have been downloaded over 23 million times and the download rate appears to be accelerating. I still actively support python-build-standalone and intend for the project to be actively supported for the indefinite future: it has become too important to abandon. I'm actively recruiting assistance to help maintain the project and I'm not concerned about its future.

Apple code signing still actively draws my engagement. What I love about the project is it either works or it doesn't: there are limited extra features we can add to it since Apple mostly dictates the feature set. And I perceive the current project to be mostly done.

python-zstandard is downloaded ~8 million times per month. The project is long overdue for some modernization. I'm sitting on a pile of commits to improve it, but progress has been slow. I just learned this weekend that the maintainer of the other popular zstandard Python package deleted their GitHub account recently and now users are looking to onboard to my package. Nothing quite like unanticipated distractions!

That's a very long-winded way of saying that PyOxidizer and all the projects under its umbrella are effectively in a zombie state. I'm hesitant to say dead because if I suddenly found myself with lots of free time I'd love to brush off the cobwebs and bring the projects back to life. But who am I kidding: they are effectively dead at the moment because with everything happening in my personal life, I don't see where I'd find the time to resuscitate them. And that assumes I even want to: again, I've become somewhat disenchanted by the state of Python. The main thing that draws me to it is the size of the community and the potential for impact. But to realize that impact I feel like I'd be pushing Python in directions it isn't well-equipped to go in. Quite frankly - and, yes, selfishly - I don't want to subject myself to the misery unless I'm being well paid to do it. Again, I view my open source contributions as a fun outlet for my creative expression, and nudging Python packaging in directions it is obviously ill-equipped to go in just isn't fun.

If anyone reading has an interest in taking ownership or maintenance responsibilities of PyOxidizer, any projects under its umbrella, or any of my other open source projects, I'm receptive to proposals. Send me an email or create an issue or discussion on GitHub if you want to do it publicly.

But I'm going to assume that PyOxidizer is going to wither and die - or at least incur some massive backwards incompatible breaks if it continues to live. I've already filed issues against python-build-standalone - such as removing Windows static builds - to make the project easier to support and less work for future maintainers.

If I have one regret about how this has played out, it is my failure to communicate developments in my open source commitments / expectations in a timely manner. I knew the future was bleak in early 2023 but didn't publicly say anything. I still thought there was a chance that things were going to change and I didn't want to make a hard decision prematurely. Writing this post has been on my mind since the middle of 2023 but I just couldn't bring myself to write it. And - surprise - having a newborn at home is a giant time and mental commitment! I'm writing this now because people are (finally!) noticing my lack of contributions to PyOxidizer and asking questions. And I'm home alone for a few days and actually have time to sit down and compose this post. (Yes, I'm that stretched for time in my personal life.)

In 2023, I struggled with the idea of letting people down by declaring PyOxidizer dead. But when I wake up every morning, walk into the nursery, and cause my daughter to smile and flail her arms and legs with unbridled excitement when she sees me, I'd have it no other way. When it comes to choosing between open source and family, I choose family.

It feels appropriate to end this post with a link to XKCD 2347: Dependency. But I'm not the random person in Nebraska: I'm a husband and father.


My User Experience Porting Off setup.py

October 30, 2023 at 06:00 AM | categories: Python

In the past week I went to add Python 3.12 support to my zstandard Python package. A few hours into the unexpected yak shave / rat hole, I decided to start chronicling my experience so that I may share it with the broader Python community. My hope is that by sharing my (unfortunately painful) end-user experience that I can draw attention to aspects of Python packaging that are confusing so that better informed and empowered people can improve matters and help make future Python packaging decisions to help scenarios like what I'm about to describe.

This blog post is purposefully verbose and contains a very lightly edited stream of my mental thoughts. Think of it as a self-assessed user experience study of Python packaging.

Some Background

I'm no stranger to the Python ecosystem or Python packaging. I've been programming Python for 10+ years. I've even authored a Python application packaging tool, PyOxidizer.

When programming, I strive to understand how things work. I try to not blindly copy-paste or cargo cult patterns unless I understand how they work. This means I often scope bloat myself and slow down velocity in the short term. But I justify the practice because I find it often pays dividends in the long term: I actually understand how things work.

I also have a passion for security and supply chain robustness. After you've helped maintain complex CI systems for multiple companies, you learn the hard way that it is important to do things like transitively pin dependencies and reduce surface area for failures so that build automation breaks in reaction to code changes in your version control, not spooky-action-at-a-distance when state on a third party server changes (e.g. a new package version is uploaded).

I've been aware of the emergence of pyproject.toml. But I've largely sat on the sidelines and held off adopting it, mainly for if it isn't broken, don't fix it reasons. Plus, my perception has been that the tooling still hasn't stabilized: I'm not going to incur work now if it is going to invite churn that could be avoided by sitting on my hands a little longer.

Now, on to my user experience of adding Python 3.12 to python-zstandard and the epic packaging yak shave that entailed.

The Journey Begins

When I attempted to run CI against Python 3.12 on GitHub Actions, running python setup.py complained that setuptools couldn't be imported.

Huh? I thought setuptools was installed in pretty much every Python distribution by default? It was certainly installed in all previous Python versions by the actions/setup-python GitHub Action. I was aware distutils was removed from the Python 3.12 standard library. But setuptools and distutils are not the same! Why did setuptools disappear?

I look at the CI logs for the passing Python 3.11 job and notice a message:

********************************************************************************
Please avoid running ``setup.py`` directly.
Instead, use pypa/build, pypa/installer or other
standards-based tools.

See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details.
********************************************************************************

I had several immediate reactions:

  1. OK, maybe this is a sign I should be modernizing to pyproject.toml and moving away from python setup.py. Maybe the missing setuptools in the 3.12 CI environment is a side-effect of this policy shift?
  2. What are pypa/build and pypa/installer? I've never heard of them. I know pypa is the Python Packaging Authority (I suspect most Python developers don't know this). Are these GitHub org/repo identifiers?
  3. What exactly is a standards-based tool? Is pip not a standards-based tool?
  4. Speaking of pip, why isn't it mentioned? I thought pip was the de facto packaging tool and had been for a while!
  5. It's linking a URL for more info. But why is this a link to what looks like an individual's blog and not to some more official site, like the setuptools or pip docs? Or anything under python.org?

Learning That I Shouldn't Invoke python setup.py

I open https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html in my browser and see a 4,000+ word blog post. Oof. Do I really want/need to read this? Fortunately, the author included a tl;dr and linked to a summary section telling me a lot of useful information! It informs me (my commentary in parentheses):

  1. The setuptools project has stopped maintaining all direct invocations of setup.py years ago. (What?!)
  2. There are undoubtedly many ways that your setup.py-based system is broken today, even if it's not failing loudly or obviously.. (What?! Surely this can't be true. I didn't see any warnings from tooling until recently. How was I supposed to know this?)
  3. PEP 517, 518 and other standards-based packaging are the future of the Python ecosystem. (A ha - a definition of standards-based tooling. I guess I have to look at PEP 517 and PEP 518 in more detail. I'm pretty sure these are the PEPs that define pyproject.toml.)
  4. At this point you may be expecting me to give you a canonical list of the right way to do everything that setup.py used to do, and unfortunately the answer here is that it's complicated. (You are telling me that we had a working python setup.py solution for 10+ years, this workflow is now quasi deprecated, and the recommended replacement is it's complicated?! I'm just trying to get my package modernized. Why does that need to be complicated?)
  5. That said, I can give you some simple "works for most people" recommendations for some of the common commands. (Great, this is exactly what I was looking for!)

Then I look at the table mapping old ways to new ways. In the new column, it references the following tools: build, pytest, tox, nox, pip, and twine. That's quite the tooling salad! (And that build tool must be the pypa/build referenced in the setuptools warning message. One mystery solved!)

I scroll back to the top of the article and notice the date: October 2021. Two years old. The summary section also mentioned that there's been a lot of activity around packaging tooling occurring. So now I'm wondering if this blog post is outdated. Either way, it is clear I have to perform some additional research to figure out how to migrate off python setup.py so I can be compliant with the new world order.

Learning About pyproject.toml and Build Systems

I had pre-existing knowledge of pyproject.toml as the modern way to define build system metadata. So I decide to start my research by Googling pyproject.toml. The first results are:

  1. https://pip.pypa.io/en/stable/reference/build-system/pyproject-toml/
  2. https://stackoverflow.com/questions/62983756/what-is-pyproject-toml-file-for
  3. https://python-poetry.org/docs/pyproject/
  4. https://setuptools.pypa.io/en/latest/userguide/pyproject_config.html
  5. https://godatadriven.com/blog/a-practical-guide-to-setuptools-and-pyproject-toml/
  6. https://towardsdatascience.com/pyproject-python-9df8cc092f61

I click pip's documentation first because pip is known to me and it seems a canonical source. Pip's documentation proceeds to link to PEP-518, PEP-517, PEP-621, and PEP-660 before telling me how projects with pyproject.toml are built, without giving me - a package maintainer - much useful advice for what to do or how to port from setup.py. This seems like a dead end.

Then I look at the Stack Overflow link. Again, telling me a lot of what I don't really care about. (I've somewhat lost faith in Stack Overflow and only really skimmed this page: I would much prefer to get an answer from a first party source.)

I click on the Poetry link. It documents TOML fields. But only for the [tool.poetry] section. While I've heard about Poetry, I know that I probably don't want to scope bloat myself to learn how Poetry works so I can use it. (No offence meant to the Poetry project here but I don't perceive my project as needing whatever features Poetry provides: I'm just trying to publish a simple library package.) I go back to the search results.

I click on the setuptools link. I'm using setuptools via setup.py so this content looks promising! It gives me a nice example TOML of how to configure a [build-system] and [project] metadata. It links to PyPA's Declaring project metadata content, which I open in a new tab, as the content seems useful. I continue reading setuptools documentation. I land on its Quickstart documentation, which seems useful. I start reading it and it links to the build tool documentation. That's the second link to the build tool. So I open that in a new tab.
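
For illustration, the shape of what the setuptools docs show is roughly this (the metadata values below are placeholders, not python-zstandard's actual metadata):

[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"

[project]
name = "example-package"
version = "0.1.0"
requires-python = ">=3.8"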

At this point, I think I have all the documentation on pyproject.toml. But I'm still trying to figure out what to replace python setup.py with. The build tool certainly seems like a contender since I've seen multiple references to it. But I'm still looking for modern, actively maintained documentation pointing me in a blessed direction.

The next Google link is A Practical Guide to Setuptools and Pyproject.toml. I start reading that. I'm immediately confused because it is recommending I put setuptools metadata in setup.cfg files. But I just read all about defining this metadata in pyproject.toml files in setuptools' own documentation! Is this blog post out of date? March 12, 2022. Seems pretty modern. I look at the setuptools documentation again and see the pyproject.toml metadata pieces are in version 61.0.0 and newer. I go to https://github.com/pypa/setuptools/releases/tag/v61.0.0 and see version 61.0.0 was released on March 25, 2022. So the fifth Google link was seemingly obsoleted 13 days after it was published. Good times. I pretend I never read this content because it seems out of date.

The next Google link is https://towardsdatascience.com/pyproject-python-9df8cc092f61. I click through. But Medium wants me to log in to read it all and it is unclear it is going to tell me anything important, so I back out.

Learning About the build Tool

I give up on Google for the moment and start reading up on the build tool from its docs.

The only usage documentation for the build tool is on its root documentation page. And that documentation basically prints what python -m build --help would print: it says what the tool does but doesn't give any guidance on where I should be using it or how to replace existing tools (like python setup.py invocations). Yes, I can piece the parts together and figure out that python -m build can be used as a replacement for python setup.py sdist and python setup.py bdist_wheel (and maybe pip wheel?). But should it be the replacement I choose? I make use of python setup.py develop and the aforementioned blog post recommended replacing that with python -m pip install -e. Perhaps I can use pip as the singular replacement for building source distributions and binary wheels so I have N-1 packaging tools? I keep researching.
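
To summarize my working understanding at this point (my own guesses pieced together from the above, not official guidance):

$ python setup.py sdist          ->  $ python -m build --sdist .
$ python setup.py bdist_wheel    ->  $ python -m build --wheel .
$ python setup.py develop        ->  $ python -m pip install -e .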

Exploring the Python Packaging User Guide

I had previously opened https://packaging.python.org/en/latest/specifications/declaring-project-metadata/ in a browser tab without really looking at it. On second glance, I see it is part of a broader Python Packaging User Guide. Oh, this looks promising! A guide on how to do what I'm seeking maintained by the Python Packaging Authority (PyPA), the group who I know to be the, well, authorities on Python packaging. It is published under the canonical python.org domain. Surely the answer will be here.

I immediately click on the link to Packaging Python Projects to hopefully see what the PyPA folks are recommending.

Is Hatch the Answer?

I skim through. I see recommendations to use a pyproject.toml with a [build-system] to define the build backend. This matches my expectations. But they are using Hatchling as their build backend. Another tool I don't really know about. I click through some inline links and eventually arrive at https://github.com/pypa/hatch. (I'm kind of confused why the PyPA tutorial said Hatchling when the project and tool is apparently named Hatch. But whatever.)
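
If memory serves, the [build-system] block in that tutorial looks something like this:

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"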

I skim Hatch's GitHub README. It looks like a unified packaging tool. Build system. Package uploading/publishing. Environment management (sounds like a virtualenv alternative?). This tool actually seems quite nice! I start skimming the docs. Like Poetry, it seems like this is yet another new tool that I'd need to learn and would require me to blow up my existing setup.py in order to adopt. Do I really want to put in that effort? I'm just trying to get python-zstandard back on the paved road and avoid seemingly deprecated workflows: I'm not looking to adopt new tooling stacks.

I'm also further confused by the existence of Hatch under the PyPA GitHub Organization. That's the same GitHub organization hosting the Python packaging tools that are known to me, namely build, pip, and setuptools. Those three projects are pinned repositories. (The other three pinned repositories are virtualenv, wheel, and twine.) Hatch is seemingly a replacement for pip, setuptools, virtualenv, twine, and possibly other tools. But it isn't a pinned repository. Yet it is the default tool used in the PyPA maintained Packaging Python Projects guide. (That guide also suggests using other tools like setuptools, flit, and pdm. But the default is Hatch and that has me asking questions. Also, I didn't initially notice that Creating pyproject.toml has multiple tabs for different backends.)

While Hatch looks interesting, I'm just not getting a strong signal that Hatch is sufficiently stable or warrants my time investment to switch to. So I go back to reading the Python Packaging User Guide.

The PyPA User Guide Search Continues

As I click around the User Guide, it is clear the PyPA folks really want me to use pyproject.toml for packaging. I suppose that's the future and that's a fair ask. But I'm still confused how I should migrate my setup.py to it. What are the risks with replacing my setup.py with pyproject.toml? Could I break someone installing my package on an old Linux distribution or old virtualenv using an older version of setuptools or pip? Will my adoption of build, hatch, poetry, whatever constitute a one way door where I lock out users in older environments? My package is downloaded over one million times per month and if I break packaging someone is likely to complain.

I'm desperately looking for guidance from the PyPA at https://packaging.python.org/ on how to manage this migration. But I just... can't find it. Guides surprisingly has nothing on the topic.

Outdated Tool Recommendations from the PyPA

Finally I find Tool recommendations in the PyPA User Guide. Under Packaging tool recommendations it says:

  • Use setuptools to define projects.
  • Use build to create Source Distributions and wheels.
  • If you have binary extensions and want to distribute wheels for multiple platforms, use cibuildwheel as part of your CI setup to build distributable wheels.
  • Use twine for uploading distributions to PyPI.

Finally, some canonical documentation from the PyPA that comes out and suggests what to use!

But my relief immediately turns to questioning whether this tooling recommendations documentation is up to date:

  1. If setuptools is recommended, why does the Packaging Python Projects tutorial use Hatch?
  2. How exactly should I be using setuptools to define projects? Is this referring to setuptools as a [build-system] backend? The existence of define seemingly implies using setup.py or setup.cfg to define metadata. But I thought these distutils/setuptools specific mechanisms were deprecated in favor of the more generic pyproject.toml?
  3. Why aren't other tools like Hatch, pip, poetry, flit, and pdm mentioned on this page? Where's the guidance on when to use these alternative tools?
  4. There are footnotes referencing distutils as if it is still a modern practice. No mention that it was removed from the standard library in Python 3.12.
  5. But the build tool is referenced and that tool is relatively new. So the docs have to be somewhat up-to-date, right?

Sadly, I reach the conclusion that this Tool recommendations documentation is inconsistent with newer documentation and can't be trusted. But it did mention the build tool and we now have multiple independent sources steering me in the direction of the build tool (at least for source distribution and wheel building), so it seems like we have a winner on our hands.

Initial Failures Running build

So let's use the build tool. I remember docs saying to invoke it with python -m build, so I try that:

$ python3.12 -m build --help
No module named build.__main__; 'build' is a package and cannot be directly executed

So the build package exists but it doesn't have a __main__. Ummm.

$ python3.12
Python 3.12.0 (main, Oct 23 2023, 19:58:35) [Clang 15.0.0 (clang-1500.0.40.1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import build
>>> build.__spec__
ModuleSpec(name='build', loader=<_frozen_importlib_external.NamespaceLoader object at 0x10d403bc0>, submodule_search_locations=_NamespacePath(['/Users/gps/src/python-zstandard/build']))

Oh, it picked up the build directory from my source checkout because sys.path has the current directory by default. Good times.

$ (cd ~ && python3.12 -m build)
/Users/gps/.pyenv/versions/3.12.0/bin/python3.12: No module named build

I guess build isn't installed in my Python distribution / environment. You used to be able to build packages using just the Python standard library. I guess this battery is no longer included in the stdlib. I shrug and continue.

Installing build

I go to the Build installation docs. It says to pip install build. (I thought I read years ago that one should use python3 -m pip to invoke pip. Strange that a PyPA maintained tool is telling me to invoke pip directly since I'm pretty sure a lot of the reasons to use python -m to invoke tools are still valid. But I digress.)

I follow the instructions, installing it to the global site-packages because I figure I'll use this tool a lot and I'm not a virtual environment purist:

$ python3.12 -m pip install build
Collecting build
  Obtaining dependency information for build from https://files.pythonhosted.org/packages/93/dd/b464b728b866aaa62785a609e0dd8c72201d62c5f7c53e7c20f4dceb085f/build-1.0.3-py3-none-any.whl.metadata
  Downloading build-1.0.3-py3-none-any.whl.metadata (4.2 kB)
Collecting packaging>=19.0 (from build)
  Obtaining dependency information for packaging>=19.0 from https://files.pythonhosted.org/packages/ec/1a/610693ac4ee14fcdf2d9bf3c493370e4f2ef7ae2e19217d7a237ff42367d/packaging-23.2-py3-none-any.whl.metadata
  Downloading packaging-23.2-py3-none-any.whl.metadata (3.2 kB)
Collecting pyproject_hooks (from build)
  Using cached pyproject_hooks-1.0.0-py3-none-any.whl (9.3 kB)
Using cached build-1.0.3-py3-none-any.whl (18 kB)
Using cached packaging-23.2-py3-none-any.whl (53 kB)
Installing collected packages: pyproject_hooks, packaging, build
Successfully installed build-1.0.3 packaging-23.2 pyproject_hooks-1.0.0

That downloads and installs wheels for build, packaging, and pyproject_hooks.

At this point the security-aware part of my brain is screaming because we didn't pin versions or SHA-256 digests of any of these packages anywhere. So if a malicious version of any of these packages is somehow uploaded to PyPI, that's going to be a nightmare software supply chain vulnerability with industry impact similar to log4shell. Nowhere in build's documentation does it mention this or say how to securely install build. I suppose you have to just know about the supply chain gotchas with pip install in order to mitigate this risk for yourself.

Initial Results With build Are Promising

After getting build installed, python3.12 -m build --help works now and I can build a wheel:

$ python3.12 -m build --wheel .
* Creating venv isolated environment...
* Installing packages in isolated environment... (setuptools >= 40.8.0, wheel)
* Getting build dependencies for wheel...
...
* Installing packages in isolated environment... (wheel)
* Building wheel...
running bdist_wheel
running build
running build_py
...
Successfully built zstandard-0.22.0.dev0-cp312-cp312-macosx_14_0_x86_64.whl

That looks promising! It seems to have invoked my setup.py without me having to define a [build-system] in my pyproject.toml! Yay for backwards compatibility.

The Mystery of the Missing cffi Package

But I notice something.

My setup.py script conditionally builds a zstandard._cffi extension module if import cffi succeeds. Building with build isn't building this extension module.
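
Conceptually, the relevant part of my setup.py looks something like this (a simplified sketch, not the actual code; the helper module name is made up):

import setuptools

# The C backend extension is always built (sources here are illustrative).
ext_modules = [
    setuptools.Extension("zstandard.backend_c", sources=["c-ext/backend_c.c"]),
]

try:
    # Only build the CFFI backend when cffi is importable in the
    # environment executing setup.py.
    import cffi  # noqa: F401
    from make_cffi import ffi  # hypothetical helper that defines the FFI

    ext_modules.append(ffi.distutils_extension())
except ImportError:
    pass

setuptools.setup(name="zstandard", ext_modules=ext_modules)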

Before using build, I had to run setup.py using a python having the cffi package installed, usually a project-local virtualenv. So let's try that:

$ venv/bin/python -m pip install build cffi
...
$ venv/bin/python -m build --wheel .
...

And I get the same behavior: no CFFI extension module.

Staring at the output, I see what looks like a smoking gun:

* Creating venv isolated environment...
* Installing packages in isolated environment... (setuptools >= 40.8.0, wheel)
* Getting build dependencies for wheel...
...
* Installing packages in isolated environment... (wheel)

OK. So it looks like build is creating its own isolated environment (disregarding the invoked Python environment having cffi installed), installing setuptools >= 40.8.0 and wheel into it, and then executing the build from that environment.

So build sandboxes builds in an ephemeral build environment. This actually seems like a useful feature to help with deterministic and reproducible builds: I like it! But at this moment it stands in the way of progress. So I run python -m build --help, spot a --no-isolation argument and do the obvious:

$ venv/bin/python -m build --wheel --no-isolation .
...
building 'zstandard._cffi' extension
...

Success!

And I don't see any deprecation warnings either. So I think I'm all good.

But obviously I've ventured off the paved road here, as we had to violate the default constraints of build to get things to work. I'll get back to that later.

Reproducing Working Wheel Builds With pip

Just for good measure, let's see if we can use pip wheel to produce wheels, as I've seen references that this is a supported mechanism for building wheels.

$ venv/bin/python -m pip wheel .
Processing /Users/gps/src/python-zstandard
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Building wheels for collected packages: zstandard
  Building wheel for zstandard (pyproject.toml) ... done
  Created wheel for zstandard: filename=zstandard-0.22.0.dev0-cp312-cp312-macosx_14_0_x86_64.whl size=407841 sha256=a2e1cc1ad570ab6b2c23999695165a71c8c9e30823f915b88db421443749f58e
  Stored in directory: /Users/gps/Library/Caches/pip/wheels/eb/6b/3e/89aae0b17b638c9cdcd2015d98b85ee7fb3ef00325bb44a572
Successfully built zstandard

That output is a bit terse, since the setuptools build logs are getting swallowed. That's fine. Rather than run with -v to get those logs, I manually inspect the built wheel:

$ unzip -lv zstandard-0.22.0.dev0-cp312-cp312-macosx_14_0_x86_64.whl
Archive:  zstandard-0.22.0.dev0-cp312-cp312-macosx_14_0_x86_64.whl
 Length   Method    Size  Cmpr    Date    Time   CRC-32   Name
--------  ------  ------- ---- ---------- ----- --------  ----
    7107  Defl:N     2490  65% 10-23-2023 08:36 7bb42fff  zstandard/__init__.py
   13938  Defl:N     2498  82% 10-23-2023 08:36 8d8d1316  zstandard/__init__.pyi
  919352  Defl:N   366631  60% 10-26-2023 08:28 3aeefc48  zstandard/backend_c.cpython-312-darwin.so
  152430  Defl:N    32528  79% 10-26-2023 05:37 fc1a3c0c  zstandard/backend_cffi.py
       0  Defl:N        2   0% 12-26-2020 16:12 00000000  zstandard/py.typed
    1484  Defl:N      784  47% 10-26-2023 08:28 facba579  zstandard-0.22.0.dev0.dist-info/LICENSE
    2863  Defl:N      847  70% 10-26-2023 08:28 b8d80875  zstandard-0.22.0.dev0.dist-info/METADATA
     111  Defl:N      106   5% 10-26-2023 08:28 878098e6  zstandard-0.22.0.dev0.dist-info/WHEEL
      10  Defl:N       12 -20% 10-26-2023 08:28 a5f38e4e  zstandard-0.22.0.dev0.dist-info/top_level.txt
     841  Defl:N      509  40% 10-26-2023 08:28 e9a804ae  zstandard-0.22.0.dev0.dist-info/RECORD
--------          -------  ---                            -------
 1098136           406407  63%                            10 files

(Python wheels are just zip files with certain well-defined paths having special meanings. I know this because I wrote Rust code for parsing wheels as part of developing PyOxidizer.)

Looks like the zstandard/_cffi.cpython-312-darwin.so extension module is missing. Well, at least pip is consistent with build! Although somewhat confusingly I don't see any reference to a separate build environment in the pip output. But I suspect it is there because cffi is installed in the virtual environment I invoke pip from!

Reading pip help output, I find the relevant argument to not spawn a new environment and try again:

$ venv/bin/python -m pip wheel --no-build-isolation .
<same exact output except the wheel size and digest changes>

$ unzip -lv zstandard-0.22.0.dev0-cp312-cp312-macosx_14_0_x86_64.whl
...
 1002664  Defl:N   379132  62% 10-26-2023 08:33 48afe5ba  zstandard/_cffi.cpython-312-darwin.so
...

(I'm happy to see build and pip agreeing on the no isolation terminology.)

OK, so I got build and pip to behave nearly identically. I feel like I finally understand this!

I also run pip -v wheel and pip -vv wheel to peek under the covers and see what it's doing. Interestingly, I don't see any hint of a virtual environment or temporary directory until I go to -vv. I find it interesting that build presents details about this by default but you have to put pip in very verbose mode to get it. I'm glad I used build first because the ephemeral build environment was the source of my missing dependency and pip buried this important detail behind a ton of other output in -vv, making it much harder to discover!

Understanding How setuptools Gets Installed

When looking at pip's verbose output, I also see references to installing the setuptools and wheel packages:

Processing /Users/gps/src/python-zstandard
  Running command pip subprocess to install build dependencies
  Collecting setuptools>=40.8.0
    Using cached setuptools-68.2.2-py3-none-any.whl.metadata (6.3 kB)
  Collecting wheel
    Using cached wheel-0.41.2-py3-none-any.whl.metadata (2.2 kB)
  Using cached setuptools-68.2.2-py3-none-any.whl (807 kB)
  Using cached wheel-0.41.2-py3-none-any.whl (64 kB)
  Installing collected packages: wheel, setuptools
  Successfully installed setuptools-68.2.2 wheel-0.41.2
  Installing build dependencies ... done

There's that setuptools>=40.8.0 constraint again. (We also saw it in build.) I rg 40.8.0 in my source checkout (note: the .s in there are wildcard characters since 40.8.0 is a regexp, so this could over-match) and come up with nothing. If it's not coming from my code, where is it coming from?

In the pip documentation, Fallback behaviour says that a missing [build-system] from pyproject.toml is implicitly translated to the following:

[build-system]
requires = ["setuptools>=40.8.0", "wheel"]
build-backend = "setuptools.build_meta:__legacy__"

For build, I go to the source code and discover that similar functionality was added in May 2020.

I'm not sure if this default behavior is specified in a PEP or what. But build and pip seem to be agreeing on the behavior of adding setuptools>=40.8.0 and wheel to their ephemeral build environments and invoking setuptools.build_meta:__legacy__ as the build backend as implicit defaults if your pyproject.toml lacks a [build-system]. OK.

Being Explicit About The Build System

Perhaps I should consider defining [build-system] and being explicit about things? After all, the tools aren't printing anything indicating they are assuming implicit defaults and for all I know the defaults could change in a backwards incompatible manner in any release and break my build. (Although I would hope to see a deprecation warning before that occurs.)

So I modify my pyproject.toml accordingly:

[build-system]
requires = [
    "cffi==1.16.0",
    "setuptools==68.2.2",
    "wheel==0.41.2",
]
build-backend = "setuptools.build_meta:__legacy__"

I pinned all the dependencies to specific versions because I like determinism and reproducibility. I really don't like when the upload of a new package version breaks my builds!

Software Supply Chain Weaknesses in pyproject.toml

As I pin dependencies in [build-system] in pyproject.toml, the security part of my brain is screaming over the lack of SHA-256 digest pinning.

How am I sure that we're using well-known, trusted versions of these dependencies? Are all the transitive dependencies even pinned?

Before pyproject.toml, I used pip-compile from pip-tools to generate a requirements.txt containing SHA-256 digests for all transitive dependencies. I would use python3 -m venv to create a virtualenv, venv/bin/python -m pip install -r requirements.txt to materialize a (highly deterministic) set of packages, then run venv/bin/python setup.py to invoke a build in this stable and securely created environment. (Some) software supply chain risks averted! But, uh, how do I do that with pyproject.toml build-system.requires? Does it even support pinning SHA-256 digests?
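
Concretely, that old workflow looked roughly like this (a sketch; file names are whatever you choose, and the SHA-256 digests live in the generated requirements.txt):

$ pip-compile --generate-hashes --output-file requirements.txt requirements.in
$ python3 -m venv venv
$ venv/bin/python -m pip install --require-hashes -r requirements.txt
$ venv/bin/python setup.py bdist_wheel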

I skim the PEPs related to pyproject.toml and don't see anything. Surely I'm missing something.

In desperation I check the pip-tools project and sure enough they document pyproject.toml integration. However, they tell you how to feed requirements.txt files into the dynamic dependencies consumed by the build backend: there's nothing on how to securely install the build backend itself.

As far as I can tell pyproject.toml has no facilities for securely installing (read: pinning content digests for all transitive dependencies) the build backend itself. This is left as an exercise to the reader. But, um, the build frontend (which I was also instructed to download insecurely via python -m pip install) is the thing installing the build backend. How am I supposed to subvert the build frontend to securely install the build backend? Am I supposed to disable default behavior of using an ephemeral environment in order to get secure backend installs? Doesn't the ephemeral environment give me additional, desired protections for build determinism and reproducibility? That seems wrong.

It kind of looks like pyproject.toml wasn't designed with software supply chain risk mitigation as a criteria. This is extremely surprising for a build system abstraction designed in the past few years. I shrug my shoulders and move on.

Porting python setup.py develop Invocations

Now that I figure I have a working pyproject.toml, I move on to removing python setup.py invocations.

First up is a python setup.py develop --rust-backend invocation.

My setup.py performs very crude scanning of sys.argv looking for command arguments like --system-zstd and --rust-backend as a way to influence the build. We just sniff these special arguments and remove them from sys.argv so they don't confuse the setuptools options parser. (I don't believe this is a blessed way of doing custom options handling in distutils/setuptools. But it is simple and has worked since I introduced the pattern in 2016.)
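
The pattern is essentially this (a simplified sketch of the idea, not the project's actual code):

# At the top of setup.py, before calling setuptools.setup().
import sys

# Sniff our custom flags, then strip them so the setuptools options
# parser never sees them.
SUPPORT_SYSTEM_ZSTD = "--system-zstd" in sys.argv
RUST_BACKEND = "--rust-backend" in sys.argv
sys.argv = [a for a in sys.argv if a not in ("--system-zstd", "--rust-backend")]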

Is --global-option the Answer?

With python setup.py invocations going away and a build frontend invoking setup.py, I need to find an alternative mechanism to pass settings into my setup.py.

Why you shouldn't invoke setup.py directly tells me I should use pip install -e. I'm guessing there's a way to instruct pip install to pass arguments to setup.py.

$ venv/bin/python -m pip install --help
...
  -C, --config-settings <settings>
                              Configuration settings to be passed to the PEP 517 build backend. Settings take the form KEY=VALUE. Use multiple --config-settings options to pass multiple keys to the backend.
  --global-option <options>   Extra global options to be supplied to the setup.py call before the install or bdist_wheel command.
...

Hmmm. Not really sure which of these to use. But --global-option mentions setup.py and I'm using setup.py. So I try that:

$ venv/bin/python -m pip install --global-option --rust-backend -e .
Usage:
  /Users/gps/src/python-zstandard/venv/bin/python -m pip install [options] <requirement specifier> [package-index-options] ...
  /Users/gps/src/python-zstandard/venv/bin/python -m pip install [options] -r <requirements file> [package-index-options] ...
  /Users/gps/src/python-zstandard/venv/bin/python -m pip install [options] [-e] <vcs project url> ...
  /Users/gps/src/python-zstandard/venv/bin/python -m pip install [options] [-e] <local project path> ...
  /Users/gps/src/python-zstandard/venv/bin/python -m pip install [options] <archive url/path> ...

no such option: --rust-backend

Oh, duh, --rust-backend looks like an argument and makes pip's own argument parsing ambiguous as to how to handle it. Let's try that again with --global-option=--rust-backend:

$ venv/bin/python -m pip install --global-option=--rust-backend -e .
DEPRECATION: --build-option and --global-option are deprecated. pip 24.0 will enforce this behaviour change. A possible replacement is to use --config-settings. Discussion can be found at https://github.com/pypa/pip/issues/11859
WARNING: Implying --no-binary=:all: due to the presence of --build-option / --global-option.
Obtaining file:///Users/gps/src/python-zstandard
  Installing build dependencies ... done
  Checking if build backend supports build_editable ... done
  Getting requirements to build editable ... done
  Preparing editable metadata (pyproject.toml) ... done
Building wheels for collected packages: zstandard
  WARNING: Ignoring --global-option when building zstandard using PEP 517
  Building editable for zstandard (pyproject.toml) ... done
  Created wheel for zstandard: filename=zstandard-0.22.0.dev0-0.editable-cp312-cp312-macosx_14_0_x86_64.whl size=4379 sha256=05669b0a5fd8951cac711923d687d9d4192f6a70a8268dca31bdf39012b140c8
  Stored in directory: /private/var/folders/dd/xb3jz0tj133_hgnvdttctwxc0000gn/T/pip-ephem-wheel-cache-6amdpg21/wheels/eb/6b/3e/89aae0b17b638c9cdcd2015d98b85ee7fb3ef00325bb44a572
Successfully built zstandard
Installing collected packages: zstandard
Successfully installed zstandard-0.22.0.dev0

I immediately see the three DEPRECATION and WARNING lines (which are color highlighted in my terminal, yay):

DEPRECATION: --build-option and --global-option are deprecated. pip 24.0 will enforce this behaviour change. A possible replacement is to use --config-settings. Discussion can be found at https://github.com/pypa/pip/issues/11859
WARNING: Implying --no-binary=:all: due to the presence of --build-option / --global-option.
WARNING: Ignoring --global-option when building zstandard using PEP 517

Yikes. It looks like --global-option is deprecated and will be removed in pip 24.0. And, later it says --global-option was ignored. Is that true?!

$ ls -al zstandard/*cpython-312*.so
-rwxr-xr-x  1 gps  staff  1002680 Oct 27 11:35 zstandard/_cffi.cpython-312-darwin.so
-rwxr-xr-x  1 gps  staff   919352 Oct 27 11:35 zstandard/backend_c.cpython-312-darwin.so

Not seeing a backend_rust library like I was expecting. So, yes, it does look like --global-option was ignored.

This behavior is actually pretty concerning to me. It certainly seems like at one time --global-option (and a --build-option which doesn't exist on the pip install command I guess) did get threaded through to setup.py. However, it no longer does.

I find an entry in the pip 23.1 changelog: Deprecate --build-option and --global-option. Users are invited to switch to --config-settings. (#11859). Deprecate. What is pip's definition of deprecate? I click the link to #11859. An open issue with a lot of comments. I scan the issue history to find referenced PRs and click on #11861. OK, it is just an advertisement. Maybe --global-option never got threaded through to setup.py? But its help usage text clearly says it is related to setup.py! Maybe the presence of [build-system] in pyproject.toml is somehow engaging different semantics that result in --global-option not being passed to setup.py? The warning message did say Ignoring --global-option when building zstandard using PEP 517.

I try commenting out the [build-system] section in my pyproject.toml and trying again. Same result. Huh? Reading the pip install --help output, I see --no-use-pep517 and try it:

$ venv/bin/python -m pip install --global-option=--rust-backend --no-use-pep517 -e .
...
$ ls -al zstandard/*cpython-312*.so
-rwxr-xr-x  1 gps  staff  1002680 Oct 27 11:35 zstandard/_cffi.cpython-312-darwin.so
-rwxr-xr-x  1 gps  staff   919352 Oct 27 11:35 zstandard/backend_c.cpython-312-darwin.so
-rwxr-xr-x  1 gps  staff  2727920 Oct 27 11:53 zstandard/backend_rust.cpython-312-darwin.so

Ahh, so pip's default PEP-517 build mode is causing --global-option to get ignored. So I guess older versions of pip honored --global-option and when pip switched to PEP-517 build mode by default --global-option just stopped working and emitted a warning instead. That's quite the backwards incompatible behavior break! I really wish tools would fail fast when making these kinds of breaks or at least offer a --warnings-as-errors mode so I can opt into fatal errors when these kinds of breaks / deprecations are introduced. I would 100% opt into this since these warnings are often the figurative needle in a haystack of CI logs and easy to miss. Especially if the build environment is non-deterministic and new versions of tools like pip get installed randomly without a version control commit.

Pip's allowing me to specify --global-option but then only issuing a warning when it is ignored doesn't sit well with me. But what can I do?

It is obvious --global-option is a non-starter here.

Attempts at Using --config-setting

Fortunately, pip's deprecation message suggests a path forward:

A possible replacement is to use --config-settings. Discussion can be found
at https://github.com/pypa/pip/issues/11859

First, kudos for actionable warning messages. However, the wording says possible replacement. Are there other alternatives I didn't see in the pip install --help output?

Anyway, I decide to go with that --config-settings suggestion.

$ venv/bin/python -m pip install --config-settings=--rust-backend -e .

Usage:
  /Users/gps/src/python-zstandard/venv/bin/python -m pip install [options] <requirement specifier> [package-index-options] ...
  /Users/gps/src/python-zstandard/venv/bin/python -m pip install [options] -r <requirements file> [package-index-options] ...
  /Users/gps/src/python-zstandard/venv/bin/python -m pip install [options] [-e] <vcs project url> ...
  /Users/gps/src/python-zstandard/venv/bin/python -m pip install [options] [-e] <local project path> ...
  /Users/gps/src/python-zstandard/venv/bin/python -m pip install [options] <archive url/path> ...

Arguments to --config-settings must be of the form KEY=VAL

Hmmm. Let's try adding a trailing =?

$ venv/bin/python -m pip install --config-settings=--rust-backend= -e .
Obtaining file:///Users/gps/src/python-zstandard
  Installing build dependencies ... done
  Checking if build backend supports build_editable ... done
  Getting requirements to build editable ... done
  Preparing editable metadata (pyproject.toml) ... done
Building wheels for collected packages: zstandard
  Building editable for zstandard (pyproject.toml) ... done
  Created wheel for zstandard: filename=zstandard-0.22.0.dev0-0.editable-cp312-cp312-macosx_14_0_x86_64.whl size=4379 sha256=619db9806bc4c39e973c3197a0ddb9b03b49fff53cd9ac3d7df301318d390b5e
  Stored in directory: /private/var/folders/dd/xb3jz0tj133_hgnvdttctwxc0000gn/T/pip-ephem-wheel-cache-gtsvw78d/wheels/eb/6b/3e/89aae0b17b638c9cdcd2015d98b85ee7fb3ef00325bb44a572
Successfully built zstandard
Installing collected packages: zstandard
  Attempting uninstall: zstandard
    Found existing installation: zstandard 0.22.0.dev0
    Uninstalling zstandard-0.22.0.dev0:
      Successfully uninstalled zstandard-0.22.0.dev0
Successfully installed zstandard-0.22.0.dev0

No warnings or deprecations. That's promising. Did it work?

$ ls -al zstandard/*cpython-312*.so
-rwxr-xr-x  1 gps  staff  1002680 Oct 27 12:11 zstandard/_cffi.cpython-312-darwin.so
-rwxr-xr-x  1 gps  staff   919352 Oct 27 12:11 zstandard/backend_c.cpython-312-darwin.so

No backend_rust extension module. Boo. So what actually happened?

$ venv/bin/python -m pip -v install --config-settings=--rust-backend= -e .

I don't see --rust-backend anywhere in that log output. I try with more verbosity:

$ venv/bin/python -m pip -vvvvv install --config-settings=--rust-backend= -e .

Still nothing!

Maybe that -- prefix is wrong?

$ venv/bin/python -m pip -vvvvv install --config-settings=rust-backend= -e .

Still nothing!

I have no clue how --config-settings= is getting passed to setup.py nor where it is seemingly getting dropped on the floor.

How Does setuptools Handle --config-settings?

This must be documented in the setuptools project. So I open those docs in my web browser and do a search for settings. I open the first three results in separate tabs:

  1. Running setuptools commands
  2. Configuration File Options
  3. develop - Deploy the project source in "Development Mode"

That first link has docs on the deprecated setuptools commands and how to invoke python setup.py directly. (Note: there is a warning box here saying that python setup.py is deprecated. I guess I somehow missed this document when looking at setuptools documentation earlier! In hindsight, it appears to be buried at the figurative bottom of the docs tree as the last item under a Backward compatibility & deprecated practice section. Talk about burying the lede!) These docs aren't useful.

The second link also takes me to deprecated documentation related to direct python setup.py command invocations.

The third link is also useless.

I continue opening search results in new tabs. Surely the answer is in here.

I find an Adding Arguments section telling me that Adding arguments to setup is discouraged as such arguments are only supported through imperative execution and not supported through declarative config.. I think that's an obtuse way of saying that sys.argv arguments are only supported via python setup.py invocations and not via setup.cfg or pyproject.toml? But the example only shows me how to use setup.cfg and doesn't have any mention of pyproject.toml. So is this documentation even relevant to pyproject.toml?

Eventually I stumble across Build System Support. In the Dynamic build dependencies and other build_meta tweaks section, I notice the following example code:

from setuptools import build_meta as _orig
from setuptools.build_meta import *

def get_requires_for_build_wheel(config_settings=None):
    return _orig.get_requires_for_build_wheel(config_settings) + [...]


def get_requires_for_build_sdist(config_settings=None):
    return _orig.get_requires_for_build_sdist(config_settings) + [...]

config_settings=None. OK, this might be the --config-settings values passed to the build frontend getting fed into the build backend. I Google get_requires_for_build_wheel. One of the top results is PEP-517, which I click on.

I see that the Build backend interface consists of a handful of functions that are invoked by the build frontend. These functions all seem to take a config_settings=None argument. Great, now I know the interface between build frontends and backends at the Python API level. Where was I in this yak shave?
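For my own reference, those hooks look roughly like this as Python signatures. (A sketch based on PEP-517, plus the PEP-660 build_editable variant that pip's editable install uses; the bodies here are placeholders, not real implementations.)

# A sketch of the PEP-517 build backend hooks a frontend like pip invokes.
# Every hook receives the --config-settings values as a plain dict (or None).

def get_requires_for_build_wheel(config_settings=None):
    return []

def prepare_metadata_for_build_wheel(metadata_directory, config_settings=None):
    raise NotImplementedError

def build_wheel(wheel_directory, config_settings=None, metadata_directory=None):
    raise NotImplementedError

def get_requires_for_build_sdist(config_settings=None):
    return []

def build_sdist(sdist_directory, config_settings=None):
    raise NotImplementedError

# PEP-660 adds an editable variant with the same shape (pip used it above for
# the -e install):
def build_editable(wheel_directory, config_settings=None, metadata_directory=None):
    raise NotImplementedError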

I remember from pyproject.toml that one of the lines is build-backend = "setuptools.build_meta:__legacy__". That setuptools.build_meta:__legacy__ bit looks like a Python symbol reference. Since the setuptools documentation didn't answer my question on how to thread --config-settings into setup.py invocations, I open the build_meta.py source code. (Aside: experience has taught me that when in doubt on how something works, consult the source code: code doesn't lie.)

I search for config_settings. I immediately see class _ConfigSettingsTranslator: whose purported job is Translate config_settings into distutils-style command arguments. Only a limited number of options is currently supported. Oh, this looks relevant. But there's a fair bit of code in here. Do I really need to grok it all? I keep scanning the source.

In a def _build_with_temp_dir() I spot the following code:

sys.argv = [
    *sys.argv[:1],
    *self._global_args(config_settings),
    *setup_command,
    "--dist-dir",
    tmp_dist_dir,
    *self._arbitrary_args(config_settings),
]

Ahh, cool. It looks to be calling self._global_args() and self._arbitrary_args() and adding the arguments those functions return to sys.argv before evaluating setup.py in the current interpreter.

I look at the definition of _arbitrary_args() and I'm onto something:

def _arbitrary_args(self, config_settings: _ConfigSettings) -> Iterator[str]:
  """
  Users may expect to pass arbitrary lists of arguments to a command
  via "--global-option" (example provided in PEP 517 of a "escape hatch").
  ...
  """
  args = self._get_config("--global-option", config_settings)
  global_opts = self._valid_global_options()
  bad_args = []

  for arg in args:
      if arg.strip("-") not in global_opts:
          bad_args.append(arg)
          yield arg

  yield from self._get_config("--build-option", config_settings)

  if bad_args:
      SetuptoolsDeprecationWarning.emit(
          "Incompatible `config_settings` passed to build backend.",
          f"""
          The arguments {bad_args!r} were given via `--global-option`.
          Please use `--build-option` instead,
          `--global-option` is reserved for flags like `--verbose` or `--quiet`.
          """,
          due_date=(2023, 9, 26),  # Warning introduced in v64.0.1, 11/Aug/2022.
      )

It appears to peek inside config_settings and handle --global-option and --build-option specially. And we can clearly see that --global-option is deprecated in favor of --build-option.

So is the --config-settings key name --build-option and its value the setup.py argument we want to insert?

I try that:

$ venv/bin/python -m pip install --config-settings=--build-option=--rust-backend -e .
...
$ ls -al zstandard/*cpython-312*.so
-rwxr-xr-x  1 gps  staff  1002680 Oct 27 12:54 zstandard/_cffi.cpython-312-darwin.so
-rwxr-xr-x  1 gps  staff   919352 Oct 27 12:53 zstandard/backend_c.cpython-312-darwin.so
-rwxr-xr-x  1 gps  staff  2727920 Oct 27 12:54 zstandard/backend_rust.cpython-312-darwin.so

It worked!

Disbelief Over --config-settings=--build-option=

But, um, --config-settings=--build-option=--rust-backend. We've triple encoded command arguments here. This feels exceptionally weird. Is that really the supported/preferred interface? Surely there's something simpler.
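Writing out the three layers helps me see what's going on. (A sketch of my understanding based on the build_meta.py code above; the dict is roughly what pip hands to the backend.)

# Layer 1: the argument I type on the pip command line:
#
#   --config-settings=--build-option=--rust-backend
#
# Layer 2: pip decodes that into the config_settings mapping passed to the
# PEP-517 hooks. With the flag given once, it is roughly:
config_settings = {"--build-option": "--rust-backend"}

# Layer 3: setuptools' _build_with_temp_dir() (quoted earlier) splices the
# --build-option value onto sys.argv before running setup.py in-process, so
# setup.py finally sees --rust-backend as an ordinary process argument.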

def _arbitrary_args()'s docstring mentioned escape hatch in the context of PEP-517. I open PEP-517 and search for that term, finding Config settings. Sure enough, it describes the very mechanism whose source code I just read. And its pip example uses pip install's --global-option and --build-option arguments. So this all seems to check out. (Although these pip arguments are deprecated in favor of -C/--config-settings.)

Thinking I missed some obvious documentation, I search the setuptools documentation for --build-option. The only hits are in the v64.0.0 changelog entry. So you are telling me this feature of passing arbitrary config settings into setup.py via PEP-517 build frontends is only documented in the changelog?!

Ok, I know my setup.py is abusing sys.argv. I'm off the paved road for passing settings into setup.py. What is the preferred pyproject.toml era mechanism for passing settings into setup.py? These settings can't be file based because they are dynamic. There must be a config_settings mechanism to thread dynamic settings into setup.py that doesn't rely on these magical --build-option and --global-option settings keys.
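(For context, the sys.argv abuse I'm referring to looks something like the following. This is a hypothetical, simplified setup.py - the --rust-backend flag name matches my project, but the code itself is illustrative.)

# Hypothetical, simplified setup.py showing the sys.argv pattern under
# discussion: peel custom flags off argv before setuptools parses the rest.
import sys

from setuptools import setup

# Consume our custom flag so distutils/setuptools doesn't choke on it.
rust_backend = "--rust-backend" in sys.argv
if rust_backend:
    sys.argv.remove("--rust-backend")

ext_modules = []
if rust_backend:
    # The real project registers the Rust extension module here; omitted
    # because it isn't relevant to the argv plumbing.
    pass

setup(ext_modules=ext_modules)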

I stare and stare at the build_meta.py source code looking for an answer. But all I see is the def _build_with_temp_dir() calling into self._global_args() and self._arbitrary_args() to append arguments to sys.argv. Huh? Surely this isn't the only solution. Surely there's a simpler way. The setuptools documentation said Adding arguments to setup is discouraged, seemingly implying a better way of doing it. And yet the only code I'm seeing in build_meta.py for passing custom config_settings values in is literally via additional setup.py process arguments. This can't be right.

I start unwinding my mental stack and browser tabs trying to come across something I missed.

I again look at Dynamic build dependencies and other build_meta tweaks and see its code is defining a custom [build-system] backend that does a from setuptools.build_meta import * and defines some custom build backend interface APIs (which receive config_settings) and then proxy into the original implementations. While the example is related to build metadata, I'm thinking do I need to implement my own setuptools wrapping build backend that implements a custom def build_wheel() to intercept config_settings? Surely this is avoidable complexity.
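For the record, such a wrapper backend would look something like this. (A sketch of the in-tree backend pattern from the setuptools docs; the _custom_backend module name, the rust-backend settings key handling, and the environment variable are all my own invention for illustration.)

# _custom_backend.py - a hypothetical in-tree build backend wrapping
# setuptools so we can intercept config_settings ourselves.
#
# pyproject.toml would point at it with something like:
#
#   [build-system]
#   requires = ["setuptools"]
#   build-backend = "_custom_backend"
#   backend-path = ["."]

import os

from setuptools import build_meta as _orig
from setuptools.build_meta import *  # re-export the standard backend hooks


def build_wheel(wheel_directory, config_settings=None, metadata_directory=None):
    # Translate our custom setting into something setup.py can observe
    # (an environment variable here, instead of a magic sys.argv flag).
    if config_settings and config_settings.get("rust-backend"):
        os.environ["ZSTD_RUST_BACKEND"] = "1"
    return _orig.build_wheel(wheel_directory, config_settings, metadata_directory)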

Pip's Eager Deprecations

I keep unwinding context and again notice pip's warning message telling me A possible replacement is to use --config-settings. Discussion can be found at https://github.com/pypa/pip/issues/11859.

I open pip issue #11859. Oh, that's the same issue tracking the --global-option deprecation I encountered earlier. I again scan the issue timeline. It is mostly references from other GitHub projects. Telltale sign that this deprecation is creating waves.

The issue is surprisingly light on comments for how many references it has.

The comment with the most emoji reactions says:

Is there an example showing how to use --config-settings with setup.py
and/or newer alternatives? The setuptools documentation is awful and the
top search results are years/decades out-of-date and wildly contradictory.

I don't know who you are, @alexchandel, but we're on the same wavelength.

Then the next comment says:

Something like this seems to work to pass global options to setuptools.

pip -vv install   --config-setting="--global-option=--verbose"  .

Passing --build-option in the same way does not work, as setuptools
attempts to pass these to the egg_info command where they are not supported.

So there it seemingly is, confirmation that my independently derived solution of --config-settings=--build-option=-... is in fact the way to go. But this commenter says to use --global-option, which appears to be deprecated in modern setuptools. Oof.

The next comment links to pypa/setuptools#3896 where apparently there's been an ongoing conversation since April about how setuptools should design and document a stable mechanism to pass config_settings to PEP517 backend.

If I'm interpreting this correctly, it looks like distutils/setuptools - the primary way to define Python packages for the better part of twenty years - doesn't have a stable mechanism for passing configuration settings from modern pyproject.toml [build-system] frontends. Meanwhile pip is deprecating long-working mechanisms to pass options to setup.py and forcing people to use a mechanism that setuptools doesn't explicitly document much less say is stable. This is all taking place six years after PEP-517 was accepted.

I'm kind of at a loss for words here. I understand pip's desire to delete some legacy code and standardize on the new way of doing things. But it really looks like they are breaking backwards compatibility for setup.py a bit too eagerly. That's a questionable decision in my mind, so I write a detailed comment on the pip issue explaining how the interface works and asking the pip folks to hold off on deprecation until setuptools has a stable, documented solution. Time will tell what happens.

In Summary

What an adventure that Python packaging yak shave was! I feel like I just learned a whole lot of things that I shouldn't have needed to learn in order to keep my Python package building without deprecation warnings. Yes, I scope bloated myself to understanding how things worked because that's my ethos. But even without that extra work, there's a lot here that I feel I shouldn't have needed to do, like figure out the undocumented --config-settings=--build-option= interface.

Despite having ported my python setup.py invocation to modern, PEP-517 build frontends (build and pip) and gotten rid of various deprecation messages and warnings, I'm still not sure of the implications of that transition. I really want to understand the trade-offs of adopting pyproject.toml and using the modern build frontends for doing things. But I couldn't find any documentation on this anywhere! I don't know basic things like whether my adoption of pyproject.toml will break end-users stuck on older Python versions or what. I still haven't ported my project metadata from setup.py to pyproject.toml because I don't understand the implications. I feel like I'm flying blind and am bound to make mistakes with undesirable impacts to end-users of my package.

But at least I was able to remove deprecation warnings from my packaging CI with just several hours of work.

I recognize this post is light on constructive feedback and suggestions for how to improve matters.

One reason is that I think a lot of the improvements are self-explanatory - clearer warning messages, better documentation, not deprecating things prematurely, etc. I prefer to just submit PRs instead of long blog posts. But I just don't know what is appropriate in some cases: one of the themes of this post is I just don't grok the state of Python packaging right now.

This post did initially contain a few thousand words expanding on what all I thought was broken and how it should be fixed. But I stripped the content because I didn't want my (likely controversial) opinions to distract from the self-assessed user experience study documented in this post. This content is probably better posted to a PyPA mailing list anyway, otherwise I'm just another guy complaining on the Internet.

I've posted a link to this post to the packaging category on discuss.python.org so the PyPA (and other subscribed parties) are aware of all the issues I stumbled over. Hopefully people with more knowledge of the state of Python packaging see this post, empathize with my struggles, and enact meaningful improvements so others can port off setup.py with a fraction of the effort it took me.


Achieving A Completely Open Source Implementation of Apple Code Signing and Notarization

August 08, 2022 at 08:08 AM | categories: Apple, Rust

As I've previously blogged in Pure Rust Implementation of Apple Code Signing (2021-04-14) and Expanding Apple Ecosystem Access with Open Source, Multi Platform Code signing (2022-04-25), I've been hacking on an open source implementation of Apple code signing and notarization using the Rust programming language. This takes the form of the apple-codesign crate / library and its rcodesign CLI executable. (Documentation / GitHub project / crates.io).

As of that most recent post in April, I was pretty happy with the relative stability of the implementation: we were able to sign, notarize, and staple Mach-O binaries, directory bundles (.app, .framework bundles, etc), XAR archives / flat packages / .pkg installers, and DMG disk images. Except for the known limitations, if Apple's official codesign and notarytool tools support it, so do we. This allows people to sign, notarize, and release Apple software from non-Apple operating systems like Linux and Windows. This opens up new avenues for Apple platform access.

A major limitation in previous versions of the apple-codesign crate was our reliance on Apple's Transporter tool for notarization. Transporter is a Java application made available for macOS, Linux, and Windows that speaks to Apple's servers and can upload assets to their notarization service. I used this tool at the time because it seemed to be officially supported by Apple and the path of least resistance to standing up notarization. But Transporter was a bit wonky to use and an extra dependency that you needed to install.

At WWDC 2022, Apple announced a new Notary API as part of the App Store Connect API. In what felt like a wink directly at me, Apple themselves even call out the possibility of leveraging this API to notarize from Linux! I knew as soon as I saw this that it was only a matter of time before I would be able to replace Transporter with a pure Rust client for the new HTTP API. (I was already thinking about using the unpublished HTTP API that notarytool uses. And from the limited reversing notes I have from before WWDC, it looks like the new official Notary API is very similar to - possibly identical to - what notarytool uses. So kudos to Apple for opening up this access!)

I'm very excited to announce that we now have a pure Rust implementation of a client for Apple's Notary API in the apple-codesign crate. This means we can now notarize Apple software from any machine where you can get the Rust crate to compile, and we no longer have a dependency on the 3rd party Apple Transporter application. Notarization, like code signing, is 100% open source Rust code.

As excited as I am to announce this new feature, I'm even more excited that it was largely implemented by a contributor, Robin Lambertz / @roblabla! They filed a GitHub feature request while WWDC 2022 was still ongoing and then submitted a PR a few days later. It took me a few months to get around to reviewing it (I try to avoid computer screens during summers), but it was a fantastic PR given the scope of the change. It never ceases to bring joy to me when someone randomly contributes greatness to open source.

So, as of the just-released 0.17 release of the apple-codesign Rust crate and its corresponding rcodesign CLI tool, you can now rcodesign notary-submit to speak to Apple's Notary API using a pure Rust client. No more requirements on 3rd party, proprietary software. All you need to sign and notarize Apple applications is the self-contained rcodesign executable and a Linux, Windows, macOS, BSD, etc machine to run it on.

I'm stoked to finally achieve this milestone! There are probably thousands of companies and individuals who have wanted to release Apple software from non-macOS operating systems. (The existence and popularity of tools like fastlane seems to confirm this.) The historical lack of an Apple code signing and notarization solution that worked outside macOS has prevented this. Well, that barrier has officially fallen.

Release notes, documentation, and (self-signed) pre-built executables of the rcodesign executable for major platforms are available on the 0.17 release page.


Announcing the PyOxy Python Runner

May 10, 2022 at 08:00 AM | categories: Python, PyOxidizer

I'm pleased to announce the initial release of PyOxy. Binaries are available on GitHub.

(Yes, I used my pure Rust Apple code signing implementation to remotely sign the macOS binaries from GitHub Actions using a YubiKey plugged into my Windows desktop: that experience still feels magical to me.)

PyOxy is all of the following:

  • An executable program used for running Python interpreters.
  • A single file and highly portable (C)Python distribution.
  • An alternative python driver providing more control over the interpreter than what python itself provides.
  • A way to make some of PyOxidizer's technology more broadly available without using PyOxidizer.

Read the following sections for more details.

pyoxy Acts Like python

The pyoxy executable has a run-python sub-command that will essentially do what python would do:

$ pyoxy run-python
Python 3.9.12 (main, May  3 2022, 03:29:54)
[Clang 14.0.3 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>

A Python REPL. That's familiar!

You can even pass python arguments to it:

$ pyoxy run-python -- -c 'print("hello, world")'
hello, world

When a pyoxy executable is renamed to any filename beginning with python, it implicitly behaves like pyoxy run-python --.

$ mv pyoxy python3.9
$ ls -al python3.9
-rwxrwxr-x  1 gps gps 120868856 May 10  2022 python3.9

$ ./python3.9
Python 3.9.12 (main, May  3 2022, 03:29:54)
[Clang 14.0.3 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>

Single File Python Distributions

The official pyoxy executables are built with PyOxidizer and leverage the Python distributions provided by my python-build-standalone project. On Linux and macOS, a fully featured Python interpreter and its library dependencies are statically linked into pyoxy. The pyoxy executable also embeds a copy of the Python standard library and imports it from memory using the oxidized_importer Python extension module.

What this all means is that the official pyoxy executables can function as single file CPython distributions! Just download a pyoxy executable, rename it to python, python3, python3.9, etc and it should behave just like a normal python would!

Your Python installation has never been so simple. And fast: pyoxy should be a few milliseconds faster to initialize a Python interpreter, mostly because oxidized_importer avoids filesystem overhead when looking for and loading .py[c] files.
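If you're curious whether the in-memory importer is doing the work, you can poke at it from a pyoxy REPL. (A sketch; I'm assuming an oxidized_importer OxidizedFinder is installed on sys.meta_path, as is the case for PyOxidizer-built binaries.)

# Run inside a pyoxy REPL. If an oxidized_importer OxidizedFinder appears on
# sys.meta_path ahead of the standard PathFinder, stdlib imports are being
# served from the embedded in-memory resources rather than the filesystem.
import sys

for finder in sys.meta_path:
    print(finder)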

Low-Level Control Over the Python Interpreter with YAML

The pyoxy run-yaml command takes the path to a YAML file defining the embedded Python interpreter configuration and then launches that Python interpreter in-process:

$ cat > hello_world.yaml <<EOF
---
allocator_debug: true
interpreter_config:
  run_command: 'print("hello, world")'
...
EOF

$ pyoxy run-yaml hello_world.yaml
hello, world

Under the hood, PyOxy uses the pyembed Rust crate to manage embedded Python interpreters. The YAML document that PyOxy uses is simply deserialized into a pyembed::OxidizedPythonInterpreterConfig Rust struct, which pyembed uses to spawn a Python interpreter. This Rust struct offers near complete control over how the embedded Python interpreter behaves: it even allows you to tweak settings that are impossible to change from environment variables or python command arguments! (Beware: this power means you can easily cause the interpreter to crash if you feed it a bad configuration!)

YAML Based Python Applications

pyoxy run-yaml ignores all file content before the YAML --- start document delimiter. This means that on UNIX-like platforms you can create executable YAML files defining your Python application. e.g.

$ mkdir -p myapp
$ cat > myapp/__main__.py << EOF
print("hello from myapp")
EOF

$ cat > say_hello <<"EOF"
#!/bin/sh
"exec" "`dirname $0`/pyoxy" run-yaml "$0" -- "$@"
---
interpreter_config:
  run_module: 'myapp'
  module_search_paths: ["$ORIGIN"]
...
EOF

$ chmod +x say_hello

$ ./say_hello
hello from myapp

This means that to distribute a Python application, you can drop a copy of pyoxy in a directory then define an executable YAML file masquerading as a shell script and you can run Python code with as little as two files!

The Future of PyOxy

PyOxy is very young. I hacked it together on a weekend in September 2021. I wanted to shore up some functionality before releasing it then. But I got perpetually sidetracked and never did the work. I figured it would be better to make a smaller splash with a lesser-baked product now than wait even longer. Anyway...

As part of building PyOxidizer I've built some peripheral technology:

  • Standalone and highly distributable Python builds via the python-build-standalone project.
  • The pyembed Rust crate for managing an embedded Python interpreter.
  • The oxidized_importer Python package/extension for importing modules from memory, among other things.
  • The Python packed resources data format for representing a collection of Python modules and resource files for efficient loading (by oxidized_importer).

I conceived PyOxy as a vehicle to enable people to leverage PyOxidizer's technology without imposing PyOxidizer onto them. I feel that PyOxidizer's broader technology is generally useful and too valuable to be gated behind using PyOxidizer.

PyOxy is only officially released for Linux and macOS for the moment. It definitely builds on Windows. However, I want to improve the single file executable experience before officially releasing PyOxy on Windows. This requires an extensive overhaul to oxidized_importer and the way it serializes Python resources to be loaded from memory.

I'd like to add a sub-command to produce a Python packed resources payload. With this, you could bundle/distribute a Python application as pyoxy plus a file containing your application's packed resources alongside YAML configuring the Python interpreter. Think of this as a more modern and faster version of the venerable zipapp approach. This would enable PyOxy to satisfy packaging scenarios provided by tools like Shiv, PEX, and XAR. However, unlike Shiv and PEX, pyoxy also provides an embedded Python interpreter, so applications are much more portable since there isn't reliance on the host machine having a Python interpreter installed.

I'm really keen to see how others want to use pyoxy.

The YAML based control over the Python interpreter could be super useful for testing, benchmarking, and general Python interpreter configuration experimentation. It essentially opens the door to things previously only possible if you wrote code interfacing with Python's C APIs.

I can also envision tools that hide the existence of Python wanting to leverage the single file Python distribution property of pyoxy. For example, tools like Ansible could copy pyoxy to a remote machine to provide a well-defined Python execution environment without having to rely on what packages are installed. Or pyoxy could be copied into a container or other sandboxed/minimal environment to provide a Python interpreter.

And that's PyOxy. I hope you find it useful. Please file any bug reports or feature requests in PyOxidizer's issue tracker.


Expanding Apple Ecosystem Access with Open Source, Multi Platform Code Signing

April 25, 2022 at 08:00 AM | categories: Apple, Rust

A little over one year ago, I announced a project to implement Apple code signing in pure Rust. There have been quite a number of developments since that post and I thought a blog post was in order. So here we are!

But first, some background on why we're here.

Background

(Skip this section if you just want to get to the technical bits.)

Apple runs some of the largest and most profitable software application ecosystems in existence. Gaining access to these ecosystems has traditionally required the use of macOS and membership in the Apple Developer Program.

For the most part this makes sense: if you want to develop applications for Apple operating systems you will likely utilize Apple's operating systems and Apple's official tooling for development and distribution. Sticking to the paved road is a good default!

But many people want more... flexibility. Open source developers, for example, often want to distribute cross-platform applications with minimal effort. There are entire programming language ecosystems where the operating system you are running on is abstracted away as an implementation detail for many applications. By creating a de facto requirement that macOS, iOS, etc development requires direct access to macOS and (often above market priced) Apple hardware, the distribution requirements imposed by Apple's software ecosystems are effectively exclusionary and prevent interested parties from contributing to the ecosystem.

One of the aspects of software distribution on Apple platforms that trips a lot of people up is code signing and notarization. Essentially, you need to:

  1. Embed a cryptographic signature in applications that effectively attests to its authenticity from an Apple Developer Program associated account. (This is signing.)
  2. Upload your application to Apple so they can inspect it, verify it meets requirements, likely store a copy of it. Apple then issues their own cryptographic signature called a notarization ticket which then needs to be stapled/attached to the application being distributed so Apple operating systems can trust it. (This is notarization.)

Historically, these steps required Apple proprietary software that runs exclusively on macOS. This means that even if you are in a software ecosystem like Rust, Go, or the web platform where you can cross-compile apps without direct access to macOS (testing is obviously a different story), you would still need macOS somewhere if you wanted to sign and notarize your application. And signing and notarization is effectively required on macOS due to default security settings. On mobile platforms like iOS, it is impossible to distribute applications that aren't signed and notarized unless you are running a jailbroken device.

A lot of people (myself included) have grumbled at these requirements. Why should I be forced to involve an Apple machine as part of my software release process if I don't need macOS to build my application? Why do I have to go through a convoluted dance to sign and notarize my application at release time - can't it be more streamlined?

When I looked at this space last year, I saw some obvious inefficiencies and room to improve. So as I said then, I foolishly set out to reimplement Apple code signing so developers would have more flexibility and opportunity for distributing applications to Apple's ecosystems.

The ultimate goal of this work is to expand Apple ecosystem access to more developers. A year later, I believe I'm delivering a product capable of doing this.

One Year Later

Foremost, I'm excited to announce the release of rcodesign 0.14.0. This is the first time I'm publishing pre-built binaries (Linux, Windows, and macOS) of rcodesign. This reflects my confidence in the relative maturity of the software.

In case you are wondering, yes, the macOS rcodesign executable is self-signed: it was signed by a GitHub Actions Linux runner using a code signing certificate exclusive to a YubiKey. That YubiKey was plugged into a Windows 11 desktop next to my desk. The rcodesign executable was not copied between machines as part of the signing operation. Read on to learn about the sorcery that made this possible.

A lot has changed in the apple-codesign project / Rust crate in the last year! Just look at the changelog!

The project was renamed from tugger-apple-codesign.

(If you installed via cargo install, you'll need to cargo install --force apple-codesign to force Cargo to overwrite the rcodesign executable with one from a different crate.)

The rcodesign CLI executable is still there and more powerful than ever. You can still sign Apple applications from Linux, Windows, macOS, and any other platform you can get the Rust program to compile on.

There is now Sphinx documentation for the project. This is published on readthedocs.io alongside PyOxidizer's documentation (because I'm using a monorepo). There's some general documentation in there, such as a guide on how to selectively bypass Gatekeeper by deploying your own alternative code signing PKI to parallel Apple's. (This seems like something many companies would want but for whatever reason I'm not aware of anyone doing this - possibly because very few people understand how these systems work.)

There are bug fixes galore. When I look back at the state of rcodesign when I first blogged about it, I think of how naive I was. There were a myriad of applications that wouldn't pass notarization because of a long tail of bugs. There are still known issues. But I believe many applications will successfully sign and notarize now. I consider failures novel and worthy of bug reports - so please report them!

Read on to learn about some of the notable improvements in the past year (many of them occurring in the last two months).

Support for Signing Bundles, DMGs, and .pkg Installers

When I announced this project last year, only Mach-O binaries and trivially simple .app bundles were signable. And even then there were a ton of subtle issues.

rcodesign sign can now sign more complex bundles, including many nested bundles. There are reports of iOS app bundles signing correctly! (However, we don't yet have good end-user documentation for signing iOS apps. I will gladly accept PRs to improve the documentation!)

The tool also gained support for signing .dmg disk image files and .pkg flat package installers.

Known limitations with signing are now documented in the Sphinx docs.

I believe rcodesign now supports signing all the major file formats used for Apple software distribution. If you find something that doesn't sign and it isn't documented as a known issue with an existing GitHub issue tracking it, please report it!

Support for Notarization on Linux, Windows, and macOS

Apple publishes a Java tool named Transporter that enables you to upload artifacts to Apple for notarization. They make this tool available for Linux, Windows, and of course macOS.

While this tool isn't open source (as far as I know), usage of this tool enables you to notarize from Linux and Windows while still using Apple's official tooling for communicating with their servers.

rcodesign now has support for invoking Transporter and uploading artifacts to Apple for notarization. We now support notarizing bundles, .dmg disk images, and .pkg flat installer packages. I've successfully notarized all of these application types from Linux.

(I'm capable of implementing an alternative uploader in pure Rust but without assurances that Apple won't bring down the ban hammer for violating terms of use, this is a bridge I'm not yet willing to cross. The requirement to use Transporter is literally the only thing standing in the way of making rcodesign an all-in-one single file executable tool for signing and notarizing Apple software and I really wish I could deliver this user experience win without reprisal.)

With support for both signing and notarizing all application types, it is now possible to release Apple software without macOS involved in your release process.

YubiKey Integration

I try to use my YubiKeys as much as possible because a secret or private key stored on a YubiKey is likely more secure than a secret or private key sitting around on a filesystem somewhere. If you hack my machine, you can likely gain access to my private keys. But you will need physical access to my YubiKey and to compel or coerce me into unlocking it in order to gain access to its private keys.

rcodesign now has support for using YubiKeys for signing operations.

This does require enabling the off-by-default smartcard Cargo feature. So if building manually you'll need to e.g. cargo install --features smartcard apple-codesign.

The YubiKey integration comes courtesy of the amazing yubikey Rust crate. This crate will speak directly to the smartcard APIs built into macOS and Windows. So if you have an rcodesign build with YubiKey support enabled, YubiKeys should just work. Try it by plugging in your YubiKey and running rcodesign smartcard-scan.

YubiKey integration has its own documentation.

I even implemented some commands to make it easy to manage the code signing certificates on your YubiKey. For example, you can run rcodesign smartcard-generate-key --smartcard-slot 9c to generate a new private key directly on the device and then rcodesign generate-certificate-signing-request --smartcard-slot 9c --csr-pem-path csr.pem to create a Certificate Signing Request (CSR) for that key, which you can exchange for an Apple-issued signing certificate at developer.apple.com. This means you can easily create code signing certificates whose private key was generated directly on the hardware device and can never be exported. Generating keys this way is widely considered to be more secure than storing keys in software vaults, like Apple's Keychains.

Remote Code Signing

The feature I'm most excited about is what I'm calling remote code signing.

Remote code signing allows you to delegate the low-level cryptographic signature operations in code signing to a separate machine.

It's probably easiest to just demonstrate what it can do.

Earlier today I signed a macOS universal Mach-O executable from a GitHub-hosted Linux GitHub Actions runner using a YubiKey physically attached to the Windows 11 machine next to my desk at home. The signed application was not copied between machines.

Here's how I did it.

I have a GitHub Actions workflow that calls rcodesign sign --remote-signer. I manually triggered that workflow and started watching the near real time job output with my browser. Here's a screenshot of the job logs:

GitHub Actions initiating remote code signing

rcodesign sign --remote-signer prints out some instructions (including a wall of base64 encoded data) for what to do next. Importantly, it requests that someone else run rcodesign remote-sign to continue the signing process.

And here's a screenshot of me doing that from the Windows terminal:

Windows terminal output from running remote-sign command

This log shows us connecting and authenticating with the YubiKey along with some status updates regarding speaking to a remote server.

Finally, here's a screenshot of the GitHub Actions job output after I ran that command on my Windows machine:

GitHub Actions initiating machine output

Remote signing enabled me to sign a macOS application from a GitHub Actions runner operated by GitHub while using a code signing certificate securely stored on my YubiKey plugged into a Windows machine hundreds of kilometers away from the GitHub Actions runner. Magic, right?

What's happening here is the 2 rcodesign processes are communicating with each other via websockets bridged by a central relay server. (I operate a default server free of charge. The server is open source and a Terraform module is available if you want to run your own server with hopefully just a few minutes of effort.) When the initiating machine wants to create a signature, it sends a message back to the signer requesting a cryptographic signature. The signature is then sent back to the initiator, who incorporates it.

I designed this feature with automated releases from CI systems (like GitHub Actions) in mind. I wanted a way where I could streamline the code signing and release process of applications without having to give a low trust machine in CI ~unlimited access to my private signing key. But the more I thought about it the more I realized there are likely many other scenarios where this could be useful. Have you ever emailed or Dropboxed an application for someone else to sign because you don't have an Apple issued code signing certificate? Now you have an alternative solution that doesn't require copying files around! As long as you can see the log output from the initiating machine or have that output communicated to you (say over a chat application or email), you can remotely sign files on another machine!

An Aside on the Security of Remote Signing

At this point, I'm confident the more security conscious among you have been grimacing for a few paragraphs now. Websockets through a central server operated by a 3rd party?! Giving remote machines access to perform code signing against arbitrary content?! Your fears and skepticism are 100% justified: I'd be thinking the same thing!

I fully recognize that a service that facilitates remote code signing makes for a very lucrative attack target! If abused, it could be used to coerce parties with valid code signing certificates to sign unwanted code, like malware. There are many, many, many wrong ways to implement such a feature. I pondered for hours about the threat modeling and how to make this feature as secure as possible.

Remote Code Signing Design and Security Considerations captures some of my high level design goals and security assessments. And Remote Code Signing Protocol goes into detail about the communications protocol, including the crypto (actual cryptography, not the fad) involved. The key takeaways are the protocol and server are designed such that a malicious server or man-in-the-middle can not forge signature requests. Signing sessions expire after a few minutes and 3rd parties (or the server) can't inject malicious messages that would result in unwanted signatures. There is an initial handshake to derive a session ephemeral shared encryption key and from there symmetric encryption keys are used so all meaningful messages between peers are end-to-end encrypted. About the worst a malicious server could do is conduct a denial of service. This is by design.

As I argue in Security Analysis in the Bigger Picture, I believe that my implementation of remote signing is more secure than many common practices because common practices today entail making copies of private keys and giving low trust machines (like CI workers) access to private keys. Or files are copied around without cryptographic chain-of-custody to prove against tampering. Yes, remote signing introduces a vector for remote access to use signing keys. But practiced as I intended, remote signing can eliminate the need to copy private keys or grant ~unlimited access to them. From a threat modeling perspective, I think the net restriction in key access makes remote signing more secure than the private key management practices by many today.

All that being said, the giant asterisk here is I implemented my own cryptosystem to achieve end-to-end message security. If there are bugs in the design or implementation, that cryptosystem could come crashing down, bringing defenses against message forgery with it. At that point, a malicious server or privileged network actor could potentially coerce someone into signing unwanted software. But this is likely the extent of the damage: an offline attack against the signing key should not be possible since signing requires presence and since the private key is never transmitted over the wire. Even without the end-to-end encryption, the system is arguably more secure than leaving your private key lingering around as an easily exfiltrated CI secret (or similar).

(I apologize to every cryptographer I worked with at Mozilla who beat into me the commandment that thou shall not roll their own crypto: I have sinned and I feel remorseful.)

Cryptography is hard. And I'm sure I made plenty of subtle mistakes. Issue #552 tracks getting an audit of this protocol and code performed. And the aforementioned protocol design docs call out some of the places where I question decisions I've made.

If you would be interested in doing a security review on this feature, please get in touch on issue #552 or send me an email. If there's one immediate outcome I'd like from this blog post it would be for some white hat^Hknight to show up and give me peace of mind about the cryptosystem implementation.

Until then, please assume the end-to-end encryption is completely flawed. Consider asking someone with security or cryptographer in their job title for their opinion on whether this feature is safe for you to use. Hopefully we'll get a security review done soon and this caveat can go away!

If you do want to use this feature, Remote Code Signing contains some usage documentation, including how to use it with GitHub Actions. (I could also use some help productionizing a reusable GitHub Action to make this more turnkey! Although I'm hesitant to do it before I know the cryptosystem is sound.)

That was a long introduction to remote code signing. But I couldn't in good faith present the feature without addressing the security aspect. Hopefully I didn't scare you away! Traditional / local signing should have no security concerns (beyond the willingness to run software written by somebody you probably don't know, of course).

Apple Keychain Support

As of today's 0.14 release we now have early support for signing with code signing certificates stored in Apple Keychains! If you created your Apple code signing certificates in Keychain Access or Xcode, this is probably where your code signing certificates live.

I held off implementing this for the longest time because I didn't perceive there to be a benefit: if you are on macOS, just use Apple's official tooling. But with rcodesign gaining support for remote code signing and some other features that could make it a compelling replacement for Apple tooling on all platforms, I figured we should provide the feature so people no longer have to export private keys from their Keychains just to use rcodesign.

This integration is very young and there's still a lot that can be done, such as automatically using an appropriate signing certificate based on what you are signing. Please file feature request issues if there's a must-have feature you are missing!

Better Debugging of Failures

Apple's code signing is complex. It is easy for there to be subtle differences between Apple's tooling and rcodesign.

rcodesign now has print-signature-info and diff-signatures commands to dump and compare YAML metadata pertinent to code signing to make it easier to compare behavior between code signing implementations and even multiple signing operations.

The documentation around debugging and reporting bugs now emphasizes using these tools to help identify bugs.

A Request For Users and Feedback

I now believe rcodesign to be generally usable. I've thrown a lot of random software at it and I feel like most of the big bugs and major missing features are behind us.

But I also feel it hasn't yet received wide enough attention to have confidence in that assessment.

If you want to help the development of this tool, the most important actions you can take are to attempt signing / notarization operations with it and report your results.

Does rcodesign spark joy? Please leave a comment in the GitHub discussion for the latest release!

Does rcodesign not work? I would very much appreciate a bug report! Details on how to file good bugs are in the docs.

Have general feedback? UI is confusing? Documentation is insufficient? Leave a comment in the aforementioned discussion. Or create a GitHub issue if you think it is actionable. I can't fix what I don't know about!

Have private feedback? Send me an email.

Conclusion

I could write thousands of words about all I learned from hacking on this project.

I've learned way too much about too many standards and specifications in the crypto space. RFCs 2986, 3161, 3280, 3281, 3447, 4210, 4519, 5280, 5480, 5652, 5869, 5915, 5958, and 8017 plus probably a few more. How cryptographic primitives are stored and expressed: ASN.1, OIDs, BER, DER, PEM, SPKI, PKCS#1, PKCS#8. You can show me the raw parse tree for an ASN.1 data structure and I can probably tell you what RFC defines it. I'm not proud of this. But I will say actually knowing what every field in an X.509 certificate does or the many formats that cryptographic keys are expressed in seems empowering. Before, I would just search for the openssl incantation to do something. Now, I know which ASN.1 data structures are involved and how to manipulate the fields within.

I've learned way too much minutiae about how Apple code signing actually works. The mechanism is way too complex for something in the security space. There was at least one high profile Gatekeeper bug in the past year allowing improperly signed code to run. I suspect there will be more: the surface area to exploit is just too large.

I think I'm proud of building an open source implementation of Apple's code signing. To my knowledge nobody else has done this outside of Apple. At least not to the degree I have. Then factor in that I was able to do this without access (or willingness) to look at Apple source code and much of the progress was achieved by diffing and comparing results with Apple's tooling. Hours of staring at diffoscope and comparing binary data structures. Hours of trying to find the magical settings that enabled a SHA-1 or SHA-256 digest to agree. It was tedious work for sure. I'll likely never see a financial return on the time equivalent it took me to develop this software. But, I suppose I can nerd brag that I was able to implement this!

But the real reward for this work will be if it opens up avenues to more (open source) projects distributing to the Apple ecosystems. This has historically been challenging for multiple reasons and many open source projects have avoided official / proper distribution channels to avoid the pain (or in some cases because of philosophical disagreements with the premise of having a walled software garden in the first place). I suspect things will only get worse, as I feel it is inevitable Apple clamps down on signing and notarization requirements on macOS due to the rising costs of malware and ransomware. So having an alternative, open source, and multi-platform implementation of Apple code signing seems like something important that should exist in order to provide opportunities to otherwise excluded developers. I would be humbled if my work empowers others. And this is all the reward I need.

