On Algorithms and Interviewing

January 17, 2019 at 10:45 AM | categories: Personal | View Comments

As I write this, I'm hours away from starting to interview for full-time jobs in the software field. I've spoken with a number of recruiters and hiring managers and have received interview preparation materials from a handful of companies, many of which you've probably heard of.

I was hoping things would have changed since I last seriously underwent this endeavor ~7.5 years ago (I did interview periodically when I was at Mozilla in order to test the waters, keep my interview skills sharp, etc.). But it appears the industry is still generally fixated on algorithms and data structures in interviews. The way algorithms and similar coding tricks are emphasized in the preparation materials I've received, you'd think people in software spend a major part of their work days thinking about and implementing algorithms. But from my experience, this is very far from the case! So why are so many companies and interviewers fixated on algorithms? And is this a good thing?

When they matter, efficient algorithms, data structures, and other tricks are important and useful skills to have. But from my experience, they matter far less than you would think. If I were to make a list of important job skills and traits for software and programming, memorized knowledge of algorithms and data structures is so far down the list that I don't think I would even ask about algorithms fundamentals for most job candidates! (In fact, I don't.) I think it is vastly more important to focus on behavioral qualities and the potential to actually think and apply knowledge rather than regurgitate it. Algorithms and data structures, after all, are learned knowledge. All other things being equal, I'd rather have someone who knows when to ask for help with an algorithms issue or can pick up the skill than a curmudgeonly algorithms genius who has an abrasive personality and clings to old habits.

In the spirit of full disclosure, I should state that my algorithms skills are relatively weak. You can accuse me of writing this post to fulfill my own selfish interests. You wouldn't be wrong. But I know there are others like me who are good at programming yet struggle with algorithms and question the utility of algorithms in interviews. I'm attempting to write this post for all of us.

I have failed job interviews because the interviewer assessed my algorithms abilities as weak. I'm able to work through this deficiency with interviewers who care more about the behavioral traits I exhibit when in such a situation (I try to be quick about admitting my technical weaknesses and to ask for help when needed). But some interviewers aren't as interested in the behavioral traits or insist on a baseline level of memorized algorithms knowledge beyond my own. I feel like my relative algorithms weakness hasn't hurt me on the job, as I hardly find myself caring about algorithms in the work I do. In the majority of cases, the choice of an algorithm just doesn't matter for the size of the data set. Or a standard algorithm or data structure available in the standard library of the language I'm using is good enough. In the cases where I realize algorithms and data structures would matter, I run my technical questions past someone with more knowledge in the domain than me. Or if I don't do that, it often comes up during code review. Without strong algorithms and data structures knowledge, I've been able to maintain the Firefox build system, become a core contributor to a version control tool (something you would think requires a lot of heavy algorithms knowledge), maintain various open source projects, and diagnose and address low-level performance issues in complex software and systems. About the only impact that being weak in algorithms and data structures has had on my career is that some companies passed on hiring me because they perceived strength in this area to be important.

Albert Einstein once said, "I never commit to memory anything that can easily be looked up in a book." A modern adaptation of that quote might go something like: never memorize how to implement an algorithm or data structure when you can just Google it or use a software library that implements it. If you have knowledge of how to implement various algorithms in your head, that's good for you, I suppose. But I think the bigger brain knowledge to possess is when algorithms matter and, to a lesser extent, what types of algorithms are appropriate for particular problems. Answering those questions requires critical thinking. Actually implementing algorithms, by contrast, merely requires knowledge that can easily be looked up in a book (the algorithm or data structure itself) coupled with some programming knowledge for how to apply it. A capable programmer will be able to do both of these things and pick up algorithms and data structure knowledge on the job, if necessary.

Some would say that algorithms are a good way to tease out coding ability. And coding ability is important to assess as part of interviewing a job candidate for a programming position! They aren't wrong. But there are better ways to get strong signals about an interviewee's fit. On the coding front, there are countless ways to assess programming capability without involving algorithms. So why involve algorithms as part of the interview?

One way I approach interviewing people is to imagine what the typical work day of that role will be like. How much time do they spend coding, investigating bugs, debugging, attending meetings, writing proposals, politicking with managers, etc.? This produces a conceptual pie chart of that role's activities. I then try to structure the interview such that the topics covered correlate with - and are roughly proportional to - the activities of that role. Is the role a heads-down junior coder? A team lead or manager? When you start trying to map the time spent in various areas of the role to time spent in the interview, you realize that the common technical interview overly emphasizes some areas and often completely ignores others! One of the areas that is over-emphasized is algorithms. Again, your typical programmer is going to be spending most of their typical day doing things unrelated to algorithms. So why are you spending precious interview time asking about algorithms when you could be probing an area that actually correlates to typical job activities? When viewed through this lens, the prevalence of algorithms in interviews just doesn't make much sense to me.

Perhaps algorithms should be considered basic knowledge that every programmer possesses. If so, then asking about algorithms is fair game during an interview, I suppose. But I'm not comfortable with this line of thought.

I've always found it fascinating how people with different backgrounds and degrees approach problems differently. From my experience, some of the best ideas and perspectives come from people with backgrounds and degrees that are underrepresented in the field. I've worked with programmers with degrees in philosophy and history who were some of the best programmers and overall minds in the room. One of the great things about software and programming is that it is accessible to anyone, regardless of background. If you can code, you can land a (usually high-paying) job. Yes, the field is highly technical. But you don't need formal education or a degree to enter it the way you do for similar high-end professions, such as medicine or law. You can argue whether this is a good thing or not. But I think the accessibility of the software profession - the lack of formal gatekeeping - is something to marvel at, something that we as an industry should embrace and be proud of. Do arbitrary hurdles to joining the industry help or hinder it?

A problem with emphasizing algorithms in interviews is that algorithms are fairly specialized and academic. There are entire areas of programming and software where detailed knowledge of algorithms just isn't that important. The bar for so much software is that it works, and quite frankly it doesn't matter whether you used a quadratic algorithm instead of something better.

Most people I know are exposed to algorithms fundamentals during their university education as part of pursuing a degree in computer science or engineering. You almost certainly aren't going to have academic exposure to algorithms if you are, say, a liberal arts major - never mind someone who doesn't attend university at all (I also know plenty of terrific programmers who don't have degrees). From my own experience, my degree is in computer engineering. Not computer science or software engineering: computer engineering. I remember from my university days that my computer science friends seemed to have a much better grasp of algorithms and the theory of software and programming than I did. When I was taking classes about how hardware and electronics work, they were learning all about the mathematical concepts underpinning the field, different approaches to programming language design, etc. I received very little of that. And on top of that, I struggled with my single algorithms course at university. So I entered the workforce without as good a grasp on the computer science fundamentals as others I knew. (But I still probably knew more than someone in an unrelated field.) The point I'm trying to make is that because algorithms are fairly specialized and academic in nature, requiring knowledge of algorithms will effectively bias your hiring towards people with strong computer science backgrounds. Stated another way, screening on algorithms knowledge undermines diversity and inclusion initiatives by excluding viable candidates who don't have strong backgrounds in computer science. Sure, if someone wants to enter the industry they can take the time to study up on algorithms. But why force them to do that? It feels like arbitrary gatekeeping given how unimportant algorithms are to the typical programmer's day-to-day activities. So why do it?

I suspect major contributing reasons to why algorithms are so prevalent in interviews are cargo culting, laziness, and a lack of formal interview training / caring about diversity. As an industry, the software field is pretty bad at applying best practices and learning from our mistakes. I suspect this will change once the relatively young industry catches up to more-established industries and we're forced to cope with the realities of legal and monetary liabilities the way practically every other industry is. (We're starting to see this with monetary damages for security breaches.) Anyway, we as an industry are pretty bad at self-regulating and adopting practices with proven benefits. We like to settle for what is known. Laziness and the comfort associated with it are easy. Seeking out and implementing change is harder. This is human nature. We see this with well-known people in industry rejecting the ideas of continuous testing (years ago) or fuzzing (more recently). We see it in C/C++ programmers who are delusional about their abilities to write secure code and decry e.g. Rust's safety guarantees as superfluous. The industry is disproportionately white and male (at least in the United States). And this brings with it certain personality tendencies. One is a macho attitude, which can manifest in interviews via the interviewer embarking on an ego trip proving they know some esoteric algorithm or data structure the candidate does not.

As a clear example of this, Google was known for asking brainteaser interview questions. (The practice may have been prevalent at Microsoft before Google was the darling of Silicon Valley, but that was before I entered industry.) This trend caught on and soon companies all over were asking brainteasers! The problem was that these questions didn't correlate to actual job performance! From a 2013 NYTimes interview with Google's VP of People Operations:

On the hiring side, we found that brainteasers are a complete waste of
time. How many golf balls can you fit into an airplane? How many gas
stations in Manhattan? A complete waste of time. They don’t predict
anything. They serve primarily to make the interviewer feel smart.

But the damage was done. I still heard these kinds of questions when interviewing in the wild long after Google realized they were bad questions and instructed interviewers not to ask them. I even believe I got a brainteaser when interviewing at Google after the supposed banning of these types of questions! And I won't be shocked if I'm asked a brainteaser in 2019 as part of the several interviews I'll be doing in the days ahead.

Asking questions with no correlation to job performance because a popular company asked that type of question for a while: that's textbook cargo culting. Failing to change your ways despite evidence saying you should: laziness. Insisting that your way is correct and others need to be like you: gatekeeping.

I'm not saying algorithms and data structures during interviews are intrinsically bad and that we should stop asking about them. What I am saying is that we as an industry need to examine how we interview. We need to invest in scientifically proven techniques. (Research shows that behavioral interview questions - "tell me about a time when...", etc. - are better.) And after more than ten years in industry, my experience tells me that interviews place a disproportionate emphasis on algorithms and data structures compared to the daily activities of the typical programmer. And on top of that, due to their academic nature, I worry that screening for algorithms and data structures knowledge is undermining the diversity and inclusivity of our field by biasing towards people with strong computer science backgrounds. I think it is time we examine the role of algorithms and data structures in interviews and consider focusing on other areas instead.

What I've Learned About Optimizing Python

January 10, 2019 at 03:00 PM | categories: Python | View Comments

I've used Python more than any other programming language in the past 4-5 years. Python is the lingua franca for Firefox's build, test, and CI tooling. Mercurial is written mostly in Python. Many of my side projects are in Python.

Along the way, I've accrued a bit of knowledge about Python performance and how to optimize Python. This post is about sharing that knowledge with the larger community.

My experience with Python is mostly with the CPython interpreter, specifically CPython 2.7. Not all observations apply to all Python distributions or have the same characteristics across Python versions. I'll try to call this out when relevant. And this post is in no way a thorough survey of the Python performance landscape. I mainly want to highlight areas that have particularly plagued me.

Startup and Module Importing Overhead

Starting a Python interpreter and importing Python modules is relatively slow if you care about milliseconds.

If you need to start hundreds or thousands of Python processes as part of a workload, this overhead can add up to several seconds across the workload.

If you use Python to provide CLI tools, the overhead can cause enough lag to be noticeable to people. If you want instantaneous CLI tools, launching a Python interpreter on every invocation will make it very difficult to achieve that with a sufficiently complex tool.

I've written about this problem extensively. My 2014 post on python-dev outlines the problem. Posts in May 2018 and October 2018 restate and refine it.

There's not much you can do to alleviate interpreter startup overhead: fixing this mostly resides with the maintainers of the Python interpreter because they control the code that is taking precious milliseconds to complete. About the best you can do is disable the site import in your shebangs and invocations to avoid some extra Python code running at startup. However, many applications rely on functionality provided by site.py, so use at your own risk.
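
A rough way to see this on your own system is to time a trivial invocation with and without the site import; exact numbers vary by machine and Python build, so none are shown here:

# Interpreter startup with the site import (the default).
$ time python3 -c "pass"

# Interpreter startup with the site import disabled.
$ time python3 -S -c "pass"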

Related to this is the problem of module importing. What good is a Python interpreter if it doesn't have code to run! And the way code is made available to the interpreter is often through importing modules.

There are multiple steps to importing modules. And there are sources of overhead in each one.

There is overhead in finding modules and reading their data. As I've demonstrated with PyOxidizer, replacing the default "find and load a module from the filesystem" behavior with the architecturally simpler "read the module data from an in-memory data structure" makes importing the Python standard library take 70-80% of its original time! Having a single module per filesystem file introduces filesystem overhead and can slow down Python applications in the critical first milliseconds of execution. Solutions like PyOxidizer can mitigate this. And hopefully the Python community sees the overhead in the current approach and considers moving towards module distribution mechanisms that don't rely so much on separate files per module.

Another source of module importing overhead is executing code in that module at import time. Some modules have code in the module scope outside of functions and classes that runs when the module is imported. This code execution can add overhead to importing. A mitigation for this is to not run as much code at import time: only run code as needed. Python 3.7 supports a module __getattr__ that will be called when a module attribute is not found. This can be used to lazily populate module attributes on first access.
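
A minimal sketch of what that can look like, assuming a hypothetical package with an expensive submodule named heavy:

# mypackage/__init__.py (Python 3.7+)
import importlib

def __getattr__(name):
    # Only invoked when normal attribute lookup fails, so the expensive
    # submodule is imported on first access instead of at package import time.
    if name == "heavy":
        module = importlib.import_module(".heavy", __name__)
        globals()[name] = module  # cache so later lookups bypass __getattr__
        return module
    raise AttributeError("module %r has no attribute %r" % (__name__, name))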

Another workaround for module importing slowness is lazy module importing. Instead of actually loading a module when it is imported, you register a custom module importer that returns a stub for that module instead. When that stub is first accessed, it will load the actual module and mutate itself to be that module.
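
The standard library ships a building block for this in importlib.util.LazyLoader. Here is a sketch based on the recipe in the importlib documentation (note that find_spec() still searches for the module up front, so this defers executing the module rather than finding it):

import importlib.util
import sys

def lazy_import(name):
    # Returns a module object whose loading is deferred until the first
    # attribute access.
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)
    return module

json = lazy_import("json")    # stub module; json's code hasn't run yet
json.dumps({"key": "value"})  # first attribute access triggers the real load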

By avoiding the filesystem and module running overhead for unused modules (modules are typically imported globally and then only used by certain functions in a module), you can easily shave dozens of milliseconds from applications importing several dozen modules.

But lazy module importers are a bit fragile. Lots of modules have a pattern where they try: import foo; except ImportError:. A lazy module importer may never raise ImportError here because to do so, it would need to search the filesystem for a module to know if it exists and searching the filesystem would add overhead, so they don't do it! You work around this by accessing an attribute on the imported module. This forces the ImportError to be raised if the module doesn't exist but undermines the laziness of the module import! This problem is quite nasty. Mercurial's lazy module importer has to maintain a list of modules that are known to not be lazy importable to work around it. Another issue is the from foo import x, y syntax, which also undermines lazy module importing in cases where foo is a module (as opposed to a package) because in order to return a reference to x and y, the module has to be imported.

PyOxidizer, having a fixed set of modules frozen into the binary, can be efficient about raising ImportError. And Python 3.7's module __getattr__ provides additional flexibility for lazy module importers. I hope to integrate a robust lazy module importer into PyOxidizer so these gains are realized automatically.

The best solution to avoiding the interpreter startup and module import overhead problem is to run a persistent Python process. If you run Python in a daemon process (say for a web server), you pretty much get this for free. Mercurial's solution to this is to run a persistent Python process in the background which exposes a command server protocol. hg is aliased to a C (or now Rust) executable which connects to that persistent process and dispatches a command. The command server approach is a lot of work, can be a bit fragile, and has security concerns. I'm exploring the idea of shipping a command server with PyOxidizer so executables can easily gain its benefits and the cost of solving the problem only needs to be paid in one central place: the PyOxidizer project.

Function Call Overhead

Function calls in Python are relatively slow. (This observation applies less to PyPy, which can JIT code execution.)

I've seen literally dozens of patches to Mercurial where we inline code or combine Python functions in order to avoid function call overhead. In the current development cycle, some effort was made to reduce the number of functions called when updating progress bars. (We try to use progress bars for any operation that could take a while so users know what is going on.) The old progress bar update code would dispatch to a handful of functions. Caching function call results and avoiding simple lookups via functions shaves dozens to hundreds of milliseconds off execution when we're talking about 1 million executions.

If you have tight loops or recursive functions in Python where hundreds of thousands or more function calls could be in play, you need to be aware of the overhead of calling an individual function, as it can add up quickly! Consider in-lining simple functions and combining functions to avoid the overhead.
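
A rough way to measure the effect yourself (absolute numbers vary by machine and interpreter, but the gap is usually easy to see):

import timeit

def add(total, i):
    return total + i

def with_calls(n):
    total = 0
    for i in range(n):
        total = add(total, i)   # one function call per iteration
    return total

def inlined(n):
    total = 0
    for i in range(n):
        total = total + i       # same work with the call inlined
    return total

print(timeit.timeit("with_calls(1000000)",
                    setup="from __main__ import with_calls", number=10))
print(timeit.timeit("inlined(1000000)",
                    setup="from __main__ import inlined", number=10))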

Attribute Lookup Overhead

This problem is similar to function call overhead because it can actually be the same problem!

Resolving an attribute in Python can be relatively slow. (Again, this observation applies less to PyPy.)

Again, working around this issue is something we do a lot in Mercurial.

Say you have the following code:

obj = MyObject()
total = 0

for i in range(len(obj.member)):
    total += obj.member[i]

Ignoring that there are better ways to write this example (total = sum(obj.member) should work), as written, the loop here will need to resolve obj.member on every iteration. Python has a relatively complex mechanism for resolving attributes. For simple types, it can be quite fast. But for complex types, that attribute access can silently be invoking __getattr__, __getattribute__, various other dunder methods, and even custom @property functions. What looks like it should be a fast attribute lookup can silently be several function calls, leading to function call overhead! And this overhead can compound if you are doing things like obj.member1.member2.member3 etc.
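
For example, what reads like a plain attribute access can silently run a function. Here is a hypothetical variant of MyObject to illustrate:

class MyObject(object):
    def __init__(self, items):
        self._items = items

    @property
    def member(self):
        # Every obj.member at a call site runs this function, even though
        # the caller wrote what looks like a simple attribute read.
        return list(self._items)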

Each attribute lookup adds overhead. And since nearly everything in Python is a dictionary, it is somewhat accurate to equate each attribute lookup with a dictionary lookup. And we know from basic data structures that dictionary lookups are intrinsically not as fast as, say, chasing a pointer. Yes, there are some tricks in CPython to avoid the dictionary lookup overhead. But the general theme I want to get across is that each attribute lookup is a potential performance sink.

For tight loops - especially those over potentially hundreds of thousands of iterations - you can avoid this measurable attribute lookup overhead by aliasing the value to a local. We would write the example above as:

obj = MyObject()
total = 0

member = obj.member
for i in range(len(member)):
    total += member[i]

Of course, this is only safe when the aliased item isn't replaced inside the loop! If that happens, your alias will hold a reference to the old item and things may blow up.

The same trick can be used when calling a method of an object. Instead of:

obj = MyObject()

for i in range(1000000):
    obj.process(i)

Do the following:

obj = MyObject()
fn = obj.process

for i in range(1000000):
    fn(i)

It's also worth noting that in cases where the attribute lookup is used to call a method (such as the previous example), Python 3.7 is significantly faster than previous releases. But I'm pretty sure this is due to dispatch overhead to the method function itself, not attribute lookup overhead. So things will be faster yet by avoiding the attribute lookup.

Finally, unless attribute lookup is calling functions to resolve the attribute, attribute lookup is generally less of a problem than function call overhead. And it generally requires eliminating a lot of attribute lookups for you to notice a meaningful improvement. That being said, once you add up all attribute accesses inside a loop, you may be talking about 10 or 20 attributes in the loop alone - before function calls. And loops with only thousands or low tens of thousands of iterations can quickly provide hundreds of thousands or millions of attribute lookups. So be on the lookout!

Object Overhead

From the perspective of the Python interpreter, every value is an object. In CPython, each value is a PyObject struct. Each object managed by the interpreter is on the heap and needs to have its own memory holding its reference count, its type, and other state. Every object is garbage collected. This means that each new object introduces overhead for the reference counting / garbage collection mechanism to process. (Again, PyPy can avoid some of this overhead by being more intelligent about the lifetimes of short-lived values.)

As a general rule of thumb, the more unique Python values/objects you create, the slower things are.

For example, say you are iterating over a collection of 1 million objects. You call a function to process each object into a tuple:

for x in my_collection:
    a, b, c, d, e, f, g, h = process(x)

In this example, process() returns an 8-tuple. It doesn't matter whether we destructure the return value or not: this tuple requires the creation of at least 9 Python values: 1 for the tuple itself and 8 for its inner members. OK, in reality there could be fewer values if process() returns a reference to an existing value. Or there could be more if the types aren't simple types and require multiple PyObjects to represent. My point is that under the hood the interpreter is having to juggle multiple objects to represent things.

From my experience, this overhead is only relevant for operations that benefit from speedups when implemented in a native language like C or Rust. The reason is the CPython interpreter is just unable to execute bytecode fast enough for object overhead itself to matter. Instead, you will likely hit performance issues with function call overhead, processing overhead, etc long before object overhead. But there are some exceptions to this, such as constructing tuples or dicts with several members.

As a concrete example of this overhead, Mercurial has C code for parsing some of the lower-level data structures. In terms of raw parsing speed, the C code runs roughly two orders of magnitude faster than CPython. But once we have that C code create PyObjects to represent the result, the speedup drops to just a few times faster, if that. In other words, the overhead is coming from creating and managing Python values so they can be used by Python code.

A workaround for this is to produce fewer Python values. If you only need to access a single value, have a function return that single value instead of say a tuple or dict with N values. However, watch out for function call overhead!
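
A contrived illustration with hypothetical parsing helpers:

import struct

# Materializes a dict of every field, even if the caller only needs one.
def parse_header(raw):
    version, flags, length = struct.unpack(">BBI", raw[:6])
    return {"version": version, "flags": flags, "length": length}

# Returns only the value the caller needs, creating far fewer Python objects.
def parse_header_length(raw):
    return struct.unpack(">I", raw[2:6])[0]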

When you have a lot of performance-sensitive code using the CPython C API and values need to be shared across different modules, pass around Python types that expose data as C structs and have the compiled code access those C structs instead of going through the CPython C API. By avoiding the Python C API for data access, you will be avoiding most of its overhead.

Treating values as data (instead of having functions for accessing everything) is more Pythonic. So another workaround for compiled code is to lazily create PyObject instances. If you create a custom Python type (PyTypeObject) to represent your complex values, you can define the tp_members and/or tp_getset fields to register custom C functions to resolve the value for an attribute. If you are, say, writing a parser and you know that consumers will only access a subset of the parsed fields, you can quickly construct a type holding the raw data, return that type, and have the Python attribute lookup call a C function which resolves the PyObject. You can even defer parsing until this function is called, saving additional overhead if a parse is never required! This technique is quite rare (because it requires writing a non-trivial amount of code against the Python C API). But it can result in substantial wins.

Pre-Sizing Collections

This one applies to the CPython C API.

When creating collections like lists or dicts, use e.g. PyList_New() + PyList_SET_ITEM() to populate new collections when their size is known at collection creation time. This will pre-size the collection so it has the capacity to hold the final number of elements, and it skips the per-insert checks that verify the collection is large enough to hold new elements. When creating collections of thousands of elements, this can save a bit of overhead!

Using Zero-copy in the C API

The CPython C API really likes to make copies of things rather than return references. For example, PyBytes_FromStringAndSize() copies a char* to memory owned by Python. If you are doing this for a large number of values or sufficiently large data, we could be talking about gigabytes of memory I/O and associated allocator overhead.

If writing high-performance code against the C API, you'll want to become familiar with the buffer protocol and related types, like memoryview.

The buffer protocol is implemented by Python types and allows the Python interpreter to expose a type's underlying bytes. It essentially allows the interpreter's C code to get a handle on a void* of a certain size representing the object. This allows you to associate any address in memory with a PyObject. Many functions operating on binary data transparently accept any object implementing the buffer protocol. And if you are coding against the C API and want to accept any object that can be treated as bytes, you should be using the s*, y*, or w* format units when parsing function arguments.

By using the buffer protocol, you give the interpreter the best opportunity possible to be using zero-copy operations and avoiding having to copy bytes around in memory.

By using Python types like memoryview, you are also allowing Python to reference slices of memory by reference instead of by copy.
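
A small illustration of the difference:

data = bytearray(64 * 1024 * 1024)  # pretend this is a large buffer

# Slicing the bytearray copies the sliced bytes into a new object.
copied = data[1024:4096]

# Slicing a memoryview references the same underlying memory: no copy.
view = memoryview(data)
not_copied = view[1024:4096]

# Most APIs that accept bytes-like objects (file.write(), socket.send(),
# many compressor feed methods) accept memoryviews directly.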

When you have gigabytes of data flowing through your Python program, astute use of Python types that support zero-copy can make a world of difference on performance. I once measured that python-zstandard was faster than some Python LZ4 bindings (LZ4 should be faster than zstandard) because I made heavy use of the buffer protocol and avoided excessive memory I/O in python-zstandard!

Conclusion

This post has outlined some of the things I've learned optimizing Python programs over the years. This post is by no means a comprehensive overview of Python performance techniques and gotchas. I recognize that my use of Python is probably more demanding than most and that the recommendations I made are not applicable to many Python programs. You should not mass update your Python code to e.g. inline functions and remove attribute lookups after reading this post. As always, when it comes to performance optimization, measure first and optimize where things are observed to be slow. I highly recommend py-spy for profiling Python applications. That being said, it's hard to attach a time value to low-level activity in the Python interpreter such as calling functions and looking up attributes. So if you e.g. have a loop that you know is tight, experiment with suggestions in this post, and see if you can measure an improvement!

Finally, this post should not be interpreted as a dig against Python or its performance properties. Yes, you can make arguments that Python should or shouldn't be used in particular areas because of performance properties. But Python is extremely versatile - especially with PyPy delivering exceptional performance for a dynamic programming language. The performance of Python is probably good enough for most people. For better or worse, I have used Python for use cases that often feel like outliers across all users. And I wanted to share my experiences so that others know what life at the frontier is like. And maybe, just maybe, I can cause the smart people who actually maintain Python distributions to think about the issues I've had in more detail and provide improvements to mitigate them.

Seeking Employment

January 07, 2019 at 03:25 PM | categories: Personal, Mozilla | View Comments

After almost seven and a half years as an employee of Mozilla Corporation, I'm moving on. I have already worked my final day as an employee.

This post is the first time that I've publicly acknowledged my departure. To any Mozillians reading this, I regret that I did not send out a farewell email before I left. But the circumstances of my departure weren't conducive to doing so. I've been drafting a proper farewell blog post. But it has been very challenging to compose. Furthermore, each passing day brings with it new insights into my time at Mozilla and a new wrinkle to integrate into the reflective story I want to tell in that post. I vow to eventually publish a proper goodbye that serves as the bookend to my employment at Mozilla. Until then, just let me say that I'm already missing working with many of you. I've connected with several people since I left and still owe responses or messages to many more. If you want to get in touch, my contact info is in my résumé.

I left Mozilla without new employment lined up. That leads me to the subject line of this post: I'm seeking employment. The remainder of this post is thus tailored to potential employers.

My résumé has been updated. But that two-page summary only scratches the surface of my experience and set of skills. The Body of Work page of my website is a more detailed record of the work I've done. But even it is not complete!

Perusing through my posts on this blog will reveal even more about the work I've done and how I go about it. My résumé links to a few posts that I think are great examples of the level of understanding and detail that I'm capable of harnessing.

As far as the kind of work I want to do or the type of company I want to work for, I'm trying to keep an open mind. But I do have some biases.

I prefer established companies to early start-ups for various reasons. Dan Luu's Big companies v. startups is aligned pretty well with my thinking.

One of the reasons I worked for Mozilla was because of my personal alignment with the Mozilla Manifesto. So I gravitate towards employers that share those principles and am somewhat turned off by those that counteract them. But I recognize that the world is complex and that competing perspectives aren't intrinsically evil. In other words, I try to maintain an open mind.

I'm attracted to employers that align their business with improving the well-being of the planet, especially the people on it. The link between the business and well-being can be tenuous: a B2B business, for example, is presumably selling something that helps people, and that helping is what matters to me. The tighter the link between the business and improving the world, the more attracted I will be to the employer.

I started my university education as a biomedical engineer because I liked the idea of being at the intersection of technology and medicine. And part of me really wants to return to this space because there are few things more noble than helping a fellow human being in need.

As for the kind of role or technical work I want to do, I could go in any number of directions. I still enjoy doing individual contributor type work and believe I could be an asset to an employer doing that work. But I also crave working on a team, performing technical mentorship, and being a leader of technical components. I enjoy participating in high-level planning as well as implementing the low-level aspects. I recognize that while my individual output can be substantial (I can provide data showing that I was one of the most prolific technical contributors at Mozilla during my time there) I can be more valuable to an employer when I bestow skills and knowledge unto others through teaching, mentorship, setting an example, etc.

I have what I would consider expertise in a few domains that may be attractive to employers.

I was a technical maintainer of Firefox's build system and initiated a transition away from an architecture that had been in place since the Netscape days. I definitely geek out way too much on build systems.

I am a contributor to the Mercurial version control tool. I know way too much about the internals of Mercurial, Git, and other version control tools. I am intimately aware of scaling problems with these tools. Some of the scaling work I did for Mercurial saved Mozilla tens of thousands of dollars in direct operational costs and probably hundreds of thousands of dollars in saved people time due to fewer service disruptions and faster operations.

I have exposure to both client and server side work and the problems encountered within each domain. I've dabbled in lots of technologies, operating systems, and tools. I'm not afraid to learn something new. Although as my experience increases, so does my skepticism of shiny new things (I've been burned by technical fads too many times).

I have a keen fascination with optimization and scaling, whether it be on a technical level or in terms of workflows and human behavior. I like to ask and then what so I'm thinking a few steps out and am prepared for the next problem or consequence of an immediate action.

I seem to have a knack for caring about user experience and interfaces. (Although my own visual design skills aren't the greatest - see my website design for proof.) I'm pretty passionate that tools that people use should be simple and usable. Cognitive dissonance, latency, and distractions are real and as an industry we don't do a great job minimizing these disruptions so focus and productivity can be maximized. I'm not saying I would be a good product manager or UI designer. But it's something I've thought about because not many engineers seem to exhibit the passion for good user experience that I do and that intersection of skills could be valuable.

My favorite time at Mozilla was when I was working on a unified engineering productivity team. The team controlled most of the tools and infrastructure that Firefox developers interacted with in order to do their jobs. I absolutely loved taking a whole-world view of that problem space and identifying the high-level problems - and low-hanging fruit - to improve the overall Firefox development experience. I derived a lot of satisfaction from identifying pain points, equating them to a dollar cost by extrapolating people time wasted due to them, justifying working on them, and finally celebrating - along with the overall engineering team - when improvements were made. I think I would be a tremendous asset to a company working in this space. And if my experience at Mozilla is any indicator, I would more than offset my total employment cost by doing this kind of work.

I've been entertaining the idea of contracting for a while before I resume full-time employment with a single employer. However, I've never contracted before and need to do some homework before I commit to that. (Please leave a comment or email me if you have recommendations on reading material.)

My dream contract gig would likely be to finish the Mercurial wire protocol and storage work I started last year. I would need to type up a formal proposal, but the gist of it is the work I started has the potential to leapfrog Git in terms of both client-side and server-side performance and scalability. Mercurial would be able to open Git repositories on the local filesystem as well as consume them via the Git wire protocol. Transparent Git interoperability would enable Mercurial to be used as a drop-in replacement for Git, which would benefit users who don't have control over the server (such as projects that live on GitHub). Mercurial's new wire protocol is designed with global scalability and distribution in mind. The goal is to enable server operators to deploy scalable VCS servers in a turn-key manner by relying on scalable key-value stores and content distribution networks as much as possible (Mercurial and Git today require servers to perform way too much work and aren't designed with modern distributed systems best practices in mind, which is why scaling them is hard). The new protocol is being designed such that a Mercurial server could expose Git data. It would then be possible to teach a Git client to speak the Mercurial wire protocol, which would result in Mercurial being a more scalable Git server than Git is today. If my vision is achieved, this would make server-side VCS scaling problems go away and would eliminate the religious debate between Git and Mercurial (the answer would be: deploy a Mercurial server, allow data to be exposed to Git, and let consumers choose). I conservatively estimate that the benefits to industry would be in the millions of dollars. How I would structure a contract to deliver aspects of this, I'm not sure. But if you are willing to invest six figures towards this bet, let's talk. A good foundation of this work is already implemented in Mercurial and the Mercurial core development team is already on board with many aspects of the vision, so I'm not spewing vapor.

Another potential contract opportunity would be funding PyOxidizer. I started the project a few months back as a side-project as an excuse to learn Rust while solving a fun problem that I thought needed solving. I was hoping for the project to be useful for Mercurial and Mozilla one day. But if social media activity is any indication, there seems to be somewhat widespread interest in this project. I have no doubt that once complete, companies will be using PyOxidizer to ship products that generate revenue and that PyOxidizer will save them engineering resources. I'd very much like to recapture some of that value into my pockets, if possible. Again, I'm somewhat naive about how to write contracts since I've never contracted, but I imagine "deliver a tool that allows me to ship product X as a standalone binary to platforms Y and Z" is definitely something that could be structured as a contract.

As for the timeline, I was at Mozilla for what feels like an eternity in Silicon Valley. And Mozilla's way of working is substantially different from many companies. I need some time to decompress and unlearn some Mozilla habits. My future employer will inherit a happier and more productive employee by allowing me to take some much-needed time off.

I'm looking to resume full-time work no sooner than March 1. I'd like to figure out what the next step in my career is by the end of January. Then I can sign some papers, pack up my skis, and become a ski bum for the month of February: if I don't use this unemployment opportunity to have at least 20 days on the slopes this season and visit some new mountains, I will be very disappointed in myself!

If you want to get in touch, my contact info is in my résumé. I tend not to answer incoming calls from unknown numbers, so email is preferred. But if you leave a voicemail, I'll try to get back to you.

I look forward to working for a great engineering organization in the near future!

PyOxidizer Support for Windows

January 06, 2019 at 10:00 AM | categories: Python, PyOxidizer, Rust | View Comments

A few weeks ago I introduced PyOxidizer, a project that aims to make it easier to produce completely self-contained executables embedding a Python interpreter (using Rust). A few days later I observed some PyOxidizer performance benefits.

After a few more hacking sessions, I'm very pleased to report that PyOxidizer is now working on Windows!

I am able to produce a standalone Windows .exe containing a fully featured CPython interpreter, all its library dependencies (OpenSSL, SQLite, liblzma, etc), and a copy of the Python standard library (both source and bytecode data). The binary weighs in at around 25 MB. (It could be smaller if we didn't embed .py source files or stripped some dependencies.) The only DLL dependencies of the exe are vcruntime140.dll and various system DLLs that are always present on Windows.

Like I did for Linux and macOS, I produced a Python script that performs ~500 import statements for the near entirety of the Python standard library. I then ran this script with both the official 64-bit Python distribution and an executable produced with PyOxidizer:

# Official CPython 3.7.2 Windows distribution.
$ time python.exe < import_stdlib.py
real    0m0.475s

# PyOxidizer with non-PGO CPython 3.7.2
$ time target/release/pyapp.exe < import_stdlib.py
real    0m0.347s

Compared to the official CPython distribution, a PyOxidizer executable can import almost the entirety of the Python standard library ~125ms faster - or ~73% of original. In terms of the percentage of speedup, the gains are similar to Linux and macOS. However, there is substantial new process overhead on Windows compared to POSIX architectures. On the same machine, a hello world Python process will execute in ~10ms on Linux and ~40ms on Windows. If we remove the startup overhead, importing the Python standard library runs at ~70% of its original time, making the relative speedup on par with that seen on macOS + APFS.

Windows support is a major milestone for PyOxidizer. And it was the hardest platform to make work. CPython's build system on Windows uses Visual Studio project files. And coercing the build system to produce static libraries was a real pain. Lots of CPython's build tooling assumes Python is built in a very specific manner and multiple changes I made completely break those assumptions. On top of that, it's very easy to encounter problems with symbol name mismatch due to the use of __declspec(dllexport) and __declspec(dllimport). I spent several hours going down a rabbit hole learning how Rust generates symbols on Windows for extern {} items. Unfortunately, we currently have to use a Rust Nightly feature (the static-nobundle linkage kind) to get things to work. But I think there are options to remove that requirement.

Up to this point, my work on PyOxidizer has focused on prototyping the concept. With Windows out of the way and PyOxidizer working on Linux, macOS, and Windows, I have achieved confidence that my vision of a single executable embedding a full-featured Python interpreter is technically viable on major desktop platforms! (BSD people, I care about you too. The solution for Linux should be portable to BSD.) This means I can start focusing on features, usability, and optimization. In other words, I can start building a tool that others will want to use.

As always, you can follow my work on this blog and by following the python-build-standalone and PyOxidizer projects on GitHub.

Faster In-Memory Python Module Importing

December 28, 2018 at 12:40 PM | categories: Python, PyOxidizer, Rust | View Comments

I recently blogged about distributing standalone Python applications. In that post, I announced PyOxidizer - a tool which leverages Rust to produce standalone executables embedding Python. One of the features of PyOxidizer is the ability to import Python modules embedded within the binary using zero-copy.

I also recently blogged about global kernel locks in APFS, which make filesystem operations slower on macOS. This was the latest wrinkle in a long battle against Python's slow startup times, which I've posted about on the official python-dev mailing list over the years.

Since I announced PyOxidizer a few days ago, I've had some productive holiday hacking sessions!

One of the reached milestones is PyOxidizer now supports macOS.

With that milestone reached, I thought it would be interesting to compare the performance of a PyOxidizer executable versus a standard CPython build.

I produced a Python script that imports almost the entirety of the Python standard library - at least the modules implemented in Python. That's 508 import statements. I then executed this script using a typical python3.7 binary (with the standard library on the filesystem) and PyOxidizer-produced standalone executables with a module importer that loads Python modules from memory using zero copy.

# Homebrew installed CPython 3.7.2

# Cold disk cache.
$ sudo purge
$ time /usr/local/bin/python3.7 < import_stdlib.py
real   0m0.694s
user   0m0.354s
sys    0m0.121s

# Hot disk cache.
$ time /usr/local/bin/python3.7 < import_stdlib.py
real   0m0.319s
user   0m0.263s
sys    0m0.050s

# PyOxidizer with non-PGO/non-LTO CPython 3.7.2
$ time target/release/pyapp < import_stdlib.py
real   0m0.223s
user   0m0.201s
sys    0m0.017s

# PyOxidizer with PGO/non-LTO CPython 3.7.2
$ time target/release/pyapp < import_stdlib.py
real   0m0.234s
user   0m0.210s
sys    0m0.019s

# PyOxidizer with PGO+LTO CPython 3.7.2
$ sudo purge
$ time target/release/pyapp < import_stdlib.py
real   0m0.442s
user   0m0.252s
sys    0m0.059s

$ time target/release/pyall < import_stdlib.py
real   0m0.221s
user   0m0.197s
sys    0m0.020s

First, the PyOxidizer times are all relatively similar regardless of whether PGO or LTO is used to build CPython. That's not too surprising, as I'm exercising a very limited subset of CPython (and I suspect the benefits of PGO/LTO aren't as pronounced due to the nature of the CPython API).

But the bigger result is the obvious speedup with PyOxidizer and its in-memory importing: PyOxidizer can import almost the entirety of the Python standard library ~100ms faster - or ~70% of original - than a typical standalone CPython install with a hot disk cache! This comes out to ~0.19ms per import statement. If we run purge to clear out the disk cache, the performance delta increases to 252ms, or ~64% of original. All these numbers are on a 2018 6-core 2.9 GHz i9 MacBook Pro, which has a pretty decent SSD.

And on Linux on an i7-6700K running in a Hyper-V VM:

# pyenv installed CPython 3.7.2

# Cold disk cache.
$ time ~/.pyenv/versions/3.7.2/bin/python < import_stdlib.py
real   0m0.405s
user   0m0.165s
sys    0m0.065s

# Hot disk cache.
$ time ~/.pyenv/versions/3.7.2/bin/python < import_stdlib.py
real   0m0.193s
user   0m0.161s
sys    0m0.032s

# PyOxidizer with PGO CPython 3.7.2

# Cold disk cache.
$ time target/release/pyapp < import_stdlib.py
real   0m0.227s
user   0m0.145s
sys    0m0.016s

# Hot disk cache.
$ time target/release/pyapp < import_stdlib.py
real   0m0.152s
user   0m0.136s
sys    0m0.016s

On a hot disk cache, the run-time improvement of PyOxidizer is ~41ms, or ~78% of original. This comes out to ~0.08ms per import statement. When flushing caches by writing 3 to /proc/sys/vm/drop_caches, the delta increases to ~178ms, or ~56% of original.

Using dtruss -c to execute the binaries, the breakdown in system calls occurring >10 times is clear:

# CPython standalone
fstatfs64                                      16
read_nocancel                                  19
ioctl                                          20
getentropy                                     22
pread                                          26
fcntl                                          27
sigaction                                      32
getdirentries64                                34
fcntl_nocancel                                106
mmap                                          114
close_nocancel                                129
open_nocancel                                 130
lseek                                         148
open                                          168
close                                         170
read                                          282
fstat64                                       403
stat64                                        833

# PyOxidizer
lseek                                          10
read                                           12
read_nocancel                                  14
fstat64                                        16
ioctl                                          22
munmap                                         31
stat64                                         33
sysctl                                         33
sigaction                                      36
mmap                                          122
madvise                                       193
getentropy                                    315

PyOxidizer avoids hundreds of open(), close(), read(), fstat64(), and stat64() calls. And by avoiding these calls, PyOxidizer not only avoids the userland-kernel overhead intrinsic to them, but also any additional overhead that APFS is imposing via its global lock(s).

(Why the PyOxidizer binary is making hundreds of calls to getentropy() I'm not sure. It's definitely coming from Python as a side-effect of a module import and it is something I'd like to fix, if possible.)

With this experiment, we finally have the ability to better isolate the impact of filesystem overhead on Python module importing and preliminary results indicate that the overhead is not insignificant - at least on the tested systems (I'll get data for Windows when PyOxidizer supports it). While the test is somewhat contrived (I don't think many applications import the entirety of the Python standard library), some Python applications do import hundreds of modules. And as I've written before, milliseconds matter. This is especially true if you are invoking Python processes hundreds or thousands of times in a build system, when running a test suite, for scripting, etc. Cumulatively you can be importing tens of thousands of modules. So I think shaving even fractions of a millisecond from module importing is important.

It's worth noting that in addition to the system call overhead, CPython's path-based importer runs substantially more Python code than PyOxidizer and this likely contributes several milliseconds of overhead as well. Because PyOxidizer applications are static, the importer can remain simple (finding a module in PyOxidizer is essentially a Rust HashMap<String, Vec<u8>> lookup). While it might be useful to isolate the filesystem overhead from Python code overhead, the thing that end-users care about is overall execution time: they don't care where that overhead is coming from. So I think it is fair to compare PyOxidizer - with its intrinsically simpler import model - with what Python typically does (scanning sys.path entries and looking for modules on the filesystem).
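
For intuition, here is what a memory-backed importer can look like at the Python level - a toy sketch using importlib hooks, not PyOxidizer's actual Rust implementation (the embedded module is hypothetical):

import importlib.abc
import importlib.util
import sys

# Module source embedded in memory, keyed by module name.
EMBEDDED_SOURCES = {
    "hello": "GREETING = 'hi from memory'\n",
}

class MemoryFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    def find_spec(self, fullname, path=None, target=None):
        if fullname in EMBEDDED_SOURCES:
            return importlib.util.spec_from_loader(fullname, self)
        return None  # fall through to the normal filesystem importers

    def create_module(self, spec):
        return None  # default module creation is fine

    def exec_module(self, module):
        # No stat()/open() calls: the code comes straight from memory.
        exec(EMBEDDED_SOURCES[module.__name__], module.__dict__)

sys.meta_path.insert(0, MemoryFinder())

import hello
print(hello.GREETING)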

Another difference is that PyOxidizer is almost completely statically linked. By contrast, a typical CPython install has compiled extension modules as standalone shared libraries and these shared libraries often link against other shared libraries (such as libssl). From dtruss timing information, I don't believe this difference contributes to significant overhead, however.

Finally, I haven't yet optimized PyOxidizer. I still have a few tricks up my sleeve that can likely shave off more overhead from Python startup. But so far the results are looking very promising. I dare say they are looking promising enough that Python distributions themselves might want to look into the area more thoroughly and consider distribution defaults that rely less on the every-Python-module-is-a-separate-file model.

Stay tuned for more PyOxidizer updates in the near future!

(I updated this post a day after initial publication to add measurements for Linux.)
