Release of python-zstandard 0.9
April 09, 2018 at 09:30 AM | categories: Python, Mozilla
I have just released python-zstandard 0.9.0. You can install the latest release by running pip install zstandard==0.9.0.
Zstandard is a highly tunable and therefore flexible compression algorithm with support for modern features such as multi-threaded compression and dictionaries. Its performance is remarkable and if you use it as a drop-in replacement for zlib, bzip2, or other common algorithms, you'll frequently see more than a doubling in performance.
python-zstandard provides rich bindings to the zstandard C library without sacrificing performance, safety, features, or a Pythonic feel. The bindings run on Python 2.7, 3.4, 3.5, 3.6, and 3.7 using either a C extension or CFFI bindings, so they work with both CPython and PyPy.
I can make a compelling argument that python-zstandard is one of the richest compression packages available to Python programmers. Using it, you will be able to leverage compression in ways you couldn't with other packages (especially those in the standard library) all while achieving ridiculous performance. Due to my focus on performance, python-zstandard is able to outperform Python bindings to other compression libraries that should be faster. This is because python-zstandard is very diligent about minimizing memory allocations and copying, minimizing Python object creation, reusing state, etc.
While python-zstandard is formally marked as a beta-level project and hasn't yet reached a 1.0 release, it is suitable for production usage. python-zstandard 0.8 shipped with Mercurial and is in active production use there. I'm also aware of other consumers using it in production, including at Facebook and Mozilla.
The sections below document some of the new features of python-zstandard 0.9.
File Object Interface for Reading
The 0.9 release contains a stream_reader() API on the compressor and decompressor objects that allows you to treat those objects as readable file objects. This means that you can pass a ZstdCompressor or ZstdDecompressor around to things that accept file objects and things generally just work. For example:
    import zstandard as zstd

    with open(compressed_file, 'rb') as ifh:
        dctx = zstd.ZstdDecompressor()
        with dctx.stream_reader(ifh) as reader:
            while True:
                chunk = reader.read(32768)
                if not chunk:
                    break
                # Do something with the decompressed chunk here.
This is probably the most requested python-zstandard feature.
While the feature is usable, it isn't complete. Support for readline(), readinto(), and a few other APIs is not yet implemented. In addition, you can't use these reader objects for opening zstandard compressed tarball files because Python's tarfile package insists on doing backward seeks when reading. The current implementation doesn't support backwards seeking because that requires buffering decompressed output and that is not trivial to implement. I recognize that all these features are useful and I will try to work them into a subsequent release of 0.9.
Negative Compression Levels
The 1.3.4 release of zstandard (which python-zstandard 0.9 bundles) supports negative compression levels. I won't go into details, but negative compression levels disable extra compression features and allow you to trade compression ratio for more speed.
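To illustrate, here's a minimal sketch of requesting a negative compression level through python-zstandard. It assumes the level argument of ZstdCompressor accepts negative values with the bundled zstandard 1.3.4; the input file name is hypothetical:

    import zstandard as zstd

    with open('input.bin', 'rb') as fh:   # hypothetical input file
        data = fh.read()

    # Negative levels disable extra compression features to gain speed.
    # (Assumes the level argument accepts negative values in this release.)
    fast_cctx = zstd.ZstdCompressor(level=-5)
    fast = fast_cctx.compress(data)

    # Ordinary positive level for comparison.
    normal_cctx = zstd.ZstdCompressor(level=1)
    normal = normal_cctx.compress(data)

    print(len(fast), len(normal))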
When compressing a 6,472,921,921 byte uncompressed bundle of the Firefox Mercurial repository, the previous fastest we could go with level 1 was ~510 MB/s (measured on the input side) yielding a 1,675,227,803 byte file (25.88% of original).
With level -1, we compress to 1,934,253,955 bytes (29.88% of original) at ~590 MB/s. With level -5, we compress to 2,339,110,873 bytes (36.14% of original) at ~720 MB/s.
On the decompress side, level 1 decompresses at ~1,150 MB/s (measured at the output side), -1 at ~1,320 MB/s, and -5 at ~1,350 MB/s (generally speaking, zstandard's decompression speeds are relatively similar - and fast - across compression levels).
And that's just with a single thread. zstandard supports using multiple threads to compress a single input and python-zstandard makes this feature easy to use. Using 8 threads on my 4+4 core i7-6700K, level 1 compresses at ~2,000 MB/s (3.9x speedup), -1 at ~2,300 MB/s (3.9x speedup), and -5 at ~2,700 MB/s (3.75x speedup).
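Turning on multi-threaded compression is intended to be a one-argument change. A minimal sketch, assuming the threads argument to ZstdCompressor is honored by compress() in this release (the input file name is hypothetical):

    import zstandard as zstd

    with open('firefox.bundle', 'rb') as fh:   # hypothetical input file
        data = fh.read()

    # Ask for 8 worker threads; threads=0 (the default) is single-threaded.
    cctx = zstd.ZstdCompressor(level=1, threads=8)
    compressed = cctx.compress(data)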
That's with a large input. What about small inputs?
If we take 456,599 Mercurial commit objects spanning 298,609,254 bytes from the Firefox repository and compress them individually, at level 1 we yield a total of 133,457,198 bytes (44.7% of original) at ~112 MB/s. At level -1, we compress to 161,241,797 bytes (54.0% of original) at ~215 MB/s. And at level -5, we compress to 185,885,545 bytes (62.3% of original) at ~395 MB/s.
On the decompression side, level 1 decompresses at ~260 MB/s, -1 at ~1,000 MB/s, and -5 at ~1,150 MB/s.
Again, that's 456,599 operations on a single thread with Python.
python-zstandard has an experimental API where you can pass in a collection of inputs and it batch compresses or decompresses them in a single operation. It releases the GIL and uses multiple threads. It puts the results in shared buffers in order to minimize the overhead of memory allocations and Python object creation and garbage collection. Using this mode with 8 threads on my 4+4 core i7-6700K, level 1 compresses at ~525 MB/s, -1 at ~1,070 MB/s, and -5 at ~1,930 MB/s. On the decompression side, level 1 is ~1,320 MB/s, -1 at ~3,800 MB/s, and -5 at ~4,430 MB/s.
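A hedged sketch of those batch APIs, assuming the experimental multi_compress_to_buffer() and multi_decompress_to_buffer() method names from the project documentation (being experimental, they could change); the sample inputs are hypothetical:

    import zstandard as zstd

    # Hypothetical collection of small inputs to process in one call.
    inputs = [('commit %d: payload' % i).encode('ascii') * 20
              for i in range(1000)]

    # write_content_size embeds the decompressed size in each frame so the
    # batch decompressor doesn't need the sizes supplied separately.
    cctx = zstd.ZstdCompressor(level=1, write_content_size=True)
    # Batch compress on 8 threads; results land in shared output buffers.
    compressed = cctx.multi_compress_to_buffer(inputs, threads=8)

    dctx = zstd.ZstdDecompressor()
    # Batch decompress the resulting buffer collection.
    decompressed = dctx.multi_decompress_to_buffer(compressed, threads=8)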
So, my consumer grade desktop i7-6700K is capable of emitting decompressed data at over 4 GB/s with Python. That's pretty good if you ask me. (Full disclosure: the timings were taken just around the compression operation itself; the overhead of loading data into memory was not taken into account. See the bench.py script in the source repository for more.)
Long Distance Matching Mode
Negative compression levels take zstandard into performance territory that has historically been reserved for compression formats like lz4 that are optimized for that domain. Long distance matching takes zstandard in the other direction, towards compression formats that aim to achieve optimal compression ratios at the expense of time and memory usage.
python-zstandard 0.9 supports long distance matching and all the configurable parameters exposed by the zstandard API.
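Here's a rough sketch of enabling long distance matching through the compression parameters API. The exact spellings (ZstdCompressionParameters, from_level, enable_ldm, window_log, compression_params) follow the current project documentation and are an assumption for the 0.9 release, which may name them slightly differently:

    import zstandard as zstd

    # window_log=29 requests a 512 MB (2**29 byte) long distance window.
    params = zstd.ZstdCompressionParameters.from_level(
        19,
        window_log=29,
        enable_ldm=True,
        threads=8,
    )

    cctx = zstd.ZstdCompressor(compression_params=params)
    # Hypothetical file names.
    with open('firefox.bundle', 'rb') as ifh, open('out.zst', 'wb') as ofh:
        cctx.copy_stream(ifh, ofh)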
I'm not going to capture many performance numbers here because python-zstandard performs about the same as the C implementation: LDM mode spends most of its time in zstandard C code. If you are interested in numbers, I recommend reading the zstandard 1.3.2 and 1.3.4 release notes.
I will, however, underscore that zstandard can achieve close to lzma's compression ratios (what the xz utility uses) while completely smoking lzma on decompression speed. For a bundle of the Firefox Mercurial repository, zstandard level 19 with a long distance window size of 512 MB using 8 threads compresses to 1,033,633,309 bytes (16.0%) in ~260s wall, 1,730s CPU. xz -T8 -8 compresses to 1,009,233,160 bytes (15.6%) in ~367s wall, ~2,790s CPU.
On the decompression side, zstandard takes ~4.8s and runs at ~1,350 MB/s as measured on the output side while xz takes ~54s and runs at ~114 MB/s. Zstandard, however, does use a lot more memory than xz for decompression, so that performance comes with a cost (512 MB versus 32 MB for this configuration).
Other Notable Changes
python-zstandard now uses the advanced compression and decompression APIs everywhere. All tunable compression and decompression parameters are available to python-zstandard. This includes support for disabling magic headers in frames (saves 4 bytes per frame - this can matter for very small inputs, especially when using dictionary compression).
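As one example of what the advanced parameters unlock, here's a hedged sketch of producing and reading frames without the 4-byte magic header. The FORMAT_ZSTD1_MAGICLESS constant and the format arguments are taken from the current documentation and are an assumption for 0.9:

    import zstandard as zstd

    # Write frames without the 4-byte magic to save overhead on tiny inputs.
    params = zstd.ZstdCompressionParameters(
        compression_level=3,
        format=zstd.FORMAT_ZSTD1_MAGICLESS,
    )
    cctx = zstd.ZstdCompressor(compression_params=params)
    frame = cctx.compress(b'tiny payload')

    # The decompressor must be told to expect magic-less frames.
    dctx = zstd.ZstdDecompressor(format=zstd.FORMAT_ZSTD1_MAGICLESS)
    print(dctx.decompress(frame, max_output_size=1024))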
The full dictionary training API is exposed. Dictionary training can now use multiple threads.
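A minimal sketch of the training API, assuming the train_dictionary() function with a threads argument as described in the project documentation; the sample corpus is hypothetical (real training wants many realistic samples):

    import zstandard as zstd

    # A corpus of many small, similar documents (hypothetical samples).
    samples = [('record %d: some repeated payload' % i).encode('ascii')
               for i in range(1000)]

    # Train a 16 KB dictionary using 4 threads.
    dict_data = zstd.train_dictionary(16384, samples, threads=4)

    # Use the dictionary for compression and decompression.
    # write_content_size embeds the size so decompress() knows how much
    # output to allocate.
    cctx = zstd.ZstdCompressor(dict_data=dict_data, write_content_size=True)
    dctx = zstd.ZstdDecompressor(dict_data=dict_data)
    assert dctx.decompress(cctx.compress(samples[0])) == samples[0]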
There are a handful of utility functions for inspecting zstandard frames, querying the state of compressors, etc.
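For example, assuming the get_frame_parameters() helper behaves as currently documented, you can inspect a frame header without decompressing anything:

    import zstandard as zstd

    frame = zstd.ZstdCompressor(level=3).compress(b'hello world')

    # Parse just the frame header; no decompression is performed.
    params = zstd.get_frame_parameters(frame)

    # content_size may report unknown if the frame doesn't embed it.
    print(params.content_size, params.window_size,
          params.dict_id, params.has_checksum)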
Lots of work has gone into shoring up the code base. We now build with warnings as errors in CI. I performed a number of focused auditing passes to fix various classes of deficiencies in the C code. This includes use of the buffer protocol: python-zstandard is now able to accept any Python object that provides a view into its underlying raw data.
Decompression contexts can now be constructed with a max memory threshold, so attempts to decompress something that would require more memory will result in an error.
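A sketch of that guard, assuming the max_window_size argument to ZstdDecompressor is the knob for the memory ceiling (the exact name in 0.9 is an assumption on my part; the input file is hypothetical):

    import zstandard as zstd

    with open('untrusted.zst', 'rb') as fh:   # hypothetical input file
        frame = fh.read()

    # Refuse to decompress frames that need more than a 16 MB window.
    dctx = zstd.ZstdDecompressor(max_window_size=16 * 1024 * 1024)

    try:
        dctx.decompress(frame)
    except zstd.ZstdError as exc:
        print('rejected frame:', exc)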
See the full release notes for more.
Conclusion
Since I last released a major version of python-zstandard, a lot has changed in the zstandard world. As I blogged last year, zstandard circa early 2017 was a very compelling compression format: it already outperformed popular compression formats like zlib and bzip2 across the board. As a general purpose compression format, it made a compelling case for itself. In my mind, brotli was its only real challenger.
As I wrote then, zstandard isn't perfect. (Nothing is.) But a year later, it is refreshing to see advancements.
A criticism one year ago was zstandard was pretty good as a general purpose compression format but it wasn't great if you live at the fringes. If you were a speed freak, you'd probably use lz4. If you cared about compression ratios, you'd probably use lzma. But recent releases of zstandard have made huge strides into the territory of these niche formats. Negative compression levels allow zstandard to flirt with lz4's performance. Long distance matching allows zstandard to achieve close to lzma's compression ratios. This is a big friggin deal because it makes it much, much harder to justify a domain-specific compression format over zstandard. I think lzma still has a significant edge for ultra compression ratios when memory utilization is a concern. But for many consumers, memory is readily available and it is easy to justify trading potentially hundreds of megabytes of memory to achieve a 10x speedup for decompression. Even if you aren't willing to sacrifice more memory, the ability to tweak compression parameters is huge. You can do things like store multiple versions of a compressed document and conditionally serve the one most appropriate for the client, all while running the same zstandard-only code on the client. That's huge.
A year later, zstandard continues to impress me for its set of features and its versatility. The library is continuing to evolve - all while maintaining backwards compatibility on the decoding side. (That's a sign of a good format design if you ask me.) I was honestly shocked to see that zstandard was able to change its compression settings in a way that allowed it to compete with lz4 and lzma without requiring a format change.
The more I use zstandard, the more I think that everyone should use it and that popular compression formats just aren't cut out for modern computing any more. Every time I download a zlib/gz or bzip2 compressed archive, I'm thinking: if only they used zstandard, this archive would be smaller, it would have decompressed already, and I wouldn't be thinking about how annoying it is to wait for compression operations to complete. In my mind, zstandard is such an obvious advancement over the status quo and is such a versatile format - now covering the gamut of super fast compression to ultra ratios - that it is bordering on negligent to not use zstandard. With the removal of the controversial patent rights grant license clause in zstandard 1.3.1, any justifiable resistance to widespread adoption of zstandard has been eliminated. Zstandard is objectively superior for many workloads and I heavily encourage its use. I believe python-zstandard provides a high-quality interface to zstandard and I encourage you to give it and zstandard a try the next time you compress data.
If you run into any problems or want to get involved with development, python-zstandard lives at indygreg/python-zstandard on GitHub.
(I updated the post on 2018-05-16 to remove a paragraph about zstandard competition. In the original post, I unfairly compared zstandard to Snappy instead of Brotli and made some inaccurate statements around that comparison.)