I almost ignored this, for obvious reasons. zlib is not going to become obsolete: it has been ubiquitous for many years, it's fast, and it's open source. Brotli is more focused on the web (static dictionary, very slow compression, but fast decompression). The other mentioned competitor, BitKnit, will supposedly be proprietary (part of Oodle). The 20% gain in the benchmark at the same speed as zlib is interesting, but for "zlib becoming obsolete", it has to do better.
Anyway, BitKnit is from ryg, the author of kkrunchy, which uses some advanced techniques, so I guess it has potential. But a zlib killer? No.
Damn kids. They're all alike.
No offence to Rich, who has done some great work, but perhaps the graphs rather suggest that LZHAM is in danger of becoming obsolete? If I understood the results correctly, there are now competitors among the offline compression algorithms that offer ratios similar to LZHAM's, but at speeds closer to zlib's.
Also, it would have been interesting to see how the latest zstd would have fared.
One thing to note is that we are not doing binary context modeling yet with vanilla brotli. We do it in WOFF 2.0, where the binary context model is switched on. We already know how to do it in a more expensive way (brute-forcing), but haven't implemented good heuristics to decide on it quickly. Once we add that, brotli should become a bit stronger on binary data (and possibly on 7-bit ASCII text). I'm talking about using the LSB6 context mode instead of the UTF8 mode.
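For reference, the LSB6 mode mentioned above is simple to sketch: per RFC 7932, its context ID is just the low six bits of the previous byte, whereas the UTF8 mode maps the two previous bytes through text-oriented lookup tables (omitted here). A minimal illustration in Python, with function names of my own invention:

```python
def context_id_lsb6(p1: int) -> int:
    # LSB6 context mode (RFC 7932): the context ID is the low 6 bits
    # of the previous byte -- a reasonable default for binary data.
    return p1 & 0x3F

def context_id_msb6(p1: int) -> int:
    # MSB6 context mode: the high 6 bits of the previous byte.
    return (p1 >> 2) & 0x3F
```

Both modes yield 64 possible contexts, so each literal picks one of up to 64 prefix codes based on its predecessor byte.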
Some proprietary thing from RAD Game Tools would not replace zlib, no matter what. zlib took its place for quite a few reasons, and the permissive license was one of them, by the way. Realistically speaking, most web servers run *nix-like OSes; good luck pushing proprietary codecs there. Another factor is that zlib's ratio-to-speed tradeoff looks quite sensible. And it's not bloated: bringing in large text-oriented dictionaries to replace zlib in, say, PNG sounds like an odd thing to do.
Ratio is cool, etc. But does Brotli have anything to counter zlib's level-1 compression speed? It's several times slower, even on its fastest levels, and that is going to be a problem for servers. There is already SLZ, which pulls a smart trick with Huffman coding, avoiding building Huffman tables on the fly while retaining zlib stream compatibility. The compression ratio goes down, but compression speed skyrockets and the size of the compression state shrinks a lot, which makes server applications happy. That is something to consider, especially when targeting the web, eh? Sure, you can cache static data, compressing it once and serving from a precompressed cache, but what about compressing dynamically generated pages?
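The ratio-versus-speed knob described above can be eyeballed with stock zlib from the Python standard library. This is only a rough illustration of why servers like cheap compression levels, not a benchmark of SLZ itself, and the payload is a made-up stand-in for a generated HTML page:

```python
import time
import zlib

# A synthetic, repetitive payload standing in for a dynamically
# generated HTML page.
data = b"<li><a href='/item/42'>item 42</a></li>\n" * 5000

for level in (1, 6, 9):
    t0 = time.perf_counter()
    blob = zlib.compress(data, level)
    ms = (time.perf_counter() - t0) * 1e3
    print(f"level {level}: {len(blob):6d} bytes in {ms:.2f} ms")
    assert zlib.decompress(blob) == data  # round-trip sanity check
```

On most machines level 1 is several times faster than level 9 while giving up surprisingly little ratio on text like this; that gap is roughly the niche SLZ pushes even further.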
Then, brotli is about 1.5-2x slower to decompress on the ARM board where I experimented with lzbench (just because I happened to have a dev environment set up there), which is where I need speed most. Sure, it's faster than LZMA, but it's seriously slower than zlib, both to compress and to decompress. I can't really call it an equivalent replacement; it rather takes a place in between zlib and LZMA. The overall tradeoff looks quite good, but I have some technical issues with calling it a "zlib replacement".
That's where I would agree with @Inikep's opinion that speed matters, and that leaning too heavily towards ratio can be a bad idea. There are plenty of heavyweight monsters with impressive ratios, but what about speed? A good ratio is only fun if it comes with decent speed, ideally on various platforms: ARM, MIPS, PowerPC, and so on. Somehow, zlib proves to be not a bad tradeoff on various targets; it probably hasn't been optimized too badly over time. For example, zstd beats zlib into the dust on x86, but on my ARM board it shows only about 20% better speed, with a ratio more or less comparable to zlib's. An improvement? Generally, yes. But I do not think everyone will be tempted to rewrite their programs here and now, especially on ARM.
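For anyone who wants to repeat the exercise on their own board, here is a crude stand-in for an lzbench-style decompression measurement, using only stdlib zlib (the payload and any throughput numbers you get are obviously placeholders, and the helper name is mine):

```python
import time
import zlib

def decompress_mbps(blob: bytes, raw_len: int, iters: int = 20) -> float:
    # Time repeated decompression and report throughput in MB/s.
    t0 = time.perf_counter()
    for _ in range(iters):
        zlib.decompress(blob)
    elapsed = time.perf_counter() - t0
    return raw_len * iters / elapsed / 1e6

data = bytes(range(256)) * 4000   # ~1 MB of mildly compressible data
blob = zlib.compress(data, 6)
print(f"zlib decompress: {decompress_mbps(blob, len(data)):.0f} MB/s")
```

Swapping in another codec's bindings (zstd, brotli) on the same data gives the relative per-platform numbers that actually matter here.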
P.S. Yes, I like small, neat, and FAST decompressors; what a badass I am. Monstrous compressors are a bit less of an issue, but still, as the SLZ example shows, compressor speed matters too, especially on the web.
When reading the title I expected to see zstd, as I've been thinking for a long time that it is the first codec with a chance of obsoleting zlib.
And so far I view it as the only codec with such potential.
Sure, it's a bit of a silly title. But that seems to work well to attract attention from the mainstream web sites, so maybe it's a clever title.
Neither BitKnit nor Brotli will "make zlib obsolete".
If you really cared about compression ratio & speed, zlib has been obsolete for 20+ years. LZX killed it thoroughly almost immediately.
What makes zlib so ubiquitous is that it's simple, reasonably fast, light on memory, portable, and open source.
Really the only zlib-killer around is ZStd.
Last edited by cbloom; 3rd August 2016 at 20:01.
Ironically, you still have to download web pages compressed by this 20+-year-old zlib thing; I mean gzip content encoding. And PNGs, even on this web site, rely on zlib as well. LZX can be cool, but since it's proprietary it only got used by MS in a few selected cases, nowhere close to zlib's ubiquity. The same would happen to BitKnit, etc.
And zstd is a very strange kind of zlib killer. It would make sense if there were no zlib. But rewiring all programs to get more or less the same ratio, and a speedup only on x86? Hmm... x86s aren't exactly short on CPU power.