Thread: New Compression Codecs Risk Making Zlib Obsolete

  1. #1
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    456
    Thanks
    46
    Thanked 164 Times in 118 Posts

    New Compression Codecs Risk Making Zlib Obsolete


  2. #2
    Programmer schnaader's Avatar
    Join Date
    May 2008
    Location
    Hessen, Germany
    Posts
    539
    Thanks
    192
    Thanked 174 Times in 81 Posts
    I almost ignored this, for obvious reasons. Zlib is not going to become obsolete; it has been far too common for many years, it's fast, and it's open source. Brotli is more focused on the web (static dictionary, very slow compression, but fast decompression). The other competitor mentioned, BitKnit, will supposedly be proprietary (part of Oodle). The 20% gain in the benchmark at the same speed as zlib is interesting, but to make zlib obsolete, it will have to do better.

    Anyway, BitKnit is from ryg, the author of kkrunchy, which uses some advanced techniques, so I guess it has potential. But a zlib killer? No.
    http://schnaader.info
    Damn kids. They're all alike.

  3. #3
    Member jibz's Avatar
    Join Date
    Jan 2015
    Location
    Denmark
    Posts
    114
    Thanks
    91
    Thanked 69 Times in 49 Posts
    No offence to Rich, who has done some great work, but perhaps the graphs rather suggest that LZHAM is in danger of becoming obsolete? If I understood the results correctly, among the algorithms for offline compression there are now competitors that offer ratios similar to LZHAM's, but at speeds closer to zlib's.

    Also, it would have been interesting to see how the latest zstd would have fared.

  4. #4
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    667
    Thanks
    204
    Thanked 241 Times in 146 Posts
    One thing to note is that we are not doing binary context modeling yet with vanilla brotli. We do it with WOFF 2.0, where the binary context model is switched on. We already know how to do it in a more expensive way (brute-forcing the choice), but haven't implemented good heuristics to make that decision quickly. Once we add them, brotli should become a bit stronger on binary data (and possibly on 7-bit ASCII text). I'm talking about using the LSB6 context mode instead of the UTF-8 mode.
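
    The context-mode decision itself is internal to the encoder, but the public C API already lets a caller hint the content type, and the font mode is the path WOFF 2.0 takes. A minimal sketch comparing the three mode hints on one buffer (the sample string is a stand-in; feed it real binary data):

    Code:
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <brotli/encode.h>

    /* Compress the input once with the given mode hint and return the size.
     * BROTLI_MODE_TEXT favors the UTF-8 context model, BROTLI_MODE_FONT is
     * what WOFF 2.0 uses, and BROTLI_MODE_GENERIC lets the encoder decide. */
    static size_t try_mode(const uint8_t *in, size_t len, BrotliEncoderMode mode)
    {
        size_t out_len = BrotliEncoderMaxCompressedSize(len);
        uint8_t *out = malloc(out_len);
        if (out == NULL)
            return 0;
        if (!BrotliEncoderCompress(BROTLI_MAX_QUALITY, BROTLI_DEFAULT_WINDOW,
                                   mode, len, in, &out_len, out))
            out_len = 0; /* compression failed */
        free(out);
        return out_len;
    }

    int main(void)
    {
        const char *sample = "stand-in payload; use real binary data here";
        const uint8_t *p = (const uint8_t *)sample;
        size_t n = strlen(sample);

        printf("generic: %zu bytes\n", try_mode(p, n, BROTLI_MODE_GENERIC));
        printf("text:    %zu bytes\n", try_mode(p, n, BROTLI_MODE_TEXT));
        printf("font:    %zu bytes\n", try_mode(p, n, BROTLI_MODE_FONT));
        return 0;
    }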

  5. #5
    Member
    Join Date
    Nov 2015
    Location
    boot ROM
    Posts
    83
    Thanks
    25
    Thanked 15 Times in 13 Posts
    Some proprietary thing from RAD Game Tools would not replace zlib, no matter what. Zlib earned its place for quite a few reasons, and the permissive license was one of them, btw. Realistically speaking, most web servers run *nix-like OSes, so good luck with proprietary codecs there. Another factor is that zlib's compression-ratio-to-speed tradeoff looks quite sensible. And it's not bloated: bringing a large text-oriented dictionary along just to replace zlib in, say, PNG sounds like an odd thing to do.

    Ratio .. is cool, etc. But does Brotli have anything to counter, say, zlib level 1 compression speed? It's several times slower, even on its fastest levels, and that's going to be a problem for servers. There is already SLZ, which does a smart trick with Huffman coding, avoiding the work of building Huffman tables entirely while retaining zlib stream compatibility (see the sketch below). Compression ratio goes down, but compression speed skyrockets and the size of the compression state shrinks a lot, which makes server applications happy. That's something to consider, especially when targeting the web, eh? Sure, you can cache static data, compressing once and serving from a precompressed cache, but what about compressing dynamically generated pages?
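
    I don't have SLZ's own API at hand, so here is the same idea expressed with stock zlib: the Z_FIXED strategy forces the static deflate Huffman tables, so the encoder never builds dynamic code trees, and the output remains a valid zlib stream. A rough sketch (input and buffer size are made up for illustration):

    Code:
    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    int main(void)
    {
        const char *input = "dynamically generated page body goes here";
        unsigned char out[1024]; /* illustration; size with deflateBound() in real code */
        z_stream strm;
        memset(&strm, 0, sizeof(strm));

        /* Z_FIXED: use the static deflate Huffman tables instead of building
         * dynamic ones; faster and less state, at some cost in ratio. */
        if (deflateInit2(&strm, Z_BEST_SPEED, Z_DEFLATED, 15, 8, Z_FIXED) != Z_OK)
            return 1;

        strm.next_in = (unsigned char *)input;
        strm.avail_in = (uInt)strlen(input);
        strm.next_out = out;
        strm.avail_out = sizeof(out);

        if (deflate(&strm, Z_FINISH) != Z_STREAM_END) {
            deflateEnd(&strm);
            return 1;
        }
        printf("compressed %lu -> %lu bytes with fixed Huffman codes\n",
               strm.total_in, strm.total_out);
        deflateEnd(&strm);
        return 0;
    }

    SLZ goes further (as noted above, it also shrinks the compression state a lot); the fixed-Huffman shortcut shown here is just the visible part of the trick.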

    Then, brotli is about 1.5-2x slower to decompress on, e.g., the ARM board where I experimented with lzbench (I happened to have a dev environment set up there), which is where I need speed most. Sure, it's faster than LZMA, but it's seriously slower than zlib, both to compress and to decompress. I can't call it an equivalent replacement; it rather sits in between zlib and LZMA. The overall tradeoff looks quite good, but I have some technical issues with calling it a "zlib replacement".

    That's where I would agree with @Inikep's opinion that speed matters, and that leaning too heavily toward ratio can be a bad idea. There are plenty of heavy monsters with impressive ratios, but what about speed? A good ratio is only fun if it comes with decent speed, ideally on VARIOUS platforms, e.g. ARM, MIPS, PowerPC, ... . Somehow, zlib proves to be not a bad tradeoff across various hardware; it has probably been optimized rather well over time. E.g. zstd beats zlib into the dust on x86, but on my ARM board it shows just 20% better speed with a ratio more or less comparable to zlib's. An improvement? Generally, yes. But I do not think everyone will be tempted to rewrite their programs here and now, especially if their target is ARM.

    P.S. Yes, I like small, neat, and FAST decompressors, what a badass I am. Monstrous compressors are ... a bit less of an issue, but still, as the SLZ example shows, compressor weight matters too, especially on the web.

  6. #6
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    When reading the title I expected to see zstd, as I've been thinking for a long time that it is the first codec with a chance of obsoleting zlib.
    And so far I view it as the only codec with such potential.

  7. #7
    Member
    Join Date
    Sep 2010
    Location
    US
    Posts
    126
    Thanks
    4
    Thanked 69 Times in 29 Posts
    Sure, it's a bit of a silly title. But that seems to work well to attract attention from the mainstream web sites, so maybe it's a clever title.

    Neither BitKnit nor Brotli will "make zlib obsolete".

    If all you care about is compression ratio & speed, zlib has been obsolete for 20+ years. LZX killed it thoroughly almost immediately.

    What makes zlib so ubiquitous is that it's simple, reasonably fast, light on memory, portable, and open source.

    Really the only zlib-killer around is Zstd.
    Last edited by cbloom; 3rd August 2016 at 20:01.

  8. #8
    Member
    Join Date
    Nov 2015
    Location
    boot ROM
    Posts
    83
    Thanks
    25
    Thanked 15 Times in 13 Posts
    Quote Originally Posted by cbloom:
    If all you care about is compression ratio & speed, zlib has been obsolete for 20+ years. LZX killed it thoroughly almost immediately.
    Ironically, you still have to download web pages compressed by this 20+-year-old zlib thing; I mean gzip content encoding. And PNGs, even on this web site, rely on zlib as well. LZX may be cool, etc., but since it's proprietary it only got used by MS in a few selected cases, nowhere close to zlib. The same would happen to BitKnit, etc.

    And zstd is a very strange kind of zlib killer. It would make sense if there were no zlib. But rewiring all programs to get more or less the same ratio, and a speedup only on x86? Hmm... x86s aren't exactly short on CPU power.
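
    For scale, here is what the one-shot "rewiring" amounts to: zstd's basic API has the same shape as zlib's compress()/uncompress() pair. A minimal sketch (the fixed buffers are for illustration only; real code would size them with ZSTD_compressBound()):

    Code:
    #include <stdio.h>
    #include <string.h>
    #include <zstd.h>

    int main(void)
    {
        const char *src = "page body or asset bytes go here";
        size_t src_len = strlen(src);

        /* One call compresses, just like zlib's compress2(). */
        char dst[256]; /* illustration; size with ZSTD_compressBound(src_len) */
        size_t dst_len = ZSTD_compress(dst, sizeof(dst), src, src_len, 3);
        if (ZSTD_isError(dst_len)) {
            fprintf(stderr, "compress failed: %s\n", ZSTD_getErrorName(dst_len));
            return 1;
        }
        printf("%zu -> %zu bytes at level 3\n", src_len, dst_len);

        /* Round-trip to verify, mirroring zlib's uncompress(). */
        char back[256];
        size_t back_len = ZSTD_decompress(back, sizeof(back), dst, dst_len);
        if (ZSTD_isError(back_len) || back_len != src_len ||
            memcmp(back, src, src_len) != 0)
            return 1;
        puts("round-trip OK");
        return 0;
    }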

