
Thread: smallz4

  1. #1
    Member
    Join Date
    Mar 2013
    Location
    Berlin
    Posts
    45
    Thanks
    14
    Thanked 71 Times in 30 Posts

    smallz4

    I posted a few times about my LZ4-compatible compressor which is based on optimal parsing.
    You can find the full source code on my website at http://create.stephan-brumme.com/smallz4/ along with a short explanation of how optimal parsing can be applied to LZ4.

    Compressing enwik9:

    lz4 r128
    509,454,875 bytes 6 seconds

    lz4 r128 at level 9
    374,905,570 bytes 33 seconds

    lz4 r128 at level 9 (and --no-frame-crc -BD )
    374,153,457 bytes 33 seconds

    LZ4X 1.12 at level 9
    372,068,631 bytes 164 seconds

    smallz4 0.5 at level 9
    371,681,075 bytes 242 seconds


    @admins: Maybe it's a good idea to move some or all of my postings in encode's thread to this new thread. Thanks !

  2. The Following 4 Users Say Thank You to stbrumme For This Useful Post:

    Bulat Ziganshin (1st September 2016),comp1 (1st September 2016),Cyan (1st September 2016),Marsu42 (1st November 2016)

  3. #2
    Member
    Join Date
    Mar 2013
    Location
    Berlin
    Posts
    45
    Thanks
    14
    Thanked 71 Times in 30 Posts
    Stefan Atev suggested a few changes http://encode.ru/threads/2447-LZ4X-A...ll=1#post49802 to the smallz4 match finder and indeed, his code, along with a few modifications from my side, led to a significant speed-up:
    - compressing enwik9 is done in 4 minutes instead of 6 minutes (actually 242 vs. 362 seconds)
    - the compression ratio didn't change at all because the match finder still returns the same matches as before - just faster

    Version 0.5 of smallz4 is now available on my website http://create.stephan-brumme.com/smallz4/ including full source code and an x64 Windows binary (scroll down on my website or - for the lazy ones - just click here).

    Stefan observed speed improvements by removing the constructor of Match. While this could be true on his Visual C++ installation, I didn't measure any differences using GCC (Linux x64) and Clang (Windows x64).

    However, he compared strings, i.e. potential matches, in a different way than I did.
    I describe his approach in my code comments, too:

    // let's introduce a new pointer atLeast that points to the first "new" byte of a potential longer match
    // the idea is to split the comparison algorithm into 2 phases
    // (1) scan backward from atLeast to current, abort if mismatch
    // (2) scan forward until a mismatch is found and store length/distance of this new best match
    //
    //    current                       atLeast
    //       |                             |
    //       <<<<<<<<<<< phase 1 <<<<<<<<<<<
    //                                     >>> phase 2 >>>


    Phase 1 only relies on 4-bytes-at-once comparisons.
    A pointer named compare starts at atLeast and runs backward as long as it's bigger than current.
    In my code you will find:

    // note: - the first four bytes always match
    //       - in the last iteration, compare is either current + 1 or current + 2 or current + 3
    //       - therefore we compare a few bytes twice => but a check to skip these redundant comparisons is more expensive


    If all comparisons succeed, we have a new best match.
    A simple forward scan then figures out how many more bytes still match.
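
    For illustration, here is a minimal sketch of that two-phase comparison. The names current, compare and atLeast follow the description above, while the signature, the candidate pointer and the exact loop bounds are my assumptions - this is not a copy of the smallz4 source:

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    // current   - current position in the input
    // candidate - earlier position that might yield a longer match
    // atLeast   - current + length of the best match found so far
    // end       - one past the last readable input byte
    // returns the match length at candidate if it matches at least up to atLeast, otherwise 0;
    // the caller keeps the result only if it exceeds the previous best length
    static size_t matchIfLonger(const unsigned char* current,  const unsigned char* candidate,
                                const unsigned char* atLeast,  const unsigned char* end)
    {
      const ptrdiff_t back = current - candidate; // distance between current position and candidate

      // phase 1: walk backward from atLeast to current, 4 bytes per step;
      //          the last step may re-check a few bytes close to current (cheaper than an extra branch)
      for (const unsigned char* compare = atLeast; compare > current; compare -= 4)
      {
        const unsigned char* at = (compare - current >= 4) ? compare - 4 : current;
        uint32_t a, b;
        memcpy(&a, at,        sizeof(a));
        memcpy(&b, at - back, sizeof(b));
        if (a != b)
          return 0; // mismatch => candidate cannot beat the best match
      }

      // phase 2: simple forward scan from atLeast until the first mismatch
      const unsigned char* scan = atLeast;
      while (scan < end && *scan == *(scan - back))
        scan++;
      return (size_t)(scan - current);
    }

    The point of starting near atLeast is that a candidate which cannot beat the current best match usually fails in its very first 4-byte comparison, so phase 1 aborts almost immediately.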

  4. #3
    Programmer
    Join Date
    May 2008
    Location
    PL
    Posts
    307
    Thanks
    68
    Thanked 166 Times in 63 Posts
    Quote Originally Posted by stbrumme View Post
    lz4 r128 at level 9 (and --no-frame-crc -BD )
    374,153,457 bytes 33 seconds
    How does it compare with lz4 at level 16 (the highest one)?

  5. The Following User Says Thank You to inikep For This Useful Post:

    stbrumme (1st September 2016)

  6. #4
    Member
    Join Date
    Mar 2013
    Location
    Berlin
    Posts
    45
    Thanks
    14
    Thanked 71 Times in 30 Posts
    I wasn't aware that LZ4 supports levels above 9. It's not mentioned in the help (lz4 -h) or the extended help (lz4 -H).

    Anyway, here are the results:
    lz4 r128 at level  9 (and --no-frame-crc -BD ) => 374,153,457 bytes 
    lz4 r128 at level 16 (and --no-frame-crc -BD ) => 374,031,753 bytes

    Repeating compression at LZ4's level 16 produces different files; that's something I don't see at the lower compression levels (they always produce the same result):
    run 1: 374,031,753 bytes
    run 2: 374,031,771 bytes
    run 3: 374,031,763 bytes
    Last edited by stbrumme; 1st September 2016 at 18:52.

  7. The Following 2 Users Say Thank You to stbrumme For This Useful Post:

    Cyan (1st September 2016),inikep (2nd September 2016)

  8. #5
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,497
    Thanks
    733
    Thanked 659 Times in 354 Posts
    It would be really great if you could move development to GitHub.

  9. #6
    Member
    Join Date
    Mar 2013
    Location
    Berlin
    Posts
    45
    Thanks
    14
    Thanked 71 Times in 30 Posts
    There's a Git repository on my website, too:

    git clone http://create.stephan-brumme.com/smallz4/.git

  10. #7
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,497
    Thanks
    733
    Thanked 659 Times in 354 Posts
    I know, and it makes me wonder why you don't want to use GitHub instead. It's so much easier to watch and participate in projects when they all live on a single platform.

  11. #8
    Member
    Join Date
    Aug 2016
    Location
    USA
    Posts
    41
    Thanks
    9
    Thanked 16 Times in 11 Posts
    Glad you found the improvements worthwhile; it may be a good idea to make SmallLZ into a header-only library - it would be easier to include in other C++ projects. BTW, I had found it faster to scan forward a byte at a time rather than 4 at a time plus handling of the remainder, but that may be a VS 2015 issue; it probably also depends on how much you get to extend the match. It's worth trying only the "slow loop".

  12. #9
    Member
    Join Date
    Sep 2008
    Location
    France
    Posts
    856
    Thanks
    447
    Thanked 254 Times in 103 Posts
    > Repeating compression at LZ4's level 16 produces different files

    I tried to reproduce your observation, but failed.

    Result was a stable 374031491 at all attempts.

    Not sure what could be different ...

  13. #10
    Member
    Join Date
    Mar 2013
    Location
    Berlin
    Posts
    45
    Thanks
    14
    Thanked 71 Times in 30 Posts
    The same behavior was observed on other levels > 9, too.
    Running CentOS 6 with the lz4.x86_64.r131-1.el6 RPM from the EPEL repository (the binary reports r128 on the command line).
    I tested on two machines with almost identical hardware, both Core i7 2600, and the same OS.

    Edit: On a completely different machine, an old Xeon X3323 running Debian and lz4 compiled from scratch (GitHub HEAD) in x32 mode with GCC 4.4, lz4 -16 --no-frame-crc -BD produces stable output: 374031726 bytes. Always the same in 4 consecutive runs - but not your filesize. Level 9: 374153421 bytes and without additional options: 374905570 bytes.

    Edit2: my little Odroid C1 (Debian, lz4 compiled from scratch): lz4 -16 --no-frame-crc -BD => 374031773 bytes. Again, size didn't change when repeating the process. And unfortunately another different filesize. Level 9: 374153468 bytes and without additional options: 374905570 bytes.

    Piping lz4 | lz4 -d reproduces a file with the original hash, without any errors, in all cases. Plain lz4 -9 is consistent across all my systems, but lz4 -16 --no-frame-crc isn't.

    md5sum enwik9
    e206c3450ac99950df65bf70ef61a12d enwik9
    Last edited by stbrumme; 1st September 2016 at 23:06. Reason: ran tests on more computers

  14. The Following User Says Thank You to stbrumme For This Useful Post:

    Cyan (2nd September 2016)

  15. #11
    Member
    Join Date
    Mar 2013
    Location
    Berlin
    Posts
    45
    Thanks
    14
    Thanked 71 Times in 30 Posts
    Quote Originally Posted by Stefan Atev View Post
    [...] it may be a good idea to make SmallLZ into a header-only library - it would be easier to include in other C++ projects [...]
    And I just finished refactoring the code to be a header-only library ... today version 0.6 was released !
    There are no changes to the compression algorithm.

    You have to provide two functions, getBytesFromIn and sendBytesToOut:
    /// read several bytes and store at "data", return number of actually read bytes (return only zero if end of data reached)
    size_t getBytesFromIn(void* data, size_t numBytes);
    /// write a block of bytes
    void sendBytesToOut(const void* data, size_t numBytes);

    Then you call a single static function:
    #include "smallz4.h"
    smallz4::lz4(GET_BYTES, SEND_BYTES);

    Take a look at the test program smallz4.cpp
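
    For illustration, a minimal sketch of such a caller that compresses stdin to stdout; only the two callbacks and the smallz4::lz4(GET_BYTES, SEND_BYTES) call are taken from this post, everything else (and the real smallz4.cpp, which also handles files and the compression level) may look different:

    #include <cstdio>
    #include "smallz4.h"

    /// read several bytes and store at "data", return number of actually read bytes
    size_t getBytesFromIn(void* data, size_t numBytes)
    {
      return fread(data, 1, numBytes, stdin);
    }

    /// write a block of bytes
    void sendBytesToOut(const void* data, size_t numBytes)
    {
      fwrite(data, 1, numBytes, stdout);
    }

    int main()
    {
      // compress stdin to stdout with the default settings
      smallz4::lz4(getBytesFromIn, sendBytesToOut);
      return 0;
    }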

    The only "new" feature: if you start the program without any parameters and it's not connected to any pipes, then the help screen will be displayed (same as ./smallz4 -h).

    In addition to the Windows x64 binaries, there are statically linked Linux binaries available as well: http://create.stephan-brumme.com/smallz4/#download
    (it's my first time building static Linux binaries, please report problems !)

  16. The Following 2 Users Say Thank You to stbrumme For This Useful Post:

    Bulat Ziganshin (9th September 2016),Stefan Atev (9th September 2016)

  17. #12
    Member
    Join Date
    Aug 2016
    Location
    USA
    Posts
    41
    Thanks
    9
    Thanked 16 Times in 11 Posts
    That's great! My final suggestion would be to not use function pointers for the get/send bytes but simply accept functors for them - that way any call to these functions can be inlined:


    template <typename GetBytesFunctor, typename SendBytesFunctor>
    void compress(GetBytesFunctor get_bytes, SendBytesFunctor send_bytes);


    Since you call these functions frequently, and they may do almost nothing (in the case of buffered I/O), it is probably worth it to not have the indirect call that function pointers entail. The caller can pass in C++ lambdas, random functors, static functions, and function pointers so you don't lose any functionality.
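
    As an illustration of that suggestion (a hypothetical sketch, not smallz4's actual API), the functor types become template parameters and the caller can hand in lambdas directly; the dummy body below just copies input to output to show the call pattern:

    #include <cstddef>
    #include <cstdio>

    template <typename GetBytesFunctor, typename SendBytesFunctor>
    void compress(GetBytesFunctor get_bytes, SendBytesFunctor send_bytes)
    {
      // dummy body: pump data from get_bytes to send_bytes; the calls can be
      // inlined because the functor types are known at compile time
      unsigned char buffer[65536];
      for (;;)
      {
        size_t numBytes = get_bytes(buffer, sizeof(buffer));
        if (numBytes == 0)
          break;
        send_bytes(buffer, numBytes);
      }
    }

    int main()
    {
      // non-capturing lambdas here, but capturing lambdas, functors, static
      // functions and plain function pointers all work the same way
      compress(
        [](void* data, size_t numBytes)       { return fread(data, 1, numBytes, stdin); },
        [](const void* data, size_t numBytes) { fwrite(data, 1, numBytes, stdout); });
      return 0;
    }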

  18. #13
    Member
    Join Date
    Mar 2013
    Location
    Berlin
    Posts
    45
    Thanks
    14
    Thanked 71 Times in 30 Posts
    There are actually not that many reads/writes. Using all the default settings, enwik8 needs 1528 reads and 125 writes.
    Enlarging BufferSize (line 77 in smallz4.h) reduces the number of reads => the default is reading 64k at once. I have to admit there is no good reason why I chose 64k, because these 64k chunks are immediately merged into a single 4M block.
    The program currently needs 5 writes per block (= the compressed result of 4 MB of input data): the 4-byte block size is written as four single bytes, each one a separate call to the write function (on little-endian systems, these four bytes could be written at once), plus one write for the compressed data.
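
    As a side note, the four length bytes could be collected in a small buffer and handed to the write function in a single call, independent of the host's endianness; a hypothetical helper along those lines (writeBlockSize is my name, sendBytesToOut as declared earlier):

    #include <cstddef>
    #include <cstdint>

    void sendBytesToOut(const void* data, size_t numBytes); // as declared earlier in this thread

    // emit the 4-byte block size with a single write call; serializing into a
    // buffer keeps LZ4's little-endian byte order regardless of the host CPU
    static void writeBlockSize(uint32_t blockSize)
    {
      const unsigned char bytes[4] =
      {
        (unsigned char) (blockSize        & 0xFF),
        (unsigned char)((blockSize >>  8) & 0xFF),
        (unsigned char)((blockSize >> 16) & 0xFF),
        (unsigned char)((blockSize >> 24) & 0xFF)
      };
      sendBytesToOut(bytes, sizeof(bytes)); // one write instead of four
    }

    That would bring each block down from 5 write calls to 2.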

  19. The Following User Says Thank You to stbrumme For This Useful Post:

    Stefan Atev (12th September 2016)

  20. #14
    Member
    Join Date
    Aug 2016
    Location
    USA
    Posts
    41
    Thanks
    9
    Thanked 16 Times in 11 Posts
    You're right - I did not look carefully enough; for such a small number of calls there would be no measurable overhead.

  21. #15
    Member
    Join Date
    Mar 2013
    Location
    Berlin
    Posts
    45
    Thanks
    14
    Thanked 71 Times in 30 Posts
    The program is now under the MIT license - that's the only reason I bumped its version number to 1.0.

    There are no changes to the algorithm but I added a simple Makefile.

    Windows x64 executable: http://create.stephan-brumme.com/sma...allz4-v1.0.exe
    Git repo: http://create.stephan-brumme.com/smallz4/.git

  22. The Following 3 Users Say Thank You to stbrumme For This Useful Post:

    Bulat Ziganshin (25th October 2016),Cyan (25th October 2016),Marsu42 (1st November 2016)

  23. #16
    Member
    Join Date
    Oct 2016
    Location
    Berlin
    Posts
    9
    Thanks
    8
    Thanked 0 Times in 0 Posts


    Quote Originally Posted by stbrumme View Post
    I posted a few times about my LZ4-compatible compressor which is based on optimal parsing.
    Innocent question from one observer: what's the performance in comparison to other algos? That's because trusty Wikipedia states in https://en.wikipedia.org/wiki/LZ4_(c...ion_algorithm):

    The algorithm gives a slightly worse compression ratio than the LZO algorithm – which in turn is worse than algorithms like gzip. However, compression speeds are similar to LZO and several times faster than gzip while decompression speeds can be significantly faster than LZO.
    Is your optimized LZ4 still worse than LZO in compression ratio?

    Quote Originally Posted by stbrumme View Post
    (it's my first time building static Linux binaries, please report problems !)
    Well, that's the opportunity to get acquainted with package formats like snap or flatpak :->

  24. #17
    Member
    Join Date
    Mar 2013
    Location
    Berlin
    Posts
    45
    Thanks
    14
    Thanked 71 Times in 30 Posts
    The LZO-compressed enwik9 test file (max. compression / level 9) has 366,349,780 bytes; the corresponding LZ4 file produced by smallz4 is about 1.5% larger.
    (on the Wikipedia page I found a link to tests with ARM-Linux kernels which were 8% larger when compressed with LZ4 instead of LZO).

    My program compresses about 3 MByte per second; lzop -9 on the same machine is about 3 times faster. However, decompression speed is far more important when using maximum compression levels: a quick test shows that decompressing enwik9.lz4 is about twice as fast as decompressing enwik9.lzo. And that's the reason they went with LZ4 for the Linux kernel.

  25. #18
    Member
    Join Date
    May 2008
    Location
    Germany
    Posts
    410
    Thanks
    37
    Thanked 60 Times in 37 Posts
    @stbrumme:

    The program description sounds very interesting, and "decompressing 2 times faster than lzo",

    but the program doesn't seem to work for me ...

    results:
    ---
    Microsoft Windows [Version 10.0.14393]
    (c) 2016 Microsoft Corporation. All rights reserved.


    c:\COMPRESS\PGM>smallz4-v1.0.exe -9 test.iso test.iso.lz4
    4 0 64

    c:\COMPRESS\PGM>
    ---
    I am using Windows 10 x64.

    The program outputs only: "4 0 64"

    Can you give me a hint?

    Best regards

  26. #19
    Member
    Join Date
    Mar 2013
    Location
    Berlin
    Posts
    45
    Thanks
    14
    Thanked 71 Times in 30 Posts
    Oh sh*t, I uploaded a binary which contained test code to detect whether the program was started from the command line or fed by a pipe (which works a bit differently on Windows systems than on Linux, my preferred dev system). The library itself wasn't affected at all, just the separate smallz4.cpp which handles basic stuff like file I/O and, well, command-line stuff.

    Download the correct Windows binary again (same URL): http://create.stephan-brumme.com/sma...allz4-v1.0.exe

    PS: When referring to decompression speed, I have the reference implementation of LZ4 in mind. My simplified smallz4cat implementation is a bit slower.

  27. #20
    Member
    Join Date
    May 2008
    Location
    Germany
    Posts
    410
    Thanks
    37
    Thanked 60 Times in 37 Posts
    @stbrumme: Thank you very much for your quick answer,
    but now the program complains about missing DLLs:
    libgcc_s_seh_64-1.dll and libstdc++_64-6.dll

    Can you please upload the necessary DLL files?

  28. #21
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,497
    Thanks
    733
    Thanked 659 Times in 354 Posts
    it's better to compile with "-static -s"

  29. #22
    Member
    Join Date
    Nov 2015
    Location
    Śląsk, PL
    Posts
    81
    Thanks
    9
    Thanked 13 Times in 11 Posts
    Quote Originally Posted by stbrumme View Post
    The LZO-compressed enwik9 test file (max. compression / level 9) has 366,349,780 bytes; the corresponding LZ4 file produced by smallz4 is about 1.5% larger.
    (on the Wikipedia page I found a link to tests with ARM-Linux kernels which were 8% larger when compressed with LZ4 instead of LZO).

    My program compresses about 3 MByte per second; lzop -9 on the same machine is about 3 times faster. However, decompression speed is far more important when using maximum compression levels: a quick test shows that decompressing enwik9.lz4 is about twice as fast as decompressing enwik9.lzo. And that's the reason they went with LZ4 for the Linux kernel.
    1. Which variant of LZO? There's a dozen.
    2. Most variants go to 999.
    At least if you use a library.

  30. #23
    Member
    Join Date
    Apr 2009
    Location
    here
    Posts
    202
    Thanks
    165
    Thanked 109 Times in 65 Posts
    Here are some static Windows builds. I wonder why no one else shares them...

    both x64 and x86 included, GCC 6.2


    not tested.
    Attached Files

  31. #24
    Member
    Join Date
    Mar 2013
    Location
    Berlin
    Posts
    45
    Thanks
    14
    Thanked 71 Times in 30 Posts
    I gave up messing around with LLVM/Clang on Windows and installed Visual C++ Express.
    A static x64 Windows binary without any dependencies, not even MSVCRT.DLL, can be found at the same URL as usual: http://create.stephan-brumme.com/sma...allz4-v1.0.exe

    A static MinGW build was about 700 kB, while this static VC++ build is little more than 130 kB.

    PS: When I talked about LZO, I meant the LZOP reference implementation by Oberhumer (v1.03).

  32. #25
    Member
    Join Date
    Nov 2015
    Location
    Śląsk, PL
    Posts
    81
    Thanks
    9
    Thanked 13 Times in 11 Posts
    Quote Originally Posted by stbrumme View Post
    I gave up messing around with LLVM/Clang on Windows and installed Visual C++ Express.
    A static x64 Windows binary without any dependencies, not even MSVCRT.DLL, can be found at the same URL as usual: http://create.stephan-brumme.com/sma...allz4-v1.0.exe

    A static MinGW build was about 700 kB, while this static VC++ build is little more than 130 kB.

    PS: When I talked about LZO, I meant the LZOP reference implementation by Oberhumer (v1.03).
    My memory served me wrong: 999 is not a level but a variant, which has a level too.
    Reading the code, it seems that lzop's level 9 is lzo1x_999 at level 9, which is indeed the strongest lzo1x.

  33. #26
    Member
    Join Date
    Nov 2015
    Location
    boot ROM
    Posts
    83
    Thanks
    25
    Thanked 15 Times in 13 Posts
    Quote Originally Posted by stbrumme View Post
    The LZO-compressed enwik9 test file (max. compression / level 9) has 366,349,780 bytes; the corresponding LZ4 file produced by smallz4 is about 1.5% larger.
    (on the Wikipedia page I found a link to tests with ARM-Linux kernels which were 8% larger when compressed with LZ4 instead of LZO).
    When it comes to Linux kernels, my experience suggests LZO compression (by lzop -9, which uses LZO1X as far as I understand) yields a noticeably better ratio than the original LZ4 (including the HC modes). So, at least with "stock" tools, LZO wins when it comes to ratio. On the LZ4 side, decompression is faster, on both x86_64 and ARM. Overall they hit somewhat different points in terms of speed vs. ratio.

    I've been curious what LZ4X could do, but it choked on a large chunk of zeros. After all, an uncompressed kernel image can contain something like 1 megabyte of zeros, and the LZ4 bitstream isn't really meant to represent something like that in an efficient manner: merely encoding a length on the order of 1M in the LZ4 representation implies several KiB of redundant identical bytes, disregarding any other limits, which obviously hurts ratio. Not to mention it could be a better idea to use a larger window (e.g. there are quite a few matching messages at considerable distances). That said, LZ5 (v1.5) manages to reach quite impressive ratios, probably thanks to its larger window and decent match finder (despite limitations of the 1.x bitstream similar to LZ4's), while decompression speed is more or less similar to LZO1X. Getting LZO-like decompression speed with considerably smaller data is a good deal. Now there is LZ5 v2; its ratio has somewhat suffered, but at least on x86_64 its speed has seriously improved (I haven't tried LZ5 v2 on ARM things yet). So it also hits a somewhat different point. So I can't readily name competing algos for both LZ5 v1 and v2, at least among open-source things.

    Another interesting observation is that LZO1Z gives similar decompression speeds but beats LZO1X on this particular kind of data every time when it comes to ratio. So LZO1X isn't the best one could get out of LZO on a Linux kernel; I guess it's only used because lzop exists (and is easy to install on Linux).

    I guess it could be interesting to see how smallz4 performs against LZO. Somehow I've missed it.

  34. #27
    Member
    Join Date
    Nov 2015
    Location
    Śląsk, PL
    Posts
    81
    Thanks
    9
    Thanked 13 Times in 11 Posts
    Quote Originally Posted by xcrh View Post
    So I can't readily name competing algos for both LZ5 v1 and v2, at least among open-source things.
    LZO :P
    Also, Shrinker. There may be something in the rich Nakamichi family too, I don't know.

  35. #28
    Member
    Join Date
    Nov 2015
    Location
    boot ROM
    Posts
    83
    Thanks
    25
    Thanked 15 Times in 13 Posts
    Btw, I've given this thing a try. And yeah, it beats LZ4HC, and unlike LZ4X, compression finishes within a sane amount of time.
    lz4hc r131 -16 compresses my "standard crash test dummy" aka some x86_64 linux kernel down to 8732628 bytes.

    What can smallz4 do? (levels 8 and 9)
    8904176 kernel-smalllz4-8
    8715263 kernel-smalllz4-9 <- this one is a win

    For the reference, lzop -9 "out of the box" (distro version; kernel build system calls it exactly like this) would do this:
    7978505 kernel-lzo-9

    As for decompression speed, on an x86-64 laptop LZ4 is MUCH faster at decompression than LZO, like 3x (!!!). So it makes a lot of sense. But when I ran tests on an ARM board the tradeoffs were different: LZ4 only scored a modest decompression speed win, and the ratio loss is noticeable. So I wouldn't insist LZO has strong competition in this area; it still gets some points.

    Quote Originally Posted by m^3 View Post
    LZO :P
    Not really, at least not "as is", simply because there is no way to get comparable ratios while decompression speed stays comparable. I guess it would do better with a larger window, but at that point it wouldn't be LZO anymore, just like LZ5 isn't LZ4 anymore. LZ5 v2 targets a somewhat different but quite viable point as well, beating LZ4 in terms of ratio and/or speed. At least on an x86-64 laptop it looked like this (I guess I should give it a try on ARM).

    Also, Shrinker.
    If you mean "shrinker 0.1" (as seen in inikep's lzbench), it only gives a rather boring 9349930 bytes on this crash test dummy. And its decompression speed does not beat LZ4 either. I haven't spotted levels or anything like that in its code either. Are there improved versions, or is it some different algo? At first glance LZ4 can get a better ratio and faster decompression at once. Sure, LZ4 is good at it in its domain.

    When it comes to ratio, I've scored below 7000000 bytes on this dummy with tweaked LZ5 levels (it's cheating, especially the full 24-bit window, but the stock version is quite close and the decompressor still works). So it seems it could be one of the tightest open-source byte-aligned LZs around, reaching decompression speeds in the range of LZO while ratios are usually better. In cases like Linux kernels, where you compress once and decompress many times, that sounds like a fairly good tradeoff.

    There may be something in the rich Nakamichi family too, I don't know.
    Is this thing open source under some sane license? Is it cross-platform? And if yes, where is it located? A straightforward search leads me to stockpiles of manuals for Nakamichi devices. Has this company published some compression algo?

  36. #29
    Member
    Join Date
    Nov 2015
    Location
    Śląsk, PL
    Posts
    81
    Thanks
    9
    Thanked 13 Times in 11 Posts
    Quote Originally Posted by xcrh View Post
    Not really, at least not "as is", simply because there is no way to get comparable ratios while decompression speed stays comparable. I guess it would do better with a larger window, but at that point it wouldn't be LZO anymore, just like LZ5 isn't LZ4 anymore. LZ5 v2 targets a somewhat different but quite viable point as well, beating LZ4 in terms of ratio and/or speed. At least on an x86-64 laptop it looked like this (I guess I should give it a try on ARM).
    There's a large number of LZO variants; lzo1x is not the strongest one. Try them all and you'll likely find some Pareto-frontier points.

    Quote Originally Posted by xcrh View Post
    If you mean "shrinker 0.1" (as seen in inikep's lzbench), it only gives a rather boring 9349930 bytes on this crash test dummy. And its decompression speed does not beat LZ4 either. I haven't spotted levels or anything like that in its code either. Are there improved versions, or is it some different algo? At first glance LZ4 can get a better ratio and faster decompression at once. Sure, LZ4 is good at it in its domain.
    It's slightly stronger but slower than LZ4, just like LZ5.
    Though it doesn't have advanced encoders, unlike LZ5. If you disregard compression speed, it's a clear loser indeed.

    Quote Originally Posted by xcrh View Post
    Is this thing open source under some sane license? Is it cross-platform? And if yes, where is it located? A straightforward search leads me to stockpiles of manuals for Nakamichi devices. Has this company published some compression algo?
    Public domain. Some are cross-platform. Some are definitely tweaked for Intel, but I don't know if they are unportable. It doesn't matter much really, as the codecs are not production quality anyway and good QA is going to cost more effort than porting.
    http://www.sanmayce.com/Nakamichi/
    I included a number of early codecs in fsbench:
    https://chiselapp.com/user/Justin_be...decs/nakamichi
    None was LZ5-class, but after I stopped working on that, Sanmayce published a lot of new variants, some clearly superior in his tests.

  37. #30
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,497
    Thanks
    733
    Thanked 659 Times in 354 Posts
    Quote Originally Posted by xcrh View Post
    As for decompression speed, on an x86-64 laptop LZ4 is MUCH faster at decompression than LZO, like 3x (!!!).
    Off the top of my head, zstd is 2x slower at decompression than lz4, so you may try it too. Try it with small and large dictionary sizes since this greatly affects the decompression speed.
