Thread: Fastest in memory compression c++

  1. #1
    Member
    Join Date
    Jun 2008
    Location
    Germany
    Posts
    369
    Thanks
    5
    Thanked 6 Times in 4 Posts

    Fastest in memory compression c++

    Hello,

    our company is going to develop a new product prototype (to be exact: mainly my colleague, me, and an external colleague).
    My colleague ran some initial tests today and noticed that we will produce about 200 KB of data per second in text format (binary not yet tested). That would be about 700 MB/h, which is way too much. The first thing I suggested was compression, so...

    ... a question to the experts here: which is the fastest in-memory compression library that also fulfills the following requirements?

    - the library should be the fastest one (real time)
    - C/C++ (target system is an embedded Linux ARM environment, 1 GHz, 512 MB RAM)
    - open source
    - licence allows commercial use


    Thank you in advance


    edit:

    Oh yeah, forgot to mention: we have to deliver the first prototype at the end of July ... "sounds like fun"
    Last edited by JangoFatXL; 5th April 2014 at 00:23.

  2. #2
    Expert
    Matt Mahoney
    Join Date
    May 2008
    Location
    Melbourne, Florida, USA
    Posts
    3,255
    Thanks
    306
    Thanked 778 Times in 485 Posts
    It seems like any library should easily keep up with 200 KB/sec. I think zlib, libbsc, libzling, and libzpaq are all free and open source allowing commercial use. libzpaq is public domain but optimized for x86. I know others have run zpaq on an ARM.
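
    For scale, here is a minimal sketch of compressing one second's worth of data with zlib's one-shot API (assuming zlib is installed and linked with -lz; level 1 is Z_BEST_SPEED):

        #include <zlib.h>
        #include <cstdio>
        #include <vector>

        int main() {
            std::vector<unsigned char> input(200 * 1024, 'x'); // stand-in for 1 s of data
            uLongf dest_len = compressBound(input.size());     // worst-case output size
            std::vector<unsigned char> output(dest_len);
            // Level 1 trades ratio for speed; raise it only if the CPU budget allows.
            int rc = compress2(output.data(), &dest_len, input.data(), input.size(), 1);
            if (rc != Z_OK) { std::fprintf(stderr, "compress2 failed: %d\n", rc); return 1; }
            std::printf("200 KB -> %lu bytes\n", (unsigned long)dest_len);
            return 0;
        }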

  3. The Following User Says Thank You to Matt Mahoney For This Useful Post:

    JangoFatXL (7th April 2014)

  4. #3
    Programmer
    Join Date
    May 2008
    Location
    PL
    Posts
    307
    Thanks
    68
    Thanked 166 Times in 63 Posts
    You can also try LZ4, LZ4HC, and liblzma
    https://code.google.com/p/lz4/
    http://tukaani.org/xz/
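
    A round trip with LZ4 is only a few calls. This is a sketch against the current lz4.h one-shot API; older releases spelled these functions differently, so check the header you ship:

        #include <lz4.h>
        #include <cstdio>
        #include <vector>

        int main() {
            std::vector<char> input(200 * 1024, 'x');
            const int bound = LZ4_compressBound((int)input.size());
            std::vector<char> compressed(bound);
            const int csize = LZ4_compress_default(input.data(), compressed.data(),
                                                   (int)input.size(), bound);
            if (csize <= 0) { std::fprintf(stderr, "compression failed\n"); return 1; }

            std::vector<char> restored(input.size());
            const int dsize = LZ4_decompress_safe(compressed.data(), restored.data(),
                                                  csize, (int)restored.size());
            std::printf("in=%zu compressed=%d restored=%d\n", input.size(), csize, dsize);
            return dsize == (int)input.size() ? 0 : 1;
        }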

  5. The Following User Says Thank You to inikep For This Useful Post:

    JangoFatXL (7th April 2014)

  6. #4
    Member
    Join Date
    Oct 2013
    Location
    Filling a much-needed gap in the literature
    Posts
    350
    Thanks
    177
    Thanked 49 Times in 35 Posts
    Are you compressing mainly text-like data, or mainly in-memory kinds of data representations? Big blocks of data or small ones like VM pages or disk blocks?

    If the latter on both counts, have a look at WKdm, which is what's mainly used for compressing virtual memory pages in Mac OS X Mavericks. The original C code is free, and Apple gives away a tweaked x86 assembly version that's faster, under some open-source license. (Not sure how liberal; it's whatever license the Darwin version of OS X is distributed under.) If you like it, an ARM version could be sped up just as easily with a little hand-tweaking. (They didn't do anything but a little hand-tweaking of the C compiler's assembly output from my ancient C source, but they brag about how fast it is.)

    Another thing to look at is the new version of LZO that Markus Oberhumer came up with for use inside the Linux kernel: it's both faster than his earlier lzo1x and has simplified linkage for kernel space. (It's already there in some people's kernels, but I don't know whether it's part of the main tree yet. Last I heard it wasn't, for lack of a committed maintainer, but it may be by now. As I understand it, it should be, because it rocks, but Markus's code is not as readable and maintainable as mine, so maybe nobody has volunteered.)

    BTW I'm pretty sure I know how to improve on LZO or WKdm for in-memory data like in OSX "compressed memory" or Linux/Android "compressed swap" (zram, zswap et al.).

    Can you assume that your target HW has NEON (ARM's SIMD) instructions? I think there are some potential wins there, maybe fairly significant ones. (Apple's tweaked WKdm assembly code for x86 does not use SIMD instructions, BTW, just a bit better use of pipelines, etc.)

    Also, how crucial is speed vs. ratio to you? (For compressed caching VM, they're both very important all the time, but for a lot of things once it's "fast enough," better compression is better than faster.)

    How symmetrical is your compression/decompression? Do you compress once and decompress a lot, or compress and uncompress all the time? Is your main figure of merit compression speed, decompression speed, or a roughly equal balance of both?

    Are you working in userland, or in the Linux kernel? Multi- or single-threaded?

  7. The Following User Says Thank You to Paul W. For This Useful Post:

    JangoFatXL (7th April 2014)

  8. #5
    Member
    Join Date
    Oct 2013
    Location
    Filling a much-needed gap in the literature
    Posts
    350
    Thanks
    177
    Thanked 49 Times in 35 Posts
    Also, are you targeting a specific variety of ARM, or any old ARM? Some versions of ARM support things like non-word-aligned loads pretty well, most don't, and that might affect whether C code that's mostly optimized for x86 compiles to fast code or not.

    Are you assuming GCC?

  9. #6
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    456
    Thanks
    46
    Thanked 164 Times in 118 Posts
    Facts and only facts at https://sites.google.com/site/powturbo/home/benchmark, with links to several compressors at the bottom of the page.

  10. #7
    Member m^2
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    @dnd:
    Yeah, right. Are your results correct? Maybe, but you're biased, and they are not verifiable and therefore not trustworthy. Not to mention that they are so good that they go well past being suspicious, and that they are irrelevant anyway, because the OP asked for permissively licensed open source software. It looks like by posting here you're just increasing your page rank.
    @Paul W
    LZO is GPLed
    WKdm is closed source
    Both are precisely useless to the OP.

    @OP:
    The fastest algorithms are way faster than you need.
    I expect RLE64 to run @ very roughly 200 MB/s consumption.
    But with *very* low compression ratio.
    I think you should specify your needs better. Why do you ask for the fastest when the data rate requirements mean that you can get well into the stronger end?
    Do you have constraints (or expectations) on % of CPU that you can use for compression?
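
    (For scale: run-length coding really is trivial. Below is a minimal byte-wise sketch; RLE64 applies the same idea to 64-bit words, which is why it is so fast and why the ratio is usually poor:)

        #include <cstdint>
        #include <vector>

        // Byte-wise run-length encoding: each run becomes a (count, value) pair.
        // Great on long runs, useless on data without them.
        std::vector<uint8_t> rle_encode(const std::vector<uint8_t>& in) {
            std::vector<uint8_t> out;
            for (size_t i = 0; i < in.size(); ) {
                size_t run = 1;
                while (i + run < in.size() && in[i + run] == in[i] && run < 255) ++run;
                out.push_back((uint8_t)run);
                out.push_back(in[i]);
                i += run;
            }
            return out;
        }
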
    Last edited by m^2; 6th April 2014 at 22:06.

  11. #8
    Member
    Join Date
    Oct 2013
    Location
    Filling a much-needed gap in the literature
    Posts
    350
    Thanks
    177
    Thanked 49 Times in 35 Posts
    LZO is GPLed
    WKdm is closed source
    Both are precisely useless to the OP.
    WKdm is NOT closed source---I designed and wrote it (with Scott Kaplan) and have been giving it away for 15 years or so.

    (Apple doesn't pay me anything to use it in OS X. They used their own ARM implementation in the Newton, before we even published it, but I don't know if that code's around anywhere. I wouldn't be surprised if it's in IOS somewhere.)

    You seem to be right about lzo, though. I thought that at least lzo1x-1 was under a very liberal license, but it looks like straight GPL v2. My mistake.

    On the other hand, if they have it in their Linux kernel, they may be able to do a kernel call to get the kernel to compress stuff using the in-kernel version (or LZ4, which IIRC is also built into recent kernels, or can be configured to be). Dunno if that's worth it; it depends on what they're doing.
    Last edited by Paul W.; 7th April 2014 at 00:57.

  12. #9
    Member
    Join Date
    Sep 2008
    Location
    France
    Posts
    856
    Thanks
    447
    Thanked 254 Times in 103 Posts
    they may be able to do a kernel call to get the kernel to compress stuff using the in-kernel version
    Accessing kernel-space functions from user space can be very difficult, if not simply impossible. An appropriate user->kernel API must exist, and it is likely buried under layers of other kernel-space modules.

    For this case it seems easier to link against user-space dynamic libraries.

    On the other hand, simply including the source code looks simple enough to me, and is probably less troublesome (no dependency on a package manager).

  13. #10
    Member
    Join Date
    Oct 2013
    Location
    Filling a much-needed gap in the literature
    Posts
    350
    Thanks
    177
    Thanked 49 Times in 35 Posts
    Yann,

    Yes, all other things being equal, the easiest thing to do is to copy the source for some simple little compressor (with a suitably liberal license) into their stuff.

    One reason I asked if they were working in the kernel is that a lot of embedded systems involve linking something into the kernel, if only a little special-purpose file system or device driver or swap manager or whatever. Depending on exactly what they're doing, it might or might not make sense to use a compressor that's already in the kernel.

    My impression was that the in-kernel compressors in Linux are not deeply buried, because they're pretty generally useful, but that could be a misimpression.

  14. #11
    Member
    Join Date
    Sep 2008
    Location
    France
    Posts
    856
    Thanks
    447
    Thanked 254 Times in 103 Posts
    My impression was that the in-kernel compressors in Linux are not deeply buried, because they're pretty generally useful, but that could be a misimpression.
    Indeed, when working in kernel space, accessing the LZ4 module is "straightforward": just use the API designed by the kernel programmer (which, by the way, is not exactly the same as the "main" LZ4 API; kernel space follows a different naming convention, so pay attention).

    But kernel-space programming really tends to be "hardcore", and I would be surprised if this project actually requires it. You don't go into kernel space unless you absolutely have to. It's way too restrictive.

  15. #12
    Member
    Join Date
    Dec 2011
    Location
    Cambridge, UK
    Posts
    437
    Thanks
    137
    Thanked 152 Times in 100 Posts
    One thing to note is the asymmetry of LZ-type compressors: they often spend an order of magnitude more time compressing than decompressing (in particular lzma, which I wouldn't recommend for your task). That is appropriate for archives, where something is compressed once and decompressed many times, but not so much when it is part of a transfer protocol designed to reduce bandwidth (where the process is compress, transfer, decompress, with the bottleneck being whatever is slowest). A quick timing harness, as sketched below, makes the asymmetry easy to see on your own data.
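
    A minimal sketch of such a measurement, here with zlib's one-shot compress2()/uncompress() and std::chrono (any codec's compress/decompress pair can be timed the same way; error handling omitted for brevity):

        #include <zlib.h>
        #include <chrono>
        #include <cstdio>
        #include <vector>

        int main() {
            std::vector<unsigned char> in(200 * 1024, 'x');
            uLongf clen = compressBound(in.size());
            std::vector<unsigned char> comp(clen);

            auto t0 = std::chrono::steady_clock::now();
            compress2(comp.data(), &clen, in.data(), in.size(), 6);  // default-ish level
            auto t1 = std::chrono::steady_clock::now();

            std::vector<unsigned char> back(in.size());
            uLongf dlen = back.size();
            uncompress(back.data(), &dlen, comp.data(), clen);
            auto t2 = std::chrono::steady_clock::now();

            using us = std::chrono::microseconds;
            std::printf("compress:   %lld us\ndecompress: %lld us\n",
                        (long long)std::chrono::duration_cast<us>(t1 - t0).count(),
                        (long long)std::chrono::duration_cast<us>(t2 - t1).count());
        }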

    At the fast end of the spectrum, LZ4 and Snappy are both BSD-licensed and very quick, although they do not have tremendous compression ratios. That's fine for a lot of applications, though; see the Snappy sketch below.
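
    For comparison, Snappy's C++ interface is a one-liner in each direction; a sketch using the std::string-based calls from snappy.h:

        #include <snappy.h>
        #include <cstdio>
        #include <string>

        int main() {
            std::string input(200 * 1024, 'x');
            std::string compressed, restored;
            snappy::Compress(input.data(), input.size(), &compressed);
            if (!snappy::Uncompress(compressed.data(), compressed.size(), &restored)) {
                std::fprintf(stderr, "corrupt stream\n");
                return 1;
            }
            std::printf("%zu -> %zu bytes\n", input.size(), compressed.size());
            return restored == input ? 0 : 1;
        }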

    If you have more CPU available, tools like libbsc can offer extremely good compression without costing the earth in CPU time, and it is also open source. In between there is a myriad of compressors, like zlib or bzip2.

    You might want to look over http://encode.ru/threads/1371-Filesystem-benchmark to see what other alternatives there are. Maybe lzjb also fits the bill, but I don't know whether its licence does; that is something you'll want to take care with. GPL is out for commercial software (unless you want to open-source that too), but LGPL is OK. Read carefully.

  16. The Following User Says Thank You to JamesB For This Useful Post:

    JangoFatXL (7th April 2014)

  17. #13
    Member RichSelian
    Join Date
    Aug 2011
    Location
    Shenzhen, China
    Posts
    156
    Thanks
    18
    Thanked 50 Times in 26 Posts
    libzling (https://github.com/richox/libzling) is going to be a "header-only" C++ library, for the easiest embedding into other programs. Thanks for paying attention.

  18. The Following User Says Thank You to RichSelian For This Useful Post:

    JangoFatXL (7th April 2014)

  19. #14
    Member
    Join Date
    Jun 2008
    Location
    Germany
    Posts
    369
    Thanks
    5
    Thanked 6 Times in 4 Posts
    @all: Thank you very much

  20. #15
    Member
    Join Date
    Jun 2008
    Location
    Germany
    Posts
    369
    Thanks
    5
    Thanked 6 Times in 4 Posts
    @Paul W
    Are you compressing mainly text-like data, or mainly in-memory kinds of data representations? Big blocks of data or small ones like VM pages or disk blocks?...
    Small data: it is going to be measurement results, up to 10 times/s (the first prototype should deliver fresh data every 1 s).

    Can you assume that your target HW has NEON (ARM's SIMD) instructions? I think there are some potential wins there, maybe fairly significant ones. (Apple's tweaked WKdm assembly code for x86 does not use SIMD instructions, BTW, just a bit better use of pipelines, etc.)
    Honest answer: no idea. My colleague was testing the stuff. At the moment I'm working on a different project for the next 2 weeks. Unfortunately we are a tiny team and do not have the time to optimize that much. As far as I know, our marketing guy promised the prototype would be delivered in February (that genius), and we have not yet had the time to write even a single line of code...

    Also, how crucial is speed vs. ratio to you? (For compressed caching VM, they're both very important all the time, but for a lot of things once it's "fast enough," better compression is better than faster.)
    Speeeeed, because we have to do more stuff on that 1 GHz single core; it has to be real-time.

    How symmetrical is your compression/decompression? Do you compress once and decompress a lot, or compress and uncompress all the time? Is your main figure of merit compression speed, decompression speed, or a roughly equal balance of both?
    Decompression will be on a regular x86 laptop, so there is plenty of computing power. The data is going to be refreshed every 1 s in the web interface (later probably faster) -> compress and decompress all the time (see the framing sketch after these answers).

    Are you working in userland, or in the Linux kernel? Multi- or single-threaded?
    Single-threaded; me personally, in userland.

    Also, are you targeting a specific variety of ARM, or any old ARM?
    I think it is a pretty new model

    Are you assuming GCC?
    yes
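
    Since small packets go out every second, the compressed blocks will need some framing on the wire regardless of the codec. A sketch of a hypothetical header layout, shown here with LZ4 (the 8-byte header and the pack() helper are illustrative, not from any library):

        #include <lz4.h>
        #include <cstdint>
        #include <cstring>
        #include <vector>

        // Hypothetical wire format: 4-byte raw size + 4-byte compressed size
        // (native byte order in this sketch; pin down endianness explicitly in a
        // real protocol), followed by the LZ4 block. The receiver reads the
        // header first so it can size its decompression buffer.
        std::vector<char> pack(const char* data, uint32_t raw_size) {
            const int bound = LZ4_compressBound((int)raw_size);
            std::vector<char> out(8 + bound);
            const int csize = LZ4_compress_default(data, out.data() + 8,
                                                   (int)raw_size, bound);
            if (csize <= 0) return {};                  // compression error
            const uint32_t comp_size = (uint32_t)csize;
            std::memcpy(out.data(), &raw_size, 4);
            std::memcpy(out.data() + 4, &comp_size, 4);
            out.resize(8 + csize);
            return out;
        }
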
    Last edited by JangoFatXL; 7th April 2014 at 21:46.

  21. #16
    Member
    Join Date
    Jun 2008
    Location
    Germany
    Posts
    369
    Thanks
    5
    Thanked 6 Times in 4 Posts
    Quote Originally Posted by RichSelian View Post
    libzling (https://github.com/richox/libzling) is going to be a "header-only" C++ library, for the easiest embedding into other programs. Thanks for paying attention.
    Thanks, sounds nice

  22. #17
    Member
    Join Date
    Jun 2008
    Location
    Germany
    Posts
    369
    Thanks
    5
    Thanked 6 Times in 4 Posts
    Quote Originally Posted by m^2 View Post
    ...
    @OP:
    The fastest algorithms are way faster than you need.
    I expect RLE64 to run @ very roughly 200 MB/s consumption.
    But with *very* low compression ratio.
    I think you should specify your needs better. Why do you ask for the fastest when the data rate requirements mean that you can get well into the stronger end?
    Do you have constraints (or expectations) on % of CPU that you can use for compression?
    It is going to be a real-time measurement device, so the compression should be VERY fast and should not influence the measurement capability too much (ideally not at all).

  23. #18
    Member
    Join Date
    Jun 2008
    Location
    Germany
    Posts
    369
    Thanks
    5
    Thanked 6 Times in 4 Posts
    Quote Originally Posted by Bulat Ziganshin View Post
    I should note that "dnd" from "worldwide" is really Hamidi from Iran, who is known to regularly steal others' work and sell it
    I have been working as a programmer since late 2009 (so compared to most of you, still a newbie), and by now I have an idea of how much work just the implementation of an algorithm is, not to mention INVENTING one! I would never steal someone else's work and pass it off as my own - ESPECIALLY NOT FOR MONEY.

    So if it is true:
    @dnd: shame on you

  24. The Following User Says Thank You to JangoFatXL For This Useful Post:

    Bulat Ziganshin (7th April 2014)

  25. #19
    Member
    Join Date
    Jul 2013
    Location
    United States
    Posts
    194
    Thanks
    44
    Thanked 140 Times in 69 Posts
    I know I'm a bit late to respond, sorry about that—I don't check encode.ru daily.

    You might be interested in the benchmarks I did for Squash, since they include a few different ARM platforms; the results can be quite different from Intel CPUs. I also have raw results showing all compression levels instead of just the default for each codec, which are (temporarily) at https://github.com/nemequ/squash-ben...ee/master/data. There is no interface for viewing those yet (someone is working on that), but you should be able to grab the CSV and load it into LibreOffice/OpenOffice Calc, Gnumeric, KOffice, Excel, etc.

  26. The Following User Says Thank You to nemequ For This Useful Post:

    Cyan (8th April 2014)

  27. #20
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    456
    Thanks
    46
    Thanked 164 Times in 118 Posts
    Well, Bulat is propagating nothing but lies, again and again:
    - I'm not from Iran, and, as usual, does Bulat have any proof of that? Nothing...
    And even if it were true, where would the problem be?

    - Actually, I'm selling nothing illegal.

    - If you want to sell GPL code, nobody can prevent you from doing so.
    See http://www.gnu.org/philosophy/selling.en.html

    - @JangoFatXL You don't need a PhD to write an LZ77. It is just a few lines,
    but people like Bulat tell you it is something particularly difficult.
    Here is one example: https://code.google.com/p/data-shrinker/
    The decoding loop in lzturbo is less than 10 lines!

    - @m^2 You can verify the results yourself. Why are you not doing that before complaining each time?
    You can also read the benchmarks from "Sportman" in this forum:
    e.g. http://encode.ru/threads/1909-Tree-a...ll=1#post37436
    What is your problem if I want to inform people about facts?
    In my post I'm referring to a list of links to several packages at the bottom of the benchmark page.

    - This will be my only post on this subject in this thread.
    Last edited by dnd; 10th April 2014 at 11:54.

  28. #21
    Member
    Join Date
    Nov 2013
    Location
    US
    Posts
    131
    Thanks
    31
    Thanked 29 Times in 19 Posts
    Quote Originally Posted by dnd View Post
    - If you want to sell GPL code, nobody can prevent you.
    See http://www.gnu.org/philosophy/selling.en.html
    Scroll down on that page:
    one exception is in the case where binaries are distributed without the corresponding complete source code. Those who do this are required by the GNU GPL to provide source code on subsequent request...

  29. #22
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    456
    Thanks
    46
    Thanked 164 Times in 118 Posts
    Quote Originally Posted by cade View Post
    Scroll down on that page:
    Yes, but what I mean is: if someone wants to sell GPL software, he doesn't need to steal the code or ask the original authors; he can just sell copies of the original (providing source code to the customers on request). This is similar to what's happening with 7-zip.

  30. #23
    Member
    Join Date
    Sep 2008
    Location
    France
    Posts
    856
    Thanks
    447
    Thanked 254 Times in 103 Posts
    Quote Originally Posted by dnd View Post
    Facts and only facts
    Really? Or rather:

    Quote Originally Posted by dnd View Post
    Ad and only Ad
    Ah yes, this one sounds about right...

  31. #24
    Member
    Join Date
    Feb 2013
    Location
    San Diego
    Posts
    1,057
    Thanks
    54
    Thanked 71 Times in 55 Posts
    These sorts of arguments used to be settled by duels. Now that people no longer duel, they just go on forever...

  32. #25
    Member RichSelian
    Join Date
    Aug 2011
    Location
    Shenzhen, China
    Posts
    156
    Thanks
    18
    Thanked 50 Times in 26 Posts
    Quote Originally Posted by dnd View Post
    Yes, but what I mean is: if someone wants to sell GPL software, he doesn't need to steal the code or ask the original authors; he can just sell copies of the original (providing source code to the customers on request). This is similar to what's happening with 7-zip.
    is lzturbo GPL software?

  33. #26
    Expert
    Matt Mahoney
    Join Date
    May 2008
    Location
    Melbourne, Florida, USA
    Posts
    3,255
    Thanks
    306
    Thanked 778 Times in 485 Posts
    It depends on whom you ask.

  34. #27
    Member m^2
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by Paul W. View Post
    WKdm is NOT closed source---I designed and wrote it (with Scott Kaplan) and have been giving it away for 15 years or so.

    (Apple doesn't pay me anything to use it in OS X. They used their own ARM implementation in the Newton, before we even published it, but I don't know if that code's around anywhere. I wouldn't be surprised if it's in IOS somewhere.)
    Good to know, thanks. Another algo to add to fsbench.



    Quote Originally Posted by dnd View Post
    - @m^2 You can verify the results yourself. Why are you not doing that before complaining each time?
    You can also read the benchmarks from "Sportman" in this forum:
    e.g. http://encode.ru/threads/1909-Tree-a...ll=1#post37436
    What is your problem if I want to inform people about facts?
    In my post I'm referring to a list of links to several packages at the bottom of the benchmark page.
    My apologies, I mixed up this benchmark with the entropy-coder one; it is indeed verifiable.
    Still, it's irrelevant for the OP.

    @OP
    Interesting options:
    * libssc is probably the fastest LZ out there. I failed to make it work in my benchmark, though.
    * LZ4 is rather mature, portable, and fast. Recommended.
    * Snappy is somewhat used too, but inferior to LZ4 nowadays.
    * RLE64 - the speed king, but rarely strong enough to be worthwhile. You may test it on your data, but don't be surprised if you save only 1%.

    There's also a huge number of options that I wouldn't recommend because they have at least one of the following properties:
    * inefficient
    * closed source
    * copyleft-encumbered
    * immature

    The only ones that I know of, haven't benchmarked, but have had indications that they may be worth something are:
    * WKdm
    * wflz
    * SynLZ

    Others that I know of, haven't benchmarked, and know nothing positive about (and often nothing negative either):
    * https://github.com/zfy0701/Parallel-LZ77
    * https://github.com/matuba/LZSS
    * https://github.com/Wuestenschiff/myLz77
    * https://github.com/m3h/lz77
    * https://github.com/Cereal84/LZ77/tree/master/source
    * https://github.com/ky1000ky2000/lz77/tree/master/lz77
    * https://github.com/stevesan/lz77
    * http://encode.ru/threads/1562-Urban-Compressor
    * http://encode.ru/threads/1619-TinyLZ...LZP-compressor
    * http://mattmahoney.net/dc/text.html#3062
    * http://freecode.com/projects/compress
    * http://code.google.com/p/arx/source/browse/
    Last edited by m^2; 10th April 2014 at 20:04.

  35. The Following 2 Users Say Thank You to m^2 For This Useful Post:

    Bulat Ziganshin (10th April 2014),Cyan (10th April 2014)

  36. #28
    Member
    Join Date
    Oct 2013
    Location
    Filling a much-needed gap in the literature
    Posts
    350
    Thanks
    177
    Thanked 49 Times in 35 Posts
    m^2:

    Good to know, thanks. Another algo to add to fsbench.
    Cool, but don't expect too much in normal benchmarks. WKdm is NOT meant to compress normal text OR variable-instruction-length code well. It's designed to compress 4KB blocks of typical non-text, non-code in-memory data, and maybe RISC code, for use in compressed caching for virtual memory. (A.k.a. "compressed memory" in Apple marketing jargon.)
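
    (For the curious: the core WK trick is a small recency dictionary of recently seen 32-bit words, with each input word classified as a zero, an exact match, a partial high-bits match, or a miss. A simplified sketch of that classification follows; it is not the tuned WKdm code, and the dictionary size, hash, and bit split here are illustrative:)

        #include <cstdint>

        enum Tag { ZERO, EXACT, PARTIAL, MISS };

        // Simplified WK-style classifier: a 16-entry direct-mapped dictionary of
        // recently seen 32-bit words, indexed by a hash of the high bits. Real
        // WKdm then packs tags, dictionary indices, low bits, and miss words
        // into separate output areas. The caller zero-initializes dict[].
        Tag classify(uint32_t word, uint32_t dict[16]) {
            if (word == 0) return ZERO;                 // zero words are common
            const unsigned slot = (word >> 10) % 16;    // hash on the high 22 bits
            const uint32_t old = dict[slot];
            dict[slot] = word;                          // update recency dictionary
            if (old == word) return EXACT;              // emit just the slot index
            if ((old >> 10) == (word >> 10))
                return PARTIAL;                         // emit index + low 10 bits
            return MISS;                                // emit the full 32-bit word
        }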

    It can be combined with any good compression algorithm(s) for text and/or code to do a better job for various mixed kinds of data that appear in typical benchmarks. (Lots of text and maybe Intel code.) I and others have several unpublished versions that do that, but I don't think anybody is publishing any yet.

    Mine at present are too crufty and ugly to give away, but just last night I had a couple of insights that may lead to an elegant combo that's pretty good.

    (I intend to plug it together with something like LZ4 either using sampling & bellwethers to tell if a whole 4KB block should go to WK or to LZ, and for mixed-looking pages, just passing the unmatched 4-byte "wurds" from WK into LZ. That works better than I'd have expected, and I think I finally realized why and that it should work more robustly than I thought. In effect, it does a certain kind of bellwether discrimination that does a pretty good job of detecting text-like stuff and piping almost all of it to LZ, fairly reliably and at very fine grain. It doesn't mangle the strings going to LZ nearly as much as you'd think.)

    One thing that's slowing me down is that for compressed memory, I need pretty good LZ performance for small blocks, which entails using either shortish LZ contexts that are suboptimal for large files, or decent little preloaded dictionaries to avoid too much cold-start cost, and ideally both. (That's easier than you'd think in compressed VM caching, because at the onset of paging you already have a bunch of pages in RAM that you can sample to construct good preloads for that particular run of that particular program.) Another is that I want it to be robust across character encodings, and you want longer contexts for wide characters, which are considerably more common in in-memory data than in files.
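
    (Preset dictionaries are directly supported by mainstream codecs; zlib, for instance, exposes them through deflateSetDictionary(). A sketch, where dict would be built from sampled pages; the compress_with_dict() helper is illustrative:)

        #include <zlib.h>
        #include <cstring>
        #include <vector>

        // Compress a small block with a preloaded dictionary. The decompressor
        // must call inflateSetDictionary() with the same bytes when inflate()
        // returns Z_NEED_DICT.
        int compress_with_dict(const unsigned char* in, size_t in_len,
                               const unsigned char* dict, size_t dict_len,
                               std::vector<unsigned char>& out) {
            z_stream s;
            std::memset(&s, 0, sizeof(s));
            if (deflateInit(&s, Z_BEST_SPEED) != Z_OK) return -1;
            deflateSetDictionary(&s, dict, (uInt)dict_len);  // seed the LZ window
            out.resize(deflateBound(&s, in_len));
            s.next_in  = (Bytef*)in;   s.avail_in  = (uInt)in_len;
            s.next_out = out.data();   s.avail_out = (uInt)out.size();
            const int rc = deflate(&s, Z_FINISH);
            out.resize(out.size() - s.avail_out);
            deflateEnd(&s);
            return rc == Z_STREAM_END ? 0 : -1;
        }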

    Eventually it will be robust and fast, and not too big or ugly.

  37. #29
    Member m^2
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by Paul W. View Post
    m^2:
    Cool, but don't expect too much in normal benchmarks. WKdm is NOT meant to compress normal text OR variable-instruction-length code well. It's designed to compress 4KB blocks of typical non-text, non-code in-memory data, and maybe RISC code, for use in compressed caching for virtual memory. (A.k.a. "compressed memory" in Apple marketing jargon.)
    That's OK. fsbench is quite flexible, one can feed it with any data and ask to process it in 4K chunks.

  38. #30
    Member
    Join Date
    Feb 2013
    Location
    San Diego
    Posts
    1,057
    Thanks
    54
    Thanked 71 Times in 55 Posts
    Quote Originally Posted by m^2 View Post
    Interesting options: libssc, LZ4, Snappy, RLE64... [full recommendation list quoted from post #27 above]

    Of course, first they should try zlib, and if that meets their needs, they should not try anything else.
