
Thread: Nanozip LZT ?

  1. #1
    Member
    Join Date
    Sep 2010
    Location
    US
    Posts
    126
    Thanks
    4
    Thanked 69 Times in 29 Posts

    Nanozip LZT ?

    Is there any information around about the "LZT" in nanozip?

    Is there any way to run nanozip and force it to only use LZT ?

    It appears to be LZMA-like but better compression and slower.

    I'm trying to find the best compressor in the world for structured data that can decompress at 20 MB/sec or faster (*).

    For example :

    raw : 24,700,820
    7z : 9,344,463 (-mx9 -m0=lzma:d24)
    me : 9,165,008 (LZA)
    nz : 8,841,150 (-cO)

    * = actually nanozip doesn't qualify. On that file it only decodes at 4 MB/sec. 7z and I both decode the same file at 25 MB/sec.

    The obvious way to beat LZMA (and be slower) is to use a minimum match length (MML) of 4 and something like order-2 or order-3 CM for literals. But I have no idea what LZT is doing.
    Last edited by cbloom; 3rd August 2016 at 20:41.

  2. #2
    Member just a worm's Avatar
    Join Date
    Aug 2013
    Location
    planet "earth"
    Posts
    96
    Thanks
    29
    Thanked 6 Times in 5 Posts
    Mr. Runsas never had much information on his website, nanozip.net, and the site used to be bigger but has been trimmed down (probably to save traffic costs). You can't look into the source code because the project is not open source, so you might have to write him an email (sami runsas at google's gmail).

    Nanozip was an active project in the past, but lately it has become somewhat inactive. It's possible that his website will be closed at the end of this month, and with it perhaps also the support for nanozip. He is already looking for donations to pay the web hosting bill (http://www.dreamhost.com/donate.cgi?id=17736).

  3. #3
    Member Skymmer's Avatar
    Join Date
    Mar 2009
    Location
    Russia
    Posts
    681
    Thanks
    37
    Thanked 168 Times in 84 Posts
    Quote Originally Posted by cbloom View Post
    Is there any information around about the "LZT" in nanozip?
    Is there any way to run nanozip and force it to only use LZT ?
    In short: no and no. Nanozip is quite modest in its tunability and there are no known undocumented options in it, at least in the latest version, 0.09.

    Quote Originally Posted by cbloom View Post
    I'm trying to find the best compressor in the world for structured data that can decompress at at least 20 MB/sec (*)
    You can try to speed up decompression by using the -pN switch, where N is the number of parallel compressors. It will hurt the ratio, of course (though sometimes not by much), but will make decompression faster.
    For example, on my system 469,991,936 bytes were decompressed in 14.250 s, which is ~31 MB/s. The compression took 44.625 s with nz09 -nm -cO -t8 -p8 -m256m.
    Of course there is only so much to expect from such parallelization, and LZMA is faster anyway, but it's the only way to speed up -cO decompression.

    Also, could you please provide the sample of such structured data? I just want to play with it a little. Thanks.
    Last edited by Skymmer; 19th June 2014 at 22:47.

  4. #4
    Member
    Join Date
    Sep 2010
    Location
    US
    Posts
    126
    Thanks
    4
    Thanked 69 Times in 29 Posts
    Quote Originally Posted by Skymmer View Post
    Also, could you please provide the sample of such structured data? I just want to play with it a little. Thanks.
    Sure, here's one :

    https://drive.google.com/file/d/0B-y...it?usp=sharing

    This is the file that I used as an example before in this post :

    http://cbloomrants.blogspot.com/2010...ured-data.html

    It consists mostly of 72-byte structures (which have 4 and 8 byte structure within them).

    Original 3,471,552
    Nanozipped it's 1,196,966
    Last edited by cbloom; 3rd August 2016 at 20:41.

  5. #5
    Member
    Join Date
    Oct 2013
    Location
    Filling a much-needed gap in the literature
    Posts
    350
    Thanks
    177
    Thanked 49 Times in 35 Posts
    Quote Originally Posted by Skymmer View Post
    Also, could you please provide the sample of such structured data? I just want to play with it a little. Thanks.
    A couple of other nice examples from standard benchmark suites are geo from the Calgary corpus and sao from the Silesia corpus.

    http://encode.ru/threads/1943-visual...simple-filters

    http://encode.ru/threads/1949-atxd-A...-visualization

    These are the kinds of things that ought to be HDF5 files these days, and HDF5 ought to have better compression to exploit the simple, easily detectable regularities (like linear predictability) than it does with its braindead strategy of gzipping columns as though they were textual.

  6. #6
    Member
    Join Date
    Feb 2013
    Location
    San Diego
    Posts
    1,057
    Thanks
    54
    Thanked 71 Times in 55 Posts
    It seems like it's hard to reason about algorithms for structured data without a theory of where structured data comes from, i.e. what kind(s) of software produce it and why. If all you can anticipate is that the symbol distributions repeat mod N, then the solution set is pretty well understood (once you discover N, which seems to be a fairly well-understood problem itself, at least from an academic theory standpoint). Doing better would seem to require a bit of insight into how records are structured internally, so you can choose what patterns to optimize for.
    Last edited by nburns; 20th June 2014 at 11:38.
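
    The "discover N" step, at least, is mechanical. As a minimal sketch (not taken from any existing compressor; the function names are made up for illustration), one can count, for each candidate stride, how often a byte matches the byte one stride back and pick the strongest peak:

    Code:
    #include <stdio.h>
    #include <stdlib.h>

    /* Guess the record stride of a raw byte buffer: for each candidate lag,
       count how often buf[i] == buf[i - lag].  Data made of fixed-size records
       tends to produce a strong peak at the record size and its divisors. */
    static size_t guess_stride(const unsigned char *buf, size_t len, size_t max_lag)
    {
        size_t best_lag = 1, best_hits = 0;
        for (size_t lag = 1; lag <= max_lag && lag < len; lag++) {
            size_t hits = 0;
            for (size_t i = lag; i < len; i++)
                if (buf[i] == buf[i - lag])
                    hits++;
            if (hits > best_hits) { best_hits = hits; best_lag = lag; }
        }
        return best_lag;
    }

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror(argv[1]); return 1; }
        fseek(f, 0, SEEK_END);
        long n = ftell(f);
        fseek(f, 0, SEEK_SET);
        unsigned char *buf = malloc((size_t)n);
        if (!buf || fread(buf, 1, (size_t)n, f) != (size_t)n) { fclose(f); return 1; }
        fclose(f);
        printf("guessed stride: %zu\n", guess_stride(buf, (size_t)n, 256));
        free(buf);
        return 0;
    }

    On a file like the 72-byte-record example above, the internal 4- and 8-byte strides may score as high as 72 itself, so in practice you would look at the several strongest lags (and their common multiples) rather than only the single best one.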

  7. #7
    Member
    Join Date
    Apr 2010
    Location
    CZ
    Posts
    81
    Thanks
    5
    Thanked 7 Times in 5 Posts
    Don't really know, my guess would be something similar to lrzip? But that would possibly be much slower, or possibly it's combination of more algorithms.

  8. #8
    Member
    Join Date
    Sep 2010
    Location
    US
    Posts
    126
    Thanks
    4
    Thanked 69 Times in 29 Posts
    Quote Originally Posted by nburns View Post
    It seems like it's hard to reason about algorithms for structured data without a theory of where structured data comes from, i.e. what kind(s) of software produce it and why. If all you can anticipate is that the symbol distributions repeat mod N, then the solution set is pretty well understood (once you discover N, which seems to be a fairly well-understood problem itself, at least from an academic theory standpoint). Doing better would seem to require a bit of insight into how records are structured internally, so you can choose what patterns to optimize for.
    For me, files like geo with a single static pattern are not very interesting.

    A good "structured" coder needs to be able to detect the *local* structure of the region and work with that. It should be able to change even from byte to byte.

    At the moment I'm most interested in files that are generated by serializing C structs. (and the use of "struct" here is a bit of an over-use of the word; it should not be considered the same as "structured data" though there are overlaps)

    That is, something like :

    struct
    {
        unsigned char x;
        float y;
        int32 z;
        char w[12];
        int32 a;
    };

    It's far more than just detecting the struct size and using a [-N] predictor.

    For example, in this kind of data the ints "z" and "a" might have a strong dependence on the value of the char "x", which may be flagging something about the structure.

    Of course you have location-specific statistics; for example, the order-0 statistics of the float member are completely different from the order-0 statistics of the ints and should not be combined in the model.

    Any correlation with the float will depend most on its exponent and top byte, and not the other two, so some kind of special context is needed.

    etc. etc. there are just tons of issues here that no current compressor handles at all.

    And of course you can have files with mixed or heterogeneous structures. Perhaps with flag types, things like :

    struct
    {
        char type;

        union
        {
            struct type1 { ... }
            struct type2 { ... }
        }
    };

    where a compressor would have to learn to look at the "type" byte to know what kind of structure follows.
    Last edited by cbloom; 3rd August 2016 at 20:41.
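
    The reordering part of this is easy to write down once the layout is known. As a minimal sketch of a field-splitting preprocessor, assuming the packed serialization of the hypothetical struct above (the offsets and names are invented for illustration), each field is gathered into its own stream so each stream gets its own statistics:

    Code:
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Packed layout of the hypothetical struct above (no in-memory padding):
       x at 0 (1 byte), y at 1 (4-byte float), z at 5 (int32),
       w at 9 (12 chars), a at 21 (int32): 25 bytes per record. */
    enum { X_OFF = 0, Y_OFF = 1, Z_OFF = 5, W_OFF = 9, A_OFF = 21, REC_SIZE = 25 };

    /* Split packed records into one contiguous stream per field, so that all
       the floats, all the int32s, etc. can be modeled (and further transformed,
       e.g. delta-coded) separately instead of being mixed byte-interleaved. */
    static void split_fields(const uint8_t *in, size_t nrec,
                             uint8_t *xs,   /* nrec      bytes */
                             uint8_t *ys,   /* nrec *  4 bytes */
                             uint8_t *zs,   /* nrec *  4 bytes */
                             uint8_t *ws,   /* nrec * 12 bytes */
                             uint8_t *as)   /* nrec *  4 bytes */
    {
        for (size_t r = 0; r < nrec; r++) {
            const uint8_t *rec = in + r * REC_SIZE;
            xs[r] = rec[X_OFF];
            memcpy(ys + r * 4,  rec + Y_OFF,  4);
            memcpy(zs + r * 4,  rec + Z_OFF,  4);
            memcpy(ws + r * 12, rec + W_OFF, 12);
            memcpy(as + r * 4,  rec + A_OFF,  4);
        }
    }

    The inverse transform just reverses the copies. The hard part, as the post says, is not this mechanical split but discovering the layout automatically and then modeling the cross-field dependence, e.g. coding "z" with "x" as context.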

  9. #9
    Member
    Join Date
    Jun 2009
    Location
    Kraków, Poland
    Posts
    1,471
    Thanks
    26
    Thanked 120 Times in 94 Posts
    So basically you want some kind of artificial intelligence?

  10. #10
    Expert
    Matt Mahoney's Avatar
    Join Date
    May 2008
    Location
    Melbourne, Florida, USA
    Posts
    3,255
    Thanks
    306
    Thanked 778 Times in 485 Posts
    Quote Originally Posted by Piotr Tarsa View Post
    So basically you want some kind of artificial intelligence?
    Yes.

  11. #11
    Member
    Join Date
    Dec 2011
    Location
    Cambridge, UK
    Posts
    437
    Thanks
    137
    Thanked 152 Times in 100 Posts
    Essentially you can start with two things:

    1) Determining the stride size(s). Your "rant" on this before had some good ideas.

    2) Some form of correlation analysis between all items within a stride. This may be as nasty as an all vs all thing, at least for the first few blocks of data until you've gathered enough statistics.

    E.g. if you can correlate byte 1 and byte 7 and come up with a strong positive or negative correlation, then you know byte 1 needs to be used as a context when encoding byte 7. It's messy, and I'm not even sure the classical Pearson correlation method is appropriate; it may even be bits, as you mention. It would identify the type+union concept though, as you'll get strong correlation between byte 1 and everything else.

    I don't know of anything that does this though.
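
    As a rough sketch of that all-vs-all pass, assuming the stride has already been found and sampling only the first few thousand records (plain Pearson on raw byte values, which, as noted above, is only a crude first approximation):

    Code:
    #include <math.h>
    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Pearson correlation between byte positions a and b, sampled across
       nrec records of stride bytes each. */
    static double byte_corr(const uint8_t *buf, size_t nrec, size_t stride,
                            size_t a, size_t b)
    {
        double sa = 0, sb = 0, saa = 0, sbb = 0, sab = 0;
        for (size_t r = 0; r < nrec; r++) {
            double x = buf[r * stride + a];
            double y = buf[r * stride + b];
            sa += x; sb += y;
            saa += x * x; sbb += y * y; sab += x * y;
        }
        double n   = (double)nrec;
        double cov = sab - sa * sb / n;
        double va  = saa - sa * sa / n;
        double vb  = sbb - sb * sb / n;
        if (va <= 0.0 || vb <= 0.0) return 0.0;   /* constant position: no signal */
        return cov / sqrt(va * vb);
    }

    /* Report every strongly correlated pair of positions as a candidate
       "use position a as context when coding position b" rule. */
    static void find_context_candidates(const uint8_t *buf, size_t nrec,
                                        size_t stride, double threshold)
    {
        for (size_t a = 0; a < stride; a++)
            for (size_t b = a + 1; b < stride; b++) {
                double c = byte_corr(buf, nrec, stride, a, b);
                if (fabs(c) >= threshold)
                    printf("byte %zu <-> byte %zu : r = %+.2f\n", a, b, c);
            }
    }

    The scan is O(stride^2 * nrec), which is tolerable as a one-time analysis of a 72-byte stride; bit-level or rank-based measures could be substituted for Pearson without changing the structure.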

  12. #12
    Member
    Join Date
    May 2008
    Location
    brazil
    Posts
    163
    Thanks
    0
    Thanked 3 Times in 3 Posts

  13. #13
    Member
    Join Date
    Feb 2013
    Location
    San Diego
    Posts
    1,057
    Thanks
    54
    Thanked 71 Times in 55 Posts
    Quote Originally Posted by JamesB View Post
    2) Some form of correlation analysis between all items within a stride. This may be as nasty as an all vs all thing, at least for the first few blocks of data until you've gathered enough statistics.

    E.g. if you can correlate byte 1 and byte 7 and come up with a strong positive or negative correlation, then you know byte 1 needs to be used as a context when encoding byte 7. It's messy, and I'm not even sure the classical Pearson correlation method is appropriate; it may even be bits, as you mention. It would identify the type+union concept though, as you'll get strong correlation between byte 1 and everything else.
    I wonder how far toward an efficient solution you could get with FFT-based techniques.

    I'm thinking that if you could determine which fields are correlated, the arrows of causation might naturally form a hierarchy. If you had that info, you could order the fields according to the hierarchy and sort.

  14. #14
    Member Skymmer's Avatar
    Join Date
    Mar 2009
    Location
    Russia
    Posts
    681
    Thanks
    37
    Thanked 168 Times in 84 Posts
    Quote Originally Posted by cbloom View Post
    Sure, here's one :
    https://drive.google.com/file/d/0B-y...it?usp=sharing

    Original 3,471,552
    Nanozipped it's 1,196,966
    It's possible to improve the LZMA result on this file.
    Code:
    7z 9.32 -mx9						1 262 024
    7z 9.32 -m0=LZMA:a1:d4m:lc2:fb273:mf=bt4:lp3:pb3	1 205 452
    For a few extra bytes it's possible to use the now-deprecated Patricia tree match finder.
    Code:
    7z 4.32 -m0=LZMA:a2:d4m:lc2:fb273:mf=pat4h:lp3:pb3	1 203 387
    I was able to beat nanozip on this one, and the resulting file decompresses faster than NZ's.
    Code:
    nz                       1 196 966
    quark 0.95r -m1 -l13     1 182 230
    The bad thing is that Quark is not open source.

  15. #15
    Member
    Join Date
    Sep 2010
    Location
    US
    Posts
    126
    Thanks
    4
    Thanked 69 Times in 29 Posts
    Quote Originally Posted by Piotr Tarsa View Post
    So basically you want some kind of artificial intelligence?
    As Matt says - Yes.

    Any time a human can look at the data and find ways to improve the compression, that means the compressor is not doing as well as it could.
    Last edited by cbloom; 3rd August 2016 at 20:39.

  16. #16
    Member
    Join Date
    Sep 2010
    Location
    US
    Posts
    126
    Thanks
    4
    Thanked 69 Times in 29 Posts
    Quote Originally Posted by lunaris View Post
    Good question. Would be very interested if it is. People often reuse letters so dunno if it's the same LZT.

    LZT, from what I gather, reduces some of the redundancy in the LZ77 coding. It sends the match length as the excess beyond the best prefix string with a lower offset.

    That is, any time you send a higher offset, it must be because the match length exceeded the length available at the lower offset with the same prefix.

    Seems hard to make a fast decoder. You have to essentially maintain a suffix tree in the decoder. LZFG territory. Or full ACB craziness. See for example :

    http://cbloomrants.blogspot.com/2008/09/09-27-08-2.html
    Last edited by cbloom; 3rd August 2016 at 20:38.
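
    A toy, brute-force version of the encoder-side computation under that interpretation (whether NanoZip's LZT actually works this way is unknown; a real coder would use a suffix structure instead of the linear scans here):

    Code:
    #include <stddef.h>
    #include <stdint.h>

    /* Length of the common prefix of buf[pos..] and the string dist bytes back. */
    static size_t match_len(const uint8_t *buf, size_t len, size_t pos, size_t dist)
    {
        size_t n = 0;
        while (pos + n < len && buf[pos + n] == buf[pos + n - dist])
            n++;
        return n;
    }

    /* Length-residual idea as described above (an interpretation, not
       necessarily NanoZip's LZT): given that the parser chose a match at
       distance dist with length mlen, only the excess over the best match at
       any smaller distance needs to be coded, because a consistent parser
       would otherwise have chosen the smaller distance.  The decoder must
       recompute the same "best shorter-distance length", which is why it
       effectively needs a suffix tree over the already-decoded data. */
    static size_t length_residual(const uint8_t *buf, size_t len, size_t pos,
                                  size_t dist, size_t mlen)
    {
        size_t best_shorter = 0;
        for (size_t d = 1; d < dist && d <= pos; d++) {
            size_t l = match_len(buf, len, pos, d);
            if (l > best_shorter)
                best_shorter = l;
        }
        return mlen > best_shorter ? mlen - best_shorter : 0;
    }

    Applied at every match, this only changes how lengths are coded; as noted later in the thread, most of the bits in an LZ stream are in literals and offsets, not match lengths.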

  17. #17
    Member
    Join Date
    Feb 2013
    Location
    San Diego
    Posts
    1,057
    Thanks
    54
    Thanked 71 Times in 55 Posts
    Quote Originally Posted by cbloom View Post
    As Matt says - Yes.

    Any time a human can look at the data and find ways to improve the compression, that means the compressor is not doing as well as it could.
    Artificial intelligence would be more cost-effective than using natural intelligence, I would think. When I took AI, the algorithms didn't seem all that intelligent. I think AI is defined by the problem more than the solution: if the problem looks like something a person should be doing, then it's AI. Never mind that most of the time, people can't do the task. How is a person going to index the whole internet?

    Seriously, I think AI is a terrible name for the field. It sounds like Star Trek and it means nothing. The field is a hodgepodge of algorithms that have little in common. I had no idea what to expect when I took the class, and I didn't learn much in it. I started taking a 500-level Machine Learning class, but I had to drop it. I think that class would have been more useful.

    I don't think it helps to anthropomorphize computers. Brains have limitations and so do computers. They complement each other.
    Last edited by nburns; 22nd June 2014 at 02:39.

  18. #18
    Member
    Join Date
    Sep 2010
    Location
    US
    Posts
    126
    Thanks
    4
    Thanked 69 Times in 29 Posts
    I had some ideas about how the LZ-Tamayo scheme could be done fast. The key is that it doesn't need to be exact. You don't have to find the longest shared prefix at a lower offset; you can use *any* lower-offset shared prefix, with a small coding loss.

    But I'm skeptical that there's much win there. I guess try and see. But all the bits in LZ are in literals and offsets, not match lengths.

    And I still have no idea if the "LZT" of nanozip has anything to do with LZ-Tamayo...
    Last edited by cbloom; 3rd August 2016 at 20:38.

  19. #19
    Member biject.bwts's Avatar
    Join Date
    Jun 2008
    Location
    texas
    Posts
    449
    Thanks
    23
    Thanked 14 Times in 10 Posts
    Quote Originally Posted by cbloom View Post
    As Matt says - Yes.

    Any time a human can look at the data and find ways to improve the compression, that means the compressor is not doing as well as it could.
    I think one of the main problems with people new to compression is that they think they see some simple way to improve compression and they are wrong.

    Of course one could always write a general compressor that compresses a given file to a single byte. But that seldom does much good if you have several files to compress, since often what makes one file compress smaller makes another one compress (or expand) to a larger size.

    As another example, take any of my bijective file compressors. Since any file can be an output or an input, you could look at an output file that happens to be 1000 bytes of all zeros. The human brain would say "gee, that's obviously bad compression", yet that is the nature of bijective compression. Given any infinite set of files ordered by whatever criterion of value one has, there will always be one of those files whose output compresses to 1000 bytes of all zeros. Yet it would be optimal for the criterion used, no matter what any human brain thinks.

  20. #20
    Expert
    Matt Mahoney's Avatar
    Join Date
    May 2008
    Location
    Melbourne, Florida, USA
    Posts
    3,255
    Thanks
    306
    Thanked 778 Times in 485 Posts
    I once took a class in AI and it wasn't very useful. We covered algorithms for search and game playing, but none of the really interesting problems like language or vision. The teacher said it was impossible. A later course in machine learning proved more useful.

    Humans can figure out the structure of a file by searching the internet for documentation, reading it, and doing experiments that involve writing code. Automating this would be very difficult. For example, for silesia/sao, I knew from docs that it was a star catalog, found online docs, and wrote a program to display the file in human readable format. I know that stars are distributed randomly so I had some idea what kind of distributions to expect for right ascension, declination, annual movement, and magnitude, and whether or not there should be any correlation within or between records. The coding experiments revealed additional details like whether the numbers were in big-endian or little-endian formats. You also have to know the typical formats for storing integers and floats.

    A similar analysis of silesia/osdb reveals that the data is synthetic, that some of the fields are variable length, and some fields may be omitted in rare but predictable cases. You can even find the generating source code online ( http://www.tpc.org/ ) and possibly write a very compact representation if you can reverse engineer the random number function and the options used to invoke the program. But doing any of this in a compressor would require solving a lot of hard AI.

    There is no easy solution. Kolmogorov proved that the general data compression problem is not computable, as you probably know. Legg went further, showing that any good predictor necessarily has to have a lot of code. http://arxiv.org/abs/cs/0606070


  22. #21
    Member
    Join Date
    Jun 2009
    Location
    Kraków, Poland
    Posts
    1,471
    Thanks
    26
    Thanked 120 Times in 94 Posts
    Legg went further, showing that any good predictor necessarily has to have a lot of code. http://arxiv.org/abs/cs/0606070
    "A lot" is not a very specific number. I would start by extracting the part of the DNA responsible for human intelligence and computing the entropy of that part. I know that sounds impossible :P

  23. #22
    Member
    Join Date
    Feb 2013
    Location
    San Diego
    Posts
    1,057
    Thanks
    54
    Thanked 71 Times in 55 Posts
    Quote Originally Posted by Matt Mahoney View Post
    There is no easy solution. Kolmogorov proved that the general data compression problem is not computable, as you probably know. Legg went further, showing that any good predictor necessarily has to have a lot of code. http://arxiv.org/abs/cs/0606070
    From the abstract:

    This alone makes their theoretical analysis problematic, however it is further shown that beyond a moderate level of complexity the analysis runs into the deeper problem of Goedel incompleteness.
    I had a feeling that Goedel incompleteness might crop up somehow in universal compression. I'm glad somebody else unraveled the theory, though.

    I'm not sure what Kolmogorov's contribution was, exactly. He adopted an uncomputable model of compression and found that it was uncomputable. I don't really see what useful insight comes from that.

    This question is more pertinent (from the paper):

    Could there exist elegant computable prediction algorithms that are in some sense universal, or at least universal over large sets of simple sequences?
    Last edited by nburns; 24th June 2014 at 01:57.

  24. #23
    Expert
    Matt Mahoney's Avatar
    Join Date
    May 2008
    Location
    Melbourne, Florida, USA
    Posts
    3,255
    Thanks
    306
    Thanked 778 Times in 485 Posts
    Kolmogorov, Solomonoff, and Levin all came up with the idea of a universal probability distribution based on algorithmic complexity in the 1960's, each unaware of the others' work. It is a simple idea, really. The information content of an object is the length of its shortest description. The proof of its uncomputability is simple too: if such an algorithm existed, then you could describe "the first string that can't be described in less than a million bytes", when you just did so in far fewer.

    Anyway, I estimated the complexity of AI by compressing the human genome (because AI requires both a brain and a body) and comparing it to the compressed size of a lot of source code. It is about 300 million lines. And that's the easy part. You still need several petaflops of CPU, maybe a petabyte of memory, and years of training. If AI were easy, we wouldn't still be paying people worldwide USD $70 trillion per year to do work that machines aren't smart enough to do. https://docs.google.com/document/d/1...pCkpWFn9IglW3o

  25. #24
    Member
    Join Date
    Feb 2013
    Location
    San Diego
    Posts
    1,057
    Thanks
    54
    Thanked 71 Times in 55 Posts
    Quote Originally Posted by Matt Mahoney View Post
    Kolmogorov, Solomonoff, and Levin all came up with the idea of a universal probability distribution based on algorithmic complexity in the 1960's, each unaware of the others' work. It is a simple idea, really. The information content of an object is the length of its shortest description. The proof of its uncomputability is simple too: if such an algorithm existed, then you could describe "the first string that can't be described in less than a million bytes", when you just did so in far fewer.
    I don't think that proves that it's uncomputable. I think that proves that, as stated, it's not a well-defined concept. My understanding is that the uncomputable version holds that the information content is equal to the length of the shortest program in some language. For a Turing-complete programming language, that's well-defined, but uncomputable (trivially).*

    Anyway, I estimated the complexity of AI by compressing the human genome (because AI requires both a brain and a body) and comparing it to the compressed size of a lot of source code. It is about 300 million lines. And that's the easy part. You still need several petaflops of CPU, maybe a petabyte of memory, and years of training. If AI were easy, we wouldn't still be paying people worldwide USD $70 trillion per year to do work that machines aren't smart enough to do. https://docs.google.com/document/d/1...pCkpWFn9IglW3o
    We should all be grateful, because if $70T/yr were sucked out, the world economy would collapse.

    * Edit: Actually, I shouldn't say that that's trivial. If your method of finding the shortest is exhaustive search, that's uncomputable. Conceivably, there could be some other way of finding the shortest, though.
    Last edited by nburns; 24th June 2014 at 05:01.

  26. #25
    Expert
    Matt Mahoney's Avatar
    Join Date
    May 2008
    Location
    Melbourne, Florida, USA
    Posts
    3,255
    Thanks
    306
    Thanked 778 Times in 485 Posts
    I gave an informal description, but Kolmogorov complexity is well defined. Given some universal Turing machine, it is the length of the shortest program that outputs the string. Such a program must always exist because you can always write a program that simply prints the string. But there is no general procedure for finding the shortest possibility. Sometimes you can, but sometimes you can't. Exhaustive search won't work because some programs might not halt and you won't know which ones.

    The complexity depends on the choice of universal Turing machine, but only up to a constant that does not depend on the string being tested. That is because you can always prefix a program that simulates any other machine.

    The formal version of the proof is that if there were a computable function K(x) giving the Kolmogorov complexity of string x, then you could write a program that enumerates strings and outputs the first one with K(x) > N, for some N larger than the length of that program. The program itself would then be a description of that string shorter than N, a contradiction, so K(x) cannot be computable.
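
    The same argument can be written as a short sketch in C against a hypothetical oracle K() (the whole point being that no such function can exist):

    Code:
    #include <stdlib.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical oracle: returns the Kolmogorov complexity of the len-byte
       string s.  The argument below is exactly why it cannot exist. */
    extern uint64_t K(const uint8_t *s, size_t len);

    /* Enumerate all byte strings in order of length and return the first one
       with complexity greater than n.  Such a string exists (there are only
       finitely many programs of n bits or fewer), yet this short program plus
       the value of n would itself describe that string in far fewer than n
       bits once n is large: a contradiction, so K cannot be computable. */
    static uint8_t *first_string_with_complexity_above(uint64_t n, size_t *out_len)
    {
        for (size_t len = 1; ; len++) {
            uint8_t *s = calloc(len, 1);          /* start at 00...0 */
            if (!s) return NULL;
            for (;;) {
                if (K(s, len) > n) { *out_len = len; return s; }
                /* advance s like a base-256 odometer; stop when it wraps */
                size_t i = len;
                while (i > 0 && ++s[i - 1] == 0) i--;
                if (i == 0) break;                /* exhausted this length */
            }
            free(s);
        }
    }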

    > We should all be grateful, because if $70T/yr were sucked out, the world economy would collapse.

    Not really, because humans would still own the machines, so you would have an income without having to work. Job automation has been going on for hundreds of years and it has only made our lives better.

  27. #26
    Member
    Join Date
    Feb 2013
    Location
    San Diego
    Posts
    1,057
    Thanks
    54
    Thanked 71 Times in 55 Posts
    My point was that Kolmogorov compression is uncomputable by construction. So where's the insight?

  28. #27
    Expert
    Matt Mahoney's Avatar
    Join Date
    May 2008
    Location
    Melbourne, Florida, USA
    Posts
    3,255
    Thanks
    306
    Thanked 778 Times in 485 Posts
    I'm sure that everything seems obvious in hindsight. Where would we be without the foundations of information theory?

  29. #28
    Member
    Join Date
    Jun 2009
    Location
    Kraków, Poland
    Posts
    1,471
    Thanks
    26
    Thanked 120 Times in 94 Posts
    I suspect nowhere.

    http://en.wikipedia.org/wiki/Standin...ders_of_giants

    OTOH, I think everyone is replaceable, in the sense that if someone dies before he has a chance to discover something, someone else at some point in the future will come up with the same discovery.



    Going back on-topic:
    LZT could mean anything, for example Lempel-Ziv-Tarsa (hahaha). I once started making an LZ77 compressor: http://asembler.republika.pl/bin/lzt0.zip (with a description here: http://asembler.republika.pl/programy.html ) - most of it is in Polish, so be warned.

  30. #29
    Member
    Join Date
    Feb 2013
    Location
    San Diego
    Posts
    1,057
    Thanks
    54
    Thanked 71 Times in 55 Posts
    Quote Originally Posted by Matt Mahoney View Post
    I'm sure that everything seems obvious in hindsight. Where would we be without the foundations of information theory?
    I think Turing did the hard work in this case with respect to uncomputability.

    Without Shannon's work, information theory would be nowhere. Without Kolmogorov entropy -- I can't see what the contribution was. Maybe there was one that I haven't seen yet. But let's not derail this thread any more.

  31. #30
    Expert
    Matt Mahoney's Avatar
    Join Date
    May 2008
    Location
    Melbourne, Florida, USA
    Posts
    3,255
    Thanks
    306
    Thanked 778 Times in 485 Posts
    Kolmogorov, Solomonoff, and Levin (working independently in the early 1960's) certainly could not have formalized algorithmic information theory without Turing's model of computation (1936) and Shannon's notion of entropy as a measure of information (1948). But Shannon did not say how to calculate probabilities; they were just assumed to be known somehow. What Kolmogorov et al. did was to show two things: that there is a universal distribution, and that it is not computable. Maybe it seems trivial in hindsight (not to me), but it is the whole reason that data compression is hard. Or more specifically, coding is easy but modeling is hard.


