Page 1 of 3
Results 1 to 30 of 65

Thread: Google: Compress Data More Densely with Zopfli

  1. #1
    Member
    Join Date
    May 2008
    Location
    HK
    Posts
    160
    Thanks
    4
    Thanked 25 Times in 15 Posts

    Google: Compress Data More Densely with Zopfli

    It is an implementation of the Deflate compression algorithm that produces smaller output than previous techniques.
    The output generated by Zopfli is typically 3-8% smaller than zlib's at maximum compression.

    http://google-opensource.blogspot.co...th-zopfli.html
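    Deflate compatibility is the key property here: any valid DEFLATE stream, however it was produced, inflates with a standard decoder. A minimal Python sketch of that idea, using zlib streams produced at different effort levels as stand-ins for different encoders (the data is synthetic, just for illustration):

```python
import zlib

data = b"Compress data more densely with Zopfli. " * 500

# Different effort levels produce different encoded bytes...
streams = [zlib.compress(data, level) for level in (1, 6, 9)]

# ...but any standard inflater recovers the same input from each of them.
# This is what lets a drop-in encoder like Zopfli beat zlib on size while
# staying readable by every existing deflate/zlib/gzip decoder.
for s in streams:
    assert zlib.decompress(s) == data

print([len(s) for s in streams])
```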
    Last edited by roytam1; 1st March 2013 at 09:09. Reason: typo

  2. #2
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,497
    Thanks
    733
    Thanked 659 Times in 354 Posts
    Zopfli Compression Algorithm is a new zlib (gzip, deflate) compatible compressor. This compressor takes more time (~100x slower), but compresses around 5% better than zlib and better than any other zlib-compatible compressor we have found.

    Code license: Apache License 2.0

    http://zopfli.googlecode.com/files/D...ing_Zopfli.pdf contains benchmarks - it's 1-2% better and several times slower than 7-zip -mx
    Last edited by Bulat Ziganshin; 1st March 2013 at 09:33.

  3. #3
    Member
    Join Date
    Oct 2009
    Location
    usa
    Posts
    56
    Thanks
    1
    Thanked 9 Times in 6 Posts
    I would like to try it, but can't even access the source. Is anyone able to provide a compiled standalone win32 binary?

  4. #4
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,497
    Thanks
    733
    Thanked 659 Times in 354 Posts
    Download sources: git clone https://code.google.com/p/zopfli/

    I've attached an archive with the sources and a compiled exe.
    Attached Files

  5. The Following User Says Thank You to Bulat Ziganshin For This Useful Post:

    encode (31st January 2015)

  6. #5
    Member Bloax's Avatar
    Join Date
    Feb 2013
    Location
    Dreamland
    Posts
    52
    Thanks
    11
    Thanked 2 Times in 2 Posts
    You mean I can waste a lot more time per archive now? Oh boy!

  7. #6
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Unlike gipfeli, this one I managed to make work.
    Some results:
    Code:
    pcbsd-8973% ./fsbench -w0 -i1 -s1 -b131072 -t4 zlib,9 7z-deflate,9 zopfli/zlib,5/ zopfli/zlib,10/  zopfli/zlib,15/  zopfli/zlib,25/  zopfli/zlib,50/  zopfli/zlib,100/  zopfli/zlib,250/  zopfli/zlib,500/  zopfli/zlib,1000/  zopfli/zlib,0/  ~/bench/scc.tar
    memcpy: 153 ms, 211927552 bytes = 1320 MB/s
    Codec                                   version      args
    C.Size      (C.Ratio)        C.Speed   D.Speed      C.Eff. D.Eff.
    zlib                                    1.2.7        9
       69111193 (x 3.066)        25 MB/s 1041 MB/s        17e6  702e6
    7z-deflate                              9.20         9
       66182590 (x 3.202)      3357 KB/s  515 MB/s      2308e3  354e6
    zopfli/zlib                             2013-03-01/1.2.7 5/6
       65921333 (x 3.215)       608 KB/s  874 MB/s       418e3  602e6
    zopfli/zlib                             2013-03-01/1.2.7 10/6
       65867360 (x 3.217)       446 KB/s  990 MB/s       307e3  682e6
    zopfli/zlib                             2013-03-01/1.2.7 15/6
       65854408 (x 3.218)       351 KB/s  990 MB/s       242e3  682e6
    zopfli/zlib                             2013-03-01/1.2.7 25/6
    ^C
    pcbsd-8973% ./fsbench -w0 -i1 -s1 -b131072 -t4 zlib,9 7z-deflate,9 zopfli/zlib,5/ zopfli/zlib,10/  zopfli/zlib,15/  zopfli/zlib,25/  zopfli/zlib,50/  zopfli/zlib,100/  zopfli/zlib,250/  zopfli/zlib,500/  zopfli/zlib,1000/  zopfli/zlib,0/  ~/bench/calgary.tar 
    
    WARNING: This file is too small, use at least 16 MB per thread.
    
    memcpy: 2 ms, 3152896 bytes = 1503 MB/s
    Codec                                   version      args
    C.Size      (C.Ratio)        C.Speed   D.Speed      C.Eff. D.Eff.
    zlib                                    1.2.7        9
        1044022 (x 3.020)        26 MB/s 1002 MB/s        17e6  670e6
    7z-deflate                              9.20         9
         999293 (x 3.155)      3244 KB/s  501 MB/s      2216e3  342e6
    zopfli/zlib                             2013-03-01/1.2.7 5/6
         993528 (x 3.173)       862 KB/s 1002 MB/s       590e3  686e6
    zopfli/zlib                             2013-03-01/1.2.7 10/6
         993045 (x 3.175)       565 KB/s 1002 MB/s       387e3  686e6
    zopfli/zlib                             2013-03-01/1.2.7 15/6
         992933 (x 3.175)       417 KB/s  751 MB/s       285e3  514e6
    zopfli/zlib                             2013-03-01/1.2.7 25/6
         992773 (x 3.176)       238 KB/s 1002 MB/s       163e3  686e6
    zopfli/zlib                             2013-03-01/1.2.7 50/6
         992636 (x 3.176)       126 KB/s 1002 MB/s        86e3  686e6
    zopfli/zlib                             2013-03-01/1.2.7 100/6
         992624 (x 3.176)        65 KB/s  601 MB/s        44e3  412e6
    zopfli/zlib                             2013-03-01/1.2.7 250/6
         992382 (x 3.177)        30 KB/s  601 MB/s        20e3  412e6
    zopfli/zlib                             2013-03-01/1.2.7 500/6
         992304 (x 3.177)        14 KB/s  501 MB/s        10e3  343e6
    zopfli/zlib                             2013-03-01/1.2.7 1000/6
         992138 (x 3.178)      7930  B/s  601 MB/s      5435e0  412e6
    Please note that the speed measurements are really rough.
    Also, there's a parameter (blocksplittinglast) available in the library but not exposed in the binary; the comment reads 'depending on file either first or last gives the best compression'.
    fsbench doesn't allow passing multiple parameters to a codec yet, so I didn't test it, but at some point it might be better to try both values instead of doubling the number of iterations.
    And another, 'blocksplittingmax'; they set the limit because their algorithm has some bad corner cases that blow the size up.
    So overall, it seems to have potential to save even more.
    Last edited by m^2; 1st March 2013 at 12:27.

  8. #7
    Member
    Join Date
    May 2008
    Location
    HK
    Posts
    160
    Thanks
    4
    Thanked 25 Times in 15 Posts
    I made a simple test:

    zopfli is compiled with Intel C++ 9.1: icl /Fezopfli.exe /fast /Qunroll /Qparallel /MT *.c

    Code:
    F:\jatf\zopfli>f:\app_related\timer i:\minigzip.exe -9 111.bmp
    Timer 9.01 : Igor Pavlov : Public domain : 2009-05-31
    
    Kernel Time  =     0.000 =    0%
    User Time    =     0.984 =   93%
    Process Time =     0.984 =   93%
    Global Time  =     1.049 =  100%
    
    F:\jatf\zopfli>f:\app_related\timer zopfli.exe 111.bmp
    Timer 9.01 : Igor Pavlov : Public domain : 2009-05-31
    
    Kernel Time  =     0.125 =    0%
    User Time    =    42.312 =   99%
    Process Time =    42.437 =   99%
    Global Time  =    42.598 =  100%
    
    F:\jatf\zopfli>f:\app_related\timer "c:\Program Files\7-Zip\7z.exe" a -mx=9 -mfb=258 -mpass=2 111.7z.gz 111.bmp
    Timer 9.01 : Igor Pavlov : Public domain : 2009-05-31
    
    7-Zip 9.30 alpha  Copyright (c) 1999-2012 Igor Pavlov  2012-10-26
    
    Scanning
    
    Updating archive 111.7z.gz
    
    Compressing  111.bmp
    
    Everything is Ok
    
    Kernel Time  =     0.015 =    0%
    User Time    =    15.625 =   99%
    Process Time =    15.640 =   99%
    Global Time  =    15.704 =  100%
    
    F:\jatf\zopfli>dir|find "111"
    01/03/2013  21:07         1,855,729 111.7z.gz
    01/03/2013  21:09         1,855,869 111.zopfli.gz
    01/03/2013  20:52         1,922,005 111.mgz.gz
    01/03/2013  20:50         5,760,054 111.bmp
    Last edited by roytam1; 1st March 2013 at 16:23.

  9. #8
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Another test: pushed the number of iterations to 10K, which improved the result further.
    Code:
    pcbsd-8973% ./fsbench -w0 -i1 -s1 -b131072 -t4 zopfli/zlib,10000/  ~/bench/calgary.tar     
    
    WARNING: This file is too small, use at least 16 MB per thread.
    
    memcpy: 2 ms, 3152896 bytes = 1503 MB/s
    Codec                                   version      args
    C.Size      (C.Ratio)        C.Speed   D.Speed      C.Eff. D.Eff.
    zopfli/zlib                             2013-03-01/1.2.7 10000/6
         991915 (x 3.179)       797  B/s 7621 KB/s       546e0 5223e3
    Codec                                   version      args
    C.Size      (C.Ratio)        C.Speed   D.Speed      C.Eff. D.Eff.
    done... (1x1 iteration(s)).

  10. #9
    Member caveman's Avatar
    Join Date
    Jul 2009
    Location
    Strasbourg, France
    Posts
    190
    Thanks
    8
    Thanked 62 Times in 33 Posts
    On enwik8:
    36 445 248 bytes (gzip -9)

    35 102 976 bytes (7-zip)

    35 025 767 bytes (kzip)

    34 995 756 bytes (Zopfli)

    34 932 433 bytes (kzip+huffmix+deflopt+defluff running mhhh... 3 months)


    Zopfli output can be further compressed using DeflOpt:
    Code:
    ***                 DeflOpt V2.07                 ***
    ***       Built on Wed Sep  5 18:56:30 2007       ***
    ***  Copyright (C) 2003-2007 by Ben Jos Walbeehm  ***
    
    
    
    "Z:/home/caveman/zopfli/enwik8.gz"
    Number of bytes saved: 1,745 (34,995,756 --> 34,994,011) (13,960 bits)
    File rewritten.
    
    
    Number of files processed  :        1
    Number of files rewritten  :        1
    Total number of bytes saved:    1,745
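    As a sanity check, DeflOpt's two ways of reporting the saving agree with each other and with the file sizes:

```python
# Figures from the DeflOpt run above.
before, after = 34_995_756, 34_994_011
saved = before - after

assert saved == 1_745            # "Number of bytes saved: 1,745"
assert saved * 8 == 13_960       # "(13,960 bits)"
print(saved, "bytes saved")
```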
    Nevertheless, it's interesting to have a new open-source implementation of Deflate, even if I personally think the web should move to modern compression algorithms to compress text (HTML/CSS/JS/SVG) files... Deflate64, anyone?
    Since HTML5 only allows UTF-8, something specially tailored for it could bring even more benefits, especially for languages using 3-byte-long code points.
    Using better but slower implementations of Deflate will increase latency before the first byte and increase CPU load on servers; I'm not sure this is the way to go.

    I've also managed to shrink book1 down to 298536 bytes using the same kzip+huffmix+deflopt+defluff combo.

    http://www.filedropper.com/enwik8-book1
    Last edited by caveman; 1st March 2013 at 19:40. Reason: Tried Deflopt on Zopfli output

  11. #10
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by caveman View Post
    Nevertheless, it's interesting to have a new open-source implementation of Deflate, even if I personally think the web should move to modern compression algorithms to compress text (HTML/CSS/JS/SVG) files... Deflate64, anyone?
    I've been thinking along these lines too... when you're designing a new deflate implementation, is it hard to make it generate deflate64 too?
    Quote Originally Posted by caveman View Post
    Since HTML5 only allows UTF-8, something specially tailored for it could bring even more benefits, especially for languages using 3-byte-long code points.
    Using better but slower implementations of Deflate will increase latency before the first byte and increase CPU load on servers; I'm not sure this is the way to go.
    I don't think it's supposed to be used this way... IIRC the authors talked about static content, so it's something executed before clients' requests. Anyway, such uses seem like the only sensible ones to me. And I wonder why the first encoder they did was gzip and not png.

  12. #11
    Member caveman's Avatar
    Join Date
    Jul 2009
    Location
    Strasbourg, France
    Posts
    190
    Thanks
    8
    Thanked 62 Times in 33 Posts
    PNG is a bit more complicated since you also have to handle the filter applied to each image row; it would be nice to bring PNGwolf and Zopfli together.

  13. #12
    Member caveman's Avatar
    Join Date
    Jul 2009
    Location
    Strasbourg, France
    Posts
    190
    Thanks
    8
    Thanked 62 Times in 33 Posts
    Quote Originally Posted by m^2 View Post
    when you're designing a new deflate implementation, is it hard to make it generate deflate64 too?
    It's not difficult, but there are no gains on files smaller than 32k. Deflate64 only brings a larger search window (64k vs 32k) and it can handle larger LZ77 matches.
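    The 32k limit is easy to see with standard zlib (which implements plain Deflate): a repeat whose distance exceeds the window is invisible to the encoder, which is exactly what Deflate64's larger window would fix. A small sketch with synthetic data (sizes are approximate):

```python
import random, zlib

rng = random.Random(0)
unique = bytes(rng.randrange(256) for _ in range(40 * 1024))  # ~incompressible

near = unique[:16 * 1024] * 2   # copy at 16 KiB distance: inside the window
far = unique * 2                # copy at 40 KiB distance: outside Deflate's 32 KiB window

c_near = len(zlib.compress(near, 9))
c_far = len(zlib.compress(far, 9))

# The near copy is matched away almost entirely; the far one is not,
# so the second 40 KiB cost nearly full price.
assert c_near < 20 * 1024
assert c_far > 40 * 1024
print(c_near, c_far)
```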

  14. #13
    Member caveman's Avatar
    Join Date
    Jul 2009
    Location
    Strasbourg, France
    Posts
    190
    Thanks
    8
    Thanked 62 Times in 33 Posts
    Quote Originally Posted by m^2 View Post
    And I wonder why the first encoder they did was gzip and not png.
    Perhaps to foster WebP; PNG is really a thing of the past compared to it.

  15. #14
    Member
    Join Date
    Jun 2009
    Location
    Kraków, Poland
    Posts
    1,471
    Thanks
    26
    Thanked 120 Times in 94 Posts
    And I wonder why the first encoder they did was gzip and not png.
    Well, I think that Google by default gzips all static JavaScript and CSS code, and that code occupies much more space than PNGs. So Zopfli can bring some (negligible?) gains even without PNGs.

  16. #15
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by caveman View Post
    PNG is a bit more complicated since you also have to handle the filter applied to each image row; it would be nice to bring PNGwolf and Zopfli together.
    Yeah, I didn't think about wolf specifically, but assumed plugging the backend into some existing library. There's one with the 7z engine (advimg?).
    Quote Originally Posted by caveman View Post
    It's not difficult, but there are no gains on files smaller than 32k. Deflate64 only brings a larger search window (64k vs 32k) and it can handle larger LZ77 matches.
    Sure. Still, a gain.
    Quote Originally Posted by caveman View Post
    Perhaps to foster WebP; PNG is really a thing of the past compared to it.
    Maybe...

  17. #16
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,497
    Thanks
    733
    Thanked 659 Times in 354 Posts
    there are a lot of improved LZH compressors: lzmh, zhuff, slug

  18. #17
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by Bulat Ziganshin View Post
    there are a lot of improved LZH compressors: lzmh, zhuff, slug
    Sure, but none of them is a standard, and all that you mentioned except one are closed source.
    Yann said that he intended to open up zhuff, but he's been quiet on the topic ever since.
    There are a few more:
    * gipfeli (doesn't seem to work)
    * mmini (not really good)
    * tornado (Bulat, I'm sure you haven't thought about this one) - this one's actually good, but for large streams only

  19. #18
    Member caveman's Avatar
    Join Date
    Jul 2009
    Location
    Strasbourg, France
    Posts
    190
    Thanks
    8
    Thanked 62 Times in 33 Posts
    Zopfli works a bit like Ken Silverman's Kflate engine: it first picks block "split points" (aka "boundaries"), and apparently they never move afterwards (this is the DeflateSplittingFirst behavior, not DeflateSplittingLast). The two engines apparently pick different block boundaries (at least on book1).
    KZIP:
    Code:
    defdb book1-best.gz
    T Boundary   Tokens   h.size   b.size
    2        0      109      272      825
    2       6d     4671      508    41041
    1     2cd1       28        3      288
    2     2d06     1592      421    17020
    2     40c3     2743      481    34429
    1     6a90       38        3      379
    2     6ac8     1295      424    17674
    2     81b3     1138      450    16416
    2     9623   164680      634  2260072
    2388144 bits long (9 blocks)
    Zopfli:
    Code:
    defdb book1-zopfli-huf.gz
    T Boundary   Tokens   h.size   b.size
    1        0      108        3      892
    2       72     1915      429    13468
    2      da8     3975      518    41528
    2     3d01   160854      628  2333520
    2389408 bits long (4 blocks)
    But from what I have seen, increasing the compression level in Zopfli does not ensure that blocks become smaller. For instance:
    Code:
    zopfli -v --i500 book1
    Saving to: book1.gz
    block split points: 114 3496 15617 (hex: 72 da8 3d01)
    compressed block size: 111 (0k) (unc: 114)
    treesize: 54
    compressed block size: 1630 (1k) (unc: 3382)
    treesize: 64
    compressed block size: 5127 (5k) (unc: 12121)
    treesize: 78
    compressed block size: 291612 (284k) (unc: 753154)
    Original Size: 768771, Compressed: 298695, Compression: 61.146427% Removed
    
    zopfli -v --i1000 book1
    Saving to: book1.gz
    block split points: 114 3496 15617 (hex: 72 da8 3d01)
    compressed block size: 111 (0k) (unc: 114)
    treesize: 54
    compressed block size: 1629 (1k) (unc: 3382)
    treesize: 65
    compressed block size: 5130 (5k) (unc: 12121)
    treesize: 80
    compressed block size: 291640 (284k) (unc: 753154)
    Original Size: 768771, Compressed: 298730, Compression: 61.141874% Removed
    This looked a bit awkward to me; here is what defdb reports:
    Code:
    defdb book1-zop-i500.gz
    T Boundary   Tokens   h.size   b.size
    1        0      108        3      892  ->    111.500 bytes
    2       72     1919      429    13469  ->   1638.625 bytes
    2      da8     3975      518    41528  ->   5191.000 bytes 
    2     3d01   160854      628  2333520  -> 291690.000 bytes
    2389409 bits long (4 blocks)
    
    defdb book1-zop-i1000.gz
    T Boundary   Tokens   h.size   b.size
    1        0      108        3      892  ->    111.500 bytes
    2       72     1915      429    13468  ->   1638.500 bytes
    2      da8     3859      524    41568  ->   5196.000 bytes
    2     3d01   172034      644  2333762  -> 291720.250 bytes
    2389690 bits long (4 blocks)
    First surprise: the block sizes reported by Zopfli do not really match those reported by defdb.
    I have an explanation for this: Zopfli reports the block size without the block header (the Huffman table definitions), while in defdb b.size (block size) also includes h.size (header size).

    And as you have probably noticed, most of the blocks from book1-zop-i500.gz have the same size as or are smaller than those from book1-zop-i1000.gz, except the second one... Of course, huffmix can be used to pick only the smallest blocks and produce a Deflate stream that is one bit smaller:
    Code:
    huffmix -v book1-zop-i500.gz book1-z
    i1000.gz book1-zopfli-huf.gz
    book1-zop-i500.gz (298695 bytes)
    
    Block boundaries: 0,72,da8,3d01 (4 blocks)
    
    book1-zop-i1000.gz (298730 bytes)
    
    Block boundaries: 0,72,da8,3d01 (4 blocks)
    
     File Type C-Offset C-Length U-Offset U-Length
       A    1         0      892        0      114
       A    2       892    13469       72     3382
       A    2     14361    41528      da8    12121
       A    2     55889  2333520     3d01   753154
    
     File Type C-Offset C-Length U-Offset U-Length
       B    1         0      892        0      114
       B    2       892    13468       72     3382
       B    2     14360    41568      da8    12121
       B    2     55928  2333762     3d01   753154
    
     File C-Offset C-Length
       B         0    14360
       A     14361  2375048
    
    Saved 1 bit, output file size 298694 bytes
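    The one-bit saving can be checked directly from the defdb listings above: both files share the same split points, so huffmix can take each block from whichever file encoded it in fewer bits:

```python
# Per-block b.size in bits from the defdb listings above
# (b.size includes the Huffman table definitions).
i500  = [892, 13469, 41528, 2333520]   # book1-zop-i500.gz
i1000 = [892, 13468, 41568, 2333762]   # book1-zop-i1000.gz

assert sum(i500) == 2_389_409          # "2389409 bits long (4 blocks)"
assert sum(i1000) == 2_389_690

# Same boundaries in both files, so blocks are interchangeable;
# keep the cheaper encoding of each one:
mixed = sum(min(a, b) for a, b in zip(i500, i1000))
assert mixed == 2_389_408              # book1-zopfli-huf.gz
assert sum(i500) - mixed == 1          # "Saved 1 bit"
```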
    Last edited by caveman; 4th March 2013 at 23:48.

  20. #19
    Member
    Join Date
    May 2008
    Location
    HK
    Posts
    160
    Thanks
    4
    Thanked 25 Times in 15 Posts
    Patch for accepting any iteration count:
    Code:
    --- zopfli.c.orig	2013-03-01 20:42:12.357625000 +0800
    +++ zopfli.c	2013-03-02 12:58:25.667375000 +0800
    @@ -151,19 +151,20 @@
         if (StringsEqual(argv[i], "-v")) options.verbose = 1;
         else if (StringsEqual(argv[i], "-c")) output_to_stdout = 1;
         else if (StringsEqual(argv[i], "--deflate")) output_type = OUTPUT_DEFLATE;
         else if (StringsEqual(argv[i], "--zlib")) output_type = OUTPUT_ZLIB;
         else if (StringsEqual(argv[i], "--gzip")) output_type = OUTPUT_GZIP;
    -    else if (StringsEqual(argv[i], "--i5")) options.numiterations = 5;
    +    else if (strncmp(argv[i], "--i",3)==0) options.numiterations = atoi(argv[i]+3);
    +/*    else if (StringsEqual(argv[i], "--i5")) options.numiterations = 5;
         else if (StringsEqual(argv[i], "--i10")) options.numiterations = 10;
         else if (StringsEqual(argv[i], "--i15")) options.numiterations = 15;
         else if (StringsEqual(argv[i], "--i25")) options.numiterations = 25;
         else if (StringsEqual(argv[i], "--i50")) options.numiterations = 50;
         else if (StringsEqual(argv[i], "--i100")) options.numiterations = 100;
         else if (StringsEqual(argv[i], "--i250")) options.numiterations = 250;
         else if (StringsEqual(argv[i], "--i500")) options.numiterations = 500;
    -    else if (StringsEqual(argv[i], "--i1000")) options.numiterations = 1000;
    +    else if (StringsEqual(argv[i], "--i1000")) options.numiterations = 1000;*/
         else if (StringsEqual(argv[i], "-h")) {
           fprintf(stderr, "Usage: zopfli [OPTION]... FILE\n"
               "  -h    gives this help\n"
               "  -c    write the result on standard output, instead of disk"
               " filename + '.gz'\n"
    1-pass comparison:
    Code:
    13:05 F:\jatf\zopfli>f:\app_related\timer zopfli --i1 111.bmp
    Timer 9.01 : Igor Pavlov : Public domain : 2009-05-31
    
    Kernel Time  =     0.062 =    0%
    User Time    =     9.359 =   99%
    Process Time =     9.421 =   99%
    Global Time  =     9.446 =  100%
    
    13:05 F:\jatf\zopfli>f:\app_related\timer "c:\Program Files\7-Zip\7z.exe" a -mx=9 111.7z.gz 111.bmp
    Timer 9.01 : Igor Pavlov : Public domain : 2009-05-31
    
    7-Zip 9.30 alpha  Copyright (c) 1999-2012 Igor Pavlov  2012-10-26
    Scanning
    
    Creating archive 111.7z.gz
    
    Compressing  111.bmp
    
    Everything is Ok
    
    Kernel Time  =     0.000 =    0%
    User Time    =     4.078 =   99%
    Process Time =     4.078 =   99%
    Global Time  =     4.108 =  100%
    
    13:06 F:\jatf\zopfli>dir|find "111"
    02/03/2013  13:05         1,857,815 111.7z.gz
    02/03/2013  13:05         1,866,188 111.bmp.gz
    01/03/2013  20:50         5,760,054 111.bmp
    EDIT: attached an ICC 9.1 binary with the patch above applied
    Attached Files
    Last edited by roytam1; 2nd March 2013 at 08:11.

  21. #20
    Member
    Join Date
    Aug 2008
    Location
    Planet Earth
    Posts
    772
    Thanks
    63
    Thanked 270 Times in 190 Posts
    Tested zopfli against other maximum-compression zip/deflate implementations on a Genefile-generated test file (with all possible patterns):

    Input harddisk RAID:
    574,219,767 bytes, Genefile-generated file

    Output harddisk RAID:
    478,953,566 bytes arc zip maximum -mx9
    478,341,054 bytes gzip -9
    478,002,325 bytes rar zip best
    474,828,451 bytes 7z zip ultra
    473,449,239 bytes kzip xtreme default
    472,758,740 bytes zopfli default
    472,653,871 bytes zopfli --i1000 (took 19 hours 23 min. 51 sec.)
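    For scale, the quoted running time works out to a throughput of roughly 8 KiB/s:

```python
# zopfli --i1000 on the 574,219,767-byte test file took 19 h 23 min 51 s.
seconds = 19 * 3600 + 23 * 60 + 51
rate = 574_219_767 / seconds

assert seconds == 69_831
print(f"{rate / 1024:.1f} KiB/s")   # about 8 KiB/s
```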

  22. #21
    Member
    Join Date
    Apr 2011
    Location
    Russia
    Posts
    168
    Thanks
    163
    Thanked 9 Times in 8 Posts
    It would be very interesting to test PNG compression.
    Could someone make an application for PNG compression based on Zopfli?
    Something like advdef.

  23. #22
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    PngWolf seems to have pluggable backends, so making it use zopfli shouldn't be hard.

  24. #23
    Member
    Join Date
    Apr 2011
    Location
    Russia
    Posts
    168
    Thanks
    163
    Thanked 9 Times in 8 Posts

  25. #24
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    What is it based on?
    Source?

  26. #25
    Member
    Join Date
    Apr 2011
    Location
    Russia
    Posts
    168
    Thanks
    163
    Thanked 9 Times in 8 Posts
    A program to optimize PNG, based on Zopfli.

  27. #26
    Member
    Join Date
    May 2008
    Location
    France
    Posts
    78
    Thanks
    436
    Thanked 22 Times in 17 Posts
    Latest ScriptPNG with Zopfli included: http://www.css-ig.net/scriptpng

  28. #27
    Member
    Join Date
    May 2012
    Location
    United States
    Posts
    323
    Thanks
    174
    Thanked 51 Times in 37 Posts
    Quote Originally Posted by Sportman View Post
    Tested zopfli against other maximum-compression zip/deflate implementations on a Genefile-generated test file (with all possible patterns):

    Input harddisk RAID:
    574,219,767 bytes, Genefile-generated file

    Output harddisk RAID:
    478,953,566 bytes arc zip maximum -mx9
    478,341,054 bytes gzip -9
    478,002,325 bytes rar zip best
    474,828,451 bytes 7z zip ultra
    473,449,239 bytes kzip xtreme default
    472,758,740 bytes zopfli default
    472,653,871 bytes zopfli --i1000 (took 19 hours 23 min. 51 sec.)
    Your results are interesting, because for me, with my reference file corpus in a SHAR file, 7-Zip's deflate wins.

    Maybe I am missing something here but it seems as if this is not an improvement on 7-Zip's already very powerful deflate compression.

    Can someone else do some benchmarks?

  29. #28
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by comp1 View Post
    Your results are interesting, because for me, with my reference file corpus in a SHAR file, 7-Zip's deflate wins.

    Maybe I am missing something here but it seems as if this is not an improvement on 7-Zip's already very powerful deflate compression.

    Can someone else do some benchmarks?
    You have my results above: on calgary and scc cut into 128K chunks, zopfli won.

  30. #29
    Member
    Join Date
    May 2012
    Location
    United States
    Posts
    323
    Thanks
    174
    Thanked 51 Times in 37 Posts
    Quote Originally Posted by m^2 View Post
    You have my results above: on calgary and scc cut into 128K chunks, zopfli won.
    Yes I see... But I have tried zopfli on my reference files and it never wins. Files are as follows:

    Code:
    acrord32.exe 
    book1
    E.coli
    enwik6
    flashmx5m.pcf
    fp5m.log
    geo
    lena.ppm
    obj2
    ohs.doc
    penderecki-capriccio.wav
    vcfiu.hlp
    zhwik6
    So my point is that perhaps 7-Zip does something with certain types of files that zopfli doesn't? Preprocessing perhaps?

  31. #30
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by comp1 View Post
    Yes I see... But I have tried zopfli on my reference files and it never wins. Files are as follows:

    Code:
    acrord32.exe 
    book1
    E.coli
    enwik6
    flashmx5m.pcf
    fp5m.log
    geo
    lena.ppm
    obj2
    ohs.doc
    penderecki-capriccio.wav
    vcfiu.hlp
    zhwik6
    So my point is that perhaps 7-Zip does something with certain types of files that zopfli doesn't? Preprocessing perhaps?
    How do you use 7-zip?
    Can you give some command line?
    My results:
    Code:
    pcbsd-8973% ./fsbench -w0 -i1 -s1 7z-deflate,9 zopfli/nop,5 ~/bench/book1
    
    WARNING: This file is too small, use at least 16 MB per thread.
    
    memcpy: 1 ms, 768771 bytes = 733 MB/s
    Codec                                   version      args
    C.Size      (C.Ratio)        C.Speed   D.Speed      C.Eff. D.Eff.
    7z-deflate                              9.20         9
         299731 (x 2.565)       801 KB/s  122 MB/s       488e3   74e6
    zopfli/nop                              2013-03-01/0 5/
         299527 (x 2.567)       439 KB/s  733 MB/s       268e3  447e6
    Codec                                   version      args
    C.Size      (C.Ratio)        C.Speed   D.Speed      C.Eff. D.Eff.
    done... (1x1 iteration(s)).
    pcbsd-8973% ./fsbench -w0 -i1 -s1 7z-deflate,9 zopfli/nop,5 ~/bench/obj2 
    
    WARNING: This file is too small, use at least 16 MB per thread.
    
    memcpy: 1 ms, 246814 bytes = 235 MB/s
    Codec                                   version      args
    C.Size      (C.Ratio)        C.Speed   D.Speed      C.Eff. D.Eff.
    7z-deflate                              9.20         9
          78175 (x 3.157)       675 KB/s  117 MB/s       461e3   80e6
    zopfli/nop                              2013-03-01/0 5/
          77795 (x 3.173)       261 KB/s  235 MB/s       179e3  161e6
    Codec                                   version      args
    C.Size      (C.Ratio)        C.Speed   D.Speed      C.Eff. D.Eff.
    done... (1x1 iteration(s)).
    Zopfli won in both cases. What numbers do you get? I use raw deflate, so yours should be slightly larger.
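    The "slightly larger" difference is just container overhead; a quick sketch with Python's stdlib (the payload here is synthetic, not the benchmark files):

```python
import gzip, zlib

data = b"reference corpus stand-in " * 1000

co = zlib.compressobj(9, zlib.DEFLATED, -15)   # wbits=-15: raw deflate, no wrapper
raw = co.compress(data) + co.flush()
gz = gzip.compress(data, compresslevel=9, mtime=0)

# gzip wraps the same deflate payload in a 10-byte header
# plus an 8-byte CRC-32/length trailer.
assert len(gz) - len(raw) >= 18
assert zlib.decompress(raw, wbits=-15) == data
print(len(raw), len(gz))
```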
