
Thread: zpaq benchmarks

#1 Sportman (Member, Planet Earth)

Did another test with more duplicate data in it. From 3 different systems I took the "Program Files", "Program Files (x86)" and "ProgramData" directories, with around 60, 45 and 45 software packages installed:

    Input:
    11,710,996,808 bytes, 115,020 files, 14,934 folders
    54,491 duplicate files, 3.38GB

    Output:
    5,679,407,538 bytes, 31.639 sec, nz -cF
    5,427,941,951 bytes, 130.708 sec, rar -m1
    4,988,601,911 bytes, 191.351 sec, rar -m2
    4,760,016,695 bytes, 229.773 sec, rar -m3
    4,754,627,864 bytes, 239.360 sec, rar -m4
    4,742,852,804 bytes, 131.085 sec, 7z -mx1 -t7z
    4,537,256,058 bytes, 20.649 sec, nz -cf
    4,525,490,586 bytes, 139.808 sec, 7z -mx2 -t7z
    4,275,310,587 bytes, 100.294 sec, exdupe
    4,199,492,742 bytes, 174.231 sec, 7z -mx3 -t7z
    3,797,186,089 bytes, 194.333 sec, 7z -mx4 -t7z
    3,642,918,008 bytes, 118.215 sec, zpaq -method 1
    3,483,168,571 bytes, 38.232 sec, nz -cd
    3,437,105,960 bytes, 90.018 sec, nz -cdp
    3,419,375,532 bytes, 94.312 sec, nz -cdP
    3,357,056,473 bytes, 77.419 sec, nz -cD
    3,344,721,803 bytes, 132.407 sec, nz -cDp
    3,332,127,917 bytes, 139.904 sec, nz -cDP
    3,224,133,815 bytes, 974.393 sec, nz -co
3,180,221,654 bytes, 33.153 sec, arc -m1
    3,142,186,287 bytes, 372.343 sec, 7z -mx5 -t7z
    3,104,595,641 bytes, 277.912 sec, zpaq -method 3
    3,071,692,753 bytes, 184.804 sec, zpaq -method 2
    2,941,259,991 bytes, 393.906 sec, 7z -mx6 -t7z
    2,934,908,086 bytes, 55.927 sec, arc -m2
    2,780,804,463 bytes, 94.623 sec, arc -m3
    2,776,783,553 bytes, 523.237 sec, 7z -mx7 -t7z
    2,733,154,655 bytes, 956.786 sec, zpaq -method 4
    2,645,520,388 bytes, 171.431 sec, arc -m4
    2,488,027,327 bytes, 1089.377 sec, arc -m5
    Last edited by Sportman; 31st July 2012 at 19:33.

#2 Bulat Ziganshin (Programmer, Uzbekistan)
Sportman: btw, trying to approach nz -cf efficiency, I have constructed the following arc option: -m=rep:c512:384m+4x4:tor:1:256k:h32k. Please give it a try next time.

A bit of my own tests:
    I:\>nz a a office.dll -cf
    Compressed 810 411 321 into 453 286 993 in 0.87s, 882 MB/s
    Global Time = 1.514 = 00:00:01.514 = 100%

    I:\>nz a a office.dll -cf -t8
    Compressed 810 411 321 into 453 961 563 in 0.24s, 3207 MB/s
    Global Time = 1.139 = 00:00:01.139 = 100%

    I:\>Arc a a office.dll -m=rep:c512:384m+4x4:tor:1:256k:h32k
    Compressed 1 file, 810,411,321 => 465,598,969 bytes. Ratio 57.4%
    Global Time = 1.279 = 00:00:01.279 = 100%

    I:\>Arc a a office.dll -m1
    Compressed 1 file, 810,411,321 => 377,480,096 bytes. Ratio 46.5%
    Global Time = 2.184 = 00:00:02.184 = 100%
I believe it would start outperforming nz -cf if I replaced the outdated tor:1 codec with the brilliant LZ4 algorithm.
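(A hedged sketch, not FreeArc's actual integration code: this is roughly what calling LZ4 on one 256 KB block looks like, assuming the LZ4 C library is linked in. It uses the current LZ4_compress_default API; the 2012-era API used LZ4_compress. The hook into FreeArc's codec chain is omitted.)
Code:
  #include <cstdio>
  #include <vector>
  #include <lz4.h>

  int main() {
      const int BLOCK = 256 * 1024;          // match the 256k block size above
      std::vector<char> in(BLOCK, 'x');      // stand-in input data
      std::vector<char> out(LZ4_compressBound(BLOCK));   // worst-case output size
      int n = LZ4_compress_default(in.data(), out.data(), BLOCK, (int)out.size());
      if (n <= 0) { std::fprintf(stderr, "compression failed\n"); return 1; }
      std::printf("compressed %d -> %d bytes\n", BLOCK, n);
      return 0;
  }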
    Last edited by Bulat Ziganshin; 7th August 2012 at 16:53.

#3 Sportman

Quote Originally Posted by Bulat Ziganshin:
Sportman: btw, trying to approach nz -cf efficiency, I have constructed the following arc option: -m=rep:c512:384m+4x4:tor:1:256k:h32k. Please give it a try next time.
    arc a -m1 -mt12 out.arc in\

    FreeArc 0.67 (May 22 2012) creating archive: out.arc
    Compressed 129,955 files, 11,710,996,808 => 3,179,452,012 bytes. Ratio 27.1%
    Compression time: cpu 118.06 secs, real 29.22 secs. Speed 400,819 kB/s
    All OK

    Elapsed Time: 00 00:00:29.369 (29.369 Seconds)

    -------------------------------------------------------------

    arc a -m=rep:c512:384m+4x4:tor:1:256k:h32k -mt12 out.arc in\

    FreeArc 0.67 (May 22 2012) creating archive: out.arc
    Compressed 129,955 files, 11,710,996,808 => 3,954,612,464 bytes. Ratio 33.7%
    Compression time: cpu 62.63 secs, real 27.65 secs. Speed 423,597 kB/s
    All OK

    Elapsed Time: 00 00:00:27.797 (27.797 Seconds)

    -------------------------------------------------------------

    nz a -cf -r -t12 out.nz in\

    NanoZip 0.09 alpha/Win64 (C) 2008-2011 Sami Runsas www.nanozip.net
    Intel(R) Core(TM) i7-3960X CPU @ 3.30GHz|42857 MHz|#6+HT|30157/32743 MB
    Archive: out.nz
    Threads: 12, memory: 512 MB, IO-buffers: 20+4 MB
    Compressor #0: nz_lzpf [48 MB]
    Compressor #1: nz_lzpf [48 MB]
    Compressor #2: nz_lzpf [48 MB]
    Compressor #3: nz_lzpf [48 MB]
    Compressor #4: nz_lzpf [48 MB]
    Compressor #5: nz_lzpf [48 MB]
    Compressor #6: nz_lzpf [48 MB]
    Compressor #7: nz_lzpf [48 MB]
    Compressor #8: nz_lzpf [48 MB]
    Compressor #9: nz_lzpf [48 MB]
    Compressor #10: nz_lzpf [48 MB]
    Compressor #11: nz_lzpf [48 MB]
    Compressed 11 710 996 808 into 4 537 239 496 in 0.00s, 11 TB/s
    IO-in: 7.44s, 1501 MB/s. IO-out: 2.01s, 2152 MB/s

    Elapsed Time: 00 00:00:09.030 (9.030 Seconds)

#4 Sportman
    Removed duplicate post.
    Last edited by Sportman; 7th August 2012 at 23:33.

#5 Bulat Ziganshin
Oh, FreeArc is limited by I/O in this case. And nz is really beautiful. I should try m/t I/O.

#6 Member (Antwerp, Belgium)

Quote Originally Posted by Bulat Ziganshin:
Sportman: btw, trying to approach nz -cf efficiency, I have constructed the following arc option: -m=rep:c512:384m+4x4:tor:1:256k:h32k. Please give it a try next time.

A bit of my own tests:
    Code:
    Arc a a office.dll -m=rep:c512:384m+4x4:tor:1:256k:h32k
I believe it would start outperforming nz -cf if I replaced the outdated tor:1 codec with the brilliant LZ4 algorithm.
I wonder why you use the classic REP as pre-processor and not SREP?
Would SREP not be more efficient (ratio per time unit) than REP?

#7 Bulat Ziganshin
SREP is much slower (~100 MB/s rather than 500+ MB/s).

#8 Sportman
    Single file with redundant data in it:

    Input:
    6,390,022,524 bytes db file

    Output:
    1,506,424,321 bytes, 56.027 sec, zpaq -method 1
    1,505,306,076 bytes, 16.857 sec, lz4 -c0
    1,340,678,786 bytes, 37.746 sec, lz4 -c1
    1,256,034,792 bytes, 36.485 sec, exd
    1,238,012,074 bytes, 21.694 sec, lz4 -c2
    939,393,656 bytes, 5.089 sec, nz -cF
    798,208,392 bytes, 3.103 sec, nz -cf
    550,765,919 bytes, 14.520 sec, arc -m2
    536,595,387 bytes, 5.318 sec, nz -cd
    526,220,916 bytes, 66.169 sec, nz -cdP
    521,511,962 bytes, 25.943 sec, rar -m1
    520,642,917 bytes, 14.557 sec, nz -cD
    515,298,742 bytes, 7.066 sec, arc -m1
    513,702,659 bytes, 59.504 sec, nz -cdp
    488,510,961 bytes, 126.033 sec, nz -cDP
    484,287,222 bytes, 41.631 sec, rar -m2
    405,703,266 bytes, 52.897 sec, bsc -m0
    377,833,430 bytes, 68.509 sec, zpaq -method 2
    369,444,303 bytes, 42.218 sec, rar -m3
    369,280,901 bytes, 46.949 sec, rar -m4
    369,229,683 bytes, 52.089 sec, rar -m5
    300,674,291 bytes, 18.410 sec, 7z -mx2 -t7z
    298,955,468 bytes, 21.067 sec, 7z -mx4 -t7z
    298,385,604 bytes, 19.984 sec, 7z -mx3 -t7z
    297,513,762 bytes, 16.701 sec, 7z -mx1 -t7z
    282,956,120 bytes, 77.982 sec, mcomp_x64 -mw
    276,793,713 bytes, 483.631 sec, zpaq -method 4
    268,657,800 bytes, 160.082 sec, zpaq -method 3
    262,837,712 bytes, 36.288 sec, arc -m3
    226,576,387 bytes, 228.481 sec, 7z -mx5 -t7z
    224,878,644 bytes, 248.085 sec, 7z -mx6 -t7z
    216,255,020 bytes, 15.509 sec, bsc -m7
    213,618,810 bytes, 24.900 sec, bsc -m6
    210,900,806 bytes, 377.845 sec, 7z -mx7 -t7z
    207,776,016 bytes, 108.233 sec, arc -m4
    200,314,764 bytes, 16.288 sec, bsc -m3
    191,948,065 bytes, 1442.940 sec, arc -m5
    189,817,880 bytes, 20.518 sec, bsc -m5
    185,118,682 bytes, 16.637 sec, bsc -m4
    143,021,562 bytes, 798.543 sec, zcm -m0
    139,073,269 bytes, 824.625 sec, zcm -m1
    132,407,523 bytes, 947.924 sec, zcm -m7
    110,092,324 bytes, 94.742 sec, nz -co
99,539,156 bytes, 1073.631 sec, nz -cO
    Last edited by Sportman; 12th August 2012 at 01:54.

#9 Cyan (Member, France)
Hi Sportman,
Just stumbled onto the series of tests you performed.

Would you please share with us some information on the test config?

It seems to be a 12-core system (real? hyper-threaded?). Which CPU, which OS?

And what about the underlying I/O? It seems extremely fast (and, btw, the nz results are pretty amazing), and it seems this is the part that makes the difference between the tested programs. Is it a RAM disk, a RAID of SSDs, or anything else?

Last point: which LZ4 version was used in this test? Is it this one => http://fastcompression.blogspot.fr/p/lz4.html ? Does it correctly detect 12 cores?

#10 Sportman

Quote Originally Posted by Cyan:
Would you please share with us some information on the test config?
    Intel Core i7-3960X Extreme (OC) 4.4GHz+ 5.7GHz turbo 6 core (12 hyper-threads)
    32GB PC3-17000 2133MHz DDR3
    nVIDIA GeForce GTX 680
    Corsair Performance Pro 256GB single SSD
    Windows 7 Pro SP1 64-bit

    7z [64] 9.28 alpha 2012-06-20
    arc 0.67 May 22 2012
    bsc 3.1.0 8 July 2012
    exdupe 0.3.3 beta
    lz4 v1.3 Mar 16 2012
    mcomp_x64 v2.00
    nz 0.09 alpha/win64
rar 4.20 9 June 2012
    zcm v.050a

For all tests I used the same hardware (hyper-threads enabled) and took the latest software versions I could find, preferably 64-bit.
I set the thread parameter to 12 in most cases where the option was available. LZ4 does detect 12 cores correctly by default; as far as I saw, only NanoZip uses 6 instead of 12 cores by default, and ZCM is single-core. In all tests the GPU (and CUDA) was disabled by running over remote desktop, except BSC -m7, which was tested via the console.
For the later tests with folders I used a RAM-disk only to store console output, not for file input or output; that was done on a single SSD in all tests. I guess the I/O speed is so high because Windows (default install) uses memory to cache SSD reads and writes; otherwise I cannot explain NanoZip being faster than the theoretical SSD speed (515MB/s): compressing 11,710,996,808 bytes in 9.03 seconds implies roughly 1,240MB/s of input alone. I verified this by repeating the same tests on a single harddisk (210MB/s) and got similarly high speeds with NanoZip.

I think NanoZip could be even faster if the remaining task(s) were split over the already-finished threads, because I see some threads at 100% while others are only at 60-80%.
    Last edited by Sportman; 13th August 2012 at 03:33.

#11 Cyan
OK, thanks Sportman.

So the primary reason for nz's performance is its capability to read from and write to cache without suffering any delay from the real storage device behind it, whatever it is (at least in its 64-bit version; maybe it could be different for the 32-bit one?). The main process probably "quits" before data is really written to disk.

Have you got some kind of "process time" measurement? This would help determine the share of I/O wait.

    Regards

#12 Bulat Ziganshin
    Sportman:
1. running all tests on a RAM disk would make results more repeatable and less error-prone
2. I suggest running nz with the -t12 switch, and please keep checking FreeArc's fastest mode that I suggested above

    Cyan:
1. write caching is an OS feature automatically used by all programs
2. nz's strength is that it splits the task into N threads, each reading, compressing and writing data independently. AFAIK, it's the only archiver with m/t I/O (a sketch of the idea follows below).
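To illustrate what m/t I/O means in practice, here is a minimal C++11 sketch (my illustration, not NanoZip's code): each worker owns a fixed slice of the input and opens its own file handles, so reading, compressing and writing proceed with no shared I/O loop. The input name, thread count and stub codec are placeholder assumptions.
Code:
  #include <algorithm>
  #include <fstream>
  #include <string>
  #include <thread>
  #include <vector>

  // Stand-in for a real codec such as nz_lzpf or tornado.
  static std::vector<char> compress_stub(const std::vector<char>& in) {
      return in;  // identity "compression" keeps the sketch self-contained
  }

  int main() {
      const char* path = "in.db";   // hypothetical input file
      const unsigned N = 12;        // worker count, as in the tests above

      std::ifstream probe(path, std::ios::binary | std::ios::ate);
      const long long size  = probe.tellg();
      const long long slice = (size + N - 1) / N;

      std::vector<std::thread> pool;
      for (unsigned t = 0; t < N; ++t)
          pool.emplace_back([=] {
              const long long off  = (long long)t * slice;
              const long long todo = std::min(slice, size - off);
              if (todo <= 0) return;
              // Each thread has private handles: independent read/compress/write.
              std::ifstream in(path, std::ios::binary);
              in.seekg(off);
              std::vector<char> buf(todo);
              in.read(buf.data(), todo);
              std::vector<char> packed = compress_stub(buf);
              std::ofstream out("out.part" + std::to_string(t), std::ios::binary);
              out.write(packed.data(), packed.size());
          });
      for (auto& th : pool) th.join();
  }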

#13 Cyan

Quote Originally Posted by Bulat Ziganshin:
2. nz's strength is that it splits the task into N threads, each reading, compressing and writing data independently. AFAIK, it's the only archiver with m/t I/O.
A while back, I did some tests to check this strategy, and overall it turned out to be worse than having a single thread in charge of I/O (working in parallel with compression, of course).

I simply assumed then that the OS was handling I/O better when it was a sequential problem; in contrast, reading and writing in no particular order could cause some performance deterioration, for example by defeating look-ahead assumptions, as in a "random seek" situation.

Now, it could also be that my tested implementation was not good enough...
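For contrast, here is a rough sketch of the single-I/O-thread strategy described above, under the same placeholder assumptions as the earlier sketch: one reader pushes strictly sequential blocks into a queue, and a pool of workers compresses them, so the storage device never sees parallel seeks.
Code:
  #include <condition_variable>
  #include <fstream>
  #include <mutex>
  #include <queue>
  #include <thread>
  #include <vector>

  // Simple unbounded block queue; production code would cap its size.
  struct BlockQueue {
      std::queue<std::vector<char>> q;
      std::mutex m;
      std::condition_variable cv;
      bool done = false;

      void push(std::vector<char> b) {
          { std::lock_guard<std::mutex> lk(m); q.push(std::move(b)); }
          cv.notify_one();
      }
      bool pop(std::vector<char>& out) {
          std::unique_lock<std::mutex> lk(m);
          cv.wait(lk, [&] { return !q.empty() || done; });
          if (q.empty()) return false;          // finished and drained
          out = std::move(q.front()); q.pop();
          return true;
      }
      void finish() {
          { std::lock_guard<std::mutex> lk(m); done = true; }
          cv.notify_all();
      }
  };

  int main() {
      BlockQueue queue;
      std::vector<std::thread> workers;
      for (int i = 0; i < 12; ++i)
          workers.emplace_back([&] {
              std::vector<char> block;
              while (queue.pop(block)) { /* compress(block) goes here */ }
          });

      // Single reader: strictly sequential 4 MB blocks, HDD-friendly.
      std::ifstream in("in.db", std::ios::binary);  // hypothetical input
      std::vector<char> buf(4 << 20);
      while (in.read(buf.data(), buf.size()) || in.gcount() > 0) {
          buf.resize((size_t)in.gcount());
          queue.push(buf);                          // copied; buf is reused below
          buf.resize(4 << 20);
      }
      queue.finish();
      for (auto& w : workers) w.join();
  }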

#14 Sportman

Quote Originally Posted by Bulat Ziganshin:
    Sportman:
1. running all tests on a RAM disk would make results more repeatable and less error-prone
2. I suggest running nz with the -t12 switch, and please keep checking FreeArc's fastest mode that I suggested above
1. My RAM-disk software allows a maximum of 4GB (free version), not enough for these big-file tests.
I ran the test 6 times in a row to see the timing variation on the single SSD, as before:
    nz -cf -t12
    2.941 sec
    3.001 sec
    3.011 sec
    2.948 sec
    3.066 sec
    2.959 sec

2. I used the -t12 switch for nz in the last test. I also tried your suggestion with the last test, but arc a -m=rep:c512:384m+4x4:tor:1:256k:h32k -mt12 out.arc in.db hung after 25 seconds of CPU time with a 1,415,403,206-byte output file and the console saying "Processed 99.7%"; it never finished, and only CTRL-C could end it.

#15 Member (Denmark)
I don't think you can rely very much on the NUMBER_OF_PROCESSORS variable. You might consider the "ordinary" way, at least as a fallback:
    Code:
  #include <stdlib.h>   // getenv, atoi
  #include <windows.h>  // SYSTEM_INFO, GetSystemInfo

  // In Windows return %NUMBER_OF_PROCESSORS% or query the system
  int rc = 0;
  const char* p = getenv("NUMBER_OF_PROCESSORS");
  if (p) rc = atoi(p);
  else {
    SYSTEM_INFO sysinfo;
    GetSystemInfo(&sysinfo);
    rc = sysinfo.dwNumberOfProcessors;
  }
    This will include hyperthreading virtual cores, but it still makes a better default.

#16 Bulat Ziganshin
nz is too smart; e.g. it detects my system as:

Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz|12800 MHz|#4+HT|6180/16361 MB

It just prefers not to take the HT-provisioned CPU threads into account.

Sportman, can you please retry the rep:c512:384m+4x4:tor:1:256k:h32k test with the latest alpha: http://freearc.org/Download-Alpha.aspx

#17 Sportman

Quote Originally Posted by Bulat Ziganshin:
Sportman, can you please retry the rep:c512:384m+4x4:tor:1:256k:h32k test with the latest alpha: http://freearc.org/Download-Alpha.aspx
    Ok now:
    1,415,540,201 bytes, 6.628 sec (arc report => 1,415,674,715 bytes)
    Last edited by Sportman; 14th August 2012 at 12:40.

#18 Sportman
I tried to find a way to avoid Windows caching: removing the page file, command-line commands to clean the Windows cache, software for Windows cache cleaning, software to optimize Windows memory, but nothing worked. So I wrote a program myself, http://www.metacompressor.com/download/cleanmem.zip, to clean memory. It's a .NET 2.x (or higher) combined GUI/command-line application; for example, "cleanmem 1024" cleans 1024MB of memory. Always select about 2GB less than the max physical memory.
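I haven't seen cleanmem's source, so the mechanism below is only an assumption: the usual trick for such tools is to allocate and touch a large block of RAM so the OS evicts file-cache pages to satisfy the allocation. A minimal C++ sketch of that idea:
Code:
  #include <cstdio>
  #include <cstdlib>

  int main(int argc, char** argv) {
      // e.g. "cleanmem 1024" - amount of memory to claim, in MB
      size_t mb = (argc > 1) ? strtoull(argv[1], nullptr, 10) : 1024;
      size_t bytes = mb * 1024 * 1024;
      char* block = static_cast<char*>(malloc(bytes));
      if (!block) { fprintf(stderr, "allocating %zu MB failed\n", mb); return 1; }
      // Touch every page so the OS must back the allocation with physical RAM,
      // pushing cached file data out of memory.
      for (size_t i = 0; i < bytes; i += 4096) block[i] = 1;
      printf("claimed and touched %zu MB\n", mb);
      free(block);
      return 0;
  }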

I repeated the last test on a single harddisk, a single SSD and a RAM-disk, and ran cleanmem before every archiver (all at 12 threads):

    Input harddisk:
    6,390,022,524 bytes db file

    Output harddisk:
    1,506,424,321 bytes, 64.521 sec, zpaq -method 1
    1,505,306,076 bytes, 43.403 sec, lz4 -c0
    1,416,030,535 bytes, 44.806 sec, arc -mxtor:1
    1,415,540,201 bytes, 44.465 sec, arc -m=rep:c512:384m+4x4:tor:1:256k:h32k
    1,262,409,546 bytes, 42.762 sec, qpress64 -L1
    1,257,660,820 bytes, 42.267 sec, qpress64 -L2
    1,256,034,792 bytes, 49.915 sec, exdupe
    1,198,830,656 bytes, 41.661 sec, qpress64 -L3
    798,208,427 bytes, 65.830 sec, nz -cf
    521,511,962 bytes, 37.833 sec, rar -m1
    515,298,446 bytes, 37.332 sec, arc -m1
    405,703,266 bytes, 63.284 sec, bsc -m0
    297,513,762 bytes, 36.061 sec, 7z -mx1 -t7z
    282,956,120 bytes, 82.687 sec, mcomp_x64 -mw


    Input SSD:
    6,390,022,524 bytes db file

    Output SSD:
    1,506,424,321 bytes, 58.897 sec, zpaq -method 1
    1,505,306,076 bytes, 18.265 sec, lz4 -c0
    1,416,030,535 bytes, 14.111 sec, arc -mxtor:1
    1,415,540,201 bytes, 14.050 sec, arc -m=rep:c512:384m+4x4:tor:1:256k:h32k
    1,262,409,546 bytes, 15.993 sec, qpress64 -L1
    1,257,660,820 bytes, 15.652 sec, qpress64 -L2
    1,256,034,792 bytes, 36.707 sec, exdupe
    1,198,830,656 bytes, 15.604 sec, qpress64 -L3
    798,208,392 bytes, 12.816 sec, nz -cf
    521,511,962 bytes, 25.758 sec, rar -m1
    515,298,446 bytes, 13.355 sec, arc -m1
    405,703,266 bytes, 56.445 sec, bsc -m0
    297,513,762 bytes, 17.774 sec, 7z -mx1 -t7z
    282,956,120 bytes, 85.972 sec, mcomp_x64 -mw


    Input RAM-disk:
    6,390,022,524 bytes db file

    Output RAM-disk:
    1,506,424,321 bytes, 55.684 sec, zpaq -method 1
    1,505,306,076 bytes, 1.607 sec, lz4 -c0
    1,416,030,535 bytes, 3.570 sec, arc -mxtor:1
    1,415,540,201 bytes, 4.715 sec, arc -m=rep:c512:384m+4x4:tor:1:256k:h32k
    1,262,409,546 bytes, 2.552 sec, qpress64 -L1
    1,257,660,820 bytes, 2.921 sec, qpress64 -L2
    1,256,034,792 bytes, 33.872 sec, exdupe
    1,198,830,656 bytes, 7.671 sec, qpress64 -L3
    798,208,427 bytes, 3.274 sec, nz -cf
    521,511,962 bytes, 25.686 sec, rar -m1
    515,298,446 bytes, 5.647 sec, arc -m1
    405,703,266 bytes, 53.934 sec, bsc -m0
    297,513,762 bytes, 16.807 sec, 7z -mx1 -t7z
    282,956,120 bytes, 89.108 sec, mcomp_x64 -mw

    Update 1: added RAM-disk results
    Update 2: added arc -mxtor:1 and qpress64 -L1 till L3
    Last edited by Sportman; 21st August 2012 at 22:53.

#19 Cyan
Thanks Sportman for this very precise report.

It's interesting: the relative ranking of nz differs between SSD and HDD. The -cf configuration is the fastest on SSD, while it is among the slowest on HDD. Quite a huge difference.

It could point towards the "seek cost" mentioned earlier: if nz has several reader processes in parallel, each in charge of its own chunk, then the read head of the HDD may have trouble seeking between different parts of the files, while the seek time of an SSD is almost zero, so the strategy works better there.

#20 Sportman

Quote Originally Posted by Cyan:
the read head of the HDD may have trouble seeking between different parts of the files, while the seek time of an SSD is almost zero.
I managed to install a trial version of RAM-disk software that allows more than 4GB and added the RAM-disk results. RAM-disk read/write speed is 12-16 times faster than a single SSD.

#21 Bulat Ziganshin
Sportman, please try next time with -mxtor:1; it seems that REP is the bottleneck in this situation.

#22 Sportman

Quote Originally Posted by Bulat Ziganshin:
please try next time with -mxtor:1
Done. I also added qpress64; if there are more multi-threaded archivers, I can add them too.

#23 pothos (Member, Germany)
Hey, sorry, I don't want to be rude, but this thread is about the ZPAQ standard and I'm very interested in it. Many of the recent posts do not relate to zpaq; maybe you should start a new thread for your compression-software testing.

    Thanks

#24 Sportman

Quote Originally Posted by pothos:
Many of the recent posts do not relate to zpaq
The tests were specially designed to exercise ZPAQ's new duplicate-file feature (see the generic sketch of the idea below). To compare against ZPAQ, I took other multi-threaded archivers with an option to compress folders. I repeated the test with more duplicate data to see if ZPAQ would do better on that, and finally took a highly redundant single db file, so ZPAQ could also be compared against archivers without an option to compress folders. Most reactions came because bugs were found or to rule out mistakes in testing. It was not my intention to flood this thread; the results were just different than I expected. If this test needs an update, I shall do it in a new thread. If there are other multi-threaded archivers to compare ZPAQ against, tell me in a personal message; I am also interested in whether multi-server archivers already exist, because I have an almost identical second test system, so write that in a PM too.
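For readers wondering what the duplicate-file feature amounts to: the core idea is content hashing, storing a file's bytes once and recording later identical files as references. The sketch below is a generic illustration only, not zpaq's actual format (zpaq deduplicates fragments using SHA-1 hashes; here a non-cryptographic FNV-1a stands in, so real use would need a stronger hash).
Code:
  #include <cstdint>
  #include <fstream>
  #include <iostream>
  #include <iterator>
  #include <string>
  #include <unordered_map>
  #include <vector>

  // FNV-1a 64-bit; an archiver would use a cryptographic hash so that
  // collisions are negligible.
  static uint64_t fnv1a(const std::vector<char>& data) {
      uint64_t h = 1469598103934665603ULL;
      for (char c : data) {
          h ^= static_cast<unsigned char>(c);
          h *= 1099511628211ULL;
      }
      return h;
  }

  int main(int argc, char** argv) {
      std::unordered_map<uint64_t, std::string> seen;  // hash -> first file name
      uint64_t dup_files = 0, dup_bytes = 0;
      for (int i = 1; i < argc; ++i) {
          std::ifstream f(argv[i], std::ios::binary);
          std::vector<char> data((std::istreambuf_iterator<char>(f)),
                                 std::istreambuf_iterator<char>());
          const uint64_t h = fnv1a(data);
          if (seen.count(h)) { ++dup_files; dup_bytes += data.size(); }
          else seen.emplace(h, argv[i]);
      }
      std::cout << dup_files << " duplicate files, " << dup_bytes << " bytes\n";
      return 0;
  }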

#25 Matt Mahoney (Expert, Melbourne, Florida, USA)
    I'm interested in why zpaq did poorly on the db file.

#26 Sportman
A second highly redundant test, this time with a single IIS log file; all tests used 12 threads except ZCM:

    Input RAM-disk:
    6,350,954,695 bytes IIS log file

    Output RAM-disk:
    707,717,722 bytes, 2.544 sec, qpress64 -L1
    686,695,906 bytes, 4.722 sec, arc -m=rep:c512:384m+4x4:tor:1:256k:h32k
    685,731,833 bytes, 7.280 sec, qpress64 -L3
    669,287,685 bytes, 2.644 sec, qpress64 -L2
    660,119,338 bytes, 1.350 sec, lz4 -c0
    645,389,433 bytes, 6.402 sec, exdupe
    642,118,794 bytes, 3.446 sec, arc -mxtor:1
    563,364,998 bytes, 56.258 sec, zpaq -method 1
    561,153,312 bytes, 13.205 sec, lz4 -c1
    524,216,965 bytes, 10.175 sec, lz4 -c2
    504,639,418 bytes, 3.332 sec, nz -cF
    482,657,975 bytes, 2.837 sec, nz -cf
    422,870,131 bytes, 12.769 sec, 7z -mx1 -t7z
    414,425,494 bytes, 11.830 sec, nz -cd
    413,560,428 bytes, 6.775 sec, arc -m1
    401,720,461 bytes, 56.568 sec, zpaq -method 2
    394,753,746 bytes, 22.938 sec, rar -m1
    377,455,039 bytes, 13.509 sec, 7z -mx2 -t7z
    365,981,531 bytes, 95.911 sec, nz -cdP
    348,080,713 bytes, 77.980 sec, nz -cdp
    335,800,507 bytes, 15.877 sec, 7z -mx3 -t7z
    331,075,486 bytes, 159.577 sec, nz -cDP
327,471,460 bytes, 29.644 sec, nz -cD
300,597,983 bytes, 33.223 sec, rar -m2
    316,523,508 bytes, 18.338 sec, 7z -mx4 -t7z
    289,165,732 bytes, 42.121 sec, rar -m3
    286,768,217 bytes, 49.320 sec, rar -m4
    286,005,283 bytes, 55.504 sec, rar -m5
    274,811,627 bytes, 82.273 sec, nz -co
    270,751,539 bytes, 102.292 sec, 7z -mx5 -t7z
    267,135,129 bytes, 108.796 sec, 7z -mx6 -t7z
    253,265,714 bytes, 198.876 sec, 7z -mx7 -t7z
    233,403,066 bytes, 8.434 sec, bsc -m3
    184,467,335 bytes, 5.111 sec, arc -m2
182,324,844 bytes, 9.254 sec, arc -m3
    181,258,600 bytes, 8.450 sec, bsc -m4
    179,496,411 bytes, 209.309 sec, zpaq -method 3
    177,904,210 bytes, 11.026 sec, arc -m4
    166,755,616 bytes, 85.650 sec, mcomp_x64 -mw
    166,696,258 bytes, 11.169 sec, bsc -m5
    166,534,126 bytes, 31.368 sec, bsc -m0
    162,232,102 bytes, 13.636 sec, bsc -m6
    161,944,830 bytes, 73.287 sec, arc -m5
    158,919,806 bytes, 9.312 sec, bsc -m7
    158,792,821 bytes, 161.790 sec, arc -m6
    134,952,205 bytes, 637.304 sec, zpaq -method 4
    112,565,035 bytes, 455.473 sec, zcm -m0
    96,768,550 bytes, 478.817 sec, zcm -m7
    Last edited by Sportman; 21st August 2012 at 22:59.

#27 Bulat Ziganshin
Sportman, I'm still trying to understand why FreeArc is so slow; in my tests xtor:1 is faster than nz -cf.

Please check that facompress.dll is placed in the same directory as arc.exe.

#28 Sportman

Quote Originally Posted by Bulat Ziganshin:
Please check that facompress.dll is placed in the same directory as arc.exe.
No, there was only Arc.exe. I added facompress.dll and facompress_mt.dll and repeated both RAM-disk tests; both went from 6.x to 3.x sec. I updated the results.

#29 Bulat Ziganshin
Thanks! What about -m1/-mrep...? They may improve too.

#30 Sportman

Quote Originally Posted by Bulat Ziganshin:
What about -m1/-mrep...? They may improve too.
    They did, all updated.
