
Thread: Fastest decompressor!?

  1. #1
    Member Sanmayce's Avatar
    Join Date
    Apr 2010
    Location
    Sofia
    Posts
    57
    Thanks
    0
    Thanked 0 Times in 0 Posts

    Fastest decompressor!?

    Hi to all LZ maniacs.

    My diagnosis: 'Maniacal and unfading TEXT extra decompression speed fondness.'
    I have some dreams about brute-force searching of English TEXTs, which is closely bound up with ... of course, variants of plain C Lempel-Ziv implementations.
    I should like to ask: "Who is the current LZ king? That is: the fastest decompressor tool and its author."
    As far as I know this is Lasse Reinhold's QuickLZ (regarding RAM-to-RAM decompression performance, that is), but I feel there are a few new kings coming.

    Statements like "Compression mode does not affect decompression time." are music to my ears.
    To my limited knowledge this is the case only with the LZ family.

    Also, don't you think that, in order to avoid 200+ bytes of hardware & software specifications, it is better to use an unconditional measure, 'decompression ratio', analogous to 'compression ratio' - but instead of compressed size compared to uncompressed size, it is the speed of RAM-to-RAM decompression compared to memcpy().
    For instance, the 'Everest' benchmarking program on my machine gives 3724MB/s for memory copy, while the memcpy() I use gives 1256MB/s - ONE THIRD, what a waste!? Whether Linux or Windows, 32bit or 64bit, triple or double memory channel, 2GHz or 4GHz - my careometer is zero; I prefer a measure that speaks for itself.
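    To make the idea concrete, here is a minimal sketch (in plain C, since the thread is about plain C implementations) of how such a 'decompression ratio' could be measured: time memcpy() over the original data as the 100% baseline, then time a RAM-to-RAM decoder producing the same output and report the quotient. The decoder is passed in as a function pointer because it is only a placeholder, not any particular library's API; a real harness would of course repeat and average the timings.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    /* placeholder signature for whatever LZ decoder is being measured */
    typedef size_t (*decomp_fn)(const void *src, size_t src_len, void *dst, size_t dst_cap);

    static double secs(clock_t a, clock_t b) { return (double)(b - a) / CLOCKS_PER_SEC; }

    /* returns the 'Decompression Ratio' in percent: decoder speed vs. memcpy() speed */
    double decompression_ratio(decomp_fn decode,
                               const void *comp, size_t comp_len,
                               const void *orig, size_t orig_len)
    {
        void *copy = malloc(orig_len);
        void *out  = malloc(orig_len);
        if (!copy || !out) { free(copy); free(out); return 0.0; }

        clock_t t0 = clock();
        memcpy(copy, orig, orig_len);               /* the 100% baseline */
        clock_t t1 = clock();
        decode(comp, comp_len, out, orig_len);      /* RAM-to-RAM decompression */
        clock_t t2 = clock();

        free(copy);
        free(out);
        return 100.0 * secs(t0, t1) / secs(t1, t2); /* equal to memcpy() => 100% */
    }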

    Following is a real-world test for English text(sorted by decompression time in descending order):

    BALZ v1.15 by Ilia Muraviev:
    [Pseudo RAM-to-RAM(from cache & all 'fwrite' in comments) decompression took: 17185ms(206908949/17185 = 11MB/s); Grmbl: what LZ is this!?]
    009,769,446 Folio VIP - Osho's Books on CD-ROM Part 1.txt.IM_balz
    009,359,126 Folio VIP - Osho's Books on CD-ROM Part 2.txt.IM_balz
    009,638,183 Folio VIP - Osho's Books on CD-ROM Part 3.txt.IM_balz
    009,390,510 Folio VIP - Osho's Books on CD-ROM Part 4.txt.IM_balz
    008,913,081 Folio VIP - Osho's Books on CD-ROM Part 5.txt.IM_balz
    047,070,346 bytes in total i.e. Compression Ratio: 22.7%; Decompression Ratio: 0.8%

    Small PPMII compressor with 1024KB memory heap, variant I rev.1 by Dmitry Shkarin, 'order=3' compression level:
    [Pseudo RAM-to-RAM(from cache & 'putc' in comment) decompression took: 14140ms(206908949/14140 = 14MB/s)]
    09,877,008 Folio VIP - Osho's Books on CD-ROM Part 1.txt.DS_ppms_O3
    09,467,546 Folio VIP - Osho's Books on CD-ROM Part 2.txt.DS_ppms_O3
    09,752,451 Folio VIP - Osho's Books on CD-ROM Part 3.txt.DS_ppms_O3
    09,507,375 Folio VIP - Osho's Books on CD-ROM Part 4.txt.DS_ppms_O3
    08,897,278 Folio VIP - Osho's Books on CD-ROM Part 5.txt.DS_ppms_O3
    047,501,658 bytes in total i.e. Compression Ratio: 22.9%; Decompression Ratio: 1.1%

    LZMA Utility 4.65 by Igor Pavlov:
    [Pseudo RAM-to-RAM(from cache & 'write' in comment) decompression took: 4250ms(206908949/4250 = 46MB/s)]
    008,867,853 Folio VIP - Osho's Books on CD-ROM Part 1.txt.IP_LZMA
    008,552,195 Folio VIP - Osho's Books on CD-ROM Part 2.txt.IP_LZMA
    008,820,098 Folio VIP - Osho's Books on CD-ROM Part 3.txt.IP_LZMA
    008,538,068 Folio VIP - Osho's Books on CD-ROM Part 4.txt.IP_LZMA
    008,163,054 Folio VIP - Osho's Books on CD-ROM Part 5.txt.IP_LZMA
    042,941,268 bytes in total i.e. Compression Ratio: 20.7%; Decompression Ratio: 3.6%

    minigzip(zlib 1.2.5) Jean-loup Gailly & Mark Adler, maximum compression level:
    [Pseudo RAM-to-RAM(from cache & 'fwrite' in comment) decompression took: 1700ms(206908949/1700 = 116MB/s)]
    013,265,195 Folio VIP - Osho's Books on CD-ROM Part 1.txt.gz
    012,858,271 Folio VIP - Osho's Books on CD-ROM Part 2.txt.gz
    013,104,281 Folio VIP - Osho's Books on CD-ROM Part 3.txt.gz
    012,639,984 Folio VIP - Osho's Books on CD-ROM Part 4.txt.gz
    011,736,155 Folio VIP - Osho's Books on CD-ROM Part 5.txt.gz
    063,603,886 bytes in total i.e. Compression Ratio: 30.7%; Decompression Ratio: 9.2%

    QuickLZ 1.4.0 by Lasse Reinhold, maximum compression level:
    [RAM-to-RAM decompression took: 640ms(206908949/640 = 308MB/s)]
    016,330,610 Folio VIP - Osho's Books on CD-ROM Part 1.txt.lasse
    015,736,568 Folio VIP - Osho's Books on CD-ROM Part 2.txt.lasse
    016,064,789 Folio VIP - Osho's Books on CD-ROM Part 3.txt.lasse
    015,735,147 Folio VIP - Osho's Books on CD-ROM Part 4.txt.lasse
    014,765,031 Folio VIP - Osho's Books on CD-ROM Part 5.txt.lasse
    078,632,145 bytes in total i.e. Compression Ratio: 38.0%; Decompression Ratio: 24.5%

    Non-compressed TEXT:
    [RAM-to-RAM decompression i.e. memcpy() took: 157ms(206908949/157 = 1256MB/s)]
    042,676,189 Folio VIP - Osho's Books on CD-ROM Part 1.txt
    041,123,931 Folio VIP - Osho's Books on CD-ROM Part 2.txt
    042,447,586 Folio VIP - Osho's Books on CD-ROM Part 3.txt
    041,875,240 Folio VIP - Osho's Books on CD-ROM Part 4.txt
    038,786,003 Folio VIP - Osho's Books on CD-ROM Part 5.txt
    206,908,949 bytes in total i.e. Compression Ratio: 100%; Decompression Ratio: 100%

    The CPU for above stats was: Mobile Pentium Merom-1M, 65nm, 2166 MHz.

    Roughly speaking, one compressor may have a 1:3 compression ratio (QuickLZ) while another has 1:5 (LZMA) - so the better performer regarding delivery of decompressed data (TEXT) is not simply the one with the greater RAM-to-RAM decompression speed, but also the one with the smaller amount of data to load (i.e. it is ratio dependent).
    It is funny: in order to describe the overall performance of some decompressor, it is enough to combine the meaning of 'Compression Ratio' & 'Decompression Ratio' into one 'Com/Decom Ratio'. This is almost like removing the I/O time from the ultimate formula - as if we had an extremely speedy device. It's obvious: THE CLOSER THE 'Com/Decom Ratio' is to 100%, THE BETTER. The Com/Decom Ratio is 38.0/24.5 = 155% for QuickLZ. I wonder - here arises the need for an archiver with the LOWest C.R. but HIGHest D.R., something like a C.R. around 50% and a D.R. around 35%; it is worth exploring, I think.
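    As a worked example of the two ratios above, here is the arithmetic for the QuickLZ figures from this post, restated as a tiny C fragment (nothing is measured, these are just the numbers already quoted):

    #include <stdio.h>

    int main(void)
    {
        double original_bytes   = 206908949.0;
        double compressed_bytes = 78632145.0;   /* QuickLZ 1.4.0, max level */
        double memcpy_mb_s      = 1256.0;       /* memcpy() baseline on the test box */
        double decomp_mb_s      = 308.0;        /* QuickLZ RAM-to-RAM decompression */

        double com_ratio   = 100.0 * compressed_bytes / original_bytes;  /* ~38.0% */
        double decom_ratio = 100.0 * decomp_mb_s / memcpy_mb_s;          /* ~24.5% */
        double com_decom   = 100.0 * com_ratio / decom_ratio;            /* ~155%  */

        printf("Compression Ratio:   %.1f%%\n", com_ratio);
        printf("Decompression Ratio: %.1f%%\n", decom_ratio);
        printf("Com/Decom Ratio:     %.0f%%\n", com_decom);
        return 0;
    }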

    From my user standpoint LZ has a great future - that is, at the moment the algorithm is not implemented to its full potential; to have 6 cores and 10++GB/s of memory bandwidth and not be able to utilize such power with such a low-CPU-load algorithm - it's a shame. Yes, yes, I know that it is impossible to have a baby in one month by having nine wives.

    My LZ (for TEXTs) dream: to achieve 2x the SSD read speed (200MB/s) for decompressed TEXT delivery - which means one fourth of a second to load the compressed data (assuming a 25% compression ratio, i.e. 50MB) and another one fourth of a second to deliver the decompressed 200MB, in order to double the upload speed. That is it: 800MB/s RAM-to-RAM decompression. It may look like a greedy dream, but with incoming 20nm CPUs and with skillful guys like Dmitry Shkarin, Igor Pavlov, Lasse Reinhold, Markus Oberhumer, Ariya Hidayat, Ilia Muraviev it is achievable, I know it.

    For more info about [one of] the fastest LZ plain C implementation: www.sanmayce.com/Downloads/Kazuyalogo_diz_.pdf
    (Try again the next day in case it is missing, sorry.)

    I should surely like to test faster [LZ] decompressors on the OSHO books.
    Keep the thrill from decompression-speed-improvements alive.
    Speed is beauty. Yesterday I watched again a movie about aircraft development in which the narrator said of the SR-71 Blackbird: 'Inspiring awe'. I agree, but would add: 'Inspiring inspiration'.

    inspiration:
    1. High spirits: animation, elatedness, elation, euphoria, exaltation, exhilaration
    2. Something that encourages: motivation
    3. Liveliness and vivacity of imagination: brilliance, brilliancy, fire, genius
    4. A sudden exciting thought: brainstorm
    5. Divine guidance and motivation imparted directly: afflatus

    High spirits - High speeds, ha-ha.
    Regards.

  2. #2
    Programmer osmanturan's Avatar
    Join Date
    May 2008
    Location
    Mersin, Turkiye
    Posts
    651
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Regarding BALZ's performance, it's due to its ROLZ nature - decompression suffers from the ROLZ offset table. Pure LZ77 decompression is always faster for this reason.
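    To see why the pure-LZ77 side is so cheap, here is a minimal sketch of the copy-back decode loop: each match is just an (offset, length) pair copied from the already-decoded output, with no offset table to maintain as in ROLZ. The token layout below is invented for illustration only - it is not BALZ's or any real codec's format, and there is no input validation.

    #include <stddef.h>
    #include <stdint.h>

    size_t lz77_decode_sketch(const uint8_t *src, size_t src_len,
                              uint8_t *dst, size_t dst_cap)
    {
        size_t ip = 0, op = 0;
        while (ip < src_len && op < dst_cap) {
            uint8_t token = src[ip++];
            if (token & 0x80) {                 /* match: 7-bit length, 16-bit offset */
                size_t len    = (size_t)(token & 0x7F) + 3;
                size_t offset = (size_t)src[ip] | ((size_t)src[ip + 1] << 8);
                ip += 2;
                const uint8_t *match = dst + op - offset;
                while (len-- && op < dst_cap)
                    dst[op++] = *match++;       /* copy back from decoded output */
            } else {                            /* literal run of 'token' bytes */
                size_t len = token;
                while (len-- && ip < src_len && op < dst_cap)
                    dst[op++] = src[ip++];
            }
        }
        return op;                              /* number of decoded bytes */
    }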
    BIT Archiver homepage: www.osmanturan.com

  3. #3
    Member
    Join Date
    Sep 2008
    Location
    France
    Posts
    856
    Thanks
    447
    Thanked 254 Times in 103 Posts
    If you are interested in decompression speed, you may have a look at LZ4 :
    http://phantasie.tonempire.net/pc-co...-s-t95.htm#144

    You may try the option -bench if you want to test pure RAM-to-RAM compression/decompression speed.

    3) LZ4.exe -bench enwik8
    with a Core 2 Duo E8400 @ 3.0GHz :
    - Compression speed : 195MB/s
    - Decoding speed : 475MB/s

    4) LZ4.exe -bench win98.vmdk
    with a Core 2 Duo E8400 @ 3.0GHz :
    - Compression speed : 145MB/s
    - Decoding speed : 965MB/s

    LZ4 is also a very fast compressor, but a derivative spending a lot of time compressing (for a better ratio) and nonetheless keeping the same or better decoding speed is also achievable.
    Last edited by Cyan; 1st May 2010 at 18:21.

  4. #4
    Member
    Join Date
    Jun 2009
    Location
    Kraków, Poland
    Posts
    1,471
    Thanks
    26
    Thanked 120 Times in 94 Posts
    CABARC from Microsoft compresses moderately well when using the LZX method while being exceptionally fast at decompression. The bad thing is that CABARC is severely outdated and probably uses very inefficient (speed-wise) parsing, so it's not very competitive in compression performance.

  5. #5
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,497
    Thanks
    733
    Thanked 659 Times in 354 Posts
    lzx brought optimal parsing into lzh compression, but with its huffman stage it can't compete directly with lzss engines

  6. #6
    Member Sanmayce's Avatar
    Join Date
    Apr 2010
    Location
    Sofia
    Posts
    57
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Thank you Cyan,
    LZ4 is outstandingly fast,

    my in-search-for-fastest-text-decompressor test shows: the new LZ [co-]king: Yann Collet; the tool: LZ4.

    LZ4 Oct 16 2009 by Yann Collet:
    [RAM-to-RAM decompression took: (0.12+0.09+0.09+0.13+0.11)s = 540ms (206908949/540 = 365MB/s)]
    021,106,349 Folio VIP - Osho's Books on CD-ROM Part 1.txt.lz4
    020,262,262 Folio VIP - Osho's Books on CD-ROM Part 2.txt.lz4
    020,882,379 Folio VIP - Osho's Books on CD-ROM Part 3.txt.lz4
    020,506,294 Folio VIP - Osho's Books on CD-ROM Part 4.txt.lz4
    019,150,183 Folio VIP - Osho's Books on CD-ROM Part 5.txt.lz4
    101,907,467 bytes in total i.e. Compression Ratio: 49.2%; Decompression Ratio: 29.0%

    The test results follow:

    206,908,949 Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt
    101,903,677 Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.LZ4
    078,630,023 Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.QuickLZ

    LZ4 Prompt-to-Prompt Time: 0.72s
    C:\WorkTemp\_Osho's Books_test>lz4 -d "Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.lz4" nul
    LZ4 Oct 16 2009, compression software by Yann Collet
    Regenerated size : 206908949 Bytes
    Decoding Time : 0.56s ==> 366MB/s
    Total Time : 0.69s ( Read wait : 0.13s // Write wait : 0.00s )
    C:\WorkTemp\_Osho's Books_test>


    And to confirm above test:
    C:\WorkTemp\_Osho's Books_test>lz4 -bench "Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt"
    LZ4 Oct 16 2009, compression software by Yann Collet
    Setting HIGH_PRIORITY_CLASS...
    Benchmarking, please wait...
    Compressed 206908949 bytes into 101868935 bytes (49.2%) at 163.5 Mbyte/s.
    Decompressed at 375.7 Mbyte/s.
    (1 MB = 1 000 000 bytes)
    C:\WorkTemp\_Osho's Books_test>


    Although LZ4 has achieved best Decompression Ratio, LZ4 has next-to-best Com/Decom Ratio(49.2/29.0 = 169%) compared to QuickLZ(38.0/24.5 = 155%).
    For me these two measures are dominant, which tool is best - depends on needs.
    Nevertheless, LZ4 is closer to my needs and I think it is the fastest LZ decompressor for TEXTs, so far.
    Keep chewing(doing your mojo) Cyan, he-he.

    And one sad thing though - it's closed source, alas and double GRMBL.
    Regards.

  7. #7
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,954
    Thanks
    359
    Thanked 332 Times in 131 Posts

    Cool

    Check out ULZ, it has the simplest decoder ever!


  8. #8
    Member Sanmayce's Avatar
    Join Date
    Apr 2010
    Location
    Sofia
    Posts
    57
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Thank you Encode,
    ULZ performed among the fastest decompressors, but v0.02 remains untestable due to the lack of stats.
    How many seconds are needed to flush the uncompressed data to disk? Total time? Who cares?
    Maybe if you add some decent time stats (and/or fix writing to the nul device), then I will be able to test this promising tool.

    ULZ v0.02 by Ilia Muraviev, maximum compression level:
    [RAM-to-RAM decompression took: (?)ms(206908949/? = ?MB/s)]
    012,998,634 Folio VIP - Osho's Books on CD-ROM Part 1.txt.ulz
    012,508,645 Folio VIP - Osho's Books on CD-ROM Part 2.txt.ulz
    012,881,886 Folio VIP - Osho's Books on CD-ROM Part 3.txt.ulz
    012,496,452 Folio VIP - Osho's Books on CD-ROM Part 4.txt.ulz
    011,776,455 Folio VIP - Osho's Books on CD-ROM Part 5.txt.ulz
    062,662,072 bytes in total i.e. Compression Ratio: 30.2%; Decompression Ratio: ?%

    C:\WorkTemp\_Osho's Books_test>ulz d "Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.ulz" nul
    ULZ v0.02 by Ilia Muraviev
    Decompressing...
    62620125 -> 3605 in 0.8 sec
    C:\WorkTemp\_Osho's Books_test>

    C:\WorkTemp\_Osho's Books_test>ulz d "Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.ulz" buggynul
    ULZ v0.02 by Ilia Muraviev
    Decompressing...
    62620125 -> 206908949 in 3.6 sec
    C:\WorkTemp\_Osho's Books_test>

    The in-search-for-fastest-text-decompressor test results follow:

    206,908,949 Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt
    101,903,677 Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.LZ4
    078,630,023 Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.QuickLZ
    062,740,017 Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.AIN2.22
    062,620,125 Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.ULZ
    062,218,572 Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.PKZIP2.50
    055,730,272 Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.zip64ultra
    041,671,476 Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.Plzip0.5

    ...

    AIN 2.22 Prompt-to-Prompt Time: 3.1s
    C:\WorkTemp\_Osho's Books_test>ain e -y OSHO.AIN
    AIN 2.22 Copyright (c) 1993-95 Transas Marine (UK) Ltd.
    Archive : C:\WORKTEMP\_OSHO'~1\OSHO.AIN
    Created : 05.03.110 06:20:14
    Files : 1, Total size: original 206908949, compressed 62739945 (30%)
    Switches: /m1 /u1
    OSHO.TXT already exists
    Extracting OSHO.TXT
    1 files processed
    C:\WorkTemp\_Osho's Books_test>

    7-Zip 9.10 beta Prompt-to-Prompt Time: between 3.1s and 3.2s
    C:\WorkTemp\_Osho's Books_test>7z e -y OSHO.zip64ultra
    7-Zip 9.10 beta Copyright (c) 1999-2009 Igor Pavlov 2009-12-22
    Processing archive: OSHO.zip64ultra
    Extracting OSHO.TXT
    Everything is Ok
    Size: 206908949
    Compressed: 55730272
    C:\WorkTemp\_Osho's Books_test>

    PKUNZIP Version 2.50 Prompt-to-Prompt Time: between 3.1s and 3.2s
    C:\WorkTemp\_Osho's Books_test>pkunzip -o OSHO.ZIP
    PKUNZIP (R) FAST! Extract Utility Version 2.50 03-01-1999
    Copr. 1989-1999 PKWARE Inc. All Rights Reserved. Shareware Version
    PKUNZIP Reg. U.S. Pat. and Tm. Off.
    # Pentium II class CPU detected.
    # XMS version 2.00 detected.
    # DPMI version 0.90 detected.
    Searching ZIP: OSHO.ZIP
    Inflating: OSHO.TXT
    C:\WorkTemp\_Osho's Books_test>

    LZ4 Prompt-to-Prompt Time: 3.28s
    C:\WorkTemp\_Osho's Books_test>lz4 -d "Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.LZ4" LZ4.txt
    LZ4 Oct 16 2009, compression software by Yann Collet
    Regenerated size : 206908949 Bytes
    Decoding Time : 0.55s ==> 376MB/s
    Total Time : 3.25s ( Read wait : 0.11s // Write wait : 2.58s )
    C:\WorkTemp\_Osho's Books_test>

    QuickLZ Prompt-to-Prompt Time: between 3.5s and 3.6s
    C:\WorkTemp\_Osho's Books_test>kazuya d "Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.QuickLZ" QuickLZ.txt
    QuickLZ: decompressed 78630023 bytes back into 206908949 bytes.
    C:\WorkTemp\_Osho's Books_test>

    ULZ Prompt-to-Prompt Time: between 3.5s and 3.7s
    C:\WorkTemp\_Osho's Books_test>ulz d "Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.ULZ" ULZ.txt
    ULZ v0.02 by Ilia Muraviev
    Decompressing...
    62620125 -> 206908949 in 3.6 sec
    C:\WorkTemp\_Osho's Books_test>

    plzip Prompt-to-Prompt Time: 5.78s
    C:\WorkTemp\_Osho's Books_test>plzip.exe -n2 -k -d -f "Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.Plzip0.5"
    C:\WorkTemp\_Osho's Books_test>

    plzip Prompt-to-Prompt Time: 6.82s
    C:\WorkTemp\_Osho's Books_test>plzip.exe -n1 -k -d -f "Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.Plzip0.5"
    C:\WorkTemp\_Osho's Books_test>

    A domination by 10-15-year-old DOS tools, how is it possible?!
    Viva the authors of AIN & PKUNZIP!
    Paramore - Decode: very appropriate video-clip to contemplate on these two decades of decadence.
    Again Igor Pavlov did a great job, VIVA!

    Regards.

  9. #9
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,954
    Thanks
    359
    Thanked 332 Times in 131 Posts
    To measure Kernel Time/User Time/Process Time/Global Time, use the Timer tool from Igor Pavlov!

  10. #10
    Member Sanmayce's Avatar
    Join Date
    Apr 2010
    Location
    Sofia
    Posts
    57
    Thanks
    0
    Thanked 0 Times in 0 Posts
    There is another very strong candidate for best Com/Decom Ratio:

    LZTURBO 0.95 by Hamid Buzidi, -19 compression Method & Level:
    [RAM-to-RAM decompression took: (?)ms(206908949/? = ?MB/s)]
    010,792,683 Folio VIP - Osho's Books on CD-ROM Part 1.txt.lzt
    010,402,530 Folio VIP - Osho's Books on CD-ROM Part 2.txt.lzt
    010,741,607 Folio VIP - Osho's Books on CD-ROM Part 3.txt.lzt
    010,413,041 Folio VIP - Osho's Books on CD-ROM Part 4.txt.lzt
    009,934,520 Folio VIP - Osho's Books on CD-ROM Part 5.txt.lzt
    052,284,381 bytes in total i.e. Compression Ratio: 25.2%; Decompression Ratio: ?%

    Sadly again, no source and no RAM-to-RAM decompression stats.

    206,908,949 Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt
    101,903,677 Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.LZ4
    078,630,023 Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.QuickLZ
    062,740,017 Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.AIN2.22
    062,620,125 Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.ULZ
    062,218,572 Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.PKZIP2.50
    055,891,414 Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.zipDeflate64
    048,780,809 Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.lzt [-29]
    042,723,543 Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.lzt [-39]
    041,671,476 Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.Plzip0.5
    031,050,654 Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.zpaq1.00 [c3]

    ...

    C:\WorkTemp\_Osho's Books_test>timer ain e -y OSHO.AIN
    Timer 9.01 : Igor Pavlov : Public domain : 2009-05-31
    AIN 2.22 Copyright (c) 1993-95 Transas Marine (UK) Ltd.
    Archive : C:\WORKTEMP\_OSHO'~1\OSHO.AIN
    Created : 05.10.110 04:09:54
    Files : 1, Total size: original 206908949, compressed 62739945 (30%)
    Switches: /m1 /u1
    OSHO.TXT already exists
    Extracting OSHO.TXT
    1 files processed
    Kernel Time = 0.000 = 0%
    User Time = 0.000 = 0%
    Process Time = 0.000 = 0%
    Global Time = 3.194 = 100%
    C:\WorkTemp\_Osho's Books_test>

    C:\WorkTemp\_Osho's Books_test>timer 7za e -y OSHO.zipDeflate64
    Timer 9.01 : Igor Pavlov : Public domain : 2009-05-31
    7-Zip (A) 9.13 beta Copyright (c) 1999-2010 Igor Pavlov 2010-04-15
    Processing archive: OSHO.zipDeflate64
    Extracting OSHO.TXT
    Everything is Ok
    Size: 206908949
    Compressed: 55891414
    Kernel Time = 0.265 = 8%
    User Time = 2.375 = 72%
    Process Time = 2.640 = 81%
    Global Time = 3.253 = 100%
    C:\WorkTemp\_Osho's Books_test>

    C:\WorkTemp\_Osho's Books_test>timer ulz d "Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.ulz" OSHO.TXT
    Timer 9.01 : Igor Pavlov : Public domain : 2009-05-31
    ULZ v0.02 by Ilia Muraviev
    Decompressing...
    62620125 -> 206908949 in 3.6 sec
    Kernel Time = 0.296 = 8%
    User Time = 0.718 = 20%
    Process Time = 1.015 = 28%
    Global Time = 3.561 = 100%
    C:\WorkTemp\_Osho's Books_test>

    C:\WorkTemp\_Osho's Books_test>timer kazuya d "Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.quicklz"
    OSHO.TXT
    Timer 9.01 : Igor Pavlov : Public domain : 2009-05-31
    QuickLZ: decompressed 78630023 bytes back into 206908949 bytes.
    Kernel Time = 0.390 = 10%
    User Time = 0.656 = 18%
    Process Time = 1.046 = 29%
    Global Time = 3.606 = 100%
    C:\WorkTemp\_Osho's Books_test>

    [042,723,543 Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.lzt [-39] Unseen speed for Compression Ratio: 20.6%, Super!]
    C:\WorkTemp\_Osho's Books_test>timer lzturbo -d "Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.lzt" .
    Timer 9.01 : Igor Pavlov : Public domain : 2009-05-31
    Kernel Time = 0.406 = 9%
    User Time = 2.062 = 49%
    Process Time = 2.468 = 59%
    Global Time = 4.161 = 100%
    C:\WorkTemp\_Osho's Books_test>

    C:\WorkTemp\_Osho's Books_test>timer plzip.exe -n2 -k -d -f "Folio VIP - Osho's Books on CD-ROM Part
    1-2-3-4-5.txt.lz"
    Timer 9.01 : Igor Pavlov : Public domain : 2009-05-31
    Kernel Time = 0.500 = 8%
    User Time = 7.000 = 120%
    Process Time = 7.500 = 129%
    Global Time = 5.786 = 100%
    C:\WorkTemp\_Osho's Books_test>

    C:\WorkTemp\_Osho's Books_test>timer plzip.exe -n1 -k -d -f "Folio VIP - Osho's Books on CD-ROM Part
    1-2-3-4-5.txt.lz"
    Timer 9.01 : Igor Pavlov : Public domain : 2009-05-31
    Kernel Time = 0.468 = 6%
    User Time = 6.687 = 98%
    Process Time = 7.156 = 105% [Buggy?]
    Global Time = 6.784 = 100%
    C:\WorkTemp\_Osho's Books_test>

    It would be very interesting for someone to test PLZIP with -n12, i.e. 12 threads.

    In-search-for-fastest-text-decompressor result so far:

    King: not found yet
    Possible(disputed) King: Ilia Muraviev with ULZ
    Possible(disputed) King: Lasse Reinhold with QuickLZ
    Possible(disputed) King: Hamid Buzidi with lzturbo
    Possible(disputed) King: Igor Pavlov with 7za
    Possible(disputed) King: Antonio Diaz Diaz with Plzip

    Of course it is just an opinion of one user.

    Regards.

  11. #11
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,497
    Thanks
    733
    Thanked 659 Times in 354 Posts
    lzturbo is based on stolen open-source tornado

  12. #12
    Member Sanmayce's Avatar
    Join Date
    Apr 2010
    Location
    Sofia
    Posts
    57
    Thanks
    0
    Thanked 0 Times in 0 Posts

    Great-Nano expectations

    Yum-yum,
    another super text grinder named NanoZip is being cooked by Sami Runsas.

    Compression:
    C:\WorkTemp\_Osho's Books_test>nz a -co OSHO OSHO.TXT
    NanoZip 0.08 alpha/Win32 (C) 2008-2010 Sami Runsas www.nanozip.net
    Intel(R) Pentium(R) Dual CPU T3400 @ 2.16GHz|6070 MHz|#2|2323/2940 MB
    *** THIS IS AN EARLY ALPHA VERSION OF NANOZIP *** USE ONLY FOR TESTING ***
    Archive: OSHO.nz
    Compressor: nz_optimum1 [417 MB], threads: 2. IO-buffers: 4+1 MB.
    Compressed 206 908 949 into 31 272 132 in 28.76s, 7 025 KB/s
    IO-in: 0.41s, 478 MB/s. IO-out: 0.39s, 75 MB/s
    Decompression:
    C:\WorkTemp\_Osho's Books_test>nz x -y OSHO.nz
    NanoZip 0.08 alpha/Win32 (C) 2008-2010 Sami Runsas www.nanozip.net
    Intel(R) Pentium(R) Dual CPU T3400 @ 2.16GHz|45002 MHz|#2|2323/2940 MB
    *** THIS IS AN EARLY ALPHA VERSION OF NANOZIP *** USE ONLY FOR TESTING ***
    Archive: OSHO.nz
    Compressor: nz_optimum1 [412 MB], threads: 2. IO-buffers: 1+4 MB.
    Decompressed 206 908 949 bytes in 11.50s, 17 MB/s.
    IO-in: 0.09s, 342 MB/s. IO-out: 3.38s, 58 MB/s

    C:\WorkTemp\_Osho's Books_test>nz x -y -t1 OSHO.nz
    NanoZip 0.08 alpha/Win32 (C) 2008-2010 Sami Runsas www.nanozip.net
    Intel(R) Pentium(R) Dual CPU T3400 @ 2.16GHz|114 MHz|#2|2323/2940 MB
    *** THIS IS AN EARLY ALPHA VERSION OF NANOZIP *** USE ONLY FOR TESTING ***
    Archive: OSHO.nz
    Compressor: nz_optimum1 [412 MB], threads: 1.
    Decompressed 206 908 949 bytes in 16.73s, 12 MB/s.
    IO-in: 0.03s, 962 MB/s. IO-out: 2.66s, 74 MB/s
    NanoZip 0.08a by Sami Runsas:
    [Pseudo RAM-to-RAM(from cache & test(i.e. no dump)) decompression took: 11680ms(206908949/11680 = 17MB/s)]
    031,272,132 Folio VIP - Osho's Books on CD-ROM Part 1-2-3-4-5.txt.nz

    C:\WorkTemp\_Osho's Books_test>nz t OSHO.nz
    NanoZip 0.08 alpha/Win32 (C) 2008-2010 Sami Runsas www.nanozip.net
    Intel(R) Pentium(R) Dual CPU T3400 @ 2.16GHz|5147 MHz|#2|2323/2940 MB
    *** THIS IS AN EARLY ALPHA VERSION OF NANOZIP *** USE ONLY FOR TESTING ***
    Archive: OSHO.nz
    Compressor: nz_optimum1 [412 MB], threads: 2. IO-buffers: 1+1 MB.
    Decompressed 206 908 949 bytes in 11.68s, 17 MB/s.
    IO-in: 0.03s, 994 MB/s.
    Following is the test (on a Pentium T3400 Merom-1M, 2 cores, 65nm, 2166 MHz) on all English OSHO books; this is real-world text, not some synthetic nonsense:

    206,908,949 F ... Osho's ... 5.txt  
    101,903,677 F ... Osho's ... 5.txt.LZ4 {LZ4 Oct 16 2009}
    078,630,023 F ... Osho's ... 5.txt.QLZ [Level 3] {Kazuya, QuickLZ 1.4.0 library used}
    062,740,017 F ... Osho's ... 5.txt.AIN [/m1 /u1] {AIN 2.22}
    062,620,125 F ... Osho's ... 5.txt.ULZ [c6] {ULZ v0.02}
    062,414,746 F ... Osho's ... 5.txt.gz [-9] {minigzip(zlib 1.2.5)}
    062,218,083 F ... Osho's ... 5.txt.zip [-exx] {PKZIP Version 2.50}
    055,739,744 F ... Osho's ... 5.txt.zip [-mx=9 -mm=Deflate64 -tzip] {7-Zip (A) 9.13 beta}
    048,780,809 F ... Osho's ... 5.txt.lzt [-29] {lzturbo 0.95}
    042,723,543 F ... Osho's ... 5.txt.lzt [-39] {lzturbo 0.95}
    041,671,476 F ... Osho's ... 5.txt.lz [-9] {Plzip 0.5}
    040,415,911 F ... Osho's ... 5.txt.7z [-mx=9 -t7z] {7-Zip (A) 9.13 beta}
    040,108,833 F ... Osho's ... 5.txt.PPMSI1 [-o5] {Small PPMII, variant I rev.1}
    035,109,880 F ... Osho's ... 5.txt.4x4 [4t] {4x4 ver. 0.2a}
    032,766,449 F ... Osho's ... 5.txt.bsc [b25] {bsc Version 2.1.0}
    031,272,132 F ... Osho's ... 5.txt.nz [-co] {NanoZip 0.08 alpha/Win32}
    031,050,654 F ... Osho's ... 5.txt.zpaq [c3] {ZP v1.00}
    030,316,502 F ... Osho's ... 5.txt.bsc [b200] {bsc Version 2.1.0}
    029,977,368 F ... Osho's ... 5.txt.nz [-cc] {NanoZip 0.08 alpha/Win32}

    Com/Decom Ratio is 38.0/24.5 = 155% {QuickLZ 1.4.0}
    Com/Decom Ratio is 49.2/29.0 = 169% {LZ4}
    Com/Decom Ratio is 30.7/9.2 = 333% {minigzip(zlib 1.2.5)}
    Com/Decom Ratio is 20.7/3.6 = 575% {LZMA Utility 4.65}
    Com/Decom Ratio is 15.1/1.3 = 1161% {NanoZip 0.08a, 2 threads}
    An impressive tool, a disputed king; I am only worried about the development tempo - whether a version 1.00 will be in use before 21 December 2012, hmm.
    Last edited by Sanmayce; 23rd June 2010 at 14:03.

  13. #13
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,497
    Thanks
    733
    Thanked 659 Times in 354 Posts
    seems that you haven't yet discovered freearc/4x4

  14. #14
    Member Fu Siyuan's Avatar
    Join Date
    Apr 2009
    Location
    Mountain View, CA, US
    Posts
    176
    Thanks
    10
    Thanked 17 Times in 2 Posts
    Hi Sanmayce, how can I get the test file?

  15. #15
    Member Sanmayce's Avatar
    Join Date
    Apr 2010
    Location
    Sofia
    Posts
    57
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Hi Bulat,
    you are right, I will include your archiver in the tests soon; sorry, I did not disregard your package, I just missed it.
    I have tested 4x4 as a standalone tool before, though.

    Regards.

    Hi Fu,
    you can download it from my site at:
    http://www.sanmayce.com/Downloads/Fo...ROMPart1-5.rar

    File OSHO.TXT is made by copying with copy f*.txt OSHO.TXT /b

    Regards.

  16. #16
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,497
    Thanks
    733
    Thanked 659 Times in 354 Posts
    my quick results:

    C:\Downloads\1>arc a a -m4x4:i0:tor:11:c1:h256m:40m -t
    Compressed 1 file, 206,908,949 => 56,704,363 bytes. Ratio 27.4%
    Compression time: cpu 150.10 secs, real 58.10 secs. Speed 3,561 kB/s
    Testing time: cpu 2.17 secs, real 0.82 secs. Speed 251,408 kB/s

    C:\Downloads\1>arc a a -mdict+4x4:i0:tor:11:c3:h256m:30m -t
    Compressed 1 file, 206,908,949 => 41,977,566 bytes. Ratio 20.2%
    Compression time: cpu 101.15 secs, real 39.63 secs. Speed 5,221 kB/s
    Testing time: cpu 2.59 secs, real 1.71 secs. Speed 121,141 kB/s

  17. #17
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,497
    Thanks
    733
    Thanked 659 Times in 354 Posts
    lzma:

    C:\Downloads\1>arc a a -m4x4:i0:t2:lzma:bt4:50m -t
    Compressed 1 file, 206,908,949 => 42,314,002 bytes. Ratio 20.4%
    Compression time: cpu 354.70 secs, real 142.61 secs. Speed 1,451 kB/s
    Testing time: cpu 4.51 secs, real 1.47 secs. Speed 141,042 kB/s

    C:\Downloads\1>arc a a -mdict+4x4:i0:t2:lzma:bt4:32m -t
    Compressed 1 file, 206,908,949 => 37,918,076 bytes. Ratio 18.3%
    Compression time: cpu 143.29 secs, real 63.81 secs. Speed 3,242 kB/s
    Testing time: cpu 4.49 secs, real 2.43 secs. Speed 85,288 kB/s

  18. #18
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,497
    Thanks
    733
    Thanked 659 Times in 354 Posts
    This is bsc, Block Sorting Compressor. Version 2.1.5. 1 June 2010.

    C:\Downloads\1>timer bsc e OSHO.TXT a -m3 -b50 -p
    OSHO.TXT compressed 206908949 into 31909476 in 23.088 seconds.

    Kernel Time = 3.416 = 00:00:03.416 = 14%
    User Time = 63.383 = 00:01:03.383 = 273%
    Process Time = 66.799 = 00:01:06.799 = 288%
    Global Time = 23.135 = 00:00:23.135 = 100%


    C:\Downloads\1>timer bsc d a nul
    a decompressed 31909476 into 0 in 12.807 seconds.

    Kernel Time = 0.982 = 00:00:00.982 = 7%
    User Time = 42.432 = 00:00:42.432 = 324%
    Process Time = 43.415 = 00:00:43.415 = 331%
    Global Time = 13.089 = 00:00:13.089 = 100%

  19. #19
    Member Sanmayce's Avatar
    Join Date
    Apr 2010
    Location
    Sofia
    Posts
    57
    Thanks
    0
    Thanked 0 Times in 0 Posts

    Is the 18bit multi-window approach the right approach?

    Thank you Bulat,
    C:\Downloads\1>arc a a -m4x4:i0:tor:11:c1:h256m:40m -t
    Compressed 1 file, 206,908,949 => 56,704,363 bytes. Ratio 27.4%
    Compression time: cpu 150.10 secs, real 58.10 secs. Speed 3,561 kB/s
    Testing time: cpu 2.17 secs, real 0.82 secs. Speed 251,408 kB/s
    it looks like this is way better than the options I tried; soon I will test it on my machine.

    I would be very pleased if decompression tools in the near future contained a 'fast decompression' option as well.
    I still dream about a multithreaded LZ implementation of some deflate256 applied to chunks merged into one archive file.

    C:\WorkTemp\_Osho's Books_test>7za a -mx=0 -v256k OSHO_deflate256 OSHO.TXT

    C:\WorkTemp\_Osho's Books_test>dir osho_*7z*
    000,262,144 OSHO_deflate256.7z.001
    000,262,144 OSHO_deflate256.7z.002
    ...
    000,262,144 OSHO_deflate256.7z.789
    000,077,443 OSHO_deflate256.7z.790
    206,909,059 bytes in 790 File(s)

    C:\WorkTemp\_Osho's Books_test>7za a -mx=9 -mm=Deflate64 -tzip OSHO_deflate256_790-files OSHO_deflate256.7z.*

    C:\WorkTemp\_Osho's Books_test>dir osho_*790-*
    057,608,350 OSHO_deflate256_790-files.zip

    C:\WorkTemp\_Osho's Books_test>7za l osho_*790-*
    000,262,144 000,072,873 OSHO_deflate256.7z.001
    000,262,144 000,071,848 OSHO_deflate256.7z.002
    ...
    000,262,144 000,044,249 OSHO_deflate256.7z.789
    000,077,443 000,016,511 OSHO_deflate256.7z.790

    C:\WorkTemp\_Osho's Books_test>7za a -mx=9 -mm=Deflate64 -tzip OSHO OSHO.TXT

    C:\WorkTemp\_Osho's Books_test>timer 7za t OSHO.zip
    Timer 9.01 : Igor Pavlov : Public domain : 2009-05-31
    7-Zip (A) 9.13 beta Copyright (c) 1999-2010 Igor Pavlov 2010-04-15
    Processing archive: OSHO.zip
    Testing OSHO.TXT
    Everything is Ok
    Size: 206908949
    Compressed: 55739744
    Kernel Time = 0.031 = 1%
    User Time = 2.343 = 97%
    Process Time = 2.375 = 99%
    Global Time = 2.391 = 100%
    Pseudo RAM-to-RAM (from cache & test, i.e. no dump) decompression performance (of one thread): 206908949/2.375s = 83MB/s,
    where 55739744/2.375s = 22MB/s is the decompression performance (of one thread) regarding the processed compressed data (is there a shortened name for this?);
    I wrongly thought it was much, much better for 7za, grrr.

    The goal: to utilize (use as much as possible of the I/O & CPU bandwidth) both the SSD read-burst & the main RAM copy, and as a result to achieve (100%/(compression ratio)) * (DPOOTRPCD, i.e. the per-thread compressed-data speed above) * (number of used threads) MB/s - the ideal-insane case, of course.

    Assuming(as in above test) that one thread decompresses 22MB of compressed data per second,
    assuming 6 cores(hyper-threaded) CPU,
    assuming 1 core is not used,
    assuming 1 thread is dedicated for I/O read,
    assuming 200MB/s I/O read is available: OK for nowadays SSDs not to speak of RAIDs,
    assuming (DPOOTRPCD)*(number of used threads)MB/s <= 200MB/s: OK,
    assuming 711MB/s main RAM copy is available: no problema for all non-antique CPUs,
    finally there is a sight for sore(closed, he-he) eyes: ( 100%/( 100%*(057,608,350/206,909,059) ) ) * (22MB/s) * ((6-1)*2-1) = 711MB/s,
    oh, oh I don't want to wake up.

    P.S.
    Don't take my greediness for insolence, or my dreams for common sense, he-he.

  20. #20
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,497
    Thanks
    733
    Thanked 659 Times in 354 Posts
    you may see how compression drops even with 40 mb window. with 256 kb windows it would be MUCH worse. and it's pretty useless since you have just 4 cores or so

  21. #21
    Member Sanmayce's Avatar
    Join Date
    Apr 2010
    Location
    Sofia
    Posts
    57
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Quote Originally Posted by Bulat Ziganshin View Post
    you may see how compression drops even with 40 mb window. with 256 kb windows it would be MUCH worse. and it's pretty useless since you have just 4 cores or so
    In order to gain speed I'm ready (to sacrifice compression ratio, for the moment, without blinking) even for the worst drops; my hope lies in instantly feeding the decompression threads with small chunks which, hopefully being in the fastest cache (some 256K multiplied by 4 or 8, i.e. 2MB), will bring happiness due to being dealt with immediately. You understand that I am making dummy speculations, since I have no specific knowledge of how things work. And 2MB (57MB-55MB) for LZ, 5MB (42MB-37MB) for LZMA is nothing; yes, 15MB (56MB-41MB) for BWT (is it?) is something - but the trade-off is bearable.

    For pre-previous posts:
    my fault,
    these options are better.

    Nice nice numbers for compression ratio 27.4%:
    Testing time: cpu 2.11 secs, real 1.98 secs. Speed 104,269 kB/s [2 cores CPU; -mt1 i.e. used 1 thread]
    Testing time: cpu 2.48 secs, real 1.55 secs. Speed 133,759 kB/s [2 cores CPU; -mt2 i.e. used 2 threads]
    Testing time: cpu 2.17 secs, real 0.82 secs. Speed 251,408 kB/s [4? cores CPU; -mt4? i.e. used 4? threads]

    You did a great job by multi-threading the decompression, indeed.
    Yet, I am a little bit confused by the different decompression numbers when comparing the 'test' and 'extract' options.
    I do not fully understand what the results mean.
    For this ratio my crazy-greedy expectation is (used threads) x 83MB/s.
    What a riddle: what do the proportions 2.48:1.55 and 2.17:0.82 mean?! Is there a definite trend?
    Could you shed some light on "How does the number of threads affect the decompression speed?"
    Also, wouldn't it be a good thing to show how many threads are being used during [de]compression?
    I also wonder how informative it would be to compute, at the end, the performance of each used thread!
    And, not incidentally, could you post the 'Testing time' for -mt1, -mt2, -mt3, -mt4, and more if you have a hyper-threaded CPU; and also the performance of memcpy().


    Round #2 of lzma 4.65 vs lzturbo 0.95 vs FreeArc 0.666 follows:

    C:\WorkTemp\_Osho's Books_test>arc a osho_4x4_tornado_40 -m4x4:i0:tor:11:c1:h256m:40m -t osho.txt
    FreeArc 0.666 creating archive: osho_4x4_tornado_40.arc
    Compressed 1 file, 206,908,949 => 56,704,363 bytes. Ratio 27.4%
    Compression time: cpu 155.28 secs, real 87.81 secs. Speed 2,356 kB/s
    Testing time: cpu 2.50 secs, real 1.52 secs. Speed 136,517 kB/s
    All OK
    C:\WorkTemp\_Osho's Books_test>arc a osho_4x4_tornado_30 -mdict+4x4:i0:tor:11:c3:h256m:30m -t osho.txt
    FreeArc 0.666 creating archive: osho_4x4_tornado_30.arc
    Compressed 1 file, 206,908,949 => 41,977,566 bytes. Ratio 20.2%
    Compression time: cpu 102.02 secs, real 62.80 secs. Speed 3,295 kB/s
    Testing time: cpu 3.67 secs, real 2.56 secs. Speed 80,745 kB/s
    All OK
    C:\WorkTemp\_Osho's Books_test>arc a osho_4x4_lzma_50 -m4x4:i0:t2:lzma:bt4:50m -t osho.txt
    FreeArc 0.666 creating archive: osho_4x4_lzma_50.arc
    Compressed 1 file, 206,908,949 => 42,314,002 bytes. Ratio 20.4%
    Compression time: cpu 416.11 secs, real 211.81 secs. Speed 977 kB/s
    Testing time: cpu 5.09 secs, real 3.03 secs. Speed 68,259 kB/s
    All OK
    C:\WorkTemp\_Osho's Books_test>arc a osho_4x4_lzma_32 -mdict+4x4:i0:t2:lzma:bt4:32m -t osho.txt
    FreeArc 0.666 creating archive: osho_4x4_lzma_32.arc
    Compressed 1 file, 206,908,949 => 37,918,076 bytes. Ratio 18.3%
    Compression time: cpu 161.25 secs, real 88.92 secs. Speed 2,327 kB/s
    Testing time: cpu 5.88 secs, real 3.42 secs. Speed 60,467 kB/s
    All OK
    C:\WorkTemp\_Osho's Books_test>dir osho_4x4*.arc
    056,704,608 osho_4x4_tornado_40.arc
    041,977,820 osho_4x4_tornado_30.arc
    042,314,250 osho_4x4_lzma_50.arc
    037,918,333 osho_4x4_lzma_32.arc
    C:\WorkTemp\_Osho's Books_test>dir *.lzt
    051,909,111 OSHO-19.TXT.lzt
    042,723,543 OSHO-39.TXT.lzt
    C:\WorkTemp\_Osho's Books_test>dir *.lzma
    042,396,004 OSHO.LZMA
    Final(decompression) showdown, from cache:

    C:\WorkTemp\_Osho's Books_test>timer lzturbo -d OSHO-19.TXT.lzt .
    Timer 9.01 : Igor Pavlov : Public domain : 2009-05-31
    Kernel Time = 0.390 = 9%
    User Time = 1.500 = 36%
    Process Time = 1.890 = 45%
    Global Time = 4.161 = 100%
    C:\WorkTemp\_Osho's Books_test>timer lzturbo -d OSHO-39.TXT.lzt .
    Timer 9.01 : Igor Pavlov : Public domain : 2009-05-31
    Kernel Time = 0.453 = 10%
    User Time = 2.093 = 49%
    Process Time = 2.546 = 60%
    Global Time = 4.204 = 100%
    C:\WorkTemp\_Osho's Books_test>timer lzma d OSHO.LZMA OSHO.TXT
    Timer 9.01 : Igor Pavlov : Public domain : 2009-05-31
    LZMA 4.65 : Igor Pavlov : Public domain : 2009-02-03
    Kernel Time = 0.328 = 4%
    User Time = 4.234 = 59%
    Process Time = 4.562 = 64%
    Global Time = 7.060 = 100%
    C:\WorkTemp\_Osho's Books_test>timer arc x -y osho_4x4_tornado_40.arc
    Timer 9.01 : Igor Pavlov : Public domain : 2009-05-31
    FreeArc 0.666 extracting archive: osho_4x4_tornado_40.arc
    Extracted 1 file, 56,704,363 => 206,908,949 bytes. Ratio 27.4%
    Extraction time: cpu 3.41 secs, real 7.25 secs. Speed 28,539 kB/s
    All OK
    Kernel Time = 0.921 = 12%
    User Time = 2.750 = 37%
    Process Time = 3.671 = 49%
    Global Time = 7.394 = 100%
    C:\WorkTemp\_Osho's Books_test>timer arc t osho_4x4_tornado_40.arc
    Timer 9.01 : Igor Pavlov : Public domain : 2009-05-31
    FreeArc 0.666 testing archive: osho_4x4_tornado_40.arc
    Tested 1 file, 56,704,363 => 206,908,949 bytes. Ratio 27.4%
    Testing time: cpu 2.48 secs, real 1.55 secs. Speed 133,759 kB/s
    All OK
    Kernel Time = 0.390 = 23%
    User Time = 2.343 = 138%
    Process Time = 2.734 = 161%
    Global Time = 1.689 = 100%
    C:\WorkTemp\_Osho's Books_test>timer arc x -y osho_4x4_tornado_30.arc
    Timer 9.01 : Igor Pavlov : Public domain : 2009-05-31
    FreeArc 0.666 extracting archive: osho_4x4_tornado_30.arc
    Extracted 1 file, 41,977,566 => 206,908,949 bytes. Ratio 20.2%
    Extraction time: cpu 4.53 secs, real 6.86 secs. Speed 30,164 kB/s
    All OK
    Kernel Time = 0.921 = 13%
    User Time = 3.843 = 54%
    Process Time = 4.765 = 68%
    Global Time = 7.005 = 100%
    C:\WorkTemp\_Osho's Books_test>timer arc t osho_4x4_tornado_30.arc
    Timer 9.01 : Igor Pavlov : Public domain : 2009-05-31
    FreeArc 0.666 testing archive: osho_4x4_tornado_30.arc
    Tested 1 file, 41,977,566 => 206,908,949 bytes. Ratio 20.2%
    Testing time: cpu 3.77 secs, real 2.61 secs. Speed 79,294 kB/s
    All OK
    Kernel Time = 0.343 = 12%
    User Time = 3.656 = 132%
    Process Time = 4.000 = 145%
    Global Time = 2.752 = 100%
    C:\WorkTemp\_Osho's Books_test>timer arc x -y osho_4x4_lzma_50.arc
    Timer 9.01 : Igor Pavlov : Public domain : 2009-05-31
    FreeArc 0.666 extracting archive: osho_4x4_lzma_50.arc
    Extracted 1 file, 42,314,002 => 206,908,949 bytes. Ratio 20.4%
    Extraction time: cpu 6.02 secs, real 7.06 secs. Speed 29,297 kB/s
    All OK
    Kernel Time = 0.750 = 10%
    User Time = 5.531 = 76%
    Process Time = 6.281 = 87%
    Global Time = 7.207 = 100%
    C:\WorkTemp\_Osho's Books_test>timer arc t osho_4x4_lzma_50.arc
    Timer 9.01 : Igor Pavlov : Public domain : 2009-05-31
    FreeArc 0.666 testing archive: osho_4x4_lzma_50.arc
    Tested 1 file, 42,314,002 => 206,908,949 bytes. Ratio 20.4%
    Testing time: cpu 5.17 secs, real 2.84 secs. Speed 72,759 kB/s
    All OK
    Kernel Time = 0.312 = 10%
    User Time = 5.125 = 172%
    Process Time = 5.437 = 182%
    Global Time = 2.977 = 100%
    C:\WorkTemp\_Osho's Books_test>timer arc x -y osho_4x4_lzma_32.arc
    Timer 9.01 : Igor Pavlov : Public domain : 2009-05-31
    FreeArc 0.666 extracting archive: osho_4x4_lzma_32.arc
    Extracted 1 file, 37,918,076 => 206,908,949 bytes. Ratio 18.3%
    Extraction time: cpu 6.72 secs, real 8.17 secs. Speed 25,320 kB/s
    All OK
    Kernel Time = 0.843 = 10%
    User Time = 6.187 = 74%
    Process Time = 7.031 = 84%
    Global Time = 8.326 = 100%
    C:\WorkTemp\_Osho's Books_test>timer arc t osho_4x4_lzma_32.arc
    Timer 9.01 : Igor Pavlov : Public domain : 2009-05-31
    FreeArc 0.666 testing archive: osho_4x4_lzma_32.arc
    Tested 1 file, 37,918,076 => 206,908,949 bytes. Ratio 18.3%
    Testing time: cpu 6.00 secs, real 3.53 secs. Speed 58,594 kB/s
    All OK
    Kernel Time = 0.359 = 9%
    User Time = 5.875 = 160%
    Process Time = 6.234 = 170%
    Global Time = 3.666 = 100%
    As a note: it is incorrect to compare the single-threaded lzma 4.65 with multi-threaded rivals, but for example's sake it is worth it, right?
    Last edited by Sanmayce; 11th June 2010 at 08:07.

  22. #22
    Member Sanmayce's Avatar
    Join Date
    Apr 2010
    Location
    Sofia
    Posts
    57
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Ok Bulat,
    obviously I failed to make my point clear:
    it is all about boosting the upload of texts, not some synthetic RAM-to-RAM flying-start (i.e. random access to compressed data) racing; real-world applications (like my project 'Gamera') mostly use sequential access to compressed data, because of burst-read.

    And don't tell me "it's pretty useless since you have just ..." - this kills the spirit of imagination. To be opened towards infinity - that is wisdom. When referring to 'real-world' I don't imply the current computer-hardware level. The CPU conjuncture has little to do with good ideas, and why not, with well designed tools.

    Ha-ha, surely it is not a coincidence:
    because my English is turbulent (far from fluent) I invoked my sidekick tool 'Raccoondog' to check whether phrases like '*opened to*' are plausible, and from my 400+ million sentences the next one popped up:

    000,000,014 When your eyes full of love look towards the sky, when your heart is opened towards the sky and you're making no effort to do anything from your side, that is the moment when the divine rushes towards you.

    I salute you with this hit; take just one guess which file it is derived from.
    I choose to enjoy state-of-the-art things rather than pursue ambitions to create something that I am not capable of, or supposed to. Vanity is the strangest teacher of all; I am not ill with that illness - boldly said, huh!

    Below not-bad-at-all non-solid archives of 256KB chunks, are listed:

    C:\WorkTemp\_Osho's Books_test>arc a osho_4x4_tornado_40_790-chunks -m4x4:i0:tor:11:c1:h256m:40m -t -s- OSHO_deflate256.7z.*
    FreeArc 0.666 creating archive: osho_4x4_tornado_40_790-chunks.arc
    Compressed 790 files, 206,909,059 => 69,744,536 bytes. Ratio 33.7%

    C:\WorkTemp\_Osho's Books_test>arc a osho_4x4_tornado_30_790-chunks -mdict+4x4:i0:tor:11:c3:h256m:30m -t -s- OSHO_deflate256.7z.*
    FreeArc 0.666 creating archive: osho_4x4_tornado_30_790-chunks.arc
    Compressed 790 files, 206,909,059 => 56,955,255 bytes. Ratio 27.5%

    C:\WorkTemp\_Osho's Books_test>arc a osho_4x4_lzma_50_790-chunks -m4x4:i0:t2:lzma:bt4:50m -t -s- OSHO_deflate256.7z.*
    FreeArc 0.666 creating archive: osho_4x4_lzma_50_790-chunks.arc
    Compressed 790 files, 206,909,059 => 54,276,793 bytes. Ratio 26.2%

    C:\WorkTemp\_Osho's Books_test>arc a osho_4x4_lzma_32_790-chunks -mdict+4x4:i0:t2:lzma:bt4:32m -t -s- OSHO_deflate256.7z.*
    FreeArc 0.666 creating archive: osho_4x4_lzma_32_790-chunks.arc
    Compressed 790 files, 206,909,059 => 51,630,729 bytes. Ratio 24.9%
    By the way, did you know that dragsters (8000hp hot-rods) are faster than (i.e. finish before) Formula 1 cars over a 1/4 mile - stop-start vs flying-start (200mph), respectively?

    Regards.

  23. #23
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,497
    Thanks
    733
    Thanked 659 Times in 354 Posts
    -m4x4:i0:tor:11:c1:h256m:40m -> -m4x4:tor:11:c1:h4m:256k

  24. #24
    Tester
    Black_Fox's Avatar
    Join Date
    May 2008
    Location
    [CZE] Czechia
    Posts
    471
    Thanks
    26
    Thanked 9 Times in 8 Posts
    Quote Originally Posted by Sanmayce View Post
    Nice nice numbers for compression ratio 27.4%:
    Testing time: cpu 2.11 secs, real 1.98 secs. Speed 104,269 kB/s [2 cores CPU; -mt1 i.e. used 1 thread]
    Testing time: cpu 2.48 secs, real 1.55 secs. Speed 133,759 kB/s [2 cores CPU; -mt2 i.e. used 2 threads]
    Testing time: cpu 2.17 secs, real 0.82 secs. Speed 251,408 kB/s [4? cores CPU; -mt4? i.e. used 4? threads]

    What a riddle: what do the proportions 2.48:1.55 and 2.17:0.82 mean?! Is there a definite trend?
    As you can see, CPU time can be higher than wall-clock (real) time, since the CPU has more cores and if, e.g., 2 cores are fully utilized, the CPU works for 2 seconds in each real-world second. Reading the above data you could say that -mt1 uses about 100% of one core, -mt2 scales at about 80%, -mt4 at about 66%.
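    Those percentages come from a one-line formula - per-core utilization = cpu_time / (real_time * threads) - which the throwaway C fragment below restates with the numbers quoted above (the -mt4 thread count is presumed, as in the quote):

    #include <stdio.h>

    int main(void)
    {
        struct { int threads; double cpu, real; } runs[] = {
            { 1, 2.11, 1.98 },   /* -mt1 */
            { 2, 2.48, 1.55 },   /* -mt2 */
            { 4, 2.17, 0.82 },   /* -mt4 (presumed) */
        };
        for (int i = 0; i < 3; i++) {
            double util = 100.0 * runs[i].cpu / (runs[i].real * runs[i].threads);
            printf("-mt%d: each core ~%.0f%% busy\n", runs[i].threads, util);
        }
        return 0;
    }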

    Quote Originally Posted by Sanmayce View Post
    When referring to 'real-world' I don't imply the current computer-hardware level.
    I am... Black_Fox... my discontinued benchmark
    "No one involved in computers would ever say that a certain amount of memory is enough for all time? I keep bumping into that silly quotation attributed to me that says 640K of memory is enough. There's never a citation; the quotation just floats like a rumor, repeated again and again." -- Bill Gates

  25. #25
    Member Sanmayce's Avatar
    Join Date
    Apr 2010
    Location
    Sofia
    Posts
    57
    Thanks
    0
    Thanked 0 Times in 0 Posts

    'Back to the future' or BCM & bsc compared

    Hi Encode & Gribok,
    regarding maximum speeds, maybe BWT is an algorithm well-suited for future CPUs with multi-dozen cores, i.e. its time has not come yet.
    This of course cannot stop me from testing it, as follows:

    Compressed(BCM) files being tested:

    C:\WorkTemp\_Osho's Books_test>dir *.bcm*
    39,382,540 OSHO.TXT.bcm_1 [-b1]
    31,530,489 OSHO.TXT.bcm_64 [-b64]
    30,215,648 OSHO.TXT.bcm_200 [-b200]
    Decompression test for BCM 0.11 (from cache):

    C:\WorkTemp\_Osho's Books_test>timer bcm -d -f OSHO.TXT.bcm_1
    Kernel Time = 0.593 = 0%
    User Time = 74.031 = 98%
    Process Time = 74.625 = 99%
    Global Time = 75.055 = 100%
    C:\WorkTemp\_Osho's Books_test>timer bcm -d -f OSHO.TXT.bcm_64
    Kernel Time = 1.234 = 1%
    User Time = 89.796 = 97%
    Process Time = 91.031 = 99%
    Global Time = 91.635 = 100%
    C:\WorkTemp\_Osho's Books_test>timer bcm -d -f OSHO.TXT.bcm_200
    Kernel Time = 2.359 = 2%
    User Time = 99.906 = 97%
    Process Time = 102.265 = 99%
    Global Time = 102.464 = 100%
    Compressed(bsc) files being tested:

    C:\WorkTemp\_Osho's Books_test>dir *.bsc*
    46,546,417 OSHO.TXT.bsc_1s_m0 [-b1m0s]
    41,047,890 OSHO.TXT.bsc_1s_m2 [-b1m2s]
    39,535,764 OSHO.TXT.bsc_1s_m3 [-b1m3s]
    46,545,632 OSHO.TXT.bsc_1_m0 [-b1m0]
    41,047,454 OSHO.TXT.bsc_1_m2 [-b1m2]
    39,536,092 OSHO.TXT.bsc_1_m3 [-b1m3]
    44,316,567 OSHO.TXT.bsc_64s_m0 [-b64m0s]
    36,386,148 OSHO.TXT.bsc_64s_m2 [-b64m2s]
    31,660,461 OSHO.TXT.bsc_64s_m3 [-b64m3s]
    44,316,567 OSHO.TXT.bsc_64_m0 [-b64m0]
    36,386,148 OSHO.TXT.bsc_64_m2 [-b64m2]
    31,660,461 OSHO.TXT.bsc_64_m3 [-b64m3]
    43,871,182 OSHO.TXT.bsc_200s_m0 [-b200m0s]
    35,826,756 OSHO.TXT.bsc_200s_m2 [-b200m2s]
    30,318,518 OSHO.TXT.bsc_200s_m3 [-b200m3s]
    43,871,182 OSHO.TXT.bsc_200_m0 [-b200m0]
    35,826,756 OSHO.TXT.bsc_200_m2 [-b200m2]
    30,318,518 OSHO.TXT.bsc_200_m3 [-b200m3]
    Decompression test for bsc 2.2.0 (from cache):

    C:\WorkTemp\_Osho's Books_test>timer bsc d OSHO.TXT.bsc_1s_m0 buggyname
    Kernel Time = 1.046 = 6%
    User Time = 30.750 = 183%
    Process Time = 31.796 = 190%
    Global Time = 16.716 = 100%
    C:\WorkTemp\_Osho's Books_test>timer bsc d OSHO.TXT.bsc_1s_m2 buggyname
    Kernel Time = 1.187 = 5%
    User Time = 40.234 = 186%
    Process Time = 41.421 = 191%
    Global Time = 21.626 = 100%
    C:\WorkTemp\_Osho's Books_test>timer bsc d OSHO.TXT.bsc_1s_m3 buggyname
    Kernel Time = 1.234 = 6%
    User Time = 36.593 = 184%
    Process Time = 37.828 = 190%
    Global Time = 19.854 = 100%
    C:\WorkTemp\_Osho's Books_test>timer bsc d OSHO.TXT.bsc_64s_m0 buggyname
    Kernel Time = 1.578 = 5%
    User Time = 42.109 = 139%
    Process Time = 43.687 = 144%
    Global Time = 30.218 = 100%
    C:\WorkTemp\_Osho's Books_test>timer bsc d OSHO.TXT.bsc_64s_m2 buggyname
    Kernel Time = 1.562 = 3%
    User Time = 61.187 = 145%
    Process Time = 62.750 = 148%
    Global Time = 42.188 = 100%
    C:\WorkTemp\_Osho's Books_test>timer bsc d OSHO.TXT.bsc_64s_m3 buggyname
    Kernel Time = 1.453 = 3%
    User Time = 51.796 = 141%
    Process Time = 53.250 = 145%
    Global Time = 36.645 = 100%
    C:\WorkTemp\_Osho's Books_test>timer bsc d OSHO.TXT.bsc_200s_m0 buggyname
    Kernel Time = 0.812 = 2%
    User Time = 38.109 = 117%
    Process Time = 38.921 = 120%
    Global Time = 32.390 = 100%
    C:\WorkTemp\_Osho's Books_test>timer bsc d OSHO.TXT.bsc_200s_m2 buggyname
    Kernel Time = 0.734 = 1%
    User Time = 58.890 = 107%
    Process Time = 59.625 = 108%
    Global Time = 54.876 = 100%
    C:\WorkTemp\_Osho's Books_test>timer bsc d OSHO.TXT.bsc_200s_m3 buggyname
    Kernel Time = 0.859 = 2%
    User Time = 53.093 = 164%
    Process Time = 53.953 = 167%
    Global Time = 32.306 = 100%
    To Encode: Is the further boosting of ULZ stopped or paused? What about multi-threading it?
    Personally I wait for some fantastic-bombastic text decompression feat from your side, that is, I hope you have more tricks left up your sleeve.

    To Gribok: What is the '-s' segmentation switch for? I could not find the difference. In other words, let's say the year is 2012: is 2.2.0 ready to utilize a big number of threads? For the sake of well-fed threads I have been trying to propose chunks, a lot of chunks, but in vain - until now, I am not sure.

    Regards.

  26. #26
    Programmer Gribok's Avatar
    Join Date
    Apr 2007
    Location
    USA
    Posts
    159
    Thanks
    0
    Thanked 1 Time in 1 Post
    Quote Originally Posted by Sanmayce View Post
    To Gribok: What is the '-s' segmentation switch for? I could not find the difference. In other words, let's say the year is 2012: is 2.2.0 ready to utilize a big number of threads? For the sake of well-fed threads I have been trying to propose chunks, a lot of chunks, but in vain - until now, I am not sure.
    "-s" segmentation is an algorithm that splits the input block into multiple segments and compresses them independently. It is the opposite of solid mode. You do not need this algorithm for text files, but some other files, like tar archives, can get compression improvements from it.
    Enjoy coding, enjoy life!

  27. #27
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,954
    Thanks
    359
    Thanked 332 Times in 131 Posts
    Regarding BWT, probably the most efficient block size = 4 MB - nice text and binary compression...

    Quote Originally Posted by Sanmayce View Post
    To Encode: Is the further boosting of ULZ stopped or paused? What about multi-threading it?
    Personally I wait for some fantastic-bombastic text decompression feat from your side, that is, I hope you have more tricks left up your sleeve.
    ULZ keeps too much air in compressed files. Although, it has probably the simplest decoder ever for such results. Adding S&S parsing would really help here, at the cost of really slow compression (check out my LZSS)! ULZ was an experiment... BWT is the most interesting thing. I have an extremely simple BWT encoder that is blazing fast, but not that interesting in compression results. I'm thinking about a fast mode in BCM, and I have a max-mode idea as well - a la an improved BCM 0.09 - but it's kinda slow. Anyway, a compressor must compress well - it should not be too fast while keeping too much air, and at the same time it should not be too slow. BCM is a nice thing - it works fast enough even on my ATOM-based netbook. To be honest, the new BCM was specially optimized for such weak hardware! I think it's time for BWT! If we compare LZ and BWT with equal output encoding - i.e. the arithmetic encoder and modeller have the same complexity - BWT will provide much more interesting compression results. LZ will have faster decompression, but the compression results are worth BWT... These are not just assumptions or words; I tested it with my LZPM and BCM!

  28. #28
    Member
    Join Date
    Jun 2010
    Location
    India
    Posts
    2
    Thanks
    0
    Thanked 0 Times in 0 Posts
    You might also want to take a look at lzp2; it is also worth a run. Sorry, I can't test it myself as I am having some issues with my PC that will take time to solve.
    http://phantasie.tonempire.net/pc-co...9-mb-s-t86.htm

  29. #29
    Member Sanmayce's Avatar
    Join Date
    Apr 2010
    Location
    Sofia
    Posts
    57
    Thanks
    0
    Thanked 0 Times in 0 Posts

    THE NEW KING of FASTEST TEXT DECOMPRESSION

    I should have read LZ related posts b.m.(before me, he-he) in the first place, sorry but this is the way I communicate: bum-bamly.

    Thanks Encode,
    I got it; I too am glad about your passionate approach, but you are too happy (occupied) with speedy compression and a strong compression ratio to bring joy to the decompression fans as well. I hope this is only due to the early stage of BCM. Let me state my passion one more time: total disinterest in compression speed, sacrificing compression ratio to whatever degree the developer decides, and maximum bang at decompression.

    Compressed file being tested:

    C:\WorkTemp\_Osho's Books_test>dir *.lzss
    61,439,540 osho.txt.lzss
    Decompression test for lzss v0.01 (from cache):

    C:\WorkTemp\_Osho's Books_test>timer lzss d osho.txt.lzss nul
    Timer 9.01 : Igor Pavlov : Public domain : 2009-05-31
    lzss v0.01 by encode
    decoding...
    done
    Kernel Time = 0.125 = 14%
    User Time = 0.750 = 85%
    Process Time = 0.875 = 99%
    Global Time = 0.878 = 100%
    lzss v0.01 by encode, maximum compression level:
    [Pseudo RAM-to-RAM decompression took: 750ms(206908949/750 = 263MB/s)]
    061,439,540 bytes in total i.e. Compression Ratio: 29.6%; Decompression Ratio: 20.9%

    memcpy() Com/Decom Ratio(100/100 = 100%)
    LZSS Com/Decom Ratio(29.6/20.9 = 141%)
    QuickLZ Com/Decom Ratio(38.0/24.5 = 155%)
    LZ4 Com/Decom Ratio(49.2/29.0 = 169%)

    For me, LZSS by Ilia Muraviev is the new king-of-fastest-TEXT-decompression with first-best Com Ratio regardless of third-best Decom Ratio!

    Again and for the last time, despite my ignorance concerning the [de]compression craft, I sense a thing that is ugly to me: disparaging the potential of LZ[SS] for achieving high CPU-RAM bandwidth utilization. I still insist that there is a great opportunity to multi-thread the simplicity of LZ[SS] decoding. I should like to see a Ram[jet]LZ console tool using as many threads as possible. I am good (humbleness aside) at naming things; Quick|Fast|Turbo|Speedy, come on... Don't you know what propels the SR-71: a ramjet (an engine so simple that there is almost no engine, ha-ha, no turbo at all because the speed is so massive that the 'turbo' is created by itself) - that is THE proper word which describes the relation with RAM (decoding as simple as copying). So, Encode, feel free to use me as a godfather to your SUPER TOOL. 20+ years ago Phillip Katz, R.I.P., gave the definition of what 'FAST!' could be with PKUNZIP. And I just cannot stand the decadence of his legacy: the ZIP logo being a zipper instead of a BULLET's whirlwind as in comics. I have said enough; unnecessary repetitions are annoying.


    Thanks xxd,
    'lzp2' is weaker and slower than 'lz4'; don't repeat my error of not reading previous posts on the theme - you can search for 'lzss' in the forum as I did yesterday, where Cyan and Encode shed some light on the topic. It is almost always better to heed a conversation between two chefs than to read cookbooks, don't you think?

    Compressed files being tested:

    C:\WorkTemp\_Osho's Books_test>dir *.lz*
    101,903,677 osho.tx_.lz4
    125,488,260 osho.tx_.lzp2
    I was as calm as a ninja looking at this worst (60.6%) compression ratio, knowing the price was worth waiting for - knowing WRONGLY... - until I looked at LZP2's Com/Decom Ratio (60.6/19.3 = 313%) and LZ4's Com/Decom Ratio (49.2/28.5 = 172%).

    Decompression test for LZP2 (from cache):

    C:\WorkTemp\_Osho's Books_test>timer LZP2.exe -d osho.tx_.lzp2 nul
    Timer 9.01 : Igor Pavlov : Public domain : 2009-05-31
    LZP2 Oct 10 2009, compression software by Yann Collet
    Regenerated Size : 0 Bytes
    Decoding Time : 0.85s ==> 243MB/s
    Total Time : 1.06s ( Read wait : 0.21s // Write wait : 0.00s )
    Kernel Time = 0.234 = 21%
    User Time = 0.859 = 79%
    Process Time = 1.093 = 101%
    Global Time = 1.076 = 100%

    C:\WorkTemp\_Osho's Books_test>timer LZP2.exe -bench osho.txt
    Timer 9.01 : Igor Pavlov : Public domain : 2009-05-31
    LZP2 Oct 10 2009, compression software by Yann Collet
    Setting HIGH_PRIORITY_CLASS...
    Benchmarking LZP2 , please wait...
    Compressed 206908949 bytes into 125445930 bytes (60.6%) at 165.5 Mbyte/s.
    Decompressed at 245.2 Mbyte/s.
    (1 MB = 1000000 byte)
    Kernel Time = 0.421 = 2%
    User Time = 14.890 = 94%
    Process Time = 15.312 = 97%
    Global Time = 15.694 = 100%
    Decompression test for LZ4 (from cache):

    C:\WorkTemp\_Osho's Books_test>timer lz4 -d osho.tx_.lz4 nul
    Timer 9.01 : Igor Pavlov : Public domain : 2009-05-31
    LZ4 Oct 16 2009, compression software by Yann Collet
    Regenerated size : 206908949 Bytes
    Decoding Time : 0.58s ==> 358MB/s
    Total Time : 0.72s ( Read wait : 0.14s // Write wait : 0.00s )
    Kernel Time = 0.156 = 20%
    User Time = 0.593 = 79%
    Process Time = 0.750 = 100%
    Global Time = 0.747 = 100%
    Regards

    P.S.
    And Gribok, I like your motto very much - ever since pre-INTERNET (BBS) times, when all good little ZIPs came along with 'Enjoy!' in their DIZs, a magic word when combined with an exclamation mark. Viva the spirit of playfulness and of appreciating things (not taking them for granted)!
    Last edited by Sanmayce; 1st July 2010 at 18:17.

  30. #30
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,954
    Thanks
    359
    Thanked 332 Times in 131 Posts
    If I combine ideas from ULZ (more advanced byte-aligned output) with LZSS's parsing... it will be the REAL king of fastest decompression! Probably I'll do it!
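    For readers wondering what the "parsing" half of that combination amounts to, here is a minimal sketch of a greedy, hash-table match finder of the kind simple byte-aligned LZ coders use. The emit_* callbacks, the 4-byte hash and the constants are illustrative assumptions, not ULZ's or LZSS's actual scheme.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define HASH_BITS 16
    #define MIN_MATCH 4

    static uint32_t hash4(const uint8_t *p)
    {
        uint32_t v;
        memcpy(&v, p, 4);
        return (v * 2654435761u) >> (32 - HASH_BITS);
    }

    void greedy_parse_sketch(const uint8_t *src, size_t n,
                             void (*emit_literal)(uint8_t),
                             void (*emit_match)(size_t offset, size_t len))
    {
        static size_t head[1 << HASH_BITS];      /* last seen position + 1 per hash slot */
        memset(head, 0, sizeof head);

        size_t i = 0;
        while (i + MIN_MATCH <= n) {
            uint32_t h = hash4(src + i);
            size_t cand = head[h];
            head[h] = i + 1;
            if (cand && memcmp(src + cand - 1, src + i, MIN_MATCH) == 0) {
                size_t pos = cand - 1, len = MIN_MATCH;
                while (i + len < n && src[pos + len] == src[i + len]) len++;
                emit_match(i - pos, len);        /* greedy: take the first match found */
                i += len;
            } else {
                emit_literal(src[i++]);
            }
        }
        while (i < n) emit_literal(src[i++]);    /* trailing bytes go out as literals */
    }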
