
Thread: HFCB: Huge Files Compression Benchmark

  1. #61
    Member
    Join Date
    Dec 2009
    Location
    Netherlands
    Posts
    39
    Thanks
    3
    Thanked 0 Times in 0 Posts
    I can confirm precomp 0.4 -slow -t-j takes VERY long to run on this file. It's been running for ~15 hours now and it's at 97.4%. This is on a machine with 4 GB of RAM and a Q9650 @ 4050 MHz, using a fast solid-state disk. At the moment the SSD is the bottleneck, showing a constant ~150 MB/s of combined reads and writes. Once it's done I'll see how small I can get the file.
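    For reference, the run described here boils down to a single command along these lines (just a sketch; the output name vm.pcf is a placeholder, and the -o switch is the one used with precomp later in this thread):
    Code:
    :: precomp 0.4 in slow mode with the JPEG model disabled via -t-j
    precomp -slow -t-j -ovm.pcf vm.dll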

  2. #62
    Member
    Join Date
    Dec 2009
    Location
    Netherlands
    Posts
    39
    Thanks
    3
    Thanked 0 Times in 0 Posts
    Aaaand....it's done! I'm going to compress both the original .dll and the new .pcf file and see how much this insanely long precomp run has saved me. I'll keep you guys updated!

    Edit: Just noticed the time taken in milliseconds; a quick calculation shows that this is a little over 17 hours. Talk about long!
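    (For reference: precomp reports the elapsed time in milliseconds, so hours = milliseconds / 3,600,000; anything above 61,200,000 ms is more than 17 hours.)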
    Attached: precomp done.jpg (75.4 KB)
    Last edited by Mushoz; 30th December 2009 at 20:10.

  3. #63
    Member
    Join Date
    Dec 2009
    Location
    Netherlands
    Posts
    39
    Thanks
    3
    Thanked 0 Times in 0 Posts
    Results:

    -Freearc: 895 MB (939.307.719 bytes)
    -Precomp 0.4 > Freearc: 779 MB (817.577.052 bytes)
    -Precomp 0.4 > 7z > Freearc: 775 MB (812.958.875 bytes)
    -Precomp 0.4 > SREP > Freearc: 771 MB (809.395.616 bytes)
    -Precomp 0.4 > 7z > SREP > Freearc: 768 MB (805.338.865 bytes)

    Now that's a pretty good saving right there! Please give me some more time, and I'll try to get it even smaller. Wish me luck!

    Edit: I've now thrown SREP into the mix and got the final file down to 771 MB (809.395.616 bytes). I'll get it smaller.

    Edit 2: More results, with 7z thrown in as well for its BCJ2 filter. (One of the chains above is written out as commands below.)
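    For reference, here is the simplest of the chains above (Precomp 0.4 > SREP > Freearc) written out as commands. This is only a sketch: the file names are placeholders, the precomp and FreeArc switches are the ones mentioned in this thread, and the srep call assumes its usual infile/outfile form.
    Code:
    :: 1) unpack embedded zLib streams, 2) remove long-range duplicates, 3) final compression
    precomp -slow -t-j -ovm.pcf vm.dll
    srep vm.pcf vm.pcf.srep
    arc a -max vm.arc vm.pcf.srep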
    Last edited by Mushoz; 30th December 2009 at 23:05.

  4. #64
    Member
    Join Date
    Dec 2009
    Location
    Netherlands
    Posts
    39
    Thanks
    3
    Thanked 0 Times in 0 Posts
    Quote Originally Posted by Skymmer View Post
    And by the way, I have a new personal record. Now it's: 779 709 433
    The chain is:
    Code:
    PreComp 0.3.8 -> 7z -m0=BCJ2 -> SREP -> NanoZIP v0.07 -nm -cO -m680m
    Is NanoZIP _that_ much better than freearc.exe -max? I've done the same chain as you, but with precomp 0.4 instead of 0.3.8 (which should give better results because of the recursion), and my final file is still quite a bit larger. Maybe I'm missing something? Anyway, well done!

  5. #65
    Member
    Join Date
    Dec 2009
    Location
    Netherlands
    Posts
    39
    Thanks
    3
    Thanked 0 Times in 0 Posts
    Quote Originally Posted by schnaader View Post
    Bulat's result (Precomp 0.4) compared to yours (Precomp 0.3.8) can have several reasons:

    - Slow HD (made a test on an external USB 2.0 drive today, was 20 times slower (!) than on the internal drive)
    - Debug mode (especially if the output isn't piped to a file)
    - Precomp 0.4 uses recursion

    As I also tested with a 0.4.1 version and also had to wait very long (definitely longer than 5 hours), although the PC wasn't idle and it's not a very fast one, I think Bulat's timing can be correct.

    I think the third point (recursion) is the most important. To understand why, consider what Precomp has to do when slow mode and recursion are combined: it won't only search for zLib streams everywhere in the original file, but also in its decompressed variants, so instead of processing 4 GB it will process ~8-10 GB, and it slows down even more because of the additional streams that are found.

    So, 38 hours is very extreme and I agree that something could have gone wrong there (the system wasn't idle all the time, or some weird errors), but I could also believe it's correct. I hope I'll get rid of the temporary files soon so that cases like this will improve.
    Getting rid of the temporary files would be a _huge_ upgrade in these kinds of cases. Even my fast solid-state disk was bottlenecking the process. Being able to do it all in RAM would be much faster and would move the bottleneck back to the CPU. Do you happen to have an ETA for a version without the temporary files? I'm looking forward to it! Also, would it be possible to make this program multi-threaded? I've used precomp before in batch files, in which I would run multiple instances of the program to speed up the precomping (is that a verb?) of many files, but with a single file that isn't a possibility. Thanks!
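    For what it's worth, the "multiple instances from a batch file" approach mentioned above can look something like this (only a sketch; the file names are placeholders, and each start launches an independent precomp process):
    Code:
    :: run several precomp instances in parallel, one per file
    start "" precomp -slow -t-j -ofile1.pcf file1.dll
    start "" precomp -slow -t-j -ofile2.pcf file2.dll
    start "" precomp -slow -t-j -ofile3.pcf file3.dll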
    Last edited by Mushoz; 31st December 2009 at 13:31.

  6. #66
    Member Skymmer's Avatar
    Join Date
    Mar 2009
    Location
    Russia
    Posts
    681
    Thanks
    37
    Thanked 168 Times in 84 Posts
    Quote Originally Posted by Mushoz View Post
    Is NanoZIP _that_ much better than freearc.exe -max? I've done the same chain as you, but with precomp 0.4 instead of 0.3.8 (which should give better results because of the recursion), and my final file is still quite a bit larger. Maybe I'm missing something? Anyway, well done!
    Thanks. Basically it's impossible to say which is better; furthermore, it depends on what you mean. Compression ratio? Sometimes NZ can give a real punch, but the same is true for FA. If we talk about functionality, then there is no need to argue here, I think. FA is highly customizable, and you can forget about the -max preset here: with the possibility to assign your own groups and custom compression chains for them, you can achieve much better results.

    Answering your question about PreComp: I think you forgot that you used the -t-j switch.
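    To illustrate the point about groups and custom chains, a FreeArc command line can assign a different method per data-type group. This is only a sketch from memory: the -m$group syntax and method names should be checked against the FreeArc documentation, and the method parameters here are just the ones that appear elsewhere in this thread.
    Code:
    :: default method for everything, plus a custom chain for the exe group
    arc a -mlzma:d1024m -m$exe=dispack+lzma:d1024m archive.arc vm.pcf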

  7. #67
    Member
    Join Date
    Dec 2009
    Location
    Netherlands
    Posts
    39
    Thanks
    3
    Thanked 0 Times in 0 Posts
    Quote Originally Posted by Skymmer View Post
    Thanks. Basically it's impossible to say which is better; furthermore, it depends on what you mean. Compression ratio? Sometimes NZ can give a real punch, but the same is true for FA. If we talk about functionality, then there is no need to argue here, I think. FA is highly customizable, and you can forget about the -max preset here: with the possibility to assign your own groups and custom compression chains for them, you can achieve much better results.

    Answering your question about PreComp: I think you forgot that you used the -t-j switch.
    Thanks for your response! Yes, by better I meant a higher compression ratio in this case. I will try precomp 0.3.8 with the switches you provided and see if I can get it to finish without the -t-j switch. I'd love to try 0.4 without -t-j as well, but with one entire run taking 17+ hours, it would take days if not weeks to get it completely done by adding more and more positions to the -i switch. Let's see if I can get close to your wonderful compression with precomp 0.3.8. Wish me luck! I'll need it!

  8. #68
    Member Skymmer's Avatar
    Join Date
    Mar 2009
    Location
    Russia
    Posts
    681
    Thanks
    37
    Thanked 168 Times in 84 Posts
    Good luck man! Hope to see good things from you next year.

  9. #69
    Member
    Join Date
    Dec 2009
    Location
    Netherlands
    Posts
    39
    Thanks
    3
    Thanked 0 Times in 0 Posts
    Using precomp 0.3.8 with the switches you provided finished without any crashes in about 2 hours 45 minutes. The compression is even worse than with the other .pcf I generated in the 17-hour run, so the difference has to be NanoZIP. Since the .pcf generated with 0.4 seems to compress better than the one from 0.3.8, I'm going to try NanoZIP with the exact same switches as you used, to see if I can get the file smaller than you did.

  10. #70
    Member
    Join Date
    Dec 2009
    Location
    Netherlands
    Posts
    39
    Thanks
    3
    Thanked 0 Times in 0 Posts
    Quote Originally Posted by Skymmer View Post
    Good luck man! Hope to see good things from you next year.
    Next year? Hah! I've got it down to 671 MB (703.762.525 bytes). It's going to be 2010 in 8 minutes, so it's time for champagne and fireworks; I'll explain later what I did and which switches I used. Happy new year everyone!

  11. #71
    Member
    Join Date
    Dec 2009
    Location
    Netherlands
    Posts
    39
    Thanks
    3
    Thanked 0 Times in 0 Posts
    Ok, now this is what I did to get the result I mentioned in my previous post.

    precomp0.38 (with the switches provided by Skymmer, thanks for that!)
    srep64 -l128
    nanozip -nm -cc -m2g

    The result was a file of 671 MB (703.762.525 bytes). The precomp 0.4 .pcf should compress even better, though, so that's what I'm going to try next.
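    Spelled out as commands, the last two steps of that chain look roughly like this (a sketch: the archive and file names are placeholders, Skymmer's precomp switches are omitted, and the srep/nz invocations assume their usual command forms):
    Code:
    :: vm.pcf is the precomp 0.3.8 output; srep with -l128, then NanoZIP with -nm -cc -m2g
    srep64 -l128 vm.pcf vm.pcf.srep
    nz a -nm -cc -m2g vm.nz vm.pcf.srep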

  12. #72
    Member Skymmer's Avatar
    Join Date
    Mar 2009
    Location
    Russia
    Posts
    681
    Thanks
    37
    Thanked 168 Times in 84 Posts
    Nice! My congrats!
    It's quite obvious that I can't be a serious competitor here, since I only have 1 GB of memory. If you're going to make PreComp 0.4.0 work on this file with the JPEG model activated, then you already have a clue how to achieve it. Once again, good luck!

  13. #73
    Member
    Join Date
    Dec 2009
    Location
    Netherlands
    Posts
    39
    Thanks
    3
    Thanked 0 Times in 0 Posts
    Quote Originally Posted by Skymmer View Post
    Nice! My congrats!
    It's quite obvious that I can't be a serious competitor here, since I only have 1 GB of memory. If you're going to make PreComp 0.4.0 work on this file with the JPEG model activated, then you already have a clue how to achieve it. Once again, good luck!
    Thank you! I know how to make precomp 0.4 work with the JPEG model enabled, by ignoring any part on which it crashes, but since 0.4 took 17+ hours to run, finding all the spots where it crashes would take days, if not weeks, so I'm not going to do that. Having said that, I do have a new record:

    precomp0.4 -slow -t-j
    srep64 -l128
    nanozip -nm -cc -m2g

    Which gives us a final file size offffff........*drumroll*

    667 MB (699.992.612 bytes)
    Yay, the 700.000.000 bytes barrier is broken!

    Thanks for the help with this, Skymmer; I've learned a lot from you about how to achieve extreme compression. Now please teach me some extra tricks by getting it even smaller. I know you can do it!

    Edit: Bulat, I thought this was quite interesting. Looking forward to more test files.
    Last edited by Mushoz; 2nd January 2010 at 22:37.

  14. #74
    Member
    Join Date
    Dec 2009
    Location
    Netherlands
    Posts
    39
    Thanks
    3
    Thanked 0 Times in 0 Posts
    Bulat, do you happen to have any plans on creating another interesting benchmark file? Thanks!

  15. #75
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,483
    Thanks
    719
    Thanked 653 Times in 349 Posts
    I'm thinking about Mass Effect or some other huge game

  16. #76
    Tester
    Black_Fox's Avatar
    Join Date
    May 2008
    Location
    [CZE] Czechia
    Posts
    471
    Thanks
    26
    Thanked 9 Times in 8 Posts
    Metal Gear Solid 4 is huge
    I am... Black_Fox... my discontinued benchmark
    "No one involved in computers would ever say that a certain amount of memory is enough for all time? I keep bumping into that silly quotation attributed to me that says 640K of memory is enough. There's never a citation; the quotation just floats like a rumor, repeated again and again." -- Bill Gates

  17. #77
    Member Skymmer's Avatar
    Join Date
    Mar 2009
    Location
    Russia
    Posts
    681
    Thanks
    37
    Thanked 168 Times in 84 Posts
    Quote Originally Posted by Bulat Ziganshin View Post
    I'm thinking about Mass Effect or some other huge game
    The problem with the Mass Effect series is that most of the resources are wrapped into Unreal Engine compressed files (upk, u, sfm, pcc, and others), which use a modified zLib compression (precomp is useless here) or even LZO or LZX.
    You need to use the decompress tool from Gildor to convert such files into uncompressed form. By the way, Unreal Engine supports such converted files, so the game still works and compresses better.
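    For the record, the Gildor step is just running his decompress tool over the packages, something like this (the package name is a placeholder; check Gildor's documentation for the exact options and batch usage):
    Code:
    :: converts a compressed Unreal Engine package into its uncompressed form
    decompress Example.upk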

    Quote Originally Posted by Black_Fox View Post
    Metal Gear Solid 4 is huge
    I've never seen Metal Gear Solid 4, but I suppose there are a lot of high-quality movies in it, which compress badly.

    Quote Originally Posted by Mushoz View Post
    Thanks for the help with this, Skymmer; I've learned a lot from you about how to achieve extreme compression. Now please teach me some extra tricks by getting it even smaller. I know you can do it!
    I returned to this test just for fun, and also to see what can be done with the latest versions.
    First, I tried to precomp the vm.dll file with the 0.4.1 version without deactivating the JPEG model. That attempt brought some conclusions. Now I know why precomping took more than 24 hours for some people. It's not the recursion and not a slow HDD, as suggested above. It seems that precomp has a problem with big files: at some point (around 3.5 GB) precomp starts to find possible GIFs very often. During this, one of the temporary files grows to about 4 GB and precomp reports something like this:
    Code:
    Possible GIF found at position 3511787008
    Can be decompressed to -1 bytes
    No matches
    Then precomp finds another possible GIF and the cycle repeats. Every attempt takes about 10-20 seconds, and that's the reason precomp works so slowly on that file.
    So I just split vm.dll into two parts with 7-Zip's split function:
    vm.dll.001 - 2300000000 bytes
    vm.dll.002 - 1944176896 bytes
    The 2300000000 value was chosen because of the last ignore position, which led to a crash. So the precomp options for the files are:
    Code:
    precomp -slow -c- -i619716256 -i620119742 -i733687954 -i733280138 -i733841552 -i734911416 -i1212229222 -i1319302591 -i1319303624 -i1325620736 -i1623902430 -i1637846002 -i2231172608 -i734199378 -oD:\vm.001.pcf vm.dll.001
    
    precomp -slow -c- -oD:\vm.002.pcf vm.dll.002
    With this scheme, precomping took 1080 sec for the first chunk and 2957 sec for the second one, 67 minutes in total. The sizes of the PCF files are:
    Code:
    vm.001.pcf	2 586 401 030
    vm.002.pcf	2 594 708 792
    Total PCF       5 181 109 822 bytes
    
    And the speedup commands are:
    -zl11,18,21,31,34,35,37,38,39,41,44,48,51,52,55,57,58,59,61,64,65,66,67,68,69,71,77,78,79,81,86,87,91,96,97,98,99 -l1
    -zl11,18,21,22,31,37,39,41,42,54,61,65,66,67,68,71,72,76,77,78,79,81,83,86,87,89,91,96,97,98,99 -l2
    Now let's pack it.
    Code:
    BCJ2      7z 9.22 -m0=BCJ2
    dispack   ARC (18 March 2011) -mdispack
    srep      SREP 2.96 -a1
    lzma      ARC (18 March 2011) -mlzma:a1:mfbt4:d1024m:fb128:mc10000
    nz07      NanoZIP 0.07 -m2g -nm -cc
    nz08      NanoZIP 0.08 -m2g -nm -cc
    nz08_6g   NanoZIP 0.08 -m14g -nm -cc
    plzma     plzma_v3a 30 10000
    
    
    vm_BCJ2.7z			5 181 973 674
    vm_dispack.arc			5 151 685 442
    
    vm_BCJ2_srep512_lzma.arc	795 123 509
    vm_dispack_srep512_lzma.arc	798 209 809
    vm_BCJ2_512_plzma.plzma         775 308 351
    
    vm_bcj2_srep512_nz07.nz		694 620 043
    vm_bcj2_srep512_nz08.nz		690 823 103
    vm_dispack_srep512_nz08.nz	699 599 778
    vm_bcj2_srep512_nz08_6g.nz	728 351 046
    
    vm_srep064_nz_08.nz		709 439 780
    vm_srep128_nz_08.nz		694 645 173
    vm_srep256_nz_08.nz		690 470 723
    vm_srep512_nz_08.nz		690 419 331
    Interesting stats.
    Seems that BCJ2 works better than dispack for both lzma and cm.
    The cm in NanoZIP 0.08 is improved compared to 0.07.
    NanoZIP 0.08 -m14g allocates 6 GB of memory, but the result is worse than with -m2g.
    For NanoZIP it's better to exclude both the BCJ2 and dispack filters from the chain.

    So current best result is: 690 419 331
    Last edited by Skymmer; 2nd July 2011 at 21:07.

  18. #78
    Member zody's Avatar
    Join Date
    Aug 2009
    Location
    Germany
    Posts
    90
    Thanks
    0
    Thanked 1 Time in 1 Post
    If you are still searching for new highly compressible data for HFCB, take a look at FakeFactory's Cinematic Mod 10.
    Installed, it's about 20 GB - and it's freeware, so you can legally download it.
    [Playing is only possible with Half-Life 2 + addons installed]
    Last edited by zody; 3rd July 2011 at 15:25.

  19. #79
    Programmer schnaader's Avatar
    Join Date
    May 2008
    Location
    Hessen, Germany
    Posts
    515
    Thanks
    182
    Thanked 163 Times in 71 Posts
    Quote Originally Posted by Skymmer View Post
    It seems that precomp has a problem with big files: at some point (around 3.5 GB) precomp starts to find possible GIFs very often. During this, one of the temporary files grows to about 4 GB and precomp reports something like this:
    Code:
    Possible GIF found at position 3511787008
    Can be decompressed to -1 bytes
    No matches
    Then precomp finds another possible GIF and the cycle repeats. Every attempt takes about 10-20 seconds, and that's the reason precomp works so slowly on that file.
    I finally found and fixed this bug; it should work in the next Precomp version (0.4.3).
    http://schnaader.info
    Damn kids. They're all alike.

  20. #80
    Member
    Join Date
    Nov 2011
    Location
    France
    Posts
    22
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Hello. Here are a few benchmarks for you. They were done with no particular purpose other than to run some tests.

    Backtrack:
    OK, a bit of analysis on this one: if you open the ISO, you can see that 99.9% of the file lives in \casper\filesystem.squashfs.
    squashfs is a compressed format, and the complete ISO didn't work with precomp, so I decompressed it manually using 7z.
    As you can see this is already heavily compressed (6 GB >> 2 GB), but we can certainly apply further compression.
    So here is the benchmark. Excuse me for the lack of time precision (as an approximation: arc ultra is acceptable, fp8 took at least one night) and for the absence of paq8px; sorry, but fp8 was already really slow.

    6 395 112 931 BT5R1-KDE-64
    6 399 266 418 BT5R1-KDE-64.mx0.7z
    1 374 364 725 BT5R1-KDE-64.7z.arc
    1 202 652 833 BT5R1-KDE-64.fp8
    2 126 034 944 BT5R1-KDE-64.iso


    So my personal assessment: Precomp is still not perfect (a bug), and neither is fp8 (bug: it cannot compress *.7z, I had to compress the folder directly).
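    For anyone who wants to reproduce the manual extraction described above, it is roughly this (a sketch: it assumes a 7-Zip version with SquashFS support, and the output directory name is a placeholder):
    Code:
    :: pull the squashfs out of the ISO, then unpack the squashfs itself
    7z x BT5R1-KDE-64.iso casper\filesystem.squashfs
    7z x casper\filesystem.squashfs -osquashfs-root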


    Debian DVDs:
    Sorry, but I only compressed the first of the 8 DVDs (the total is 31.2 GB), for obvious time reasons, and because I think the results would be almost the same on the other DVDs.
    On this ISO, 99.9% of the data is held in *.deb files, and you all know that this is a compressed file format.
    But again, precomp didn't manage to finish the whole decompression process, so I manually decompressed all of them with a batch dearchiver tool.
    So here is the benchmark:


    10 852 925 020 debian-6.0.3-amd64-DVD-1
    10 854 874 201 debian-6.0.3-amd64-DVD-1.mx0.7z
    3 024 000 952 debian-6.0.3-amd64-DVD-1.7z.arc
    8 374 736 613 debian-6.0.3-amd64-DVD-1.7z.srep
    2 968 965 676 debian-6.0.3-amd64-DVD-1.7z.srep.arc
    3 099 757 185 debian-6.0.3-amd64-DVD-1.7z.asymetric.arc
    2 711 686 567 debian-6.0.3-amd64-DVD-1.fp8
    4 695 957 504 debian-6.0.3-amd64-DVD-1.iso

    My conclusion: precomp fails again, and fp8 and paq8px (I tested it on the 7z file) failed to compress the 7z and srep files. But you can see that the asymmetric arc compression is really not bad.
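    The batch decompression step described above could be done with a small loop like this (an illustration only, not the exact dearchiver tool the poster used; a .deb is an ar archive that 7-Zip can open, and the data.tar.gz inside each package would still need a second pass):
    Code:
    :: from a .cmd file: unpack every .deb into a folder named after it
    for %%f in (*.deb) do 7z x "%%f" -o"%%~nf"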


    Borderlands:
    No big investigation on this one. Precomp didn't fail, but fp8 and paq8px still failed (and that's why the .pcf.srep.arc is the smallest file in this benchmark).
    Benchmark:

    7 509 143 070 rld-blns
    4 825 746 546 rld-blns.arc
    4 798 970 437 rld-blns.fp8
    7 509 207 040 rld-blns.iso
    7 509 207 377 rld-blns.pcf
    5 904 821 591 rld-blns.pcf.srep
    4 746 692 432 rld-blns.pcf.srep.arc



    So, nothing dramatic in these benchmarks. Maybe the only things to remember are the annoying bugs.
    Last edited by 0011110100101001; 12th October 2012 at 06:40.

  21. #81
    Member
    Join Date
    Nov 2011
    Location
    France
    Posts
    22
    Thanks
    0
    Thanked 0 Times in 0 Posts
    OK, I've seen that many of you use ccm for these benchmarks. Is there any specific reason (for example: very good compression while being much faster than paq*, or even fp8)? I might include ccm in my future tests.

  22. #82
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    2,966
    Thanks
    153
    Thanked 802 Times in 399 Posts
    ccm is reasonably fast for a CM (~5 MB/s in low-memory modes)
    and has quite a few built-in filters (E8, text, sparse, records) that provide good results on binaries.

  23. #83
    Member Karhunen's Avatar
    Join Date
    Dec 2011
    Location
    USA
    Posts
    91
    Thanks
    2
    Thanked 1 Time in 1 Post
    0011110100101001 said:
    squashfs is a compressed file, and the complete iso didn't work with precomp, so I decompressed it manually, using 7z.

    To this end, I have been looking for precompiled Win32 binaries of the squashfs tools so that I could decompress the read-only file system in a Linux ISO.
    Are there other tools available that I have overlooked?

  24. #84
    Member
    Join Date
    Dec 2009
    Location
    Netherlands
    Posts
    39
    Thanks
    3
    Thanked 0 Times in 0 Posts
    Quote Originally Posted by Skymmer View Post
    So current best result is: 690 419 331
    I've got a new record

    Code:
    original                 vm.dll                 4.244.176.896
    precomp043 -cn -intense  vm.pcf                 5.185.593.982
    reflatev0c1 -c6 > tar    vm.pcf.ref.tar         7.144.737.792
    srep64 3.0               vm.pcf.ref.tar.srep    3.839.680.029
    nz0.09 -nm -cc -m2g      vm.pcf.ref.tar.srep.nz   646.182.659
    Some things I found out:
    • Reflate is awesome to throw into the mix! It seems to pick up a lot more streams that precomp wasn't able to catch. I've tried both reflate > precomp and precomp > reflate in the chain, but precomp first seems to work best. This was tested on a different (smaller) file, but the difference was quite big, so I assume this remains the best order for this particular file too.
    • Precomp is a lot faster (between 2 and 3 hours, instead of the 17+ hours when I first did this "challenge") and also doesn't seem to crash anymore without the -t-j flag!
    • I've only tested reflate with level 6, so better results might be possible with a different level. Throwing in BCJ2 could shave off some more as well, as could using a better NanoZIP version (I believe 0.08 is slightly better? Not sure) and/or a higher memory setting for NanoZIP.
    • Getting the size much smaller is going to be increasingly hard, since we've already come a _long_ way since this started. This has definitely brought extreme compression to the next level!


    Thanks to everyone who taught me to use all those tools to their full potential, and of course thank you to all the coders who made these wonderful tools. I've learned a lot here!
    Last edited by Mushoz; 10th October 2012 at 01:38.

  25. #85
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,483
    Thanks
    719
    Thanked 653 Times in 349 Posts
    bcj2->dispack?

  26. #86
    Member
    Join Date
    Dec 2009
    Location
    Netherlands
    Posts
    39
    Thanks
    3
    Thanked 0 Times in 0 Posts
    Hmmm, never mind my last post. It seems the file is totally corrupted when I try to unpack it. I'll see if I can redo the chain, perhaps with different switches, without corrupting the result halfway through.

    Edit: The raw2hif.exe program from reflate seems to have a bug and corrupts 36 of the 33k+ streams that rawdet.exe found. I'm going to report the bug to the creator of the program so he can hopefully fix it. Tomorrow I'll compress the current result without running the 36 problematic streams through raw2hif.exe, so the archive won't get corrupted anymore. I'll post the results as soon as they are in
    Last edited by Mushoz; 11th October 2012 at 02:20.

  27. #87
    Member
    Join Date
    Nov 2011
    Location
    France
    Posts
    22
    Thanks
    0
    Thanked 0 Times in 0 Posts
    @Karhunen, according to the Wikipedia page, these tools: http://www.tomas-m.com/blog/482-Squa...r-Windows.html are compiled Windows binaries for decompressing a squashfs filesystem. However, the download links are down. So I found this one: ftp://ftp.cc.uoc.gr/mirrors/linux/slax/useful-binaries/win32/squashfs-tools/ and this one, which looks like somebody else's work: http://fragilematter.blogspot.fr/201...-binaries.html



    These tools seem to have great potential, because you can recreate the squashfs with mksquashfs. I think they would come in handy for OS ISO recompression.
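    As a rough round-trip sketch with squashfs-tools (compression options for mksquashfs are left at their defaults here; directory and file names are placeholders):
    Code:
    :: unpack for recompression, then later rebuild the image from the extracted tree
    unsquashfs -d squashfs-root filesystem.squashfs
    mksquashfs squashfs-root filesystem.new.squashfs -noappend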

  28. #88
    Member Skymmer's Avatar
    Join Date
    Mar 2009
    Location
    Russia
    Posts
    681
    Thanks
    37
    Thanked 168 Times in 84 Posts
    Quote Originally Posted by Mushoz View Post
    I'll post the results as soon as they are in
    Hi! Nice to see that you returned to this little competition. Waiting for results

  29. #89
    Member
    Join Date
    May 2012
    Location
    United States
    Posts
    318
    Thanks
    172
    Thanked 51 Times in 37 Posts
    Quote Originally Posted by Mushoz View Post
    I've got a new record

    Code:
    original                 vm.dll                 4.244.176.896
    precomp043 -cn -intense  vm.pcf                 5.185.593.982
    reflatev0c1 -c6 > tar    vm.pcf.ref.tar         7.144.737.792
    srep64 3.0               vm.pcf.ref.tar.srep    3.839.680.029
    nz0.09 -nm -cc -m2g      vm.pcf.ref.tar.srep.nz   646.182.659
    Some things I found out:
    • Reflate is awesome to throw into the mix! It seems to pick up a lot more streams that precomp wasn't able to catch. I've tried both reflate > precomp and precomp > reflate in the chain, but precomp first seems to work best. This was tested on a different (smaller) file, but the difference was quite big, so I assume this remains the best order for this particular file too.
    • Precomp is a lot faster (between 2 and 3 hours, instead of the 17+ hours when I first did this "challenge") and also doesn't seem to crash anymore without the -t-j flag!
    • I've only tested reflate with level 6, so better results might be possible with a different level. Throwing in BCJ2 could shave off some more as well, as could using a better NanoZIP version (I believe 0.08 is slightly better? Not sure) and/or a higher memory setting for NanoZIP.
    • Getting the size much smaller is going to be increasingly hard, since we've already come a _long_ way since this started. This has definitely brought extreme compression to the next level!


    Thanks to everyone who taught me to use all those tools to their full potential, and of course thank you to all the coders who made these wonderful tools. I've learned a lot here!
    What do you mean by "reflate > tar"? How did you end up with a larger file than the PCF? What settings did you use with Reflate (C.bat, Test1.bat, etc.)?

  30. #90
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    2,966
    Thanks
    153
    Thanked 802 Times in 399 Posts
    > but precomp first seems to work best

    Sure, as it stores its streams more efficiently than reflate without level detection.

    > What do you mean "reflate > tar"?

    raw2hif produces a .unp+.hif pair for each input stream, plus two more files (.str and .out).
    For easier processing, it then makes sense to wrap all these files into an archive.
    Though in the reflate demo I'm using my own "shar" container format, as it's much less redundant than tar.
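    So the "wrap into an archive" step can be as simple as this (sketched with plain tar; refl_out is a placeholder directory holding the .unp/.hif/.str/.out files, and reflate's own shar container would be the less redundant choice):
    Code:
    :: collect all raw2hif outputs into one uncompressed container for the next stage
    tar cf vm.pcf.ref.tar refl_out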

    > How did you end up with a larger file than the PCF?

    More unpacked deflate streams = larger uncompressed archive

    > What settings did you use with Reflate (C.bat, Test1.bat, etc.)?

    It's quite possible to use the rawdet and raw2hif utilities directly.


