
Thread: Precomp 0.4.6_dev

  1. #1
    Member
    Join Date
    Aug 2014
    Location
    Argentina
    Posts
    464
    Thanks
    202
    Thanked 81 Times in 61 Posts

    Precomp 0.4.6_dev

    I noticed some unexpected behaviour. Just in case it is caused by some Precomp flaw, I'm posting it here.
    Some small PDFs (about 200 files, ~13 MB) were processed in parallel using PPX2; this took about 5 minutes. The same set, packed into a shar archive, was processed in ~1 minute 15 seconds by a single instance of Precomp... And now I don't understand it at all, because theoretically parallel processing should speed things up, not slow them down (I have 4 threads)...
    I ran the same comparison a few times, making sure there were no other processes using much CPU or writing to the disk.
    Could it be the time Precomp spends loading itself into memory and opening and closing files? Or maybe four instances writing temp files to disk is just too much for the system?
    Can anybody reproduce the problem?

  2. #2
    Programmer schnaader's Avatar
    Join Date
    May 2008
    Location
    Hessen, Germany
    Posts
    539
    Thanks
    192
    Thanked 174 Times in 81 Posts
    I'd say it's the disk usage that slows things down here. CPUs are optimized for parallel usage, but disks aren't - having multiple small PDFs means you'll have short bursts of CPU usage with a lot of disk access in between. Also, in the single-thread scenario, one file will be accessed most of the time before switching to the next one, while in the 4-thread scenario, the disk will jump back and forth between multiple files.

    You could try using only 2 threads; perhaps that gives better results, but I guess disk usage will still dominate.

    The next commit will change temporary file usage and only use 2 instead of 4 temporary files per stream (for partial matches). Would be interesting to see if that improves things. I'll post a compiled version when it's ready.
    http://schnaader.info
    Damn kids. They're all alike.

  3. The Following User Says Thank You to schnaader For This Useful Post:

    Gonzalo (29th May 2016)

  4. #3
    Programmer schnaader's Avatar
    Join Date
    May 2008
    Location
    Hessen, Germany
    Posts
    539
    Thanks
    192
    Thanked 174 Times in 81 Posts
    32- and 64-bit versions of commit faa477 attached.
    Attached Files
    http://schnaader.info
    Damn kids. They're all alike.

  5. #4
    Member
    Join Date
    Sep 2007
    Location
    Denmark
    Posts
    856
    Thanks
    45
    Thanked 104 Times in 82 Posts
    Quote Originally Posted by Gonzalo
    I noticed some unexpected behaviour. Just in case it is caused by some Precomp flaw, I'm posting it here.
    Some small PDFs (about 200 files, ~13 MB) were processed in parallel using PPX2; this took about 5 minutes. The same set, packed into a shar archive, was processed in ~1 minute 15 seconds by a single instance of Precomp... And now I don't understand it at all, because theoretically parallel processing should speed things up, not slow them down (I have 4 threads)...
    I ran the same comparison a few times, making sure there were no other processes using much CPU or writing to the disk.
    Could it be the time Precomp spends loading itself into memory and opening and closing files? Or maybe four instances writing temp files to disk is just too much for the system?
    Can anybody reproduce the problem?
    It's the basic latency vs. bandwidth issue.
    You are increasing your bandwidth with multiple files, but since you are seeking more, your I/O latency goes up. If your drive has high latency, the penalty from the increased latency will outweigh the gains in bandwidth.
    Also, as you improve performance in one part of the chain, another part becomes the bottleneck. You might want to look into copying the files to a ramdrive first and then running PPX2/Precomp from there, to see if the total time including the copying is faster than single-threaded mode without the ramdrive.
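    To make the latency vs. bandwidth argument concrete, here is a toy cost model in C++ (all numbers are assumptions for illustration, not measurements of this setup): total time is roughly seeks * latency + bytes / bandwidth, and parallel workers on a single HDD multiply the seeks without adding any bandwidth.

    Code:
    #include <cstdio>

    int main() {
        const double latency_s   = 0.010;  // ~10 ms per seek (HDD, assumed)
        const double bandwidth   = 100e6;  // ~100 MB/s sequential (assumed)
        const double total_bytes = 13e6;   // ~13 MB of small PDFs
        const int    files       = 200;

        // Single instance, one file after another: roughly one seek per file.
        double t1 = files * latency_s + total_bytes / bandwidth;

        // Four instances interleaving on the same disk: assume the head has
        // to jump between jobs, say 5x more seeks for the same data.
        double t4 = files * 5 * latency_s + total_bytes / bandwidth;

        std::printf("sequential: %.2f s, interleaved: %.2f s\n", t1, t4);
        // Prints "sequential: 2.13 s, interleaved: 10.13 s" - the extra
        // seeks swamp any CPU-side speedup from four threads.
        return 0;
    }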

  6. The Following User Says Thank You to SvenBent For This Useful Post:

    Gonzalo (5th June 2016)

  7. #5
    Member
    Join Date
    Aug 2014
    Location
    Argentina
    Posts
    464
    Thanks
    202
    Thanked 81 Times in 61 Posts
    I'm not so sure the "two temp files less" commit is worth implementing. As you know, there are very different cases of zLib recompression. See, for example, a set of PDFs of my own. All data sets are compressed with the same method (fazip rep+lzma):

    PDFs: 73.25%
    Precomp old: 65.19%
    Precomp new: 68.22%

    Difference: 3.03%

    BUT this is with all the JPEG data included. If we take it out, the difference grows to 9.65% worse compression of the plain zLib data with the new, less accurate approach...

  8. #6
    Programmer schnaader's Avatar
    Join Date
    May 2008
    Location
    Hessen, Germany
    Posts
    539
    Thanks
    192
    Thanked 174 Times in 81 Posts
    Quote Originally Posted by Gonzalo
    PDFs: 73.25%
    Precomp old: 65.19%
    Precomp new: 68.22%

    Difference: 3.03%
    I found regressions on PDF files, too - which seemed strange, as most streams there are completely recompressed. I analyzed this further and found the culprit - trying combinations was sometimes stopped too early. This wasn't a problem before because the additional decompression/recompression stage hid it, but it became a serious problem after removing these stages. It should be fixed in the latest version (commit d1d4263); 32- and 64-bit Windows compiles are attached.

    This new version also contains some commits made after faa477 that reduce temporary file usage further - now only 1 temporary file is used (the decompressed stream).

    As a bonus side effect, the original compressed size is now tracked and written into verbose logs; it could be useful for some people.
    Attached Files
    http://schnaader.info
    Damn kids. They're all alike.

  9. The Following User Says Thank You to schnaader For This Useful Post:

    Samantha (5th June 2016)

  10. #7
    Member Samantha's Avatar
    Join Date
    Apr 2016
    Location
    italy
    Posts
    38
    Thanks
    31
    Thanked 7 Times in 4 Posts
    I tested the 0.46 x64 still under development, compared with the latest 0.45 - great job, both in speed and reflate, with the difference in favor of 0.46: +0.24 MB/s in speed and a 3.57% higher ratio on 4,000 image files.

    0.45
    [Screenshot: 0.45.png]

    0.46
    [Screenshot: 0.46.png]

    Actually, I expected a greater load on the CPU since I used the 64-bit MT version, but the load never exceeded 25-30% of the CPU...


  11. #8
    Member
    Join Date
    Aug 2014
    Location
    Argentina
    Posts
    464
    Thanks
    202
    Thanked 81 Times in 61 Posts
    I'm sorry, I don't have much time. But I think the logs speak for themselves. Look at the timing...
    Attached Files

  12. #9
    Programmer schnaader's Avatar
    Join Date
    May 2008
    Location
    Hessen, Germany
    Posts
    539
    Thanks
    192
    Thanked 174 Times in 81 Posts
    Quote Originally Posted by Gonzalo
    I'm sorry, I don't have much time. But I think the logs speak for themselves. Look at the timing...
    I just found the problem - in commit 074e9f, I changed the condition for when to stop trying combinations and added the requirement "penalty bytes must be 0". This was useful because it prevented some cases where a successful recompression was found with too many penalty bytes and compression got worse. On the other hand, it is bad for speed, as in some files there is no combination without penalty bytes. For example, FlashMX.pdf is such a file - 197 PDF streams are recompressed, and all of them use 5 penalty bytes:

    Code:
    precomp (v0.4.5) -cn: 2.2 s
    precomp (commit faa4776) -cn: 1.9 s
    precomp (commit d1d4263) -cn: 26 s
    precomp (commit d1d4263) -cn -zl65: 1.5 s
    I'm not sure what to do here yet, I guess I'll either remove the penalty bytes condition or change it (so there have to be more than X penalty bytes to continue trying combinations).
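    For illustration, the threshold variant could look like this (a sketch with made-up identifiers, not taken from the Precomp sources; the threshold "X" is an assumed value that would need tuning):

    Code:
    const int kPenaltyThreshold = 8;  // "X": assumed value, to be tuned

    bool stop_trying_combinations(bool recompression_successful,
                                  int penalty_bytes) {
        // Old condition: penalty_bytes == 0 - forces trying all 81 parameter
        // pairs on files like FlashMX.pdf, where every stream needs 5 penalty
        // bytes. Proposed: accept a handful of penalty bytes as good enough,
        // so the combination search can stop early.
        return recompression_successful && penalty_bytes <= kPenaltyThreshold;
    }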
    http://schnaader.info
    Damn kids. They're all alike.

  13. #10
    Member
    Join Date
    Aug 2014
    Location
    Argentina
    Posts
    464
    Thanks
    202
    Thanked 81 Times in 61 Posts
    Quote Originally Posted by schnaader
    I just found the problem - in commit 074e9f, I changed the condition for when to stop trying combinations and added the requirement "penalty bytes must be 0". This was useful because it prevented some cases where a successful recompression was found with too many penalty bytes and compression got worse. On the other hand, it is bad for speed, as in some files there is no combination without penalty bytes. For example, FlashMX.pdf is such a file - 197 PDF streams are recompressed, and all of them use 5 penalty bytes:

    Code:
    precomp (v0.4.5) -cn: 2.2 s
    precomp (commit faa4776) -cn: 1.9 s
    precomp (commit d1d4263) -cn: 26 s
    precomp (commit d1d4263) -cn -zl65: 1.5 s
    I'm not sure what to do here yet, I guess I'll either remove the penalty bytes condition or change it (so there have to be more than X penalty bytes to continue trying combinations).
    Any news about this? Thanks for answering.

  14. #11
    Programmer schnaader's Avatar
    Join Date
    May 2008
    Location
    Hessen, Germany
    Posts
    539
    Thanks
    192
    Thanked 174 Times in 81 Posts
    Quote Originally Posted by Gonzalo
    Any news about this? Thanks for answering.
    I'll gather problematic files and try some of the possible solutions on them. At the moment, it looks like I'll implement some penalty threshold (as sketched above), so anything with less than X penalty bytes will be handled as a successful recompression and the speed regression will go away on files like FlashMX.pdf, where compression doesn't get much worse.

    About the general roadmap: over the last months, I haven't had much time for Precomp. This will get better now; the plans are:

    1. Finish work on on-the-fly LZMA compression in the sftt branch and pull the changes. Although it might need some more polishing (it needs a lot of memory, and the multi-threading seems improvable), it's already usable and offers the usual benefits (good compression; very fast, low-memory decompression).
    2. Finish work on temporary files removal
    3. Release Precomp 0.4.6
    http://schnaader.info
    Damn kids. They're all alike.

  15. #12
    Member
    Join Date
    Aug 2014
    Location
    Argentina
    Posts
    464
    Thanks
    202
    Thanked 81 Times in 61 Posts
    Thanks!
    BTW: Are you sure it's worth implementing LZMA? Everybody can compress a file with LZMA on any platform. Why spend time integrating it into another program? There will always be some troubleshooting to do. Why not use that time to improve the side of Precomp where it really shines - that is, decompression and parsing...
    Also, I don't see the point of using bzip2 after preprocessing... I think bzip2 is pretty dead by now...
    Anyway, keep up with the great work!

  16. The Following 2 Users Say Thank You to Gonzalo For This Useful Post:

    Razor12911 (4th October 2016),Samantha (4th October 2016)

  17. #13
    Programmer schnaader's Avatar
    Join Date
    May 2008
    Location
    Hessen, Germany
    Posts
    539
    Thanks
    192
    Thanked 174 Times in 81 Posts
    Quote Originally Posted by Gonzalo
    BTW: Are you sure it's worth implementing LZMA? Everybody can compress a file with LZMA on any platform. Why spend time integrating it into another program? There will always be some troubleshooting to do. Why not use that time to improve the side of Precomp where it really shines - that is, decompression and parsing...
    Also, I don't see the point of using bzip2 after preprocessing... I think bzip2 is pretty dead by now...
    For ease of use, Precomp should offer a stand-alone compression option. E.g. "precomp myfile" should be enough to compress myfile, with no need to use Precomp together with another compression tool (although that's possible too, of course). That's why "-cn" is an optional parameter and "-cb" is the default. As I added libbzip2 for bzip2 recompression, it was a good opportunity to also add bzip2 on-the-fly compression ("-cb"). On the other hand, as you said, bzip2 doesn't give the best compression for most files. Also, I didn't implement multi-threading, which would have been a great way to speed it up.

    With liblzma, it's the same, just the other way round. On-the-fly compression is the first thing that will be done, offering a way to get better compression results using only Precomp, with very fast decompression/restoration. The second part of it, LZMA recompression, will be implemented later, but will be very useful.
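    For illustration, a one-shot liblzma call of the kind such an on-the-fly mode could wrap might look like this (my own sketch, not Precomp's actual code; error handling trimmed):

    Code:
    #include <lzma.h>
    #include <cstdint>
    #include <vector>

    // One-shot compression of a whole buffer into the .xz container format
    // with a given preset (0..9, optionally | LZMA_PRESET_EXTREME).
    bool xz_compress(const std::vector<uint8_t>& in,
                     std::vector<uint8_t>& out, uint32_t preset) {
        out.resize(lzma_stream_buffer_bound(in.size()));  // worst-case size
        size_t out_pos = 0;
        lzma_ret ret = lzma_easy_buffer_encode(
            preset, LZMA_CHECK_CRC64, nullptr,  // nullptr = default allocator
            in.data(), in.size(),
            out.data(), &out_pos, out.size());
        if (ret != LZMA_OK) return false;
        out.resize(out_pos);  // shrink to the actual compressed size
        return true;
    }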
    http://schnaader.info
    Damn kids. They're all alike.

  18. #14
    Member
    Join Date
    Oct 2014
    Location
    South Africa
    Posts
    38
    Thanks
    23
    Thanked 7 Times in 5 Posts
    Quote Originally Posted by Gonzalo
    Thanks!
    BTW: Are you sure it's worth implementing LZMA?
    I agree with Gonzalo. As I include Precomp in the FreeArc SFX stub to have a stand-alone SFX file, I prefer the smallest possible Precomp.

    I think that using liblzma increases the Precomp file size.

  19. #15
    Programmer schnaader's Avatar
    Join Date
    May 2008
    Location
    Hessen, Germany
    Posts
    539
    Thanks
    192
    Thanked 174 Times in 81 Posts
    Quote Originally Posted by msat59
    I agree with Gonzalo. As I include Precomp in the FreeArc SFX stub to have a stand-alone SFX file, I prefer the smallest possible Precomp.
    The best concept for using Precomp in self-extracting archives would be a completely modular one, so only the compression methods used in the archive would be added to the SFX. Also, for even smaller size, decompression code could be removed, so only the recompression code that is actually used would be left. I've added it as a "nice to have" issue, see Issue #49.

    Note that since Precomp is open source now, you could compile a version without liblzma yourself. In fact, there is a commit where I removed the #ifdefs so that liblzma is always used. I could add them back, so you'd just have to remove "#define LZMA_H" to get a Precomp version without liblzma. I don't like this kind of "quick and dirty" solution using #ifdefs, as they clutter the code, but it's the fastest way to implement this.
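    A minimal sketch of that #ifdef approach (using a hypothetical PRECOMP_USE_LZMA guard and invented call sites for illustration; the actual define discussed above is "#define LZMA_H"):

    Code:
    #define PRECOMP_USE_LZMA  // remove this line for a build without liblzma

    #ifdef PRECOMP_USE_LZMA
    #include <lzma.h>
    #endif

    void compress_output(/* ... */) {
    #ifdef PRECOMP_USE_LZMA
        // on-the-fly LZMA path goes here
    #else
        // fall back to bzip2 ("-cb") or no compression ("-cn")
    #endif
    }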
    http://schnaader.info
    Damn kids. They're all alike.

  20. The Following User Says Thank You to schnaader For This Useful Post:

    msat59 (23rd October 2016)

  21. #16
    Member
    Join Date
    Oct 2014
    Location
    South Africa
    Posts
    38
    Thanks
    23
    Thanked 7 Times in 5 Posts
    Any plan to improve ZIP recompression?

    Today, I tested reflate's rawflit v1_l. It outperforms Precomp in recompression of ZIP files.

    Code:
             Original size   Reflate+9rep+9xb   precomp -cn+9rep+9xb
             -------------   ----------------   --------------------
    zip1           997,622            812,048              1,044,973
    zip2           965,227            889,366                965,628
    Attached Files

  22. #17
    Programmer schnaader's Avatar
    Join Date
    May 2008
    Location
    Hessen, Germany
    Posts
    539
    Thanks
    192
    Thanked 174 Times in 81 Posts
    Quote Originally Posted by msat59
    Any plan to improve ZIP recompression?

    Today, I tested reflate's rawflit v1_l. It outperforms Precomp in recompression of ZIP files.
    This is issue #21 - the current version of Precomp does a very basic recompression (brute-forcing the 9*9 possible zLib parameters) and needs full or partial matches. This often works, but will fail for many "custom"/non-zLib deflate streams. Reflate uses a more complete approach by storing the match differences in an efficient way. My own implementation of this is in the works and is called "difflate". It's almost done and will be released on GitHub once it is. I'll also merge it into Precomp when it's stable and fast enough.
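    For illustration, the brute-force part could be sketched like this (my own example, assuming zlib and raw deflate streams; the real Precomp code differs in details such as window bits handling and partial matching):

    Code:
    #include <zlib.h>
    #include <cstdint>
    #include <cstring>
    #include <vector>

    // Try all 81 (compression level, memory level) pairs; returns true and
    // reports the pair that reproduces the original deflate bytes exactly.
    bool find_full_match(const std::vector<uint8_t>& decompressed,
                         const std::vector<uint8_t>& original,
                         int& level_out, int& mem_out) {
        for (int level = 9; level >= 1; --level) {
            for (int mem = 9; mem >= 1; --mem) {
                z_stream s = {};
                // -15 = raw deflate, no zlib header (assumed for simplicity)
                if (deflateInit2(&s, level, Z_DEFLATED, -15, mem,
                                 Z_DEFAULT_STRATEGY) != Z_OK) continue;
                std::vector<uint8_t> out(deflateBound(&s, decompressed.size()));
                s.next_in   = const_cast<Bytef*>(decompressed.data());
                s.avail_in  = static_cast<uInt>(decompressed.size());
                s.next_out  = out.data();
                s.avail_out = static_cast<uInt>(out.size());
                int ret = deflate(&s, Z_FINISH);
                uLong produced = s.total_out;
                deflateEnd(&s);
                if (ret == Z_STREAM_END && produced == original.size() &&
                    std::memcmp(out.data(), original.data(), produced) == 0) {
                    level_out = level;  // full match: the output can store
                    mem_out   = mem;    // (level, mem) + decompressed data
                    return true;
                }
            }
        }
        return false;  // no full match; partial matching would go here
    }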
    http://schnaader.info
    Damn kids. They're all alike.

  23. The Following 4 Users Say Thank You to schnaader For This Useful Post:

    Bulat Ziganshin (31st October 2016),JamesB (31st October 2016),msat59 (1st November 2016),Razor12911 (19th December 2016)

  24. #18
    Member
    Join Date
    Oct 2014
    Location
    South Africa
    Posts
    38
    Thanks
    23
    Thanked 7 Times in 5 Posts
    Quote Originally Posted by schnaader
    This is issue #21 - the current version of Precomp does a very basic recompression (brute-forcing the 9*9 possible zLib parameters) and needs full or partial matches.
    I've found another file format, MATLAB "mat files", for which reflate works better.

    Original size= 1,418,708 bytes
    Precomp-intense + 9rep + 9xb = 1,136,394
    reflate + 9rep + 9xb = 670,809
    Attached Files

  25. #19
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,134
    Thanks
    179
    Thanked 921 Times in 469 Posts
    Just tested Outlook .pst compression with precomp and reflate.
    Apparently, .pst already has some weak LZ compression inside, so recompressors usually don't work on it.
    But I was able to find some utils to convert it to text: https://netix.dl.sourceforge.net/pro...63-w32-bin.zip

    Code:
    215,823,582 Inbox // readpst.exe testreal.pst
    218,879,866 Inbox1 // unix2dos
    170,887,502 Inbox1pcf // precomp64.exe -cn -t+M Inbox
    375,510,624 Inbox1pcf2 // precomp64.exe -cn Inbox1pcf
    
     67,552,542 0.7z // 7z a -mx=9 -myx=9 -mmt=2 -md=160M -bb3 0 Inbox
     65,726,014 1.7z // 7z a -mx=9 -myx=9 -mmt=2 -md=160M -bb3 1 testreal.pst
    
     66,319,300 0.pa // 7z a -mx=9 -mf=off -bb3 -m0=plzma4:mt1:a1:d160M:lc8:pb0:lp0:fb273:mc99 0.pa Inbox        
     64,397,316 1.pa // 7z a -mx=9 -mf=off -bb3 -m0=plzma4:mt1:a1:d160M:lc8:pb0:lp0:fb273:mc99 1.pa testreal.pst 
     60,852,915 2.pa // 7z a -mx=9 -mf=off -bb3 -m0=plzma4:mt1:a1:d160M:lc8:pb0:lp0:fb273:mc99 2.pa Inbox1pcf    
     44,990,230 3.pa // 7z a -mx=9 -mf=off -bb3 -m0=plzma4:mt1:a1:d160M:lc8:pb0:lp0:fb273:mc99 3.pa Inbox1pcf2   
    
     44,362,572 4.pa // 7z a -mx=9 -mf=off -bb3 -m0=reflate -m1=plzma4:mt1:a1:d160M:lc8:pb0:lp0:fb273:mc99 4.pa Inbox1pcf2
    
     51,696,590 5.pa // 7z a -mx=9 -mf=off -bb3 -m0=reflate -m1=plzma4:mt1:a1:d160M:lc8:pb0:lp0:fb273:mc99 5.pa Inbox1pcf
    
     48,691,366 7a.pa // 
       reflate.exe c6 Inbox1pcf Inbox1pcf1 Inbox1pcf1h6a Inbox1pcf1h6b Inbox1pcf1h6c // 66666 666 66 6.6
       7z a -mx=9 -mf=off -bb3 -ms=off -m0=plzma4:mt1:a1:d160M:lc8:pb0:lp0:fb273:mc99 7a.pa Inbox1pcf1
       7z a -mx=9 -myx=9 -bb3 -m0=lzma2 7a.pa Inbox1pcf1h6a Inbox1pcf1h6b Inbox1pcf1h6c
    Now, the main reason why I'm posting this here: precomp wasn't able to detect base64 blocks in the readpst output;
    it only worked after I converted the linefeeds to CRLF (they were just LF).

  26. The Following 4 Users Say Thank You to Shelwien For This Useful Post:

    Bulat Ziganshin (17th January 2017),RamiroCruzo (17th January 2017),schnaader (17th January 2017),xinix (19th January 2017)

  27. #20
    Programmer schnaader's Avatar
    Join Date
    May 2008
    Location
    Hessen, Germany
    Posts
    539
    Thanks
    192
    Thanked 174 Times in 81 Posts
    Thanks for reporting. The RFCs (e.g. RFC 2045, RFC 822) are all very strict about using CRLF, so I didn't implement LF linefeeds, but it's a good idea to do so.
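    A linefeed-tolerant check could look like the sketch below (my own illustration, not the actual Precomp detection code): accept both "\r\n" and a bare "\n" as the line terminator, so LF-only output like readpst's is detected too.

    Code:
    #include <cctype>
    #include <cstddef>
    #include <string>

    static bool is_base64_char(unsigned char c) {
        return std::isalnum(c) != 0 || c == '+' || c == '/' || c == '=';
    }

    // Length of a base64 line starting at data[pos], accepting CRLF or LF
    // as the terminator; returns 0 if the line is not base64.
    std::size_t base64_line_length(const std::string& data, std::size_t pos) {
        std::size_t i = pos;
        while (i < data.size() &&
               is_base64_char(static_cast<unsigned char>(data[i]))) ++i;
        std::size_t len = i - pos;
        bool crlf = i + 1 < data.size() && data[i] == '\r' && data[i + 1] == '\n';
        bool lf   = i < data.size() && data[i] == '\n';
        return (len > 0 && (crlf || lf)) ? len : 0;
    }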
    http://schnaader.info
    Damn kids. They're all alike.

  28. #21
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,497
    Thanks
    733
    Thanked 659 Times in 354 Posts
    I've read on some forum: "Any game from EA using the Frostbite 3 engine is LZ4 compressed."

    Seems it's time to add lz4/zstd/brotli recompression too.

  29. The Following 2 Users Say Thank You to Bulat Ziganshin For This Useful Post:

    Gonzalo (19th January 2017),Minimum (20th January 2017)

  30. #22
    Member
    Join Date
    Aug 2014
    Location
    Argentina
    Posts
    464
    Thanks
    202
    Thanked 81 Times in 61 Posts
    Quote Originally Posted by Bulat Ziganshin
    I've read on some forum: "Any game from EA using the Frostbite 3 engine is LZ4 compressed."

    Seems it's time to add lz4/zstd/brotli recompression too.
    Not only games - everyone seems to be switching to zstd nowadays.
    But there's the other side of the coin, too... Do we want Precomp to be used even more for warez? Wait a little longer and it will be blacklisted by every antivirus on the market.

  31. #23
    Member RamiroCruzo's Avatar
    Join Date
    Jul 2015
    Location
    India
    Posts
    15
    Thanks
    137
    Thanked 10 Times in 7 Posts
    No antivirus out there today allows just any software; companies pay to get their exes whitelisted. So it's not the "warez" factor that gets an exe flagged as a "virus".

    XD We should be discussing possible ways of recompression rather than just giving Uncle Schnaader orders - last time I checked, lz4 has 21 levels and can make about 200+ recompression possibilities, compared to zlib's 81.

  32. #24
    Member
    Join Date
    Feb 2017
    Location
    none
    Posts
    17
    Thanks
    2
    Thanked 10 Times in 5 Posts
    Hi! I'm trying to compile the Git version under Ubuntu 16 32-bit, but I get errors trying to compile liblzma. I already installed liblzma-dev, but I keep getting the same error:


    Code:
    common/mythread.h:162:19: note: each undeclared identifier is reported only once for each function it appears in
    Makefile:13: recipe for target 'liblzma' failed
    make[1]: *** [liblzma] Error 1
    make[1]: Leaving directory '/home/users/precomp/precomp-cpp-master/contrib/liblzma'
    Makefile:40: recipe for target 'liblzma' failed
    make: *** [liblzma] Error 2
    Any hint? Thanks!

  33. #25
    Programmer schnaader's Avatar
    Join Date
    May 2008
    Location
    Hessen, Germany
    Posts
    539
    Thanks
    192
    Thanked 174 Times in 81 Posts
    Quote Originally Posted by redrabbit
    Hi! I'm trying to compile the Git version under Ubuntu 16 32-bit, but I get errors trying to compile liblzma. I already installed liblzma-dev, but I keep getting the same error.
    Any hint? Thanks!
    Thanks for reporting, this should be fixed with the latest commit - contrib/liblzma/Makefile was using "-std=c99" instead of "-std=gnu99", which is necessary for the compiler to recognize SIG_SETMASK (in strict C99 mode, glibc hides POSIX declarations like SIG_SETMASK unless feature-test macros are defined).
    http://schnaader.info
    Damn kids. They're all alike.

  34. #26
    Member
    Join Date
    Feb 2017
    Location
    none
    Posts
    17
    Thanks
    2
    Thanked 10 Times in 5 Posts
    Thanks! It works now. Anyway, I found some "bugs" while compiling and running the binary on 32-bit systems; here is what I did that works.

    HOW-TO: compile the Precomp Git version for 32-bit systems and make it run on old Ubuntus (I had libstdc++ errors running the compiled binary on old versions of the system)

    Assuming you are on a 32-bit OS (Debian or Ubuntu in this case):

    1. wget github.com/schnaader/precomp-cpp/archive/master.zip && unzip master.zip && cd precomp-cpp-master/
    2. sudo apt-get install libbz2-1.0 libbz2-dev libbz2-ocaml libbz2-ocaml-dev g++-multilib
    3. Edit ALL the Makefiles you find and change the option -m64 to -m32
    4. Edit the main Makefile and add the option -static-libstdc++

    Main Makefile CFLAGS:
    CFLAGS = -std=c++11 -DUNIX -DBIT64 -D_FILE_OFFSET_BITS=64 -m32 -O2 -Wall -pthread -static-libstdc++

    I attach the compiled 32-bit version.
    Attached Files

  35. #27
    Tester Stephan Busch's Avatar
    Join Date
    May 2008
    Location
    Bremen, Germany
    Posts
    872
    Thanks
    457
    Thanked 175 Times in 85 Posts
    Could somebody please post a Windows x64 build of the latest commit?

  36. #28
    Member
    Join Date
    Jan 2016
    Location
    India
    Posts
    20
    Thanks
    23
    Thanked 8 Times in 7 Posts
    >> Could somebody please post a Windows x64 build of the latest commit?
    Attached Files

  37. The Following User Says Thank You to PrinceGupta For This Useful Post:

    Stephan Busch (9th March 2017)

  38. #29
    Member
    Join Date
    Feb 2017
    Location
    none
    Posts
    17
    Thanks
    2
    Thanked 10 Times in 5 Posts
    Compiled and attached the 64-bit version of the latest Precomp 0.4.6 for Linux.
    If it doesn't work (who knows), run ldd precomp64.bin and you will see which libs are missing.
    Attached Files

  39. #30
    Tester Stephan Busch's Avatar
    Join Date
    May 2008
    Location
    Bremen, Germany
    Posts
    872
    Thanks
    457
    Thanked 175 Times in 85 Posts
    The latest Precomp 0.4.6 crashes on this file every time:

    http://www.squeezechart.com/KTv1.apk

  40. The Following User Says Thank You to Stephan Busch For This Useful Post:

    schnaader (10th March 2017)


