
Thread: Precomp 0.4.5

  1. #1
    Programmer schnaader's Avatar
    Join Date
    May 2008
    Location
    Hessen, Germany
    Posts
    539
    Thanks
    192
    Thanked 174 Times in 81 Posts

    Precomp 0.4.5

    Precomp 0.4.5 is out.

    List of changes (also see closed issue list at GitHub)

    • Updated packJPG to 2.5k, packMP3 to 1.0g
    • Windows version compiled using GCC/G++ 5.3.0 (before: 4.8.1)
    • 32-bit and 64-bit versions (~10-20% faster on 64-bit machines)
    • SWF support adjusted to newer versions
    • MP3 support
    • MP3 and JPG recompression without temporary files for sizes up to 64 MB
    • Fixed memory corruption in packJPG that led to crashes
    • Fixed Base64 streams not being restored correctly in recursion


    Have a look at http://schnaader.info/precomp.php and https://github.com/schnaader/precomp-cpp
    http://schnaader.info
    Damn kids. They're all alike.

  2. The Following 10 Users Say Thank You to schnaader For This Useful Post:

    Bulat Ziganshin (8th May 2016),comp1 (8th May 2016),Gonzalo (8th May 2016),kassane (21st December 2016),Mike (8th May 2016),Minimum (8th May 2016),msat59 (27th May 2016),Razor12911 (27th May 2016),Simorq (19th December 2016),Stephan Busch (8th May 2016)

  3. #2
    Member
    Join Date
    Aug 2015
    Location
    indonesia
    Posts
    47
    Thanks
    3
    Thanked 7 Times in 7 Posts
    How do I compile it using Dev-C++ 5.10 under Windows 7 64-bit?

  4. #3
    Programmer schnaader's Avatar
    Join Date
    May 2008
    Location
    Hessen, Germany
    Posts
    539
    Thanks
    192
    Thanked 174 Times in 81 Posts
    Quote Originally Posted by suryakandau@yahoo.co.id View Post
    How do I compile it using Dev-C++ 5.10 under Windows 7 64-bit?
    Since Dev-C++ seems to use GCC, the first thing I'd try is to compile using the make.bat script. There's a new block in it where you can adjust the paths/filenames of the GCC executables:

    Code:
    REM gcc/g++ 32-bit/64-bit commands - change them according to your environment
    set GCC32=gcc
    set GPP32=g++
    set GCC64=gcc
    set GPP64=g++
    After that, calling "make" compiles a 32-bit version using GCC32/GPP32, calling "make 64" compiles a 64-bit version using GCC64/GPP64.

    Once this works, you've verified that compiling outside the IDE works (it should, as Dev-C++ uses MinGW GCC/G++ 4.8.1, which was used to compile older Precomp releases). The next thing I'd try is to put every source file (.cpp/.c/.h) from the root directory and the contrib directories into a project - keep the directory structure, if possible - and try to compile it using the IDE. It can be a long way to achieve this, but it gives you breakpoints and debugging, so if you want to do something with the code, it's worth the pain.
    http://schnaader.info
    Damn kids. They're all alike.

  5. #4
    Member
    Join Date
    Oct 2014
    Location
    South Africa
    Posts
    38
    Thanks
    23
    Thanked 7 Times in 5 Posts
    Good to see that Precomp is still alive.

    I checked the performance on an Excel 2010 XLSB file. Version 0.4.5 was 100% slower than the old v0.4.3 when just decompressing. I used only the -cn option.

    v0.4.5:
    6 second(s), 552 millisecond(s)

    v0.4.3
    3 second(s), 697 millisecond(s)

    Is it possible to improve the performance?

    Thanks

  6. #5
    Programmer schnaader's Avatar
    Join Date
    May 2008
    Location
    Hessen, Germany
    Posts
    539
    Thanks
    192
    Thanked 174 Times in 81 Posts
    Quote Originally Posted by msat59 View Post
    I used only the -cn option.

    v0.4.5:
    6 second(s), 552 millisecond(s)

    v0.4.3
    3 second(s), 697 millisecond(s)

    Is it possible to improve the performance?
    Could you give some more details? E.g. stream types involved? Did both versions detect the same streams? I'm not aware of any degradations in speed between these two versions.

    As to performance in general, it should be improved with the next release that will avoid using temporary files.
    http://schnaader.info
    Damn kids. They're all alike.

  7. #6
    Member Razor12911's Avatar
    Join Date
    May 2016
    Location
    South Africa
    Posts
    31
    Thanks
    51
    Thanked 53 Times in 15 Posts
    All Precomp versions after 0.4.2 (tested with 0.4.3, 0.4.4 and 0.4.5) fail on 3 files from the game PES2016.
    Precomp 0.3.8, 0.4.0 and 0.4.2 all work on these files and can restore them (I think).


    The 3 files:
    dt80_100E_win.cpk
    dt80_200E_win.cpk
    dt80_300E_win.cpk


    all together, 1.28GB.


    What Precomp does is start as normal, then become slow from 24% until it reaches 29.3%; by then 20 minutes have elapsed and it has only moved 5%.
    It continues fast after 29.3%, spending about 2 minutes to get from 29.3% to the next point where it gets slow again, at 46.61%, and takes about 22 minutes to get to 55.2%.
    Then it runs from 55.2% to 87.06%, is slow again until 93.78%, then runs to 100%.


    Took about 70 minutes to process everything, output 5.37GB.


    954 million read instances, reading over 515 GB.
    3.29 million write instances, writing more than 21.62 GB (the output was 5.37 GB).
    Now, "I/O other" is very weird; Precomp behaves as if the debug/verbose option were turned on: 1 million writes and 3.07 GB on "I/O other".


    OK, now here comes the problem.
    The input file was 1.28 GB (1,384,696,832 bytes); Precomp's output was 5.37 GB (5,767,825,783 bytes).
    On restore, Precomp's input was 5.37 GB (5,767,825,783 bytes), but the restored file was 5.28 GB (5,679,664,140 bytes), exceeding the original file size by 4.00 GB (4,294,967,308 bytes / 0x10000000C).


    Precomp writes 4 GB of data out of nowhere; it isn't reading it from anywhere, it just writes the 4 GB at 34.50%.


    Took about 9 minutes to restore.
    Attached image: precomp_3.PNG

  8. The Following User Says Thank You to Razor12911 For This Useful Post:

    schnaader (29th May 2016)

  9. #7
    Programmer schnaader's Avatar
    Join Date
    May 2008
    Location
    Hessen, Germany
    Posts
    539
    Thanks
    192
    Thanked 174 Times in 81 Posts
    Quote Originally Posted by Razor12911 View Post
    954 million read instances, reading over 515 GB.
    3.29 million write instances, writing more than 21.62 GB (the output was 5.37 GB).
    Now, "I/O other" is very weird; Precomp behaves as if the debug/verbose option were turned on: 1 million writes and 3.07 GB on "I/O other".
    Read and write are as expected; read is that high because there are very many comparisons where nothing is written, but bytes are read and compared.

    Debug/verbose turned off doesn't necessarily mean there's little console I/O. The progress indicator and the percentage are updated very often. Most of the updates only happen if a second has passed, to avoid updating too often, but especially in recursion there are many updates without time checks. I will look into restricting this so the output is updated only when something has changed.
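
    Not Precomp's actual code, but a minimal sketch of the kind of time-gated, change-gated console update described here; all names are made up for illustration:

    Code:
    // Hypothetical sketch: redraw the progress line only if at least a second
    // has passed AND the displayed percentage actually changed, so recursion
    // can't flood the console.
    #include <chrono>
    #include <cstdio>

    class ProgressPrinter {
        std::chrono::steady_clock::time_point last_update_{};
        int last_percent_ = -1;
    public:
        void update(long long done, long long total) {
            using namespace std::chrono;
            const int percent = static_cast<int>(done * 100 / total);
            const auto now = steady_clock::now();
            if (percent == last_percent_ || now - last_update_ < seconds(1))
                return;                      // nothing new to show, or too soon
            last_percent_ = percent;
            last_update_  = now;
            std::printf("\r%3d%%", percent);
            std::fflush(stdout);
        }
    };

    int main() {
        ProgressPrinter progress;
        for (long long i = 0; i <= 100000000; ++i)
            progress.update(i, 100000000);
        std::printf("\n");
        return 0;
    }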

    Nevertheless, the ratio is strange. With 1 million writes, I would expect it to be several MBs (10-30 bytes per write), but not 3 GB (3000 bytes per write).

    Quote Originally Posted by Razor12911 View Post
    OK, now here comes the problem.
    The input file was 1.28 GB (1,384,696,832 bytes); Precomp's output was 5.37 GB (5,767,825,783 bytes).
    On restore, Precomp's input was 5.37 GB (5,767,825,783 bytes), but the restored file was 5.28 GB (5,679,664,140 bytes), exceeding the original file size by 4.00 GB (4,294,967,308 bytes / 0x10000000C).

    Precomp writes 4 GB of data out of nowhere; it isn't reading it from anywhere, it just writes the 4 GB at 34.50%.
    Hmm.. sounds like some bug with storing offsets or lengths, especially since the additional byte count is so close to 2^32.

    Some ideas on how to proceed further:
    • The stream summary (how many streams, stream types, recursion depth) would be useful
    • There were many changes between 0.4.2 and 0.4.3, but one candidate would be GIF recompression. So using "-t-f" could help.
    • Try to split the file somewhere around 34.5%, e.g. try to process a 10 or 100 MB piece. It's likely that recompression will fail with the smaller piece, too, and that will be easier to analyze.
    http://schnaader.info
    Damn kids. They're all alike.

  10. The Following User Says Thank You to schnaader For This Useful Post:

    Razor12911 (29th May 2016)

  11. #8
    Member Razor12911's Avatar
    Join Date
    May 2016
    Location
    South Africa
    Posts
    31
    Thanks
    51
    Thanked 53 Times in 15 Posts
    OK, will do and report back; I'll try to upload the chunk around the 34% region.

  12. #9
    Member SolidComp's Avatar
    Join Date
    Jun 2015
    Location
    USA
    Posts
    222
    Thanks
    89
    Thanked 46 Times in 30 Posts
    Quote Originally Posted by schnaader View Post
    Precomp 0.4.5 is out.

    List of changes (also see closed issue list at GitHub)

    • Updated packJPG to 2.5k, packMP3 to 1.0g
    • Windows version compiled using GCC/G++ 5.3.0 (before: 4.8.1)
    • 32-bit and 64-bit versions (~10-20% faster on 64-bit machines)
    • SWF support adjusted to newer versions
    • MP3 support
    • MP3 and JPG recompression without temporary files for sizes up to 64 MB
    • Fixed memory corruption in packJPG that led to crashes
    • Fixed Base64 streams not being restored correctly in recursion


    Have a look at http://schnaader.info/precomp.php and https://github.com/schnaader/precomp-cpp
    Thanks schnaader, this is such a fascinating approach. I'm having trouble understanding the command usage. First, let me note that Bitdefender is blocking your website (http://schnaader.info/precomp.php) – do you know if this is a false positive or an actual infection?

    So in the instructions when you speak of the "original file", do you mean the original uncompressed file or the compressed file?

    How do I recompress a file with precomp? What do I need to add to commands like:

    precomp jquery.js.gz

    precomp flowers.jpeg

    to re-gzip the pcf data or apply packJPG to the pcf data?

    If I want to use libdeflate or zopfli on the gzip, how do I do that? precomp produces pcf data, and I'm not sure what to do from there – how do I give that to libdeflate or zlib?

    Thanks.

  13. #10
    Programmer schnaader's Avatar
    Join Date
    May 2008
    Location
    Hessen, Germany
    Posts
    539
    Thanks
    192
    Thanked 174 Times in 81 Posts
    Quote Originally Posted by SolidComp View Post
    Thanks schnaader, this is such a fascinating approach. I'm having trouble understanding the command usage. First, let me note that Bitdefender is blocking your website (http://schnaader.info/precomp.php) – do you know if this is a false positive or an actual infection?
    I'm sure it's a false positive - for anyone interested, here's the VirusTotal report (1/63 alarms): https://www.virustotal.com/de/url/04...is/1464668644/

    The site is 100% made by me, there's no JavaScript code at all and the only "external" resources are the PayPal button and the Google Ads. I'm not using executable packers like UPX anymore because they often trigger false alarms.

    Quote Originally Posted by SolidComp View Post
    So in the instructions when you speak of the "original file", do you mean the original uncompressed file or the compressed file?
    The compressed file. To get the uncompressed file, use the parameter "-cn" which disables bzip2 compression and only does decompression.

    Quote Originally Posted by SolidComp View Post
    precomp jquery.js.gz
    [...]
    to re-gzip the pcf data
    [...]
    If I want to use libdeflate or zopfli on the gzip, how do I do that? precomp produces pcf data, and I'm not sure what to do from there – how do I give that to libdeflate or zlib?
    Precomp decompresses the gzip stream and recompresses it using bzip2 (if not using "-cn"), but it doesn't recompress it using gzip. Precomp is not a tool for optimizing streams using the same compression method. It's a tool for decompressing streams to recompress them using better compression methods and to restore them bit-to-bit identical afterwards.

    Using "-cn", you'll get a decompressed version of jquery.js.gz, which will be similar to jquery.js, but containing additional data for restoring the original compressed file.

    Quote Originally Posted by SolidComp View Post
    precomp flowers.jpeg
    [...]
    apply packJPG to the pcf data?
    This command already applies packJPG to the data. In this case, the bzip2 compression might make the result a bit worse, but it depends on the JPG, so you can try using "precomp -cn flowers.jpeg" here, too. Note that for packMP3 and packJPG, things are a bit different: they decompress things internally, and Precomp directly gets the recompressed stream from them.
    http://schnaader.info
    Damn kids. They're all alike.

  14. #11
    Member
    Join Date
    Jul 2013
    Location
    Stanford, California
    Posts
    24
    Thanks
    7
    Thanked 2 Times in 2 Posts
    I'd be interested to hear feedback on extending precomp to reversibly transform uncompressed archive streams so that they can be processed in a canonical format.

    https://github.com/schnaader/precomp-cpp/issues/41

    This can facilitate better deduplication ratios on a data set whose duplicate content spans a long range and coexists in multiple formats. It also lets one feed two different raw representations of the same data (a ZFS send stream, an incremental GNU tar file) into the same deduplication process and end up with high fidelity in identifying duplicate content, even for a medium variable block size.

  15. #12
    Programmer schnaader's Avatar
    Join Date
    May 2008
    Location
    Hessen, Germany
    Posts
    539
    Thanks
    192
    Thanked 174 Times in 81 Posts
    Quote Originally Posted by Intensity View Post
    I'd be interested to hear feedback on extending precomp to reversibly transform uncompressed archive streams so that they can be processed in a canonical format.

    https://github.com/schnaader/precomp-cpp/issues/41
    I think I'll split this issue into two others, if you agree. Let me explain the two issues I see and the solutions for them:

    1. If the decompressed stream data for some streams is completely the same, Precomp will write it to the PCF file multiple times although it could be deduplicated. This has a rather easy solution, e.g. calculating CRC32 checksums (gzip streams already contain one) and, if a checksum matches previously decompressed data, comparing the decompressed data. When a match is found, we can deduplicate the decompressed data (refer to the previous position and leave it out); see the sketch right after this list. Could be done in the version after 0.4.6.
    2. If only a part of the decompressed data is the same, deduplicating is harder and we have to decide: either Precomp does the deduplication, or we prepare the decompressed data in a way that lets an external program deduplicate the resulting file. If my thoughts about this are correct, Precomp knows a bit more about the data: it knows where streams start and the type of data; in most cases, it also knows something about the file format that embeds the stream. These are things that could make deduplication more efficient, though the question remains: is doing deduplication on the Precomp side worth it? Also, the other way, preparing the data for external deduplication, might differ depending on which tool is used (e.g. SREP, zpaq). I think it's important to track this, but more research has to be done, and I think it's something that should be done later, e.g. in beta status (versions above 0.5).
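
    To make point 1 concrete, here is a rough sketch of the checksum-then-verify idea. This is not Precomp code; it keeps everything in RAM purely for brevity and all names are invented:

    Code:
    // Remember the CRC32 of every decompressed stream and, when a later stream
    // has the same checksum and identical bytes, return a back-reference
    // instead of writing the data to the PCF file again.
    #include <cstdint>
    #include <map>
    #include <vector>
    #include <zlib.h>   // crc32(); Precomp links against zlib anyway

    struct SeenStream {
        uint64_t pcf_offset;              // where the data was written before
        std::vector<unsigned char> data;  // kept in RAM here only for brevity
    };

    // checksum -> previously written streams with that checksum
    static std::map<uint32_t, std::vector<SeenStream>> seen;

    // Returns the PCF offset of an identical earlier stream, or -1 if the
    // stream is new and has to be written in full at 'current_offset'.
    long long dedup_or_register(const std::vector<unsigned char>& decompressed,
                                uint64_t current_offset) {
        uint32_t crc = crc32(0L, Z_NULL, 0);
        crc = crc32(crc, decompressed.data(),
                    static_cast<uInt>(decompressed.size()));

        auto& candidates = seen[crc];
        for (const auto& s : candidates)
            if (s.data == decompressed)   // checksum hit -> verify the bytes
                return static_cast<long long>(s.pcf_offset);

        candidates.push_back({current_offset, decompressed});
        return -1;                        // no duplicate found
    }

    Keeping full copies in memory is only for illustration; a real implementation would verify against data already written to the PCF file, which also relates to the memory concern raised further down in this thread.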


    Please tell me if you agree and if there are additional things you think have to be done.
    http://schnaader.info
    Damn kids. They're all alike.

  16. #13
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,134
    Thanks
    179
    Thanked 921 Times in 469 Posts
    One good reason to include deduplication in a recompressor is nested recompression.
    Processing can be noticeably faster if you're able to skip a second copy of some .pdf, instead of having to process all the deflate streams and jpegs in it.

    Another possible similar speed optimization is avoiding repeated detection scans of known data.

  17. #14
    Member
    Join Date
    Oct 2014
    Location
    South Africa
    Posts
    38
    Thanks
    23
    Thanked 7 Times in 5 Posts
    Quote Originally Posted by schnaader View Post
    Could you give some more details? E.g. stream types involved? Did both versions detect the same streams? I'm not aware of any degradations in speed between these two versions.

    As to performance in general, it should be improved with the next release that will avoid using temporary files.
    Unfortunately, I couldn't reproduce the issue. I am not sure about the file I'd checked before.

    I checked a few XLSB files and v0.4.5 was faster than v0.4.3, as you said.
    -----------------------------------------

    I found another XLSB file that contains more than 3,000 EMF images. It seems that v0.4.5 is slower at processing EMF files.

    Precomp v0.4.3 processed the file in 7 minutes 30 seconds, but v0.4.5 took 58 minutes.

    The original file is 12MB, so if you need it, I will upload it somewhere.
    Last edited by msat59; 8th June 2016 at 12:14.

  18. #15
    Member Razor12911's Avatar
    Join Date
    May 2016
    Location
    South Africa
    Posts
    31
    Thanks
    51
    Thanked 53 Times in 15 Posts
    Quote Originally Posted by schnaader View Post
    Read and write are as expected; read is that high because there are very many comparisons where nothing is written, but bytes are read and compared.

    Debug/verbose turned off doesn't necessarily mean there's little console I/O. The progress indicator and the percentage are updated very often. Most of the updates only happen if a second has passed, to avoid updating too often, but especially in recursion there are many updates without time checks. I will look into restricting this so the output is updated only when something has changed.

    Nevertheless, the ratio is strange. With 1 million writes, I would expect it to be several MBs (10-30 bytes per write), but not 3 GB (3000 bytes per write).



    Hmm.. sounds like some bug with storing offsets or lengths, especially since the additional byte count is so close to 2^32.

    Some ideas on how to proceed further:
    • The stream summary (how many streams, stream types, recursion depth) would be useful
    • There were many changes between 0.4.2 and 0.4.3, but one candidate would be GIF recompression. So using "-t-f" could help.
    • Try to split the file somewhere around 34.5%, e.g. try to process a 10 or 100 MB piece. It's likely that recompression will fail with the smaller piece, too, and that will be easier to analyze.
    Sorry it took so long to come back with the split file. I checked with Process Hacker: it had read about 1.86 GB of the 5.37 GB file, which is approx. 1,997,159,793 bytes, so I decided to make a 200 MB split. The split was done this way:

    I took 150 MB before the error and 50 MB after the error, so the hotspot is somewhere in the 140-160 MB zone of the file.
    And there you go:
    http://rgho.st/8YyjfwFBz
    Attached images: precomp045_1.PNG, precomp045_2.PNG

  19. #16
    Programmer schnaader's Avatar
    Join Date
    May 2008
    Location
    Hessen, Germany
    Posts
    539
    Thanks
    192
    Thanked 174 Times in 81 Posts
    Quote Originally Posted by Razor12911 View Post
    I took 150 MB before the error and 50 MB after the error, so the hotspot is somewhere in the 140-160 MB zone of the file.
    And there you go:
    http://rgho.st/8YyjfwFBz
    That link doesn't offer a download without a password.
    http://schnaader.info
    Damn kids. They're all alike.

  20. #17
    Member
    Join Date
    Jul 2013
    Location
    Stanford, California
    Posts
    24
    Thanks
    7
    Thanked 2 Times in 2 Posts
    Thanks for the reply and sorry for the delay. I thought I'd clarify: while I believe that adding deduplication (in processing or output) directly to precomp would perhaps be of benefit, I'm alright with running my own deduplication pass on the precomp output. I think other tools already do deduplication well enough, so I could just use those, and some of them may already work on a stream. So while adding deduplication directly to precomp might save the time of repeatedly unwrapping (and processing) a nested structure, as Shelwien suggests, for the purposes of achieving an efficient storage footprint in the end (whether through recompression or a separate deduplication pass plus recompression), it would be alright with me if precomp let another tool do the deduplication.

    So then in my issue post, I was advocating that precomp generalise what it processes so that inbound data that arrives in a multitude of forms (whether that's raw, a tar file, cpio, ZFS stream, and so on) is normalised to a canonical representation of each of those popular uncompressed formats in such a way that a separate deduplication pass would be more successful. Imagine that some cpio file format inserts a header or checksum every 16 kilobytes of data regardless of what's coming in. Then a variable block deduplication pass that averages between 24 and 48 kilobytes would not consider a big file stored in that cpio file format to be the same as the raw file. So, even after deduplication, the data would need to be stored twice. All one would need to do to let those streams become equivalent is perform some basic reversible transform that translates between formats or translates to a canonical format so that deduplication will have maximal benefit whilst the "corner case" metadata (those headers, the individual intricacies of that file format) are shoved to later in the stream, or bundled up in chunks after allowing a long run of data. It could almost be enough to just artificially increase the format's internal block size, or move internal magic bytes to the end of the stream.
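
    As a toy example of such a reversible "stretching" transform, assume a hypothetical container that inserts a 16-byte header before every 16 KiB of payload (both numbers made up here). Moving all headers behind the payload leaves one long contiguous run for a variable-block deduplicator, and because the layout is deterministic, the transform can be undone exactly given the original length:

    Code:
    // Toy de-interleaving transform for a hypothetical container format:
    // input  H P(16K) H P(16K) ...   ->   output  P P P ... H H H
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    constexpr std::size_t kBlock  = 16 * 1024;  // payload bytes per header (assumed)
    constexpr std::size_t kHeader = 16;         // header size in bytes (assumed)

    std::vector<unsigned char> canonicalize(const std::vector<unsigned char>& in) {
        std::vector<unsigned char> payload, headers;
        for (std::size_t pos = 0; pos < in.size();) {
            const std::size_t h = std::min(kHeader, in.size() - pos);
            headers.insert(headers.end(), in.begin() + pos, in.begin() + pos + h);
            pos += h;
            const std::size_t p = std::min(kBlock, in.size() - pos);
            payload.insert(payload.end(), in.begin() + pos, in.begin() + pos + p);
            pos += p;
        }
        payload.insert(payload.end(), headers.begin(), headers.end());
        return payload;                 // same bytes, rearranged reversibly
    }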

    From (1) it sounds like you're referring to adding deduplication to precomp itself. That's fine if it's a direction you'd like to take. However, as I see it, other tools already do deduplication on their own, and if they can operate on a stream, I can achieve the same result by leveraging them. My only hesitation about having it added to precomp (only in the case that it's an option I can't turn off) is that an increasing amount of RAM would be needed to store the deduplication information (the set of checksums) - unless it uses the filesystem for this. Already, ddar has the advantage of using a flat amount of memory, and the sqlite database representing the deduplication tables can live on disk, on a fast SSD, or on a RAM disk. Anyhow, I might say about (1) that it's work I personally wouldn't need to make use of, since I can do this another way. I'd like it to be practical to unconditionally feed large continuous streams (like a ZFS send) through precomp in order to achieve efficient deduplication and compression, and if the memory footprint grew with the size of the input because precomp always allocated memory for deduplication, I wouldn't have that flexibility.

    As to your comments around (2), I would find value in having precomp consider processing the stream in a way that makes use of the file format and the type of data coming in. I don't see why this preparation step would strongly depend on which deduplication tool (srep, zpaq, ddar) is being used. I think that if the effective blocksize is "stretched" enough, then the deduplication tools (which may not, by design or by interest level, look into what kinds of data are coming in) might perform about the same as each other for the same average variable block. I think what matters most for them is the (sometimes tweakable) parameter of average window size: make it small enough and the deduplication can improve, but perhaps with greater memory use or more metadata written to disk. Tools such as srep, zpaq, and ddar (there are others too) could try to normalise data as the feature request to precomp was saying, but I think precomp is already taking on the processing step of analysing common compressed input formats and presenting them in a canonical, uncompressed, reversible way.

    What happens if I end up feeding in data in raw, ZFS stream, and tar/cpio file format is that byte-for-byte, those different streams are almost the same. xdelta with some good parameters would be able to turn one stream into another with minimal encoding of those differences (so would bsdiff, but that would probably be more than what's needed, and wouldn't scale as well). I tested for example zpaq by taking a raw random 1G file, tarring it up, putting it in CPIO format, sending it as a ZFS stream, and also putting it in a VMDK on a virtual machine image filesystem. If I am able to set the block size small enough for deduplication, adding those different formats should incur negligible overhead. But I found that 25% or more of the data needed to be stored once again when I am forced due to resource optimisation to keep my variable block size larger, and that can get as bad as nearly the same amount of data all over again.

    If I see those different formats as just alternate representations of the original raw content, then with some small steps of a reversible transform in place (the effective "stretching" of the blocksize), I can get the overhead down to 5% or less, even if I store the data multiple times. I'd prefer to be able to intrinsically process the data by reasoning about it and understanding it rather than having to resort to a smaller and smaller blocksize to tackle the problem. This need not be done perfectly. Maybe considering a transform that addresses some common formats might suffice. Something like xdelta on its own may suffice. Even "getting it wrong" with respect to deducing the internal format might help with deduplication - as long as the process is done in a way that's reversible, then there won't be any loss in fidelity.

  21. The Following User Says Thank You to Intensity For This Useful Post:

    schnaader (12th June 2016)

  22. #18
    Member Razor12911's Avatar
    Join Date
    May 2016
    Location
    South Africa
    Posts
    31
    Thanks
    51
    Thanked 53 Times in 15 Posts
    The password is schnaader; it's the first time rghost has done this.

  23. #19
    Programmer schnaader's Avatar
    Join Date
    May 2008
    Location
    Hessen, Germany
    Posts
    539
    Thanks
    192
    Thanked 174 Times in 81 Posts
    Quote Originally Posted by Razor12911 View Post
    All Precomp versions after 0.4.2 (tested with 0.4.3, 0.4.4 and 0.4.5) fail on 3 files from the game PES2016.
    Precomp 0.3.8, 0.4.0 and 0.4.2 all work on these files and can restore them (I think).

    Precomp writes 4 GB of data out of nowhere; it isn't reading it from anywhere, it just writes the 4 GB at 34.50%.
    Bought PES2016, got the files and analyzed the problem. It's one of the PNG files in dt80_200E_win.cpk (see attachment): a PNG file with multiple IDAT chunks ("PNG multi"). The first chunk contains 8192 image data bytes, the second one only 3. This combination leads to an undetected 32-bit underflow in the Precomp routines when restoring the file, and this is why it writes 4 GB of data.

    Created issue #50 and will fix this in the upcoming version 0.4.6.
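
    For anyone curious what such an underflow looks like, here is a tiny stand-alone illustration (made-up variable names, not the actual Precomp routine): an unchecked subtraction on unsigned 32-bit counters wraps around instead of going negative, so the restore loop thinks roughly 4 GB are left to write.

    Code:
    #include <cstdint>
    #include <cstdio>

    int main() {
        uint32_t first_idat  = 8192;  // image data bytes in the first IDAT chunk
        uint32_t second_idat = 3;     // image data bytes in the second IDAT chunk

        // Buggy bookkeeping: the smaller value minus the larger one underflows.
        uint32_t remaining = second_idat - first_idat;

        std::printf("remaining = %u bytes (~%.2f GB)\n",
                    (unsigned)remaining, remaining / (1024.0 * 1024.0 * 1024.0));
        // Prints 4294959107 bytes, i.e. just under 2^32 - the "data out of
        // nowhere" that gets written during restoration.
        return 0;
    }
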
    Attached image: pes_bad.png
    http://schnaader.info
    Damn kids. They're all alike.

  24. The Following 4 Users Say Thank You to schnaader For This Useful Post:

    msat59 (17th December 2016),RamiroCruzo (18th December 2016),Razor12911 (19th December 2016),Simorq (18th December 2016)

  25. #20
    Member Razor12911's Avatar
    Join Date
    May 2016
    Location
    South Africa
    Posts
    31
    Thanks
    51
    Thanked 53 Times in 15 Posts
    Oh that's great, at least you found the culprit.

    I got a couple of suggestions for precomp though.

    For some strange reason, I just don't like Precomp making excessive writes to disk, especially if you're running it on an SSD. The only real solution is for Precomp to run things in memory, but there is a quicker way I thought of a couple of days ago: when Precomp does its trial and error, why not have it create a RAM drive and run the tests there? It's possible to do this from the program: Precomp could create a RAM drive, X:\ for example, and all the temp files it creates and deletes would go there, which means more speed and fewer writes to the physical disk. I'm only mentioning this because it's almost equivalent to making Precomp run in memory; if you have already done that, you can just ignore this idea.

    http://www.ltr-data.se/opencode.html/#ImDisk
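
    For what it's worth, the changelog above already says that 0.4.5 keeps JPG/MP3 recompression in memory up to 64 MB, and an earlier reply mentions that the next release should avoid temporary files. A RAM drive gives a similar effect from the outside; internally it boils down to something like this buffer-or-spill pattern (hypothetical helper, not Precomp's API):

    Code:
    // Keep a trial's output in RAM while it is small, fall back to an
    // anonymous temporary file only once it crosses a threshold.
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    constexpr std::size_t kMemLimit = 64 * 1024 * 1024;  // 64 MB threshold (assumed)

    struct TrialBuffer {
        std::vector<unsigned char> mem;  // used while the data stays small
        std::FILE* spill = nullptr;      // temp file once the limit is exceeded

        void write(const unsigned char* p, std::size_t n) {
            if (!spill && mem.size() + n <= kMemLimit) {
                mem.insert(mem.end(), p, p + n);
                return;
            }
            if (!spill) {                          // crossed the limit: spill out
                spill = std::tmpfile();            // auto-deleted on close
                if (!spill) return;                // error handling omitted
                std::fwrite(mem.data(), 1, mem.size(), spill);
                mem.clear();
            }
            std::fwrite(p, 1, n, spill);
        }

        ~TrialBuffer() { if (spill) std::fclose(spill); }
    };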

  26. The Following User Says Thank You to Razor12911 For This Useful Post:

    Simorq (19th December 2016)
