
Thread: zpaq updates

  1. #2131
    Matt Mahoney (Expert)
    Updated http://mattmahoney.net/dc/zpaq706.zip to v7.06c
    This is not a release. I will do more testing first.

    This fixes crashes found during fuzz testing by Maciej Adamczyk and compiler warnings found by Petr Písař and Evan Nemerson.

    - Fixes crash caused by bad fragment IDs in index block when not detected by a checksum error.
    - Fixes crash caused by writing from an undersized block during decompression when not detected by a checksum error.
    - Fixes warning about negative char constants on ARM (where char is unsigned).
    - Fixes warning about unsigned comparison from read() on ARM (which normally returns ssize_t).
    - Fixes warning about shifting negative constants being undefined.
    - Replaced g++ with $(CXX) in Linux Makefile.

    The crashes can only occur in archives deliberately constructed to cause them, not by random corruption, because they require recalculating the SHA-1 checksum.
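
    The shift and char-signedness items above are ordinary C++ portability pitfalls. As a minimal illustration (not the actual zpaq code; the real expressions differ), the pattern of the fixes looks roughly like this:

    #include <cstdio>

    int main() {
        // Left-shifting a negative constant is undefined behavior in C++;
        // an equivalent multiply has the same value and is well defined.
        int shifted = -3 * 16;            // instead of (-3 << 4)

        // On ARM, plain char is unsigned, so a comparison against a negative
        // char constant can never be true; spelling out the signedness is portable.
        signed char marker = -1;          // instead of: char marker = -1;

        // Similarly, read() returns ssize_t; storing the result in an unsigned
        // type makes the error check (n < 0) always false.
        std::printf("%d %d\n", shifted, (int)marker);
        return 0;
    }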

  2. The Following 6 Users Say Thank You to Matt Mahoney For This Useful Post:

    Bulat Ziganshin (10th March 2016), Gerhard (4th March 2016), mlogic (6th March 2016), sh0dan (4th March 2016), surfersat (4th March 2016), thometal (3rd March 2016)

  3. #2132
    Matt Mahoney (Expert)
    Another update: v7.06d will no longer append an empty 104-byte header when there are no files to update.

  4. The Following 4 Users Say Thank You to Matt Mahoney For This Useful Post:

    Gerhard (4th March 2016), mlogic (6th March 2016), Rockshabird (23rd April 2016), sh0dan (4th March 2016)

  5. #2133
    joerg (Member, Germany)
    Update for the wonderful zpaq plugin for totalcmd
    -
    Version 1.3 (2016-02-25)
    -
    What's new:

    - the progress dialog for packing/unpacking now receives constant updates,
    meaning that you can easily send any operation to the background,
    e.g. when handling large single files
    (they formerly blocked the dialog/operation until they were nearly finished)

    - added an optional normal archive view
    (i.e. the default view showing the latest archive state)
    in an extra dir when listing an archive via the 'Show all archive versions' option
    -> the name of the dir can be customized
    -> it will "move" all relevant files (incl. path) out of the version sub-dirs into that extra dir
    -> it will "remove" the latest version dir, as all files found in there represent the latest state anyway
    -> it can optionally include all "deleted" files, meaning it would show a combination of all files (and paths)
    that the archive ever contained, which might help in cases where you did not add files to
    an archive with the '-nodelete'/'Don't mark files found only in the last update as deleted' option
    - added an option to warn about pending background operations (which would use additional CPU/RAM resources)
    - fixed: the detailed custom string for the 'Show archive version names as detailed timestamp'
    option now replaces Windows-forbidden file name characters ('<', '>', ':' etc.)
    (a rough sketch of this kind of replacement follows after the download links)

    best regards

    download binary: http://totalcmd.net/download.php?id=ZPAQ
    download source: http://wincmd.ru/files/9924355/wcx_zpaq_13_source.rar
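
    For reference, a hypothetical sketch of the kind of forbidden-character replacement described above (illustrative only; the plugin's actual implementation is not shown here and may differ):

    #include <string>

    // Replace characters that Windows does not allow in file names.
    std::string sanitize_name(std::string name) {
        const std::string forbidden = "<>:\"/\\|?*";
        for (char &c : name)
            if (forbidden.find(c) != std::string::npos)
                c = '_';                  // the substitute character is arbitrary
        return name;
    }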

  6. The Following 4 Users Say Thank You to joerg For This Useful Post:

    Bulat Ziganshin (10th March 2016), Gerhard (7th March 2016), Gonzalo (9th March 2016), Matt Mahoney (7th March 2016)

  7. #2134
    Matt Mahoney (Expert)
    Update v7.06e (same link).

    - Fixed uncaught StringBuffer overflow exceptions in Linux. This was due to exceptions not propagating through the JIT assembler code (although this worked in Windows). The new version of libzpaq will catch these exceptions, propagate an error code through the assembler, and then call error("Write error"), which you will see instead. This means that when zpaq reads a corrupted archive it will skip the block and try to read the next one for a partial extraction, as in Windows, instead of aborting. In the new libzpaq, if your virtual Writer::write() method throws bad_alloc during decompression, then libzpaq will call error("Out of memory"). Any other exception will call error("Write error"). (A rough sketch of a user-defined Writer follows after this list.)

    - In v7.06d I fixed zpaq so that it no longer appends an empty 104-byte header when no files are added or removed. This now works with -test too. (The code is also simpler: I added a test mode to class Archive and removed the Counter and CounterBase classes.)

    - Added a check for archive update access before and after scanning the local directory tree. In earlier versions, if access was intermittent (for example, plugging in an external drive while zpaq was running), zpaq could overwrite the archive with a new one instead of updating it. (This happened to one user.) The new version will exit if the archive unexpectedly appears or disappears before zpaq starts updating it.

    - Fixed a reported clang error about left shifting a negative number. (I think it's legal code and couldn't reproduce the error, but I replaced it with a multiply anyway).

    - Added a proper Linux Makefile (written by Petr Písař) to install /usr/local/bin/zpaq, /usr/local/lib/libzpaq.so, and a man page.
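
    To make the Writer discussion above concrete, here is a rough sketch of a user-supplied output Writer for libzpaq, showing where such exceptions would originate. This is written from memory of the libzpaq.h interface, so the exact signatures may differ slightly; treat it as an illustration, not the library's documented API.

    #include "libzpaq.h"
    #include <vector>

    // Collects decompressed output in memory; either method may throw
    // std::bad_alloc if the buffer cannot grow.
    class BufWriter: public libzpaq::Writer {
      std::vector<char> buf;
    public:
      void put(int c) {                      // one byte at a time
        buf.push_back(char(c));
      }
      void write(const char* p, int n) {     // bulk output
        buf.insert(buf.end(), p, p + n);
      }
    };

    // Per the note above: if write() throws bad_alloc during decompression,
    // the new libzpaq reports it as error("Out of memory"); any other
    // exception becomes error("Write error").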

  8. The Following 2 Users Say Thank You to Matt Mahoney For This Useful Post:

    avitar (7th March 2016), Mike (7th March 2016)

  9. #2135
    Member (Miami, Florida)
    Dear Matt,

    First, let me say thank you for the great backup utility!
    I'm currently working on a GUI for zpaq, but I'm hitting a wall.


    Question: I've been trying for days to list the contents of a folder from a test zpaq archive that contains several incremental versions.
    NOTE: The directory is new to a specific version of the archive (it was created after the first version of the zpaq archive).


    I tried the following command:
    zpaq64 list archive.zpaq "\directory-in-archive"


    How do I go about getting this directory listing from the zpaq archive without comparing to a local file on the PC?


    Best regards,
    Juan Avila

  10. #2136
    Matt Mahoney (Expert)
    Use -only, like
    zpaq64 list archive.zpaq -only "\directory-in-archive"

    You can also use wildcards * and ?. When there are no file name arguments, it will not compare to the external directory.
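
    For instance (the path and patterns here are only illustrative):

    zpaq64 list archive.zpaq -only "\directory-in-archive\*"
    zpaq64 list archive.zpaq -only "*.jpg"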

  11. #2137
    Matt Mahoney (Expert)
    Update v7.06f (same link). I fixed some signed arithmetic overflow warnings and updated the man page to describe the advanced compression options and fix some mistakes.

  12. #2138
    MiroGeorg (Member, Bulgaria)
    Hi all.
    I have a problem with zpaq64 when compressing about 1.7 TB of data (-method 26, 27, 28...). zpaq64 throws an exception when all the data is compressed, at 100%. The same data is compressed daily, so the problem is very well "tested".
    It has also been tested on several machines with 24-32 GB of RAM; same exception.

    With < 1.5 TB of data the problem is gone.

    I'll do some tests with 7.06... and will post more info...

  13. #2139
    thometal (Member)
    Quote Originally Posted by MiroGeorg View Post
    Hi all.
    I have serious problems compressing ~1.7 TB of data (-method 26, 27, 28...). zpaq64 throws an exception when all the data is compressed, at 100%.
    The problem is consistent. It has been tested on several machines with 24-32 GB of RAM.

    With < 1.5 TB of data the problem is gone.

    I'll do some tests with 7.06...
    If you have the exact error message, please post it. It's hard to guess at a solution without a precise error message.

  14. #2140
    thometal (Member)
    Hi Matt,

    It seems that zpaq uses just one thread if the content to be compressed appears to be random. Is this correct? And if so, why?
    Last edited by thometal; 11th March 2016 at 18:25.

  15. #2141
    Matt Mahoney (Expert)
    Random data is stored with no compression. This is fast compared to computing SHA-1 hashes and disk I/O. The hashing for dedupe is done in one thread.

    > I have problem with ZPAQ64

    What is the message?

  16. The Following User Says Thank You to Matt Mahoney For This Useful Post:

    thometal (12th March 2016)

  17. #2142
    Matt Mahoney (Expert)
    Update 7.06g (same link). I made some changes to make zpaq more robust to corrupted archives by imposing some reasonable input size limitations. Journaling blocks must be smaller than 4 GiB. Fragments must be smaller than 2 GiB. Filenames, comments, and attribute strings must be under 64 KiB. Journaling files cannot have fragment IDs pointing to streaming segments. I plan to update the specification to include these limits.

    For streaming archives, the comment field is ignored, eliminating the need for several error checks. The practical effect is that dates and attributes stored there are ignored and not restored when extracted. File sizes will be displayed as -1 when listing.

    v7.06f and earlier allow dedupe against streaming updates in mixed archives. For example:

    zpaq a archive file -method s0.0
    zpaq a archive file -to file2

    would dedupe file2 if it is not split into fragments, which will cause extraction to fail in v7.06g. However v7.06g will now store the file twice so extraction will succeed. The reason for this change is to support one pass compression and decompression with unlimited block and segment sizes in streaming archives. v7.06f has bugs that will fail on large blocks and segments. For now, 7.06g will detect these cases and give an error message until I finish writing the code. Streaming decompression will be single threaded.

  18. The Following User Says Thank You to Matt Mahoney For This Useful Post:

    surfersat (30th March 2016)

  19. #2143
    thometal (Member)
    What happened with zpaqd? Is it still maintained? It seems it hasn't been updated in a long time.

  20. #2144
    Matt Mahoney (Expert)
    I'm planning to update zpaqd after the next zpaq release. I want to be able to create journaling archives for testing and add a feature to encrypt files.

    I also plan to create a new version of unzpaq to read journaling and encrypted archives and update the spec. Conceptually, a journaling archive works by first extracting everything in streaming format to create a set of temporary jDC* files (identified by "jDC\x01" in the comment suffix). Those files can then be converted into the remaining archive contents without any need for the original archive. This is a clean way to mix streaming and journaling format, which is the reason to disallow fragment pointers into streaming files.
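
    (Not from zpaq's sources, but as a tiny illustration of the identification rule just mentioned, a hypothetical check for the "jDC\x01" comment suffix might look like this:)

    #include <string>

    // True if a segment comment ends with the 4-byte tag "jDC\x01",
    // i.e. the segment holds journaling data in streaming form.
    bool is_journaling_comment(const std::string& comment) {
      static const std::string tag("jDC\x01", 4);
      return comment.size() >= tag.size() &&
             comment.compare(comment.size() - tag.size(), tag.size(), tag) == 0;
    }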

  21. #2145
    Gonzalo (Member, Argentina)
    Over the past few days I've been doing a lot of comparisons between zpaq and other archivers.

    Obviously, the mere fact of being incremental gives it an advantage in terms of versatility. zpaq is ideal for backups and updates archives very efficiently.

    But, on the other hand, I noticed zpaq compresses much more slowly than others (say, 7-Zip or FreeArc) for the same ratio. Or, put another way, it compresses worse than others at the same speed.
    The question is: is that a side effect of the plasticity of the format? Maybe the need for non-solid archiving for rapid reuse of the older data... I know for a fact zpaq's author knows what he's doing, for he is an expert, and zpaq itself seems a polished program. So, can we expect better ratios in the next versions? Or speed-ups?
    What about smart algorithm selection? Other algorithms? PPMd? An exe filter? A wave compressor? Etc., etc...

    You might consider releasing the LZ77 engine as a separate product so we can compare it with other stream compressors (say, LZMA, zstd, and so on) and look for a more efficient solution.

  22. #2146
    Bulat Ziganshin (Programmer)
    lzma has the best speed/compression ratio among all compressors on binary data. tornado is also pretty good. Add all those preprocessors implemented in freearc and you will see that the only way to compete is to employ just the same compression algos. zpaq has a good first stage (deduplication), but Matt's own algos can't compete against lzma, and especially not against those employed in fa (or nz/zcm).

    zstd may eventually outperform tornado, but right now it replaces only the fast tornado methods. lzma has a lot of smart tricks, so you will need someone who understands those tricks to write an OSS compressor that can outperform lzma.

  23. #2147
    Gonzalo (Member, Argentina)
    @Bulat: Well, even the deduplication stage seems sub-optimal to me. I compared zpaq's to your rep filter, and rep is much faster and more accurate. If anyone wants the real numbers, I can run the tests.

  24. #2148
    thometal (Member)
    Quote Originally Posted by Matt Mahoney View Post
    I'm planning to update zpaqd after the next zpaq release. I want to be able to create journaling archives for testing and add a feature to encrypt files.

    I also plan to create a new version of unzpaq to read journaling and encrypted archives and update the spec. Conceptually, a journaling archive works by first extracting everything in streaming format to create a set of temporary jDC* files (identified by "jDC\x01" in the comment suffix). Those files can then be converted into the remaining archive contents without any need for the original archive. This is a clean way to mix streaming and journaling format, which is the reason to disallow fragment pointers into streaming files.
    Oh, that sounds really nice. So then I can append versions created with zpaqd plus a cfg to journaling archives created with zpaq? For example for jpg, that would improve the compression ratio for e.g. filesets with many jpgs. Or am I confused?

    Quote Originally Posted by Gonzalo View Post
    Over the past few days I've been doing a lot of comparisons between zpaq and other archivers.

    Obviously, the mere fact of being incremental gives it an advantage in terms of versatility. zpaq is ideal for backups and updates archives very efficiently.

    But, on the other hand, I noticed zpaq compresses much more slowly than others (say, 7-Zip or FreeArc) for the same ratio. Or, put another way, it compresses worse than others at the same speed.
    The question is: is that a side effect of the plasticity of the format? Maybe the need for non-solid archiving for rapid reuse of the older data... I know for a fact zpaq's author knows what he's doing, for he is an expert, and zpaq itself seems a polished program. So, can we expect better ratios in the next versions? Or speed-ups?
    What about smart algorithm selection? Other algorithms? PPMd? An exe filter? A wave compressor? Etc., etc...

    You might consider releasing the LZ77 engine as a separate product so we can compare it with other stream compressors (say, LZMA, zstd, and so on) and look for a more efficient solution.
    I cannot verify this. In all my tests, zpaq + deduplication was better (faster in compression or decompression, or smaller) than 7-Zip. Do you have an example?

  25. #2149
    thometal (Member)
    Quote Originally Posted by Gonzalo View Post
    @Bulat: Well, even the deduplication stage seems sub-optimal to me. I compared zpaq's to your rep filter, and rep is much faster and more accurate. If anyone wants the real numbers, I can run the tests.
    Yes, please. Do you know the -fragment option?

  26. #2150
    Matt Mahoney (Expert)
    I updated to zpaq v7.06h (same link). I rewrote streaming extraction so there is no longer a 2 GB limit on blocks or segments.

    zpaq compresses worse than other compressors on many older benchmarks. Benchmarks generally contain compressible, non-duplicate files, which is not realistic for backups. Most backups have lots of already compressed files (jpg, mp4, zip, pdf, etc.) and lots of duplicate files. zpaq detects incompressible data and stores it, which is faster. It dedupes by comparing to hashes in the archive in fragments of 64 KB average size. This is different from srep, which can only dedupe against its own input or requires extracting and scanning the earlier backup to dedupe against it. srep is faster because it uses a fast keyed hash (vmac) instead of SHA-1, and compresses better because there are no fragments. (It is more like LZ77 with long matches.) These methods can't be used for incremental updates. vmac collision resistance depends on the secrecy of the key, which is not possible here because the key would have to be stored in the archive.

    zpaq uses an average fragment size of 64 KB. You can change that with the -fragment option, like -fragment 3 for 2^3 = 8 KB. This requires more memory because the fragment hash table requires about 30-40 bytes per fragment. You have to use the same option for all updates or else dedupe won't recognize identical data in different sized fragments.
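
    For a rough sense of scale (simple arithmetic from the numbers above, not a measured figure): at the default 64 KB average fragment size, 1.7 TB works out to about 26 million fragments, or roughly 0.8-1 GB of fragment hash table at 30-40 bytes each; with -fragment 3 (8 KB average), the same data is about 210 million fragments, roughly 6-8 GB of table.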

    zpaq outperforms zip, 7zip, and rar on the 10 GB benchmark, which is more realistic for backups. http://mattmahoney.net/dc/10gb.html
    (and summarized in the graph at http://mattmahoney.net/dc/zpaq.html ). pcompress beats it on the middle part of the Pareto frontier, but it only runs under Linux and you can't update the archive. Many other programs beat it if you only consider decompression speed, but that isn't very useful for backups where you compress more often than decompress.

    If you're interested in the compression algorithm, I describe it in http://mattmahoney.net/dc/zpaq_compression.pdf
    It is self describing, so future improvements won't break compatibility with older versions. It uses LZ77 (variable length codes or context modeled), BWT, and context mixing + E8E9 depending on the compression level and analysis of the input. In the future I might add delta coding, dictionary coding, a JPEG model, and grouping files by contents instead of filename extension.

  27. The Following 3 Users Say Thank You to Matt Mahoney For This Useful Post:

    Cyan (27th March 2016), Gonzalo (13th March 2016), pothos2 (13th March 2016)

  28. #2151
    Gonzalo (Member, Argentina)
    It dedupes by comparing to hashes in the archive in fragments of 64 KB average size. This is different from srep, which can only dedupe against its own input or requires extracting and scanning the earlier backup to dedupe against it. srep is faster because it uses a fast keyed hash (vmac) instead of SHA-1, and compresses better because there are no fragments. (It is more like LZ77 with long matches.) These methods can't be used for incremental updates.
    Yeah, as I suspected: a side effect of the backup-friendly nature. So, different archivers for different needs. Thank you for the details.

  29. #2152
    Matt Mahoney (Expert)
    I am updating the spec. http://mattmahoney.net/dc/zpaq205.pdf
    It is not released yet because I still have to write the corresponding unzpaq205.cpp. Unlike the current reference decoder, it will handle journaling and encrypted formats and indexes according to the spec.

    The changes to the spec are:
    - Mixed journaling and streaming archives are not allowed.
    - Filename and comment fields and attribute strings are limited to less than 64 KiB.
    - Fragments must be less than 2 GiB.
    - Uncompressed journaling blocks must be less than 4 GiB. (There is still no limit on streaming segments or blocks).
    - Streaming comment field is undefined except to distinguish from a journaling block by suffix "jDC\x01". (Previously it also specified the size and last modified date if present).

    zpaq 7.05 will create and update mixed journaling and streaming archives but I plan to remove this capability in 7.06 when I release it. It will still read mixed archives produced by earlier versions as long as there are no deduped fragments pointing to streaming segments. Streaming archives are extracted in a single thread.

  30. #2153
    thometal (Member)
    Is it normal that some blocks are just half of the specified block size when I compress files?

  31. #2154
    Matt Mahoney (Expert)
    Yes. zpaq will group files into smaller blocks to avoid splitting them if it can. It sorts by filename extension, then by size from largest to smallest, so that the next file type will be more likely to start a new block. It will group or split random files into smaller blocks because there is no advantage to large blocks when storing data uncompressed.
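
    A toy comparator capturing the ordering just described (extension first, then size descending); this is only a sketch, and zpaq's actual grouping code differs:

    #include <algorithm>
    #include <string>
    #include <vector>

    struct Entry { std::string name; long long size; };

    static std::string ext_of(const std::string& n) {
      size_t dot = n.rfind('.');
      return dot == std::string::npos ? std::string() : n.substr(dot + 1);
    }

    void order_for_packing(std::vector<Entry>& files) {
      std::sort(files.begin(), files.end(),
                [](const Entry& a, const Entry& b) {
                  std::string ea = ext_of(a.name), eb = ext_of(b.name);
                  if (ea != eb) return ea < eb;   // group files by extension
                  return a.size > b.size;         // then largest first
                });
    }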

  32. #2155
    Matt Mahoney (Expert)
    I released zpaq v7.06. http://mattmahoney.net/dc/zpaq.html

    It will not create mixed streaming and journaling archives, although it will still extract them. It will not allow block sizes larger than 11 (2 GB) like -method 512. I also posted a link to the new spec (v2.05) although I still need to write the new reference decoder that goes with it. The man page and online docs are also updated to describe the advanced compression methods.

    There are 3 Windows executables now.
    zpaq64.exe for 64 bit Vista and later.
    zpaq.exe for 32 bit Vista and later. Limited to 2 GB memory.
    zpaqxp.exe for 32 bit XP and later. Also will not add alternate streams (like old zpaq.exe).

  33. The Following 3 Users Say Thank You to Matt Mahoney For This Useful Post:

    Gerhard (18th March 2016), mlogic (19th March 2016), thometal (17th March 2016)

  34. #2156
    thometal (Member)
    Quote Originally Posted by Matt Mahoney View Post
    I released zpaq v7.06. http://mattmahoney.net/dc/zpaq.html

    It will not create mixed streaming and journaling archives, although it will still extract them. It will not allow block sizes larger than 11 (2 GB) like -method 512. I also posted a link to the new spec (v2.05) although I still need to write the new reference decoder that goes with it. The man page and online docs are also updated to describe the advanced compression methods.

    There are 3 Windows executables now.
    zpaq64.exe for 64 bit Vista and later.
    zpaq.exe for 32 bit Vista and later. Limited to 2 GB memory.
    zpaqxp.exe for 32 bit XP and later. Also will not add alternate streams (like old zpaq.exe).
    Why did you introduce a 2 GB block limit?

  35. #2157
    thometal (Member)
    Quote Originally Posted by Matt Mahoney View Post
    Yes. zpaq will group files into smaller blocks to avoid splitting them if it can. It sorts by filename extension, then by size from largest to smallest, so that the next file type will be more likely to start a new block. It will group or split random files into smaller blocks because there is no advantage to large blocks when storing data uncompressed.
    Hm, interesting. I have 2 groups of 5 files each (each group with a different file extension, all files within a group sharing the same extension), both with a compressibility grade between 173-194. Only the first file gets a block at 50% of the block size (the compressibility grade of this block is not a local minimum or maximum); all other blocks are around the specified block size except the last. Also, all blocks contain different files. I do not know whether this is still the normal behaviour?

  36. #2158
    Matt Mahoney (Expert)
    Not sure, but you could try a larger block size like -method 15, 16, 17,...

    > Why did you introduce a 2 GB block limit?

    The spec allows 4 GiB journaling blocks and 2 GiB fragments. (Streaming is unlimited). Fragment size only affects dedupe, not compression, and is normally around 64 KiB. I put a 2 GiB limit because the size is stored as a signed int in current and earlier versions of zpaq. Anything bigger would break it.

    Block size affects memory usage. A block is decompressed to memory, so you would need 4 GB per thread plus whatever memory the model used. The model would have to use a lot for blocks that large to be useful. Currently, methods 1..4 use 5-8x block size and 5 uses 16x per thread.

    Furthermore, zpaq context mixing components all use 32-bit contexts, so huge models don't make sense. An ICM or ISSE could not use more than 64 GiB of memory, and practically less, because if the hash table index is more than 24 bits (256 MiB), you start losing bits of the 8-bit checksum and get more collisions. An ICM or ISSE maps a 32-bit context hash on a nibble boundary to a hash table of 16-byte rows (15 states and a checksum) using the low bits of the hash, and then uses the next 8 bits to detect collisions, retrying 2 more rows within a cache line. Likewise, a CM is limited to 16 GiB. A MATCH is limited to a 4 GiB buffer and a 16 GiB hash table.

    So a 2 GiB block limit for compression (that I could increase to 4 GiB) or 4 GiB for decompression seems like a reasonable limit. But who knows if these design decisions will seem archaic in 25 years like the 32 KiB window for deflate?
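
    A simplified sketch of the 16-byte row layout and probing just described (illustration only; the real ICM/ISSE code in libzpaq differs in detail):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct Row { uint8_t checksum; uint8_t state[15]; };   // 16 bytes per row

    struct ContextTable {
      std::vector<Row> rows;                          // row count is a power of 2
      explicit ContextTable(std::size_t n) : rows(n) {}

      Row& find(uint32_t h) {
        std::size_t index = h & (rows.size() - 1);    // low bits select a row
        uint8_t chk = uint8_t(h / rows.size());       // next 8 bits act as a checksum
        for (std::size_t i = 0; i < 3; ++i) {         // probe 3 rows within a cache line
          Row& r = rows[index ^ i];
          if (r.checksum == chk) return r;            // hit
          if (r.state[0] == 0) { r.checksum = chk; return r; }  // empty slot
        }
        return rows[index];                           // simplified eviction policy
      }
    };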

  37. The Following 2 Users Say Thank You to Matt Mahoney For This Useful Post:

    pothos2 (18th March 2016), schnaader (18th March 2016)

  38. #2159
    thometal (Member)
    Quote Originally Posted by Matt Mahoney View Post
    Not sure, but you could try a larger block size like -method 15, 16, 17,...
    I tried, and it stays at half of the block size.

  39. #2160
    Matt Mahoney (Expert)
    I released a bug fix. http://mattmahoney.net/dc/zpaq.html

    zpaq v7.07 fixes a bug introduced in v7.06. When creating a new encrypted multi-part archive (zpaq add part??.zpaq -key), v7.06 would create independent salts for the index (part00.zpaq) and first part (part01.zpaq). If you later did a remote update with only the index local (using any version), then the new part (part02.zpaq) would be incorrectly encrypted and not readable because it was based on the salt in the index and not the original part 1 which is no longer present.

    In all other zpaq versions, both salts differ in only the first byte (by XOR with '7' XOR 'z'). zpaq will compute the salt and offset from the index so it can correctly encrypt the new part using AES-256 in CTR mode with independent keystreams for both files. (An encrypted archive never starts with '7' or 'z', but an unencrypted archive always does).

    Updating an existing archive with v7.06 that was created with an earlier version works correctly. Also, if you keep the archive local or delete the index, then updates will still be correct with any version. To check if the archive is correct, bytes 1..31 of part00.zpaq and part01.zpaq should be the same.

    I introduced the bug when adding a test for intermittent archive access.
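
    For anyone wanting to check their archives programmatically, here is a small sketch of the salt relationship described above (illustration only; the real logic is in zpaq's archive/encryption code):

    #include <cstdint>

    // Derive one 32-byte salt from the other: only byte 0 differs,
    // by XOR with '7' ^ 'z'; bytes 1..31 are identical.
    void other_salt(const uint8_t in[32], uint8_t out[32]) {
      for (int i = 0; i < 32; ++i) out[i] = in[i];
      out[0] ^= '7' ^ 'z';
    }

    // Consistency check from the post: bytes 1..31 of part00.zpaq and
    // part01.zpaq should match on a correctly created archive.
    bool salts_consistent(const uint8_t s0[32], const uint8_t s1[32]) {
      for (int i = 1; i < 32; ++i)
        if (s0[i] != s1[i]) return false;
      return true;
    }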

  40. The Following 2 Users Say Thank You to Matt Mahoney For This Useful Post:

    mlogic (19th March 2016), PDP8user (28th March 2016)
