I did some experiments using packJPG and the JPEG Developers Package from Matthias Stirner's new GitHub repository. At first, I had a look at uncmpJPG to see whether it could replace packJPG in Precomp, since just decompressing without the arithmetic coding suits the "Precomp way" better. Then, I compressed its output and also had a look at lpaqJPGtest.
But first, let's have a look at the bare results. The test file is a typical camera JPG including a thumbnail, taken from a Precomp thread (direct link to the image at MediaFire). The test PC was an Intel i5 (M 520, 2.4 GHz, 2 physical cores).
Some conclusions follow. I assumed that uncmpJPG is close to "packJPG without the arithmetic coding part", but haven't verified it in depth, so don't trust me here.

Code:
Original:                          4,307,553 bytes
packjpg 2.5j:            5.12 s,   3,415,122 bytes, decomp 5.23 s
Precomp v0.4.4:                    4,196,559 bytes, 2/2 JPG
lpaqjpgtest:              117 s,   3,822,763 bytes
paq8p -3:                 134 s,   3,113,452 bytes (finds 2 JPEGs: 4608x3456, 1440x1080)
paq8p -4:                 134 s,   3,098,874 bytes (finds 2 JPEGs: 4608x3456, 1440x1080)

Precomp v0.4.4 -cn -i13486:        4,198,382 bytes, 1/1 JPG (only process the second JPG to "hide" it from paq8p)
  paq8p -4:                        3,117,069 bytes (finds the first JPEG: 4608x3456)

uncmpJPG:                 0.9 s,  64,224,983 bytes, recompression: 0.9 s
  Precomp v0.4.4:       14.43 s,   4,368,889 bytes, 2/2 JPG
  7-Zip Ultra LZMA2:       21 s,   4,346,381 bytes
  zpaq v7.05 -method 4: 16.13 s,   3,962,237 bytes
  zpaq v7.05 -method 5: 127.6 s,   3,753,728 bytes
  paq8p -3:            736.66 s,   3,717,779 bytes
  paq8p -4:           4075.35 s,   3,709,691 bytes
  SREP 2.991:                     21,913,027 bytes
    7-Zip Ultra LZMA2:     20 s,   4,368,115 bytes
    zpaq v7.05 -method 5: 75.65 s, 3,774,036 bytes
    paq8p -3:          251.77 s,   3,727,443 bytes
- packJPG does a very good job at giving context to the uncompressed data while retaining a decent speed. Without knowing the context, even paq8p -4 and zpaq -method 5 don't come close to the 3.4 MB that packJPG gives and they take much more time, even after reducing the data with SREP.
- uncmpJPG is around 5-6 times faster than packJPG (0.9 s vs. 5.12 s), so the arithmetic coding takes (at least) 80% of the time.
- lpaqJPGtest is interesting for developers, but not for users.
- The JPG model in PAQ is still better, giving the smallest result here (3.1 MB), but it has other disadvantages (speed, no support for progressive JPGs).
- Using uncmpJPG in Precomp won't pay off without further modifications that support the subsequent compression step.
- uncmpJPG could help when processing multiple JPGs with similarities across them, but this has to be tested.
File specific conclusions:
- Precomp doesn't detect the length of the first JPG correctly, but succeeds in passing the second one to packJPG.
- Both packJPG and uncmpJPG treat the second JPG as "garbage following EOI".
- "Precomp -i | paq8p -4" shows that paq8p doesn't gain much from detecting both streams, even though it's very likely that the second JPEG is just a smaller version of the first one (could someone check this?).
- uncmpJPG only wraps a header around the garbage part, so the second JPG can be processed, as seen in the "uncmpJPG | Precomp" result. The header of the first JPG is also detected again.
The write_ujpg method in uncmpJPG gives a good overview of what the uncompressed file consists of:
- "HDR" part (header): 30,044 bytes here
- 3 "CMP" parts (decompressed DCT coefficients): 63,700,992 bytes here
- "PAD" part (padding): only 4 bytes here
- "GRB" part (garbage): 493,914 bytes here
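To make that section layout concrete, here is a small parser sketch in Python. I haven't verified the exact on-disk framing that write_ujpg produces, so the "3-byte ASCII marker followed by a 32-bit little-endian length" layout below is a hypothetical assumption for illustration only:

```python
import struct

def parse_ujg_sections(data: bytes):
    """Walk a .ujg-style container, assuming (hypothetically) that each
    section is a 3-byte ASCII marker ('HDR', 'CMP', 'PAD', 'GRB')
    followed by a 32-bit little-endian payload length and the payload
    itself. The real layout in write_ujpg may differ."""
    sections = []
    pos = 0
    while pos + 7 <= len(data):
        marker = data[pos:pos + 3].decode('ascii')
        (length,) = struct.unpack_from('<I', data, pos + 3)
        sections.append((marker, length))
        pos += 7 + length  # skip marker, length field and payload
    return sections

# Toy file with made-up payload sizes, just to exercise the parser.
toy = b''
for marker, size in [(b'HDR', 4), (b'CMP', 8), (b'PAD', 2), (b'GRB', 3)]:
    toy += marker + struct.pack('<I', size) + b'\x00' * size

print(parse_ujg_sections(toy))
# [('HDR', 4), ('CMP', 8), ('PAD', 2), ('GRB', 3)]
```

With such a split, a container-aware tool could route each section to a different compression method (e.g. only preprocess the CMP parts and pass GRB through unchanged).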
The first CMP part consists of 64 runs of ( 4608/8 ) * ( 3456/8 ) = 248,832 16-bit values, the following two are only half the size. Signed 16-bit values are bad for most compressors, so rearranging and preprocessing them could help a lot. The range also decreases with increasing file position, as one would expect from JPG compression.
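One possible preprocessing step (a sketch of a generic technique, not what packJPG itself does) is to zigzag-map the signed 16-bit coefficients to small unsigned values and then split the stream into a low-byte plane and a high-byte plane. After the mapping, most high bytes are zero, which generic LZ/CM compressors handle much better than interleaved signed shorts:

```python
def preprocess(coeffs):
    """Hypothetical preprocessing for signed 16-bit DCT coefficients:
    zigzag-map 0, -1, 1, -2, 2, ... to 0, 1, 2, 3, 4, ... so that
    small magnitudes get small unsigned codes, then de-interleave the
    16-bit values into all low bytes followed by all high bytes."""
    mapped = [((v << 1) ^ (v >> 15)) & 0xFFFF for v in coeffs]
    low = bytes(m & 0xFF for m in mapped)
    high = bytes(m >> 8 for m in mapped)
    return low + high

# Small coefficients stay small; the high-byte plane is mostly zeros.
print(list(preprocess([0, -1, 1, -300, 300])))
# [0, 1, 2, 87, 88, 0, 0, 0, 2, 2]
```

Since the coefficient range shrinks towards the end of each CMP part, the high-byte plane becomes one long run of zeros there, which is essentially free for any compressor.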