1. Hi!

I just found a way to improve CM compression, which should be applied when bits are modeled using a state machine. This idea came to my mind when thinking about a (faster) alternative to SSE. Some terminology:

* a "historymap" is a table which maps the bit model's state to a prediction.

How it works: instead of mapping a bit history directly to a prediction, select one of several history maps by a small context. I found that a context made of the bit position (0...7, 3 bits) and the previous bit's prediction error (not the corresponding model's prediction, but the final, mixed one!) scaled down to 3 bits works very well.
I could gain up to 7% on very redundant files (like FP.LOG) vs. an average of 3% normally (measured against the file compressed with a direct mapping as a baseline).

The idea behind it: using the bit position as a context separates less accurate predictions (e.g. the first bit separates 256 possible values into 2x128) from more accurate ones (e.g. the last bit separates just 2 possible values, 0/1). This can be further improved by using the previous prediction's error as a measure of predictivity.

A nonlinear quantization of the bit position (finer at the first bits) works too, but not as well. I tried the following position-to-code mappings: 0, 1, 23, 4567; 0, 1, 2, 34567; 0, 1, 234, 567.

I didn't try a similar quantization of the prediction error. This might give another improvement.

I think I'll finish cmm3 during the next few weeks. It is a combination of LZP+CM (using some tricks, such as the one mentioned above). As I said before, my LZP layer is very similar to Matt's in paq9a; and I'll make it open source.

Greets

2. I am happy that you are implementing a mixed LZP+CM algorithm, because that is the only way to get great compression speed! For the quantization: pseudo code?

3. Quantization of z = 3 bits (0..7) to q = 2 bits (0..3) using the mapping 0, 1, 23, 4567 (which works best, ignoring a linear mapping):

q = (z>=1) + (z>=2) + (z>=4);

Mapping larger ranges should use a lookup table.

4. Thanks! Hi !

5. However, this results in a slowdown due to cache misses, since it bloats the history map size from ~512 bytes to 64 * 512 bytes per model (8 models in my implementation). So 8 * 512 bytes become 8 * 64 * 512 = 512^2 bytes, what a mess.

An improvement could be to represent the 2^6 contexts by an approximation, like the state machine used for bit models.

I'm busy till the weekend, when I might try this.

Any suggestions?
