Haar DWT image compression is described as a series of filtering and downsampling steps. But with the help of linear algebra we just need to compute one operation: Compressed Image (C) = Transformation Matrix (H) * Original Image (I) * Transformation Matrix Transpose (H^t). How can we relate the subsampling that occurs in the first method to the matrix multiplication in the second? The second method always keeps the same matrix size, so where is the value of compression?!

Haar coefficients are differences between average values of neighboring regions - they can be found by a simple recurrence: take the main part of https://github.com/JarekDuda/FractalWavelets (below) but replace the vectors with powers of 2 (1, 2, 4, 8, ...) to get the standard Haar.
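For illustration, the recurrence can be sketched in a few lines of Python. This is my own sketch, not the code from the linked repository; it assumes a power-of-2 input length and produces unnormalized coefficients:

```python
def haar_1d(x):
    """Recursive 1-D Haar decomposition via pairwise sums and differences.

    Returns [overall sum, then difference coefficients, coarsest level first].
    Input length must be a power of 2; coefficients are unnormalized.
    """
    if len(x) == 1:
        return x
    sums  = [a + b for a, b in zip(x[0::2], x[1::2])]  # neighbor sums (2x averages)
    diffs = [a - b for a, b in zip(x[0::2], x[1::2])]  # neighbor differences
    return haar_1d(sums) + diffs  # recurse on the coarse (sum) part only
```

For example, `haar_1d([1, 2, 3, 4])` returns `[10, -4, -1, -1]`: the total sum, the half-vs-half difference, and the two neighbor differences.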

Thank you for your answer. My main concern is: in the linear algebra method, where can we see the subsampling that occurs in the first method and that decreases the amount of stored or transmitted data?

The Haar transform is just a reversible linear transform, e.g. for x0, x1, x2, x3 you get (up to normalization)
c0 = x0 + x1 + x2 + x3
c1 = (x0 + x1) - (x2 + x3)
c2 = x0 - x1
c3 = x2 - x3
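In matrix form, these four equations are a single 4x4 matrix applied to the sample vector. A quick numpy check (my own sketch, unnormalized Haar as in the equations above) shows that the transform is invertible, i.e. nothing is lost at this stage:

```python
import numpy as np

# Unnormalized 4-point Haar matrix, row by row matching the equations above
H = np.array([
    [1,  1,  1,  1],   # c0 = x0 + x1 + x2 + x3
    [1,  1, -1, -1],   # c1 = (x0 + x1) - (x2 + x3)
    [1, -1,  0,  0],   # c2 = x0 - x1
    [0,  0,  1, -1],   # c3 = x2 - x3
], dtype=float)

x = np.array([9.0, 7.0, 3.0, 5.0])
c = H @ x                       # forward transform: c = [24, 8, 2, -2]
x_back = np.linalg.solve(H, c)  # inverse: H is invertible, x is recovered exactly
```

Note that `c` has exactly as many entries as `x` - the matrix view makes it obvious that the transform alone does not shrink the data.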

Haar DWT image compression is described as a series of filtering and downsampling steps. But with the help of linear algebra we just need to compute one operation: Compressed Image (C) = Transformation Matrix (H) * Original Image (I) * Transformation Matrix Transpose (H^t).

Actually, no. First, the transformation doesn't compress anything. Second, there is no transposed matrix needed; it is simply a linear transform applied to the samples.

How can we relate the subsampling that occurs in the first method to the matrix multiplication in the second?

The typical way to deal with that is the polyphase matrix notation. That is, you write your image in the Z-domain as I(z) = \sum_k i_k z^k, and then decompose it as a two-dimensional vector v(z) consisting of the even and the odd samples.

v(z) = (x_even(z),x_odd(z))^T

The transformed signal then consists of a low-pass and a high-pass component, which result from applying the polyphase matrix of the wavelet filter to the even/odd separation of the input:

(h(z), g(z))^T = P(z) (x_even(z), x_odd(z))^T

It is a nice exercise to compute the polyphase matrix of some wavelets.
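For Haar the exercise is short: both filters have only two taps, so the polyphase matrix is constant (no powers of z appear). A numpy sketch, assuming the unnormalized Haar filters:

```python
import numpy as np

# Polyphase matrix of the (unnormalized) Haar filter pair.
# Both filters have length 2, so P(z) is a constant matrix.
P = np.array([[1.0,  1.0],   # low-pass:  h = x_even + x_odd
              [1.0, -1.0]])  # high-pass: g = x_even - x_odd

x = np.array([9.0, 7.0, 3.0, 5.0])
v = np.vstack([x[0::2], x[1::2]])  # even/odd split: v(z) = (x_even(z), x_odd(z))^T
h, g = P @ v                       # one low-pass and one high-pass sample per input pair
```

The even/odd split is exactly where the "downsampling" of the filter-bank picture lives: each output band has half the samples of the input, but the two bands together have the same total size.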

Originally Posted by Jessy

The second method always keeps the same matrix size, so where is the value of compression?!

As said, there is no compression at this stage. Compression happens afterwards, in two steps: First, by quantizing the coefficients, i.e. by removing redundancy (or precision) you do not need. This again does not reduce the size of the matrix, but it already removes information.

Second, by entropy coding the output of the quantizer, i.e. the quantization indices it generates.

The reason why you can entropy-code the transformed and quantized data is that the *transformed data* now consists of one single large-amplitude signal (the low-pass) and many small-amplitude signals (the high-passes), and the latter are easy to compress.

So what the transformation does is implement an energy compaction of the original signal that *enables* compression. The energy compaction by itself is reversible. Compression happens by first discarding data (through quantization) and then entropy coding the outcome.
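To see the effect, here is a small numpy sketch (my own example, not from any codec): one unnormalized Haar level applied to a smooth signal, with a hypothetical quantizer step of 2:

```python
import numpy as np

def haar_step(x):
    """One unnormalized Haar level: pairwise sums (low-pass) and differences (high-pass)."""
    return x[0::2] + x[1::2], x[0::2] - x[1::2]

# A smooth signal: neighboring samples are similar, so differences are tiny
x = np.linspace(0.0, 10.0, 16)
low, high = haar_step(x)

step = 2.0
q_high = np.round(high / step)  # quantization indices for the high-pass band
# For this smooth input the high-pass quantizes to all zeros - a long run of
# zeros that an entropy coder stores in almost no bits, while the low-pass
# carries essentially all of the signal's energy.
```

Everything up to `haar_step` is reversible; the information loss, and hence the compression, only enters with the `np.round` in the quantizer.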
