# Thread: How to estimate image error

1. ## How to estimate image error

I'm experimenting with image compression, and different algorithms produce different results, so I'd like to write code that computes which of the results is "closer" to the original. I know I could compute PSNR or SSIM and use those metrics to determine which version is "better", and I'm also aware of other advanced metrics like butteraugli, but I'm trying to write simple code that doesn't suck too much for my own experimentation. My code operates on 4x4 blocks of RGB pixel data (regular RGB, as it comes from PNG/BMP files). In short, I'm trying to come up with a simple function that chooses which compression algorithm produces the least error. After googling I came up with this code that computes the error between the original 4x4 block and the result I get after compression:

Code:
```
int getErr(uint8_t *o, uint8_t *c)
{
    int dr = o[0] - c[0];
    int dg = o[1] - c[1];
    int db = o[2] - c[2];
    // rough integer approximation of 0.299*dr*dr + 0.587*dg*dg + 0.114*db*db
    // (weights scaled by 128: 38/128, 76/128, 14/128)
    return (dr*dr * 38) + (dg*dg * 76) + (db*db * 14);
}

int getBlockErr(uint8_t *original, uint8_t *compressed)
{
    int error = 0;
    for (int i = 0; i < 16; ++i) // iterate all pixels in the 4x4 block
        error += getErr(original + i*3, compressed + i*3);
    return error;
}
```
What does this calculation do ("0.299*R*R + 0.587*G*G + 0.114*B*B")? Is it wrong? Is there a similarly simple equation that could help me achieve better results? In other words, what should getErr do here to improve the error estimation for my purposes? Should I use one of these equations instead: 1) (0.2126*R + 0.7152*G + 0.0722*B) 2) (0.299*R + 0.587*G + 0.114*B)? As I understand it, the regular RGB I get from a random PNG on the net is sRGB, and I'd need to convert the values to linear RGB to make this calculation more accurate, is that correct? Where can I read about the sRGB<->linear RGB conversion, and if I skip that conversion (for perf reasons), how inaccurate does the estimation become?

2. From the looks of it, that calculation is the squared error weighted by a fixed per-channel mix; the weights reflect how much each channel contributes to perceived luminance, so it appears to be taking that into account. However, this fixed weighting is tuned for human eyes on natural image sets; computer-generated images will typically have different luminance correlations depending on how complex the shaders are on the system that rendered them. Compare a screenshot of Super Mario 64 vs a photo of a real castle, for example.
If, however, you want to know "exactly" how much one image varies from another, you can track the difference on a linear scale instead of what you're currently using. It's just a minor change:

Code:
```
#include <stdlib.h> /* for abs() */

int getErr(uint8_t *o, uint8_t *c)
{
    int dr = o[0] - c[0];
    int dg = o[1] - c[1];
    int db = o[2] - c[2];
    return abs(dr) + abs(dg) + abs(db); // absolute error over all color channels, no bias
}

int getBlockErr(uint8_t *original, uint8_t *compressed)
{
    int error = 0;
    for (int i = 0; i < 16; ++i) // iterate all pixels in the 4x4 block
        error += getErr(original + i*3, compressed + i*3);
    return error;
}
```
The lower the linear difference, the easier it will be for an algorithm to compress, since the data inherently has lower complexity. Hopefully this helps.

3. Originally Posted by Lucas:

> However this fixed weighting is tuned for our human eyes on natural image sets, some computer generated images will typically have different luminance correlations depending on how complex the shaders are on the system which computed it.
That was the intention of the error check: to select the results that are more similar according to human vision.

Originally Posted by Lucas:

> Compare a screenshot of Super Mario 64 vs a photo of a real castle for example.

I don't understand what you mean. What screenshot and what photo of a castle?

Originally Posted by Lucas:

> However you want to know "exactly" how much one image varies from another, we can track the difference on a linear scale instead of what you're currently using.

I think the error calculation should not be linear; non-linear should produce better results. It's better to have +/-1 on 16 pixels, which introduces a negligible 0.4% error on each of the 16 pixels, than to have one pixel with +/-16 (a 6.2% error); most likely that kind of error concentration in a single pixel will produce more noticeable picture distortions.

4. My mistake, I thought you wanted a means to determine if one image is more similar to another and have a compressor compress the difference. Compressors don't have eyes, so they see data differently than we do; that's typically where a linear scale helps over a non-linear one.
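To make the 16-pixel example above concrete, here is a small standalone sketch (not from the thread) of the two error models: a sum of absolute differences (the linear scale) scores both cases as 16, while a sum of squared differences scores the spread-out error as 16 and the concentrated error as 256.

```c
#include <stdlib.h>

// Sum of absolute differences over n channel deltas (linear scale).
int sad(const int *d, int n)
{
    int s = 0;
    for (int i = 0; i < n; ++i)
        s += abs(d[i]);
    return s;
}

// Sum of squared differences over n channel deltas (non-linear scale).
int sse(const int *d, int n)
{
    int s = 0;
    for (int i = 0; i < n; ++i)
        s += d[i] * d[i];
    return s;
}
```

With d = {1, 1, ..., 1} (sixteen ones) versus d = {16, 0, ..., 0}, sad() returns 16 for both, while sse() returns 16 and 256 respectively, matching the intuition that squared error penalizes concentrated distortion.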

The castle example was to say that not all fixed mixers are ideal for all data sets. If you have natural image sets then the current mixer is fine, but for unnatural images it may over-quantize the total difference and hurt your results.

5. Originally Posted by Lucas:

> if you have natural image sets then the current mixer is fine, but for unnatural images then it may over-quantize the total difference and hurt your results.
I have some photo-like images, and then lots of computer-generated images (game textures). Imagine that I had multiple different encoders that each encode some 4x4 RGB block. I want to write code that would tell me which of the encodings looks best to human eyes, and this code should be simple and high-perf. Something like that. The formula I showed was found on the net, and I'm not sure if I should try something else.

6. Can anybody comment on whether some other ratios should be used (for example from here: http://www.brucelindbloom.com/index....SpaceInfo.html)?

Or possibly the calculation itself should be different, e.g. ((dr * 38) + (dg * 76) + (db * 14)) ^ 2

This is basically the squared luma difference.
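That proposal could be sketched like this (my own reading of it, with a hypothetical helper name; the same 38/76/14 integer weights are kept, and the weighted sum is scaled back by 128 before squaring so that a 16-pixel block total still fits comfortably in an int):

```c
#include <stdint.h>

// Squared difference of weighted luma: weight each channel delta first,
// then square the combined value, instead of summing squared channel deltas.
// Weights 38/76/14 sum to 128, roughly 0.299/0.587/0.114 scaled by 128.
int getErrLuma(const uint8_t *o, const uint8_t *c)
{
    int dr = o[0] - c[0];
    int dg = o[1] - c[1];
    int db = o[2] - c[2];
    int luma = dr * 38 + dg * 76 + db * 14;
    return (luma / 128) * (luma / 128); // rescale before squaring to avoid overflow
}
```

Note this behaves differently from the per-channel version in the first post: opposite-signed channel errors can cancel in the weighted sum before it is squared, so a pure hue shift with unchanged luma scores near zero.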