
Thread: Is Encode.ru community interested in a new free lossy image codec?

  1. #1
    Member
    Join Date
    Aug 2018
    Location
    France
    Posts
    100
    Thanks
    7
    Thanked 5 Times in 4 Posts

    Is Encode.ru community interested in a new free lossy image codec?

    Hello,

    I have started some time ago a new free open-source state-of-the-art wavelet-based image compression codec called NHW Project: http://nhwcodec.blogspot.com/ .

    Is another lossy image codec of interest to this forum?

    If so, I would be glad to present my codec in detail. Very briefly, for what might be of interest to this forum: the NHW Project is still experimental, and the 3 new entropy coding schemes it uses are not optimal yet; we could save nearly 2KB on average per .nhw compressed file (512x512 bitmap color image).

    Do not hesitate to let me know if there is some interest!

    Many thanks!
    Cheers,
    Raphael

  2. #2
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    2,802
    Thanks
    125
    Thanked 712 Times in 342 Posts
    Sure, you can use SSIM or https://encode.ru/threads/2499-butte...ll=1#post54040 for codec comparison.

  3. #3
    Member
    Join Date
    Aug 2018
    Location
    France
    Posts
    100
    Thanks
    7
    Thanked 5 Times in 4 Posts
    Many thanks for your answer Sir!

    I haven't tested butteraugli yet, but my codec does not perform "very well" on SSIM and PSNR because it enhances image neatness, so there are "more errors". The best way I have found to evaluate my codec for now is to evaluate it visually. On my demo page, I have posted an image comparison at high compression (-l7 setting) with x265 (HEVC) on rather good quality images, which shows that x265 has more precision (very good PSNR and SSIM) but my codec has more neatness, and in the comparison we can see that more neatness is more pleasant...

    Very briefly, the advantages of my codec compared to, for example, x265 (HEVC) are that it has more neatness (more pleasant visually), it is at least 50x faster to encode and at least 15x faster to decode, and it is royalty-free!

    If you want more details, do not hesitate to let me know. Any feedback or remarks are also very welcome!

  4. #4
    Member
    Join Date
    Oct 2010
    Location
    Germany
    Posts
    274
    Thanks
    4
    Thanked 23 Times in 16 Posts
    Hey, nice to see something new.

    Can you tell us more about codec internals?
    Do you use a wavelet packet decomposition, including a rate-distortion (R/D) optimized best-basis search?
    How is the residual coding done? Bitplane by bitplane - including context coding?

  5. #5
    Member
    Join Date
    Aug 2018
    Location
    France
    Posts
    100
    Thanks
    7
    Thanked 5 Times in 4 Posts
    Hello Sebastian,

    Many thanks for your interest!

    First, the NHW Project was designed with speed first in mind (to be usable by mobile, embedded electronics...).

    The fast discrete wavelet transform I use is new and very fast. It performs the classic wavelet decomposition and there is no rate-distortion optimized best-basis search.
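    (As a purely generic illustration of what one level of a classic decomposition produces, and not the NHW transform or its actual filter, here is a minimal averaging-based Haar step in C that splits an image into the LL, HL, LH and HH bands.)

    /* Generic illustration only: one level of an averaging-based 2D Haar
     * decomposition (NOT the actual NHW transform, whose filter differs).
     * Splits an n x n image into LL, HL, LH, HH quadrants in 'out'. */
    #include <stddef.h>

    void haar2d_level(const float *in, float *out, size_t n)
    {
        size_t h = n / 2;
        for (size_t y = 0; y < h; y++) {
            for (size_t x = 0; x < h; x++) {
                float a = in[(2*y)   * n + 2*x];
                float b = in[(2*y)   * n + 2*x + 1];
                float c = in[(2*y+1) * n + 2*x];
                float d = in[(2*y+1) * n + 2*x + 1];
                out[y       * n + x    ] = (a + b + c + d) * 0.25f; /* LL */
                out[y       * n + x + h] = (a - b + c - d) * 0.25f; /* HL */
                out[(y + h) * n + x    ] = (a + b - c - d) * 0.25f; /* LH */
                out[(y + h) * n + x + h] = (a - b - c + d) * 0.25f; /* HH */
            }
        }
    }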

    The residual coding is there to keep a still decent precision, because the codec could otherwise lack precision (notably on degraded images with blur, artifacts...) to the benefit of neatness and sharpness. The residual coding is not applied to the whole image but to the first-order wavelet DC image, which reduces the number of residuals. The residuals are not transformed; they are quantized and coded with a very simple scheme that can be improved but is ultra fast. The residuals are not the biggest part of a .nhw file; the biggest part is the wavelet coefficients.

    In return, the NHW Project is very fast, faster (and better) than JPEG. And there is absolutely no optimization for now (SIMD, multithreading), just pure unoptimized C code.

    Do not hesitate to let me know if you would need additional information.

    Many thanks!
    Cheers,
    Raphael

  6. #6
    Member
    Join Date
    Oct 2010
    Location
    Germany
    Posts
    274
    Thanks
    4
    Thanked 23 Times in 16 Posts
    Yeah, simple and fast. You should compare the PSNR of your codec to existing implementations using a dead-zone quantizer. There are quite a few papers for the well-known images lena, barbara, etc.
    After that you could apply psychovisual masking. It's clear that you don't quantize the DC band in a pyramidal decomposition; you usually only apply some linear prediction to it, to reduce its size further.
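    (For reference, a dead-zone scalar quantizer in its simplest generic form; a sketch not tied to any particular codec. Coefficients inside the widened zero bin map to zero, the rest are quantized uniformly and reconstructed at bin mid-points.)

    /* Generic dead-zone scalar quantizer sketch (not from any specific codec).
     * 'step' is the quantization step; 'deadzone' is the half-width of the
     * zero bin (deadzone = step gives the classic double-width zero bin). */
    #include <math.h>

    int deadzone_quantize(float coef, float step, float deadzone)
    {
        float mag = fabsf(coef);
        if (mag < deadzone)
            return 0;                            /* falls into the dead zone */
        int q = (int)((mag - deadzone) / step) + 1;
        return coef < 0 ? -q : q;
    }

    float deadzone_dequantize(int q, float step, float deadzone)
    {
        if (q == 0)
            return 0.0f;
        float mag = deadzone + (fabsf((float)q) - 0.5f) * step; /* bin mid-point */
        return q < 0 ? -mag : mag;
    }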

    For the next step I would apply a wavelet-packet decomposition, as this only slows down the encoder.
    Generate the full basis-tree and prune the nodes which do not improve the L1-cost. Use the image "barbara" with its fine textures and you should see obvious improvements.

  7. #7
    Member
    Join Date
    Aug 2018
    Location
    France
    Posts
    100
    Thanks
    7
    Thanked 5 Times in 4 Posts
    Many thanks for your very interesting improvement suggestions Sebastian!

    I apply a kind of psychovisual masking in the NHW Project, but it's not a very advanced one, I think. Could you detail psychovisual masking a little more?

    Yes, the coding/compression of the DC band is far from optimal, and I have (many) fast ideas to improve it, but I lack time currently...

    Wavelet-packet decomposition seems very interesting! As I don't know it, would you have some implementation examples? -Is it patent-free?-

    Cheers,
    Raphael

  8. #8
    Member
    Join Date
    Oct 2010
    Location
    Germany
    Posts
    274
    Thanks
    4
    Thanked 23 Times in 16 Posts
    Quote Originally Posted by Raphael Canut View Post
    Wavelet-packet decomposition seems very interesting! As I don't know it, would you have some implementation examples?
    It's actually quite easy to implement and improves quality considerably on images with fine textures.
    In a pyramidal decomposition you iteratively apply the wavelet transform only to the low-pass filtered image. So after one step you have the bands LL, LH, HL, HH, apply the next decomposition step to the LL band and throw it away afterwards. In a wavelet packet decomposition you keep all bands and apply the wavelet filter to all bands recursively. In this way you get an overcomplete representation of the image.
    You can visualize this as a tree. The original image gets split into LL, LH, HL, HH, and every subband is again an image which gets split into 4 subbands.

    You now have to select a basis (a linearly independent spanning set). Let x be a band at a certain level; then if cost(x_LL, x_LH, x_HL, x_HH) < cost(x), keep the decomposition and do exactly the same on x_LL, x_LH, etc.
    If the cost is not smaller, you prune the tree at that node. As a cost function you could use the L1 norm (sum of absolute wavelet coefficients). In this variant you construct the tree using the full 2D transform.
    In a variant of this you could also select a basis consisting of a mixture of horizontal, vertical and horizontal+vertical transforms. You could even use a different wavelet transformation at each level, but the improvements are negligible imo.
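    (A minimal sketch of that pruning recursion, assuming hypothetical helpers split_band() -- one 2D wavelet step producing the four children -- and l1_cost() -- the sum of absolute coefficients. Coefficient storage and freeing of pruned children are omitted.)

    /* Best-basis selection by L1-cost pruning, as described above.
     * band_t, split_band() and l1_cost() are hypothetical helpers. */
    typedef struct band band_t;
    struct band {
        band_t *child[4];    /* LL, LH, HL, HH; NULL if this node is a leaf */
        /* ... coefficient storage ... */
    };

    extern void   split_band(band_t *parent, band_t *child[4]);
    extern double l1_cost(const band_t *b);

    double best_basis(band_t *b, int level, int max_level)
    {
        double cost_leaf = l1_cost(b);
        if (level >= max_level)
            return cost_leaf;

        split_band(b, b->child);
        double cost_split = 0.0;
        for (int i = 0; i < 4; i++)
            cost_split += best_basis(b->child[i], level + 1, max_level);

        if (cost_split < cost_leaf)
            return cost_split;           /* keep the decomposition */

        for (int i = 0; i < 4; i++)
            b->child[i] = NULL;          /* prune: node stays a leaf (freeing omitted) */
        return cost_leaf;
    }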

    Much fun

  9. #9
    Member
    Join Date
    Aug 2018
    Location
    France
    Posts
    100
    Thanks
    7
    Thanked 5 Times in 4 Posts
    Thank you for the explanation, I think I see it, that's clever! I will try it when I have time, but I will have to rewrite a "lot" of code because my inter-band search is written for a simple, classic wavelet decomposition...

    Also, just for information, there was a PhD student from IIT Bombay who implemented some directional wavelet transforms in the NHW Project, and he concluded that directional wavelet transforms would bring only marginal, not significant, improvement to my codec...

    Many thanks again!
    Cheers,
    Raphael

  10. #10
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    430
    Thanks
    1
    Thanked 93 Times in 54 Posts
    Quote Originally Posted by Raphael Canut View Post
    I haven't tested butteraugli yet, but my codec does not perform "very well" on SSIM and PSNR because it enhances image neatness, so there are "more errors". The best way I have found to evaluate my codec for now is to evaluate it visually. On my demo page, I have posted an image comparison at high compression (-l7 setting) with x265 (HEVC) on rather good quality images, which shows that x265 has more precision (very good PSNR and SSIM) but my codec has more neatness, and in the comparison we can see that more neatness is more pleasant...
    I'm sorry, but "neatness" is in the eye of the observer (pun intended). Subjective testing requires quite some care, in particular, following a specific test protocol, and using a sufficient number of observers. Yes, indeed, objective numbers like PSNR or SSIM do not mean much, but a subjective test by a single subject means even less.

    Just for the interested readers: There is an open call by JPEG for "JPEG XL", a next generation lossy image coder. Thus, if you want to apply, you are welcome. Subjective evaluation by independent labs is part of the exercise.

  11. #11
    Member
    Join Date
    Aug 2018
    Location
    France
    Posts
    100
    Thanks
    7
    Thanked 5 Times in 4 Posts
    Hello,

    > but a subjective test by a single subject means even less.

    Yes, that's totally right, Sir, and I also suspect that my eyes have got used to noticing neatness more than the common observer does...

    > There is an open call by JPEG for "JPEG XL", a next generation lossy image coder. Thus, if you want to apply, you are welcome.

    It would be great if I could apply to "JPEG XL"! Thank you! It would be great to determine by rigorous subjective evaluation whether the NHW Project is interesting or not.

    Also, please note that the NHW Project is still experimental and only handles 512x512 images for now... My plan was to make a good demo version for the 512x512 image size and then, if a company is interested, they will maybe sponsor me to adapt the NHW Project to any image size...

    Many thanks again for your proposition and your help!
    Cheers,
    Raphael

  12. #12
    Member
    Join Date
    Aug 2018
    Location
    France
    Posts
    100
    Thanks
    7
    Thanked 5 Times in 4 Posts
    Sorry, what is the procedure to apply to JPEG XL?

    Cheers,
    Raphael

  13. #13
    Member
    Join Date
    Aug 2018
    Location
    France
    Posts
    100
    Thanks
    7
    Thanked 5 Times in 4 Posts
    Hello,

    @thorfdbg, is your subjective test method to compare the compressed image with the original image and rank how similar the 2 images are? Because in that scenario, my codec does not perform well compared to HEVC... My codec increases the neatness of the image but decreases precision, while on the contrary HEVC keeps very good precision (similarity) but decreases neatness... So "normally" the better test for my codec would be to compare the image compressed with HEVC against the image compressed with the NHW Project, to see which one is more pleasant, and according to my testing, on rather good quality images the NHW Project is more pleasant in 55-60% of the cases...

    Else, I mainly focused on speed for the NHW Project because I also had video compression and mobile devices in mind; being at least 50x faster to encode and at least 15x faster to decode than x265 is a very important feature, as it will really save battery life... But for still images, speed is maybe less important... We can also add processing to the NHW Project: for example, the decoder is very, very fast, but on the other side the big drawback of the NHW Project is aliasing... So we could add a post-processing function in the decoder that would detect and remove aliasing and enhance image quality...

    Many thanks!
    Cheers,
    Raphael

  14. #14
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    2,802
    Thanks
    125
    Thanked 712 Times in 342 Posts

  15. #15
    Member
    Join Date
    Aug 2018
    Location
    France
    Posts
    100
    Thanks
    7
    Thanked 5 Times in 4 Posts
    Many thanks for the links!

    It would be so great to apply to JPEG XL, but I have read the requirements and I fear I don't meet them all... Maybe thorfdbg could tell me whether I meet the requirements to apply or not?
    But I'm pretty sure that if we invest enough time in the NHW Project, we can quickly have a good result!

    Also, it is not very clear to me for now who I must contact to apply to JPEG XL. Should I contact Touradj Ebrahimi directly?

    Many thanks!
    Cheers,
    Raphael

  16. #16
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    610
    Thanks
    181
    Thanked 221 Times in 136 Posts
    Quote Originally Posted by Raphael Canut View Post
    ... we could save nearly 2KB on average per .nhw compressed file (512x512 bitmap color image).
    Of course interested!

    Did you compare against guetzli? What about pik -- github.com/google/pik?

    What bit rates are you aiming to perform best at -- and at which bit rates are you comparing to other technology? Guetzli works best above 2.0 bpp and pik above 1.0 bpp.

    Video-codec-based image coding, such as AV1 and WebP, can be a better choice at low bit rates, but their quality is less predictable and one can often observe degradation even at very high bit rates such as 3.0 bpp.

    The current average JPEG bit rate on the internet is around 2 to 2.5 bpp.

  17. #17
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    610
    Thanks
    181
    Thanked 221 Times in 136 Posts
    Based on the 3 presumably 512x512 test images at http://nhwcodec.blogspot.com, the goal bpp of your work is 2.08 bpp:

    octets*8/pixels = (50885+81264+72668)*8/(3*512*512) = 2.08

    JPEG XL is about lower bit rates -- the SDR image observations start at 0.06 bpp and end at 0.5 bpp.
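    (For reference, the bits-per-pixel figure above is just compressed bytes times eight divided by the pixel count; a trivial helper, illustration only.)

    /* bits per pixel = total compressed bytes * 8 / total pixels,
     * e.g. (50885 + 81264 + 72668) * 8 / (3 * 512 * 512) ≈ 2.08 */
    double bits_per_pixel(long total_bytes, long width, long height, long images)
    {
        return (double)total_bytes * 8.0 / ((double)width * height * images);
    }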

  18. #18
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    610
    Thanks
    181
    Thanked 221 Times in 136 Posts
    Quote Originally Posted by thorfdbg View Post
    Yes, indeed, objective numbers like PSNR or SSIM do not mean much, but a subjective test by a single subject means even less.
    In my experience, single-subject visual testing (with normal vision) is already light years ahead of PSNR and somewhat better than SSIM. Particularly so if some of the test images or image generation methods have been optimized for PSNR, SSIM or for the human visual system.

  19. #19
    Member
    Join Date
    Aug 2018
    Location
    France
    Posts
    100
    Thanks
    7
    Thanked 5 Times in 4 Posts
    Hello,

    Thank you very much for your answer! It's great to be in contact with such experts on encode.ru.

    I did not compare against guetzli, but I downloaded the Pik binaries from this forum. Pik is very impressive. For me, Pik has more precision than the NHW Project but the NHW Project has more neatness than Pik, and it is just me, but my eyes are trained to prefer neatness over precision... But that's only my personal case.

    The bit rates I am aiming at for best performance are 0.5 bpp to 2 bpp. These are the bit rates I use to compare to other technology.

    I have tested my codec against WebP, and at these bit rates I globally prefer my codec... But I try to follow AV1 because it will be extremely good, but on the other side also extremely slow to encode...

    Many thanks again!
    Cheers,
    Raphael

  20. #20
    Member
    Join Date
    Aug 2018
    Location
    France
    Posts
    100
    Thanks
    7
    Thanked 5 Times in 4 Posts
    > Based on the 3 presumably 512x512 test images in http://nhwcodec.blogspot.com the goal bpp of your work is 2.08 bpp

    Well, these are very old results from 2012, which I have not updated...

    For now I have coded up to the -l8 setting, which is nearly 0.5 bpp. I am currently working on the -l9 and -l10 settings, and I hope to still have good results below 0.5 bpp, but it's a lot of work, and I really lack time...

    If you can find time, you can test the latest nhw version of today (08/25/2018) at the high compression -l8 setting. It would be so great to have your opinion on the NHW Project at the -l8 setting.

    Many thanks!
    Cheers,
    Raphael

  21. #21
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    430
    Thanks
    1
    Thanked 93 Times in 54 Posts
    Quote Originally Posted by Raphael Canut View Post
    Hello,

    @thorfdbg, is your subjective test method to compare the compressed image with the original image and rank how similar the 2 images are?
    Look, there are multiple test protocols, and the outcome of such protocols can be different. In one particular protocol, one uses absolute category ranking on a 5-point scale with a calibrated monitor, rating *single* images, with a hidden reference image (i.e. the perfect image) within the test set. Then one performs outlier removal and computes a mean opinion score (MOS) per image. From that, one can report a "differential MOS" relative to the original.

    One can also use a side-by-side reference, i.e. the screen is split, one half of the screen is the original image, the other half is the distorted image, and test subjects are asked to rate the distorted image relative to the original. Depending on the protocol, on a 5-point scale or on a continuous scale from 0 to 100.

    It all depends on what you want.
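    (To make the scoring concrete, a minimal sketch of the MOS/DMOS computation described above; outlier removal is protocol-specific and omitted here.)

    /* Mean opinion score (MOS) over subject ratings, and the differential
     * MOS of a distorted image relative to the hidden reference.
     * Outlier removal is protocol-specific and omitted. */
    double mos(const double *ratings, int n)
    {
        double sum = 0.0;
        for (int i = 0; i < n; i++)
            sum += ratings[i];
        return n > 0 ? sum / n : 0.0;
    }

    double dmos(const double *ratings_ref, const double *ratings_dist, int n)
    {
        return mos(ratings_ref, n) - mos(ratings_dist, n);
    }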

    Quote Originally Posted by Raphael Canut View Post
    Because in that scenario, my codec does not perform well compared to HEVC... My codec increases the neatness of the image but decreases precision, while on the contrary HEVC keeps very good precision (similarity) but decreases neatness...
    I'm sorry, but "neatness" is not a well-defined term. It is just a way of saying "how I like it", but not necessarily "how observers like it" or "how the market accepts it". Otherwise, my codec always wins because "I like how it distorts the images". So please, keep this scientific. "Neatness" is just another term for "snake oil".

    Quote Originally Posted by Raphael Canut View Post
    So "normally" the better test for my codec would be to compare the image compressed with HEVC against the image compressed with the NHW Project, to see which one is more pleasant, and according to my testing, on rather good quality images the NHW Project is more pleasant in 55-60% of the cases...
    "Your testing" counts nothing. Really. It requires testing with independent test subjects. In particular "not you".

    Quote Originally Posted by Raphael Canut View Post
    Else, I mainly focused on speed for the NHW Project because I also had video compression and mobile devices in mind; being at least 50x faster to encode and at least 15x faster to decode than x265 is a very important feature, as it will really save battery life...
    Pick JPEG. That's very fast, and will do. And will do "neatly" for "most subjects". If that is not fast enough, go "JPEG XS", because there, independent tests with independent test subjects have been performed very rigorously, showing that it is visually lossless at the desired target compression rate.

    You can provide subjective scores from a rigorous test - that would be very good. Failing that - and it is understandable that a freelance developer usually cannot - objective scores may provide *some* information. Such as (gasp!) PSNR or VDP.

  22. #22
    Member
    Join Date
    Aug 2018
    Location
    France
    Posts
    100
    Thanks
    7
    Thanked 5 Times in 4 Posts
    Hello,

    Yes, I realize you're completely right. "Neatness" is in fact the term I have used over the years for how I like the distortion of my codec. So let's keep this scientific, you're right, sorry for trolling...

    Your testing methodology is very impressive, and it would be so great for my codec to go through it. Do you think I can apply to JPEG XL, or is my codec unfortunately too experimental to apply for now?

    Many thanks again Sir!
    Cheers,
    Raphael

  23. #23
    Member
    Join Date
    Aug 2018
    Location
    France
    Posts
    100
    Thanks
    7
    Thanked 5 Times in 4 Posts
    Hello,

    Thanks to skal, I have fixed a compilation error under Linux for the NHW Project. The source code and the GitHub repo are now updated!

    Many thanks again skal for tracking this bug!!!
    Cheers,
    Raphael

  24. #24
    Member
    Join Date
    Aug 2018
    Location
    France
    Posts
    100
    Thanks
    7
    Thanked 5 Times in 4 Posts
    Dear thorfdbg,

    As I am very late in the process and submissions must be made before 01/09/2018, could you confirm that to apply to JPEG XL I have to send an email to Mr Touradj Ebrahimi?

    I still hope that I can apply, it would be so great...

    Many thanks!
    Cheers,
    Raphael

  25. #25
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    610
    Thanks
    181
    Thanked 221 Times in 136 Posts
    Quote Originally Posted by Raphael Canut View Post
    Dear thorfdbg,

    As I am very late in the process and submissions must be made before 01/09/2018, could you confirm that to apply to JPEG XL I have to send an email to Mr Touradj Ebrahimi?

    I still hope that I can apply, it would be so great...

    Many thanks!
    Cheers,
    Raphael
    Sounds great! As an overview of the competition:
    • density target is 0.06 BPP to 0.5 BPP mostly -- up to 2.0 BPP for some HDR images
    • wide gamut and HDR seem to be a substantial part of the competition
    • binaries are given in a docker container

  26. #26
    Member
    Join Date
    Aug 2018
    Location
    France
    Posts
    100
    Thanks
    7
    Thanked 5 Times in 4 Posts
    Really many thanks for your encouragement!!!

    Yes, the NHW Project doesn't meet all the requirements of the great JPEG XL competition for now, but they can be developed and added (if there is some interest)...

    Cheers,
    Raphael

  27. #27
    Member
    Join Date
    Aug 2018
    Location
    France
    Posts
    100
    Thanks
    7
    Thanked 5 Times in 4 Posts
    Hello,

    I contacted JPEG XL today, and I will finally not be able to submit, mainly because my codec only handles 512x512 images, so I will not be able to provide the encoded and decoded reference images, which are mandatory.

    But they told me that they will allow me to participate in the core experiments, normally in October 2018. Normally, with the different submissions (and apparently there are quite a few), they will elaborate a first JPEG XL algorithm and design, and at the core experiments level, where I can participate, I will be able to submit some key elements of my codec if I prove that they are better than the proposed ones... But it will not be that easy, I think, because my codec is very specific to wavelets... But I still thank JPEG XL a lot for allowing me to participate in the core experiments.

    Anyway, if you find time to take a look at the NHW Project, any feedback, advice, remarks... would be really very welcome!

    Many thanks!
    Cheers,
    Raphael

  28. #28
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    2,802
    Thanks
    125
    Thanked 712 Times in 342 Posts
    I think you should be able to extend your codec to any image size easily enough:
    1) pad image sizes to multiples of 512
    2) extra color components/precision can also be turned into extra image blocks
    3) encode 512x512 blocks (see the sketch below). Either don't reset the entropy model between blocks, or use multiple threads.
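    (A rough sketch of steps 1 and 3 for a single-channel image, padding by edge replication and walking the 512x512 tiles; encode_nhw_block() is a hypothetical placeholder for the actual encoder entry point.)

    /* Pad the image to a multiple of 512 by edge replication, then encode
     * each 512x512 tile independently. Grayscale only for brevity. */
    #include <stdlib.h>

    #define BLOCK 512

    extern void encode_nhw_block(const unsigned char *block, int stride);

    void encode_tiled(const unsigned char *img, int w, int h)
    {
        int pw = (w + BLOCK - 1) / BLOCK * BLOCK;   /* padded width  */
        int ph = (h + BLOCK - 1) / BLOCK * BLOCK;   /* padded height */
        unsigned char *pad = malloc((size_t)pw * ph);
        if (!pad)
            return;

        for (int y = 0; y < ph; y++) {
            int sy = y < h ? y : h - 1;             /* replicate last row */
            for (int x = 0; x < pw; x++) {
                int sx = x < w ? x : w - 1;         /* replicate last col */
                pad[(size_t)y * pw + x] = img[(size_t)sy * w + sx];
            }
        }

        for (int by = 0; by < ph; by += BLOCK)      /* walk the 512x512 tiles */
            for (int bx = 0; bx < pw; bx += BLOCK)
                encode_nhw_block(&pad[(size_t)by * pw + bx], pw);

        free(pad);
    }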

  29. #29
    Member
    Join Date
    Aug 2018
    Location
    France
    Posts
    100
    Thanks
    7
    Thanked 5 Times in 4 Posts
    Dear Shelwien,

    You're totally right!!! I can even easily adapt my codec to 256x256, 128x128, 64x64 and 32x32 image sizes, so I could even pad image dimensions to multiples of 32! I would first find how many 512x512 blocks fit in the image, then how many 256x256 blocks, then how many 128x128 blocks, then 64x64 and then 32x32... But padding image dimensions to multiples of 512 is also OK and really easier...

    At the entropy coding stage, I can also encode/compress the exact image size, as the entropy coders can encode any data size they are fed with!

    The ideal would still be to adapt to any image size, but this could be an excellent "temporary" solution!!!

    However, this won't be ready before Saturday, 1st September, the deadline for JPEG XL submissions, as I really lack time. And they unfortunately told me that they will close submissions on 01/09/2018 and no extension will be given... So it's a real pity! Maybe my codec is better suited for video compression as it is very fast... 3 weeks ago, I contacted the Alliance for Open Media and submitted my codec, but so far I have not been able to get an answer...

    Many thanks for your great help!
    Cheers,
    Raphael

  30. #30
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    610
    Thanks
    181
    Thanked 221 Times in 136 Posts
    Quote Originally Posted by Shelwien View Post
    3) encode 512x512 blocks.
    Wouldn't coding 512x512 blocks independently with a lossy codec cause visible boundaries around them at low bit rates -- similar to the 8x8 blocks becoming visible in JPEG at low bit rates?
