
Thread: compressing animation frames

  1. #1
    Member
    Join Date
    Jun 2016
    Location
    Earth
    Posts
    6
    Thanks
    1
    Thanked 0 Times in 0 Posts

    Question compressing animation frames

    Hi,

    I'm a Delphi/C++ programmer.
    Currently I'm using LZ4 from here in a small Delphi application (work-in-progress) to compress animation frames (temporary/work RAM storage).
    But sometimes, on high resolution animations, I get "Out of memory".
    In such cases it's preferable to wait a little longer for the application to process the data.
    So I'm looking for an alternative with higher compression rate but close speed. The application will still use LZ4 by default but the user can choose to use the alternative in Options.

    So far I have tested SynLZ: it has similar speed but only a 4% higher compression ratio.

    What do you recommend?

    Thank you.

    Regards,
    Cosmin

  2. #2
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    667
    Thanks
    204
    Thanked 241 Times in 146 Posts
    Quote Originally Posted by Cosmin3 View Post
    LZ4 ... to compress animation frames ... So I'm looking for an alternative with higher compression rate but close speed.
    WebP lossless or WebP near-lossless are good solutions for this. If compression speed is the priority, try the WebP lossless encoder with the settings -m 0 and -q 0. If you only care about decompression speed and size, use -m 6 and -q 100.

    Example command lines to try out with cwebp:

    cwebp -lossless -m 0 -q 0 input.png -o output-fast-compression.webp
    cwebp -lossless -m 6 -q 100 input.png -o output-dense-compression.webp
    cwebp -near_lossless 60 -m 6 -q 100 input.png -o output-jyrkis-favorite-webp-lossless-settings.webp

    For a fair comparison you need a recent version of cwebp (at least 0.5.0); versions 0.3.x and 0.4.x have bugs that impact compression density.

    If WebP is too slow for your use case, you might be able to keep using LZ4, but just apply a delta filter for pixels before the LZ4 phase.
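    To make the delta-filter idea concrete, here is a minimal sketch (an illustration of the suggestion, not code from anyone in this thread): a byte-wise delta is exactly reversible, so nothing is lost, and it often turns smooth gradients into runs of small values that LZ4 packs better. For real pixel data you would normally delta with a stride equal to the bytes per pixel, or against the previous frame.

        // Sketch (C++): reversible delta filter applied before the LZ4 phase.
        #include <lz4.h>   // LZ4_compressBound, LZ4_compress_default
        #include <cstdint>
        #include <vector>

        // Forward delta: each byte becomes its difference from the previous byte.
        static void delta_encode(std::vector<uint8_t>& buf) {
            uint8_t prev = 0;
            for (auto& b : buf) { uint8_t cur = b; b = cur - prev; prev = cur; }
        }

        // Inverse delta: exact reconstruction, so the pipeline stays fully lossless.
        static void delta_decode(std::vector<uint8_t>& buf) {
            uint8_t prev = 0;
            for (auto& b : buf) { b = (uint8_t)(b + prev); prev = b; }
        }

        static std::vector<char> compress_frame(std::vector<uint8_t> frame) {
            delta_encode(frame);                                   // filter first ...
            std::vector<char> out(LZ4_compressBound((int)frame.size()));
            int n = LZ4_compress_default((const char*)frame.data(), out.data(),
                                         (int)frame.size(), (int)out.size());
            out.resize(n > 0 ? n : 0);                             // ... then LZ4 as before
            return out;
        }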

  3. The Following User Says Thank You to Jyrki Alakuijala For This Useful Post:

    Cosmin3 (8th June 2016)

  4. #3
    Member
    Join Date
    Jun 2016
    Location
    Earth
    Posts
    6
    Thanks
    1
    Thanked 0 Times in 0 Posts
    Thank you.

    "-m 6 -q 100" is way too slow.
    But "-m 0 -q 0" + "-mt" seems promising: a 1920x1080 24 bit frame is compressed to a 502 KB file in less than 1 sec.

    As for applying a delta filter before compressing with LZ4: I don't think it can be used, since the data has to be reconstructed 100% as it was before.

  5. #4
    Member
    Join Date
    Feb 2015
    Location
    United Kingdom
    Posts
    154
    Thanks
    20
    Thanked 66 Times in 37 Posts
    Quote Originally Posted by Jyrki Alakuijala View Post

    If WebP is too slow for your use case, you might be able to keep using LZ4, but just apply a delta filter for pixels before the LZ4 phase.
    From my experience with delta filtering, LZ is not a suitable compression stage: it doesn't provide noticeable gains either before or after applying the delta. This is because the delta transform doesn't introduce context; it just reduces the entropy using local correlations. If speed and performance are key, I'd recommend a delta stage followed by FSE or any other fast entropy coder.

    If you're looking for a lossless delta encoder you can take a look at https://github.com/loxxous/prepack
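    To see what the delta buys a pure entropy coder, one can compare the order-0 entropy of a buffer before and after the transform. A minimal sketch (a hypothetical helper, not part of prepack or FSE):

        // Sketch (C++): order-0 entropy estimate in bits per byte.
        #include <array>
        #include <cmath>
        #include <cstddef>
        #include <cstdint>

        static double order0_entropy(const uint8_t* p, size_t n) {
            if (n == 0) return 0.0;
            std::array<size_t, 256> hist{};           // byte histogram
            for (size_t i = 0; i < n; ++i) hist[p[i]]++;
            double bits = 0.0;
            for (size_t c : hist) {
                if (c == 0) continue;
                double prob = (double)c / (double)n;
                bits -= prob * std::log2(prob);       // Shannon entropy term
            }
            return bits;                              // at most 8.0
        }

    On smooth image data the post-delta entropy is typically much lower; that drop is the gain an entropy-coding stage such as FSE can realize, while an LZ stage mostly cannot.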

  6. #5
    Member
    Join Date
    Jun 2016
    Location
    Earth
    Posts
    6
    Thanks
    1
    Thanked 0 Times in 0 Posts
    @Jyrki Alakuijala

    Strange: I compressed a bitmap to WebP using the -lossless switch, decompressed it, and then compared the original with the result.
    They're not exactly the same:

    [Attached screenshot: screenshot.png, 61.6 KB]

    The color values are close but not identical. So "-lossless" is not "100% lossless"...


    @Lucas

    Thank you, I'll have a look.

  7. #6
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    667
    Thanks
    204
    Thanked 241 Times in 146 Posts
    Quote Originally Posted by Cosmin3 View Post
    The color values are close but not identical. So "-lossless" is not "100% lossless"...
    Like many other RGBA lossless encoders, WebP lossless nowadays doesn't preserve RGB values when alpha is zero. This gives significant savings (~2% for a large corpus of images, sometimes 50+%). There is a flag to turn off the zero-alpha cleanup.

    Other losses are a bug or possibly an ICC color-profile issue. Note that -near_lossless 100 is not the same as -lossless.

  8. #7
    Member
    Join Date
    Jun 2016
    Location
    Earth
    Posts
    6
    Thanks
    1
    Thanked 0 Times in 0 Posts
    Quote Originally Posted by Jyrki Alakuijala View Post
    Like many other RGBA lossless encoders, WebP lossless nowadays doesn't preserve RGB values when alpha is zero. This gives significant savings (~2% for a large corpus of images, sometimes 50+%). There is a flag to turn off the zero-alpha cleanup.

    Other losses are a bug or possibly an ICC color-profile issue. Note that -near_lossless 100 is not the same as -lossless.
    Makes sense, but the bitmap used for testing is 24-bit, not 32-bit, so there is no alpha. The values you see in the screenshot are from a "green screen" background.
    R: 26
    G: 219
    B: 0
    After it is compressed and decompressed:
    R: 26
    G: 220
    B: 0

    And I didn't use "-near_lossless 100", only "-lossless".

  9. #8
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    667
    Thanks
    204
    Thanked 241 Times in 146 Posts
    Quote Originally Posted by Cosmin3 View Post
    Makes sense, but the bitmap used for testing ...
    Could you send the bitmap to me at jyrki.alakuijala@gmail.com?

  10. #9
    Member
    Join Date
    Nov 2015
    Location
    Ślůnsk, PL
    Posts
    81
    Thanks
    9
    Thanked 13 Times in 11 Posts
    Quote Originally Posted by Jyrki Alakuijala View Post
    Like many other RGBA lossless encoders, WebP lossless nowadays doesn't preserve RGB values when alpha is zero. This gives significant savings (~2% for a large corpus of images, sometimes 50+%). There is a flag to turn off the zero-alpha cleanup.
    That's a very bad practice. Lossless is lossless and visually lossless is not.

  11. #10
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    667
    Thanks
    204
    Thanked 241 Times in 146 Posts
    Quote Originally Posted by m^3 View Post
    That's a very bad practice. Lossless is lossless and visually lossless is not.
    Why is it bad practice? (an honest question)

    WebP lossless was originally really lossless by default, but we were inspired by the FLIF encoder and PNG optimizers changing the alpha = 0 pixels by default.

  12. #11
    Member
    Join Date
    Nov 2015
    Location
    Ślůnsk, PL
    Posts
    81
    Thanks
    9
    Thanked 13 Times in 11 Posts
    Quote Originally Posted by Jyrki Alakuijala View Post
    Why is it bad practice? (an honest question)

    WebP lossless was originally really lossless by default, but we were inspired by the FLIF encoder and PNG optimizers changing the alpha = 0 pixels by default.
    Because it's a lie. I'm not against promoting visually lossless compression, but I am against calling it what it's not.

    In this thread you have an example of a user who was misled. They didn't seem to care much about invisible loss, but, being vigilant, they spotted an anomaly (which actually seems to be a bug, but if it were the alpha channel, the story would look the same).


    As a side note, there is that lone 0.01% of users who really do need full losslessness:
    * for steganography
    * for the checksums to match

  13. #12
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    667
    Thanks
    204
    Thanked 241 Times in 146 Posts
    Quote Originally Posted by m^3 View Post
    Because it's a lie. I'm not against promoting visually lossless compression, but I am against calling it what it's not.
    No lies. In those versions of cwebp that actually remove RGB data from transparent areas there is an added line in the -help page:

    -exact ................. preserve RGB values in transparent area

    Defaults should make sense for the most common use case.
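    For reference, the flag follows the same pattern as the command lines earlier in the thread (in cwebp builds recent enough to have it):

        cwebp -lossless -exact input.png -o output-exact.webp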

  14. #13
    Member
    Join Date
    Jun 2016
    Location
    Earth
    Posts
    6
    Thanks
    1
    Thanked 0 Times in 0 Posts
    I attached the frame and the result of the compression + decompression.
    If I open the compressed frame in IrfanView, it shows the correct color value. If I decompress it with IrfanView, I get the original bitmap back 100%.
    So the problem may be in the decompressor (dwebp.exe).
    Attached Files

  15. #14
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    667
    Thanks
    204
    Thanked 241 Times in 146 Posts
    Quote Originally Posted by Cosmin3 View Post
    I attached the frame and the result of the compression + decompression.
    If I open the compressed frame in IrfanView, it shows the correct color value. If I decompress it with IrfanView, I get the original bitmap back 100%.
    So the problem may be in the decompressor (dwebp.exe).

    $ compare -metric PAE Frame.bmp result.bmp null:
    0 (0)

    The two images you posted are identical in pixel values.

  16. #15
    Member
    Join Date
    Jun 2016
    Location
    Earth
    Posts
    6
    Thanks
    1
    Thanked 0 Times in 0 Posts
    Later edit: original message deleted.

    I decided to move on to other (more pressing) issues.

    Thanks anyway.
    Last edited by Cosmin3; 11th June 2016 at 19:46.

  17. #16
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by Jyrki Alakuijala View Post
    No lies. In those versions of cwebp that actually remove RGB data from transparent areas there is an added line in the -help page:

    -exact ................. preserve RGB values in transparent area

    Defaults should make sense for the most common use case.
    Fact 1:
    You call a compression mode lossless
    Fact 2:
    That mode is lossy
    Fact 3, from 1 and 2:
    What you say is incorrect
    Fact 4:
    You know it's incorrect
    Fact 5, from 3 and 4:
    You're lying.

    You may have another option for really lossless compression, but it's irrelevant. You may optimise the defaults for the most common case; like I said, I'm not against promoting visually-lossless compression. But neither of these things changes the facts above, including fact 5. You're lying.
    I'm hugely disappointed by that, and even more so because you're arguing this. Do we have to dig into compression history for the definitions of lossless and lossy, because you decided to redefine the former as 'exact' and divide the latter into 2 categories, calling one of them 'lossless'?

  18. #17
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    667
    Thanks
    204
    Thanked 241 Times in 146 Posts
    Quote Originally Posted by m^2 View Post
    visually-lossless compression
    There are five kinds of lossless compression touched on in this thread.

    1) Binary lossless: one where the respective .BMP/.JPG/.whatever file is byte-for-byte identical when examined as a binary file.

    2) Exact lossless, where ARGB all are preserved even when A = 0.

    3) Lossy transparent, where RGB is not preserved when A = 0, i.e., the transparent color doesn't have a specified color; other colors are specified losslessly. This is the most common kind of lossless among modern high-performance image compressors such as FLIF and PNG optimizers, and WebP joined this club half a year ago in 0.5.0 (didn't check, can be off by a few months). ZopfliPNG allows this with --lossy_transparent, but probably making it the default there too would make more sense. When such an image is rendered using normal alpha blending or additive or subtractive alpha-blended rendering, the rendered result is equivalent to the exact lossless. (A small comparison sketch follows this list.)

    4) Visually-lossless, a method of lossy compression where the loss is small enough that it can be expected that humans don't notice the difference.

    Glossary definition of visually lossless (from http://www.digitizationguidelines.go...suallylossless) is below:
    """A form or manner of lossy compression where the data that is lost after the file is compressed and decompressed is not detectable to the eye; the compressed data appearing identical to the uncompressed data."""


    For example libjpeg quality 100 could be considered visually lossless, even when actual rendered pixel values change a bit.

    I think this is a great definition for visually lossless. We are working hard to be able to decide this automatically by using butteraugli. Being able to automatically choose from a variety of methods and take the smallest file size that gives a visually lossless image is often preferable to 'exact lossless' or 'lossless' compression, particularly for end delivery such as web sites.


    5) Near-lossless compression. Near-lossless is usually defined with a guarantee on the maximum error. The image may look really bad, but none of the ARGB values are further away from the original than, say, 8 or 16 values. A good near-lossless encoder needs to apply other constraints to the image than just pushing every value to the maximum allowed error at the highest possible compression ratio.
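    To make 3) concrete, here is a minimal comparison sketch (an illustration of the definition, not code from any of the encoders mentioned): two RGBA images are equal up to lossy transparency when every pixel matches exactly, except that pixels with A = 0 may carry arbitrary RGB values.

        // Sketch (C++): equality up to lossy transparency.
        #include <cstddef>
        #include <cstdint>

        struct RGBA { uint8_t r, g, b, a; };

        static bool lossy_transparent_equal(const RGBA* x, const RGBA* y, size_t n) {
            for (size_t i = 0; i < n; ++i) {
                if (x[i].a != y[i].a) return false;   // alpha is always exact
                if (x[i].a == 0) continue;            // RGB is unspecified here
                if (x[i].r != y[i].r || x[i].g != y[i].g || x[i].b != y[i].b)
                    return false;                     // visible pixels must be exact
            }
            return true;
        }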

    A modern trend in lossless image compression is to call 1), 2) and 3) all simply "lossless". I agree with you that visually lossless is different from lossless, and should never be called lossless. There are two elements in WebP lossless that are in the category of visually lossless: storing 16-bit-per-channel PNGs into WebP discards the 8 least-significant bits (lossy, but typically visually lossless), and near-lossless compression down to quality 60 (no guarantees with this, just the current state) is also typically visually lossless.

    It seems to me that the community of advanced image compressors has now converged on using the term 'lossless' to mean 3). This convergence was started by image optimizers, continued with FLIF, and is now somewhat completed by WebP joining it, too. To me this is progress for the whole community; we should embrace it and give the somewhat useless exact variant an explicit name such as 'exact lossless'. I don't yet understand what downside you see in this development.

    What is your proposal? If you can get together the people who give these names (at least the author of ImageOptim, FLIF, the WebP team, ZopfliPNG, others) and have them build on your proposal, you might reach an agreement quickly and have the names changed for increased clarity.

    kindest regards,

    Jyrki

  19. #18
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by Jyrki Alakuijala View Post
    There are five kinds of lossless compression touched on in this thread.

    1) Binary lossless: one where the respective .BMP/.JPG/.whatever file is byte-for-byte identical when examined as a binary file.

    2) Exact lossless, where ARGB all are preserved even when A = 0.

    3) Lossy transparent, where RGB is not preserved when A = 0, i.e., the transparent color doesn't have a specified color; other colors are specified losslessly. This is the most common kind of lossless among modern high-performance image compressors such as FLIF and PNG optimizers, and WebP joined this club half a year ago in 0.5.0 (didn't check, can be off by a few months). ZopfliPNG allows this with --lossy_transparent, but probably making it the default there too would make more sense. When such an image is rendered using normal alpha blending or additive or subtractive alpha-blended rendering, the rendered result is equivalent to the exact lossless.

    4) Visually-lossless, a method of lossy compression where the loss is small enough that it can be expected that humans don't notice the difference.

    Glossary definition of visually lossless (from http://www.digitizationguidelines.go...suallylossless) is below:
    """A form or manner of lossy compression where the data that is lost after the file is compressed and decompressed is not detectable to the eye; the compressed data appearing identical to the uncompressed data."""


    For example libjpeg quality 100 could be considered visually lossless, even when actual rendered pixel values change a bit.

    I think this is a great definition for visually lossless. We are working hard to be able to decide this automatically by using butteraugli. Being able to automatically choose from a variety of methods and take the smallest file size that gives a visually lossless image is often preferable to 'exact lossless' or 'lossless' compression, particularly for end delivery such as web sites.


    5) Near-lossless compression. Near-lossless is usually defined with a guarantee on the maximum error. The image may look really bad, but none of the ARGB values are further away from the original than, say, 8 or 16 values. A good near-lossless encoder needs to apply other constraints to the image than just pushing every value to the maximum allowed error at the highest possible compression ratio.
    Nice breakdown.
    Slightly off-topic, but:
    I have sympathy for the visually lossless compression definition because it matches my intuition perfectly. One practical issue is that it's hard to prove that a complex lossy algorithm is visually lossless, because that would require it to be visually lossless for any input. Therefore I'd avoid bold statements like "libjpeg quality 100 could be considered visually lossless", and I actually doubt that this one is true - is it visually lossless for low-res images like icons? Lossy-transparent encoders are obviously visually lossless. While theoretically they are not the only ones with this property, I don't think we can differentiate them in practice.
    Though my gut tells me that the image compression community uses the term "visually lossless" to mean something close to "suspected to be visually lossless for a definable subset of inputs", or even something weaker still.

    Quote Originally Posted by Jyrki Alakuijala View Post
    A modern trend in lossless image compression is to call 1), 2) and 3) all simply "lossless". I agree with you that visually lossless is different from lossless, and should never be called lossless. There are two elements in WebP lossless that are in the category of visually lossless: storing 16-bit-per-channel PNGs into WebP discards the 8 least-significant bits (lossy, but typically visually lossless), and near-lossless compression down to quality 60 (no guarantees with this, just the current state) is also typically visually lossless.

    It seems to me that the community of advanced image compressors has now converged on using the term 'lossless' to mean 3). This convergence was started by image optimizers, continued with FLIF, and is now somewhat completed by WebP joining it, too. To me this is progress for the whole community; we should embrace it and give the somewhat useless exact variant an explicit name such as 'exact lossless'.
    It seems to me that at some point people started cheating in benchmarks. Others joined in, and at some point the community (*) accepted it as standard. Why do I think so? Because everyone in the field surely knows that the term 'lossless compression' has had a well-defined meaning for decades, and this meaning is something that everyone understands. If the point were to honestly inform, you would have invented a different term to mean a different thing. Like 'exact'. You didn't, and because of the clash you invented a new meaning for 'lossless'. It seems you've agreed that your field is so special that the general definition does not apply to you and you're entitled to make up your own.
    I don't think it is, and I don't think you are.

    Quote Originally Posted by Jyrki Alakuijala View Post
    I don't yet understand what downside you see in this development.
    Confusion.

    Quote Originally Posted by Jyrki Alakuijala View Post
    What is your proposal? If you can get together the people who give these names (at least the author of ImageOptim, FLIF, the WebP team, ZopfliPNG, others) and have them build on your proposal, you might reach an agreement quickly and have the names changed for increased clarity.
    No, getting your asses together is not my job. While I believe I am right, being right is not enough to convince people. If that's a standard already, such an accomplishment would take somebody who has a lot of authority in the field. I'm not such a person. All I can do is keep explaining the wrongness of what you do.

    Ad (*): Just pointing out that I'm not a member of this community and I don't have a personal opinion on whether this is indeed a standard. I'm just repeating what you said.

  20. #19
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    667
    Thanks
    204
    Thanked 241 Times in 146 Posts
    Quote Originally Posted by m^2 View Post
    Confusion.
    Lossy transparency just says that a fully transparent pixel has no defined RGB color. The fact that a color is stored anyway is an implementation detail, and a correctly implemented client ignores it. That by itself is not extremely confusing.

    WebP is converging to the way others work exactly in order to avoid confusion. In the past people were comparing FLIF lossless (with lossy transparent) against WebP lossless (with exact). I think lossy transparent is a better default than exact lossless, so I was very happy to see FLIF lead the way in this.

    Regarding your worries about participating in the definition of terminology: if you are passionate about it and argue clearly, I'm sure that people will consider your viewpoint.

  21. #20
    Member
    Join Date
    Nov 2014
    Location
    California
    Posts
    122
    Thanks
    36
    Thanked 33 Times in 24 Posts
    "consider your viewpoint."
    I do not think it is a viewpoint.
    Having 5 definitions of a concept is problematic. This ought to be simple: lossless means bit-exact for all pixels. Anything else means that some data has been lost and the original cannot be restored (which is a requirement in some professions).
    Lossy transparent is not lossless and should not be considered as such (what if I restore the original after compression and decide to change the transparency channel values?). I think it is fine to have a visually lossless mode, but it is still lossy.

  22. #21
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    667
    Thanks
    204
    Thanked 241 Times in 146 Posts
    Quote Originally Posted by hexagone View Post
    Having 5 definitions of a concept is problematic.
    There are more definitions than the five I mentioned; those were only the ones discussed earlier in this thread. For example, for JPEG there are two kinds of lossless re-compression: those that produce exactly the same RGB pixels when re-rendered, and those that produce exactly the same DCT coefficients.

    Further, for alpha there can be yet another group of 'lossless' compression. If the alpha value is 1, the rendered R, G, B values necessarily lose a lot of dynamics in the alpha multiplication. Storing the RGB premultiplied (= loss), or quantizing RGB more aggressively at low alpha values, gives the seventh and eighth kinds of lossless compression.
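    A tiny sketch of why premultiplied storage is a loss (illustrative numbers, not from any encoder): at alpha = 1 out of 255, all 256 possible values of a channel collapse to just two stored values, so un-premultiplying cannot recover the original.

        // Sketch (C++): dynamics lost by premultiplying at very low alpha.
        #include <cstdio>

        int main() {
            const int a = 1;                          // nearly transparent pixel
            for (int r = 0; r < 256; r += 64) {
                int premul = (r * a + 127) / 255;     // rounded premultiply
                int back   = (premul * 255) / a;      // attempted inverse
                std::printf("r=%3d -> premul=%d -> back=%d\n", r, premul, back);
            }
            return 0;                                 // 'back' comes out only as 0 or 255
        }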

    Quote Originally Posted by hexagone View Post
    Lossy transparent is not lossless and should not be considered as such (what if I restore the original after compression and decide to change the transparency channel values?).
    To answer this I need to explain my personal viewpoint and the simple personal reasons why I work in data compression: I want to make computers and the internet faster.

    Today, if I actually take PNGs from the internet, many have editing artefacts in their fully transparent area. Many have been optimized with an image optimizer to create a complicated pattern inside the transparent area (often to have zero residuals after Paeth prediction). I never saw some magically useful data that the author wanted to store there, just the two above. When editing artefacts are posted to the internet, the author is publishing information that he or she doesn't necessarily want public. Further, those bytes cost in transmission, making the internet slower. When a complicated zero-residual pattern for one format has been inserted in the transparent area, another compressor with better predictors cannot replicate it exactly and the file can end up 50+% bigger because of it.
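    For readers unfamiliar with it, this is the Paeth predictor from the PNG specification; an optimizer can fill the transparent area so that every byte equals its prediction, making all residuals zero and the region almost free to encode in PNG (shown for illustration):

        // The PNG Paeth predictor (C++): pick the neighbour closest to left + up - upleft.
        #include <cstdint>
        #include <cstdlib>

        static uint8_t paeth(uint8_t left, uint8_t up, uint8_t upleft) {
            int p  = (int)left + (int)up - (int)upleft;   // initial estimate
            int pa = std::abs(p - (int)left);
            int pb = std::abs(p - (int)up);
            int pc = std::abs(p - (int)upleft);
            if (pa <= pb && pa <= pc) return left;        // ties favour left, then up
            if (pb <= pc) return up;
            return upleft;
        }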

    Taking a sample of images from the internet shows that tools are difficult for most people to use correctly. Last time I sampled, 4% of a random sample of PNGs were stored with 16-bit color dynamics. This makes no sense whatsoever for internet use; it is just people using tools with default settings without thinking. The defaults need to make sense, otherwise people will create inefficient data.

    So, to make the internet faster, we need good defaults. This is why lossless should follow the example from FLIF and be lossy transparent.

    If the goal is not to make the internet faster, one can have another priority and other defaults in an encoder. But this is a viewpoint, nothing absolute.

  23. #22
    Member
    Join Date
    Nov 2015
    Location
    Ślůnsk, PL
    Posts
    81
    Thanks
    9
    Thanked 13 Times in 11 Posts
    Quote Originally Posted by Jyrki Alakuijala View Post
    Lossy transparency just says that a fully transparent pixel has no defined RGB color. The fact that a color is stored anyway is an implementation detail, and a correctly implemented client ignores it. That by itself is not extremely confusing.

    WebP is converging to the way others work exactly in order to avoid confusion. In the past people were comparing FLIF lossless (with lossy transparent) against WebP lossless (with exact). I think lossy transparent is a better default than exact lossless, so I was very happy to see FLIF lead the way in this.
    I see: being under pressure from unfair competition, you decided to accept their ways. You removed confusion when comparing FLIF and WebP by introducing confusion when comparing it with any lossless compressor. At least this one makes you look better on paper.

    Quote Originally Posted by Jyrki Alakuijala View Post
    Regarding your worries about participating in the definition of terminology: if you are passionate about it and argue clearly, I'm sure that people will consider your viewpoint.
    I'm starting here.

    Quote Originally Posted by Jyrki Alakuijala View Post
    To answer this I need to explain my personal viewpoint and the simple personal reasons why I work in data compression: I want to make computers and the internet faster.

    Today, if I actually take PNGs from the internet, many have editing artefacts in their fully transparent area. Many have been optimized with an image optimizer to create a complicated pattern inside the transparent area (often to have zero residuals after Paeth prediction). I never saw some magically useful data that the author wanted to store there, just the two above. When editing artefacts are posted to the internet, the author is publishing information that he or she doesn't necessarily want public. Further, those bytes cost in transmission, making the internet slower. When a complicated zero-residual pattern for one format has been inserted in the transparent area, another compressor with better predictors cannot replicate it exactly and the file can end up 50+% bigger because of it.

    Taking a sample of images from the internet shows that tools are difficult for most people to use correctly. Last time I sampled, 4% of a random sample of PNGs were stored with 16-bit color dynamics. This makes no sense whatsoever for internet use; it is just people using tools with default settings without thinking. The defaults need to make sense, otherwise people will create inefficient data.

    So, to make the internet faster, we need good defaults. This is why lossless should follow the example from FLIF and be lossy transparent.

    If the goal is not to make the internet faster, one can have another priority and other defaults in an encoder. But this is a viewpoint, nothing absolute.
    I agree that good defaults are hugely important. But you are saying that it's good to call grey "white" because most people are better off with grey, and that white should follow the example of the frontrunners.
    No. White may be a niche product, but the way to deal with that is to educate users that they need grey, not to change the meaning of the word to one that works only in your field.

    You play with the definition of what an image is. You say that there are 2 kinds of pixels: visible, colourful ones and invisible, colourless ones.
    To the eye - it is true.
    Technically - it is not.
    But if you're talking about differences invisible to the eye, there's already a good term defined for that.
    You show that there's a spectrum of optimizations that you can use to make something *truly* visually lossless. That's great: don't divide the field, use them all. But be honest when talking about it. Differentiate from so-called visually lossless by emphasizing reliability, repeatability, provability. Or just invent a new word, like "exact".

  24. #23
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    667
    Thanks
    204
    Thanked 241 Times in 146 Posts
    Quote Originally Posted by m^3 View Post
    To the eye - it is true.
    Technically - it is not.
    That changes from file format to file format. In GIF it is impossible to store the RGB of the transparent pixel; those are necessarily and always lost in the process. In PNG you sometimes have to lose the color (or greyness) of the transparent color: the tRNS chunk necessarily loses these for "Colour type 0" and "Colour type 2".

    Usually those formats that store a full alpha channel still emit RGB values at alpha zero for decoding-efficiency reasons (to avoid one extra branch per pixel), even though there is a small cost in entropy in doing so.
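    A small sketch of the decoding-efficiency point (an illustration, not decoder code from any format): a straight-alpha "over" blend reads the source RGB unconditionally, and when a = 0 the source contribution multiplies out to zero, so no per-pixel branch on alpha is needed.

        // Sketch (C++): branch-free blend of one channel,
        // dst' = src*a/255 + dst*(255-a)/255, with rounding.
        #include <cstdint>

        static uint8_t blend_channel(uint8_t src, uint8_t dst, uint8_t a) {
            return (uint8_t)((src * a + dst * (255 - a) + 127) / 255);
        }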

    Furthermore, we don't need to talk about the eye when exactly the same pixel values are placed in the video card's frame buffer. That is a much stronger guarantee than what visually lossless compression gives.
    Last edited by Jyrki Alakuijala; 13th June 2016 at 13:56.

  25. #24
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by Jyrki Alakuijala View Post
    That changes from file format to file format. In GIF it is impossible to store the RGB of the transparent pixel; those are necessarily and always lost in the process. In PNG you sometimes have to lose the color (or greyness) of the transparent color: the tRNS chunk necessarily loses these for "Colour type 0" and "Colour type 2".

    Usually those formats that store a full alpha channel still emit RGB values at alpha zero for decoding-efficiency reasons (to avoid one extra branch per pixel), even though there is a small cost in entropy in doing so.

    Furthermore, we don't need to talk about the eye when exactly the same pixel values are placed in the video card's frame buffer. That is a much stronger guarantee than what visually lossless compression gives.
    That's an interesting remark, but it doesn't change anything in the discussion. If your source format supports it and you drop it, it's lost.

  26. #25
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    667
    Thanks
    204
    Thanked 241 Times in 146 Posts
    Quote Originally Posted by m^2 View Post
    That's an interesting remark, but it doesn't change anything in the discussion. If your source format supports it and you drop it, it's lost.
    Thank you for putting so much thought into this. Would renaming lossless image compression make things easier or more difficult to understand? What name would you recommend?
