
Thread: Quo Vadis JPEG - New Movements in Still Image Compression

  1. #1
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    437
    Thanks
    1
    Thanked 96 Times in 57 Posts

    Quo Vadis JPEG - New Movements in Still Image Compression

    ITU-T T.81 - ISO/IEC 10918-1 is one of the dominant image compression standards. If that name doesn't mean anything to you - it is just ordinary JPEG. The JPEG committee - of which I'm a member - released various new standards in the past few years that provide better-performing compression schemes, more flexible image representations and smarter file formats. While some of them were adopted by industry, such as JPEG-LS (aka LOCO) and JPEG 2000 in the digital movie and medical sectors, the rest of the world still speaks the old, traditional JPEG.

    It is probably time to update the standard, and some members of the JPEG committee are looking into bringing 10918-1 up to date. New features might include lossless compression, high-bitrate compression and support for alpha channels. However, much care must be taken in extending a standard as predominant as JPEG, and one of the highest priorities in any such attempt should be backwards compatibility with existing implementations.

    What can be done? First of all, it takes a "verification model" that implements the proposed extensions. And here it is:

    https://subversion.rus.uni-stuttgart...eg/stable/0.1/

    available to the public under the GPL license for everyone to try. This is a subversion repository. Use "anonymous" as the user name and "jpeg" as the password, and you're in.

    This is a completely new implementation of 10918-1, and unlike most (or all) other codecs you might know, this implementation is complete. It features not only the DCT-based process most codecs support, but also the JPEG lossless process (yes, there is one), the hierarchical mode, arithmetic coding (the patents have run out by now), DNL markers, and 12 bits per pixel.

    As this might become a verification model for the proposed extensions, you will of course also find new features, such as lossless coding with the DCT-based process: this codec is able to encode images in a special way such that they can be reconstructed without any loss by the new codec you find here - and can still be viewed with any existing JPEG codec as well, then of course with some minimal loss.

    If you want to contribute, you are more than welcome to do so. You can either add to the program (ask me for write access to the repository) or report any bugs you find. For the latter, sign up at the bug tracker here:

    https://lila-dev.rus.uni-stuttgart.de/bugs/

    Obviously, the JPEG committee is also looking for industry to show interest in such applications and extensions, and would be happy to hear from you if this development is interesting for you. If so, please ask me for the official JPEG questionnaire on low-complexity coding, and I will forward it to the committee. Or approach the committee yourself via your national delegate. Or contact me:


    thor at math dot tu dash berlin dot de


    Greetings, and Happy JPEGing,

    Thomas


    PS: This will also show up, after a couple of days, on my IEEE Computer Society blog which you (will) find here:

    http://community.comsoc.org/blog/thorfdbg

    Please give the moderators a couple of days to approve.

  2. #2
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,134
    Thanks
    179
    Thanked 921 Times in 469 Posts
    I tried building it: http://nishi.dreamhosters.com/u/jpeg_tr01_v0.rar

    1. There was a problem with setjmp while building it in cygwin:
    autoconf.h contains #define HAVE_SETJMP_H 1,
    but setjmp.hpp said "not available" and didn't compile it, so I just commented out that #if and it worked.

    2. I'm not sure that it works right in the end - something JPEG-like is generated, but it can't be decoded.

  3. #3
    Member
    Join Date
    May 2008
    Location
    England
    Posts
    325
    Thanks
    18
    Thanked 6 Times in 5 Posts
    OT, but I still use/have most of your Amiga tools and patches running on my Amigas, Thomas

  4. #4
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    437
    Thanks
    1
    Thanked 96 Times in 57 Posts
    Quote Originally Posted by Shelwien View Post
    I tried building it: http://nishi.dreamhosters.com/u/jpeg_tr01_v0.rar

    1. There was a problem with setjmp while building it in cygwin.
    autoconf.h contains #define HAVE_SETJMP_H 1
    but setjmp.hpp said "not available" and didn't compile it, so i just commented out that #if and it worked
    Did you run configure? If so, what does your config.log say?

    Quote Originally Posted by Shelwien View Post
    2. I'm not sure that it works right in the end - something jpeg-like is generated, but can't be decoded.
    I see - possibly one of the dreadful Windows problems, and one of the reasons why I avoid this platform.

    Could you please go to cmd/main.cpp and replace the fopen(...,"w") with fopen(...,"wb") to suppress the LF conversion? If this does the trick, I'll fix the code online ASAP.
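    To illustrate the point (a hypothetical snippet, not the actual cmd/main.cpp): in text mode the Windows/cygwin C runtime expands every 0x0A byte to 0x0D 0x0A on output, which silently corrupts a binary codestream, so the output file must be opened in binary mode:

    Code:
    #include <cstdio>

    int main()
    {
        // A JPEG codestream is binary data; "wb" avoids the text-mode LF->CRLF
        // translation that the Windows/cygwin runtime applies in "w" mode.
        const unsigned char soi[2] = { 0xFF, 0xD8 };   // JPEG SOI marker
        std::FILE *out = std::fopen("test.jpg", "wb"); // "wb", not "w"
        if (!out) return 1;
        std::fwrite(soi, 1, sizeof(soi), out);
        return std::fclose(out) == 0 ? 0 : 1;
    }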

    Thanks.

  5. #5
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,134
    Thanks
    179
    Thanked 921 Times in 469 Posts
    Thanks, it works better now. Reuploaded http://nishi.dreamhosters.com/u/jpeg_tr01_v0.rar

    > Did you run configure?

    Sure, it's kinda hard to compile it at all otherwise

    > If so, what is your config.log

    http://nishi.dreamhosters.com/u/config.log
    http://nishi.dreamhosters.com/u/autoconfig.h
    Maybe my 5 gcc setups got somehow mixed up?

    > "fopen(...,"w")" with "fopen(...,"wb"))"

    Yeah, looks like it fixed the problem.

  6. #6
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    I don't understand. What's the goal? Stimulating research into new compression techniques that can be applied for generating good ol' JPEGs? Developing a rich toolset under GPL to make it easier for others to support rare JPEG features?
    Or maybe I understand "backwards compatibility" wrong - for me it's the ability of old tools to work with new files.

  7. #7
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    437
    Thanks
    1
    Thanked 96 Times in 57 Posts
    Quote Originally Posted by Shelwien View Post
    Thanks, its works better now. Reuploaded http://nishi.dreamhosters.com/u/jpeg_tr01_v0.rar

    > Did you run configure?

    Sure, its kinda hard to compile it at all otherwise

    > If so, what is your config.log

    http://nishi.dreamhosters.com/u/config.log
    http://nishi.dreamhosters.com/u/autoconfig.h
    Maybe my 5 gcc setups got somehow mixed up?
    Thanks. Apparently, setjmp is a macro on your version of cygwin. It isn't on mine. I fixed that by simply removing the test. I guess it is sufficient if longjmp exists for setjmp to exist, too.
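    For illustration, the kind of guard involved looks roughly like this (a sketch only; the names and structure of the actual setjmp.hpp and configuration header may differ):

    Code:
    // On some cygwin versions setjmp is a macro rather than a linkable function,
    // so a configure-time link test for "setjmp" can fail even though the header
    // works fine. Testing for the header, or for longjmp (which is a real
    // function), is the more robust check.
    #include "autoconfig.h"   // the generated configuration header (exact name per the build)

    #if defined(HAVE_SETJMP_H) || defined(HAVE_LONGJMP)
    # include <setjmp.h>
    #else
    # error "no setjmp/longjmp facility found"
    #endif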

    Quote Originally Posted by Shelwien View Post
    > "fopen(...,"w")" with "fopen(...,"wb"))"

    Yeah, looks like it fixed the problem.
    Thanks. I released 0.1.1 in the subversion repository to address these two problems. Everything else should work fine now.

  8. #8
    Member
    Join Date
    Jun 2009
    Location
    Kraków, Poland
    Posts
    1,471
    Thanks
    26
    Thanked 120 Times in 94 Posts
    What I would like to have is a free tool that implements the functionality from e.g. http://www.cs.tut.fi/~foi/SA-DCT/ Such transforms require some guidance from the user, e.g. deblocking strength, denoising strength, and so on. It would then be useful to standardize a metadata format that stores such info. Instead of storing one type of parameter once for the entire image, we could e.g. store a separate copy of the parameters for each macroblock. Such metadata would allow us to avoid re-encoding the quantized DCT data and yet allow for a rather nice visual improvement.
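    A sketch of what such a per-block record could look like (field names, widths and the container are all made up here, purely to make the idea concrete):

    Code:
    #include <cstdint>
    #include <vector>

    // Hypothetical post-processing hints, one record per coded block, that could
    // travel in an application marker next to the ordinary JPEG data. A decoder
    // that knows the format uses them to steer deblocking/denoising/deblurring;
    // any other decoder simply skips the marker.
    struct BlockHint {
        uint8_t deblock_strength;   // 0..255, how aggressively to deblock
        uint8_t denoise_strength;   // 0..255, how aggressively to denoise
        uint8_t deblur_strength;    // 0..255, inverse of the blur applied before encoding
        uint8_t reserved;           // padding / future use
    };

    struct PostprocHints {
        uint16_t blocks_x;             // image width in blocks
        uint16_t blocks_y;             // image height in blocks
        std::vector<BlockHint> hints;  // blocks_x * blocks_y entries, row-major
    };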

  9. #9
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    437
    Thanks
    1
    Thanked 96 Times in 57 Posts
    Quote Originally Posted by m^2 View Post
    I don't understand. What's the goal? Stimulating research of new compression techniques that can be applied for generation good ol' JPEGs? Developing a rich toolset under GPL to make it easier for others to support rare JPEG features?
    No, not really. The old features are there to have a basis for comparison. The point is not to create a new format, we have plenty already.

    Quote Originally Posted by m^2 View Post
    Or maybe I understand "backwards compatibility" wrong - for me it's the ability of old tools to work with new files.
    Yes, indeed, it is "old tools work with new files". This is exactly what this provides. The files this implementation creates *will* work with old tools, which is exactly the point. Of course it is then not 100% lossless, but you do not lose compatibility. IOW, you compress with this library, and you get a JPEG file. You can view this JPEG file with any standard JPEG tool and it will look absolutely fine, no problems. You reconstruct with *this* special code and it will come out exactly as you put it in. No losses, all pixels identical.

    So why all that:

    If you look at today's market share, you'll see that JPEG is (still) the dominant format on the market, even though it shows its age. If you look at today's cameras, vendors will sell proprietary "raw" compression formats as a "premium feature", and "JPEG" as "baseline". This basically creates a vendor lock-in problem for the customers, as they do not get the full quality with existing JPEG technology - they can have "interoperable pictures" that look "ok", or "HDR photos" that only work with a proprietary toolchain.

    So one may wonder why JPEG 2000 or JPEG XR never got much attention in this market even though they would allow wider interoperability between tools and cameras - and what can be done about this. The problem is not the compression performance (which is good), but the complexity of both formats and the absence of a migration path. Basically, when switching to another format you lose the complete JPEG toolchain from start to end, and you also lose your pictures. Bad idea.

    Thus, if you are facing the problem that you want to create (or restore) interoperability in the camera premium segment, you need to provide a format that is backwards compatible with what the consumers already have. And not a new one - this is important!

    Thus, this implementation: to check whether there is any interest in such a thing, and if so, to trigger a standardization initiative in this direction. As I might have said, I am a JPEG member, so it is not too absurd that this might be picked up if there is enough interest in it. If not, and people are fine with proprietary raw formats - then I'm wrong and at worst I wasted my own time. Thus - test interest. Tell me if you need such a thing, and test it.

  10. #10
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    437
    Thanks
    1
    Thanked 96 Times in 57 Posts
    Quote Originally Posted by Piotr Tarsa View Post
    What I would like to have is a free tool that implements the functionalities from eg: http://www.cs.tut.fi/~foi/SA-DCT/ Such transforms require some guidance from user, eg deblocking strength, denoising strength, and so on. It would be then useful to standarize a metadata format that stores such info. Instead of storing one type of parameter once for entire image we could eg store separate parameters copy for each macroblock. Such metadata would allow to avoid reencoding of quantized DCT data and yet allow for rather nice visual improvement.
    Oh, no problem with that per se - I'm doing research in image compression all the way. Though that's not quite what this project is about. From a pure "technology point of view", this implementation is rather boring as there is not really anything novel in it. However, the point is that you can create files with *new features* - namely lossless DCT - that are still compatible with all the software around in the world.

    Or to put it a different way: by just creating a new cool standard, you still don't get people to jump on it just because it compresses so nicely. I believe JPEG 2000 and XR demonstrated this without any doubt. Any future standard we do should address the needs, the toolchain and the products we already have in the market.

  11. #11
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    437
    Thanks
    1
    Thanked 96 Times in 57 Posts

  12. #12
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,134
    Thanks
    179
    Thanked 921 Times in 469 Posts
    What about projects like http://www.elektronik.htw-aalen.de/packjpg/ ?
    Do you think there's a way to integrate that too somehow?

  13. #13
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    437
    Thanks
    1
    Thanked 96 Times in 57 Posts
    Quote Originally Posted by Shelwien View Post
    What about projects like http://www.elektronik.htw-aalen.de/packjpg/ ? Do you think there's a way to integrate that too somehow?
    Yes, I know packjpg of course, and it works pretty nicely. There is also a similar solution from StuffIt. However, while technically surely brilliant, all these solutions create codestreams that existing codecs do not understand; they are not backwards compatible. And they are relatively complex, probably too complex for camera integration.

  14. #14
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,134
    Thanks
    179
    Thanked 921 Times in 469 Posts
    > Yes, I know packjpg of course, and it works pretty nicely.
    > There is also a similar solution from StuffIt.

    There're quite a few actually, even I made one, see
    http://encode.ru/threads/1243-Open-s...EG-compressors
    Though actually the best compression is provided by paq models;
    I'm not certain which one is the best, maybe paq8px, as
    there're a few different implementations too.

    > However, while technically surely brilliant, all these solutions
    > create codestreams that existing codecs do not understand, they are
    > not backwards compatible.

    Yeah, but compared to modern entropy coders, jpeg has 10-20% of overhead,
    so keeping the main chunk of information in old jpeg code while
    adding extended records doesn't seem so good.

    Also imho jpeg-ari isn't much more compatible with existing software
    than packjpg.

    So the question is whether it's possible to create a new popular format by
    relying on jpeg for the psychovisual parts, or would that cause some legal issues?
    Would there be any similar backward-compatibility tricks applicable in such a case?

    And btw, what do you think about rarvm/zpaql approach to format extension?
    (providing a VM for future extensions and adding decoder code to files).

    > And they are relatively complex, probably too complex for a camera integration.

    Imho the baseline jpeg is actually much harder to parse/decode than most
    "homemade" formats with better compression.
    Also modern cameras can encode h264, so I doubt that any image coder
    would be too "heavy".

  15. #15
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    437
    Thanks
    1
    Thanked 96 Times in 57 Posts
    Quote Originally Posted by Shelwien View Post
    There're quite a few actually, even I made one, see http://encode.ru/threads/1243-Open-s...EG-compressors Though actually the best compression is provided by paq models, I'm not certain which is the best one though, maybe paq8px, as there're a few different implementations too.
    You are talking about JPEG post-compressors, is that right?
    Quote Originally Posted by Shelwien View Post
    Yeah, but comparing to modern entropy coders, jpeg has 10-20% of overhead, so keeping the main chunk of information in old jpeg code while adding extended records doesn't seem so good.
    Quite certainly. This is why we had more modern formats like JPEG 2000. Scientifically, JPEG is pretty much outdated. You are absolutely correct that if you classify codecs according to their compression performance, such an idea will not perform very well. The performance you get from the codec posted here is about as good as what you get from PNG, and PNG is even more "stupid" as far as compression is concerned. (LZ is not a good model for image data, and PNG lacks energy compaction.)
    Quote Originally Posted by Shelwien View Post
    Also imho jpeg-ari isn't much more compatible with existing software than packjpg.
    Indeed, indeed. It is at least part of the 10918 standard, even though it has never been adopted by the industry - and thus even nowadays it hasn't found its use. Thus, it is rather superfluous. It is supported by my codec because I wanted to have full 10918-1 compliance, as a point to benchmark against, but more out of scientific interest than out of necessity for the purpose of this code.
    Quote Originally Posted by Shelwien View Post
    So the question is whether its possible to create a new popular format by relying on jpeg for psychovisual parts, or would it cause some legal issues?
    No, I don't think there are any legal issues nowadays any more - at least as far as the JPEG (Huffman) codestream is concerned. The question is not so much whether one can create a new format - we did in the past and the results were JPEG 2000 and JPEG XR. The former has found some use (digital cinema, medical), the latter found none. Thus, the problem is not to define a new format - the problem is to provide a migration path for the industry and the customers.

    I also don't believe that raw compression performance is worth the money nowadays - not any more. If the images are too big - buy a bigger disk. Problem solved. Storage is too cheap today to justify the waste of CPU power.

    However, we *do* have a problem in digital photography: the representation of sensor data ("digital negatives"). There are a couple of initiatives, DNG by Adobe for example. HDPhoto had this as well in a sense (JPEG XR no more). Unfortunately, these formats also require a new workflow, browsers do not show them, they are rarely standardized, and they are not backwards compatible. Neither does the camera industry pick the idea up. They prefer proprietary "something" formats that lock the customers into their product line. And the customer is required to recode images or lose his collection once a product line is obsoleted. Not a good solution.
    Quote Originally Posted by Shelwien View Post
    Would there be any similar backward-compatibility tricks applicable in such a case?
    Maybe - one can always place a JPEG in the codestream and all the extended data in the application markers. Of course, this also means that the compression performance will go down. Other paths improve JPEG by playing tricks with the quantizer and entropy coder, DCTune by Watson being the first step in that direction, and now JPEGMini being another. I personally also did quite some research work on this for JPEG 2000 and JPEG XR that in one way or another found its way into the Accusoft product line. Nevertheless, without playing any further tricks, this doesn't solve the problems I mentioned above: HDR, lossless.
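    To make the "extended data in application markers" remark concrete, here is a sketch of emitting such a segment (the marker number and payload layout are invented for illustration and are not what this codec actually writes):

    Code:
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    // Legacy decoders skip unknown APPn segments, so a file carrying extension
    // data this way still decodes as a plain JPEG. The segment length field is
    // 16 bits and counts itself, so a single segment holds at most 65533 payload
    // bytes; larger payloads must be split across several segments.
    static void write_appn_segment(std::FILE *out, int n,
                                   const uint8_t *payload, size_t len)
    {
        size_t seglen = len + 2;                   // length field includes itself
        std::fputc(0xFF, out);
        std::fputc(0xE0 + n, out);                 // APPn marker, n = 0..15
        std::fputc(int((seglen >> 8) & 0xFF), out);
        std::fputc(int(seglen & 0xFF), out);
        std::fwrite(payload, 1, len, out);
    }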
    Quote Originally Posted by Shelwien View Post
    And btw, what do you think about rarvm/zpaql approach to format extension? (providing a VM for future extensions and adding decoder code to files).
    Sorry, I don't quite follow you. What do you mean by that? Include a p-code in the codestream that basically represents a decoder for the data within the stream? Is that what you mean?
    Quote Originally Posted by Shelwien View Post
    > And they are relatively complex, probably too complex for a camera integration. Imho actually the baseline jpeg is much harder to parse/decode than most "homemade" formats with better compression.
    Maybe, but then I wonder why people always complain about JPEG 2000's complexity. Actually, my measurements show that a good JPEG 2000 implementation can outperform the "low complexity" JPEG XR.
    Quote Originally Posted by Shelwien View Post
    Also modern cameras can encode h264, so I doubt that any image coders would be too "heavy".
    The reason might be that you can get a JPEG IP core for a couple of cents. There is no "cheap" IP core for JPEG 2000, and none for JPEG XR either. Yes, I know MegaChips has such a core, but I haven't seen it used. I believe XR is more or less dead by now. So much money went into MPEG that H.264 is now readily available as a "by-product", though it is not licence-fee-free, and thus incompatible with the JPEG licensing practice (baseline for free). I also tried a while ago to push H.264 I-frame compression as an image compression format (chips would be there, and it performs quite well), though that also went nowhere. I believe the situation is currently "either it's JPEG compatible or it is uninteresting". Probably I'm wrong about this as well; only time will tell.

  16. #16
    Programmer schnaader's Avatar
    Join Date
    May 2008
    Location
    Hessen, Germany
    Posts
    539
    Thanks
    192
    Thanked 174 Times in 81 Posts
    Quote Originally Posted by thorfdbg View Post
    Quote Originally Posted by Shelwien
    And btw, what do you think about rarvm/zpaql approach to format extension? (providing a VM for future extensions and adding decoder code to files).
    Sorry, I don't quite follow you. What do you mean by that? Include a p-code in the codestream that basically represents a decoder for the data within the stream? Is that what you mean?
    Shelwien is talking about using virtual machines for compression/decompression adjusted to certain kinds of data. The code for decompression is stored in the archive, so you don't have to update the compressor itself while still getting better compression from the specialized algorithms. For example, there's a ZPAQ file that demonstrates this: it compresses 1 million digits of pi to 114 bytes by storing the VM code for pi calculation inside the archive.

    Obviously, something like this would be very useful for image compression and would lift image files to the next level, giving them capabilities like e.g. SVG, but even more powerful because they aren't limited to a handful of operations like painting primitives/drawing text. Images especially have operations that can be represented much more compactly by storing the code and the original image instead of storing the processed image. Also, the integration into most image editing software would be quite easy, because they already have their own formats that implement similar things without a VM (storing multiple image levels, storing lossless operations).
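    A minimal sketch of the dispatch mechanism (all interfaces are invented here, just to illustrate the idea; real RAR/ZPAQ containers look different):

    Code:
    #include <cstdint>
    #include <functional>
    #include <map>
    #include <string>
    #include <vector>

    using Bytes   = std::vector<uint8_t>;
    using Decoder = std::function<Bytes(const Bytes&)>;

    // Each stream names its decoder and may carry a bytecode fallback.
    struct Stream {
        std::string decoder_id;   // e.g. "jpeg-huffman", "custom-model-v3"
        Bytes       vm_code;      // decoder program for the VM (may be empty)
        Bytes       payload;      // the compressed data itself
    };

    // Placeholder: a real implementation interprets the bytecode in a sandbox.
    static Bytes run_in_vm(const Bytes& /*code*/, const Bytes& data)
    {
        return data;
    }

    // Known decoders run as fast precompiled handlers; unknown ones fall back to
    // the bytecode shipped inside the file, so old readers can still decode new
    // variants, only more slowly.
    static Bytes decode(const Stream& s, const std::map<std::string, Decoder>& native)
    {
        auto it = native.find(s.decoder_id);
        if (it != native.end())
            return it->second(s.payload);
        return run_in_vm(s.vm_code, s.payload);
    }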

  17. #17
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,134
    Thanks
    179
    Thanked 921 Times in 469 Posts
    > You are talking about JPEG-postcompressors, is this right?

    We usually call them recompressors - coders that undo the format's entropy coding
    and encode the data again with a better method.
    For jpeg there're quite a few independent implementations already; even winzip has one.
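    Roughly, the structure of such a recompressor looks like this (a sketch with placeholder declarations, not any particular tool's API):

    Code:
    #include <cstdint>
    #include <vector>

    // A recompressor peels off the format's own entropy coding, stores the raw
    // symbols with a stronger coder, and reverses the process bit-exactly.
    struct JpegSymbols {
        std::vector<int16_t> coefficients;  // quantized DCT coefficients, as coded
        std::vector<uint8_t> headers;       // tables, markers, metadata, padding info
    };

    // Declarations only - the hard part is making redo_huffman() reproduce the
    // original bitstream byte for byte, including restart markers and stuffing.
    JpegSymbols          undo_huffman(const std::vector<uint8_t>& jpeg_file);
    std::vector<uint8_t> strong_encode(const JpegSymbols& s);   // e.g. context model + arithmetic coder
    JpegSymbols          strong_decode(const std::vector<uint8_t>& packed);
    std::vector<uint8_t> redo_huffman(const JpegSymbols& s);    // must be bit-exact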

    > This is why we had more modern formats like JPEG 2000.

    Are they really modern though? Is there any such format with full-precision entropy coding?
    (which means not huffman and not q/qm/mq/cabac)
    Otherwise there's 5-10% of overhead just from that.
    Well, at least .djvu uses something more reasonable.

    > as good as what you get from PNG

    I suppose efficient coding of a lossless image in the context of the lossy jpeg output
    won't actually be so hard.
    First we'd have to design a good model to work without such contexts though.
    Compression-wise, we have significantly better lossless image formats than
    any of the standards afaik, but unfortunately they're still too simple, eg.
    http://encode.ru/threads/1195-Using-...age-similarity

    > No, I don't think there are any legal issues nowadays any more

    I'm talking about a hybrid where we take the data transformations from jpeg,
    but use packjpg for entropy coding. Won't there still be some copyright issues?

    > to provide a migration path for the industry and the customers

    Actually packjpg is relatively good in that sense.
    It can be losslessly converted to jpeg, so as an intermediate solution
    we can store/transmit images in .pjg but convert to jpeg for processing.

    > don't believe that raw compression performance is worth the money nowadays

    Well, with photos at max resolution, any SD card always seems too small.
    The internet doesn't get any better either - in fact wireless/cell connections
    are getting a larger share, so transfer speed/price only gets worse.
    And there're all the small devices with limited storage.

    And on the other side, there're all kinds of filehostings for which saving 20%
    of space on 100TB of jpegs is not so useless.

    > Neither does the camera industry pick the idea up

    Well, "raw" formats on cameras are usually hardware-specific, so its not
    all about locking the customers.

    > Include a p-code in the codestream that basically represents a decoder for the data within the stream?

    Yes, but code for a specialized VM, not a universal one like java.
    Winrar uses this method to implement backward-compatible preprocessors,
    while zpaq additionally supports custom context models.
    Then files with already known decoders are processed by precompiled handlers,
    and for newer files it's necessary to run the decoder in the VM, but the files
    can still be decoded.

    > I also tried a while ago to push H.264 I-frame compression as an
    > image compression format, though that also went nowhere

    Well, there's
    https://developers.google.com/speed/webp/ (though it's not h264)
    http://encode.ru/threads/325-UCI-Image-Compression
    so maybe it's just not too visible.

  18. #18
    Member
    Join Date
    Jun 2009
    Location
    Kraków, Poland
    Posts
    1,471
    Thanks
    26
    Thanked 120 Times in 94 Posts
    Quote Originally Posted by thorfdbg View Post
    Oh, no problem with that per se - I'm doing research in image compression all the way. Though that's not quite what this project is about. From a pure "technology point of view", this implementation is rather boring as there is not really anything novel in it. However, the point is that you can create files with *new features* - namely lossless DCT - that are still compatible to all the software around in the world.

    Or to put it in a different way: By just creating a new cool standard, you still don't get people to jump on it just because it compresses so nice. I believe JPEG 2000 and XR demonstrated this without any doubt. Any future standard we do should address the needs, the toolchain and the products we already have in the market.
    I referred to that project because it's a very good (IMO) algorithm for reducing JPEG artifacts (one that removes most artifacts while preserving most of the details). Some encoders blur the image before compression; probably that improves the compression/quality ratio. What I had in mind is adding a little metadata telling how strong a blur effect was applied (so the decoder would know how strong a deblurring filter it should apply) and how strong the artifacts are expected to be.

    SA-DCT does well even without additional metadata, so having something like that in the standard JPEG library should make such algorithms widespread, and thus many programs that display JPEGs could offer visually nicer images.


    Is the lossless mode really backward compatible? JPEG uses an RGB to YCbCr transform which is itself lossy, so how could you achieve losslessness?

  19. #19
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    437
    Thanks
    1
    Thanked 96 Times in 57 Posts
    Quote Originally Posted by Shelwien View Post
    > This is why we had more modern formats like JPEG 2000. Are they really modern though? Is there any such format with full-precision entropy coding? (which means not huffman and not q/qm/mq/cabac) Otherwise there's 5-10% of overhead just from that.
    First of all, no, this is not Huffman, it is MQ coding. The coding overhead of the approximate AC coding is, I believe, well studied, and I seem to remember that it is well below that ratio. But that is not quite the point - JPEG 2000 made a couple of compromises and clearly loses some coding performance by not exploiting certain data redundancies - as for example inter-band correlations. The advantage is that you also gain flexibility, here easy transcoding and a posteriori rate allocation (Tier-3 EBCOT coding). I'm not claiming that you cannot code better - you can. But you also lose features.
    Quote Originally Posted by Shelwien View Post
    > as good as what you get from PNG I suppose efficient coding of lossless image in context of lossy jpeg output won't be so hard actually. First we'd have to design a good model to work without such contexts though. Compression-wise, we have significantly better lossless image formats than any of standards afaik, but unfortunately they're still too simple, eg. http://encode.ru/threads/1195-Using-...age-similarity
    Certainly so - but are they backwards compatible? I don't think so. But anyhow, maybe you want to try - I'm really inviting people; that's why the source has been released. The current method(s) of lossless coding are pretty naive, admittedly. You can likely do better, I don't have much doubt about it. But it first means that somebody has to do it. (-: For example, the current model for residual coding is quite primitive, and the simplest I could come up with that shows some improvement over raw coding. Yet, that data is almost Gaussian, so it's not *quite* so easy. You could try to model the conditioned PDF, that might work for example. I haven't tried yet.
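    To make the "model the conditioned PDF" remark concrete, here is a toy calculation on purely synthetic data (it has nothing to do with the actual residual coder in this repository): when the residual scale varies locally, conditioning a Laplacian model on a local activity estimate costs fewer bits per sample than one global distribution.

    Code:
    #include <cmath>
    #include <cstdio>
    #include <cstdlib>
    #include <vector>

    // Ideal code length (bits/sample) of zero-mean Laplacian data coded with a
    // Laplacian model whose scale equals the data's mean absolute value m:
    // log2(2*e*m), a reasonable approximation for m around 1 and above.
    static double laplacian_bits(double m)
    {
        const double e = 2.718281828459045;
        return std::log2(2.0 * e * (m + 1e-9));
    }

    int main()
    {
        std::srand(42);
        const int n = 1 << 16, block = 256;
        std::vector<double> res(n);
        // Synthetic residuals: the scale alternates between "flat" and "busy" regions.
        for (int i = 0; i < n; i++) {
            double scale = ((i / block) % 2) ? 8.0 : 1.0;
            double u = (std::rand() + 0.5) / (RAND_MAX + 1.0);
            res[i] = scale * ((u < 0.5) ? std::log(2.0 * u) : -std::log(2.0 * (1.0 - u)));
        }
        // One global model versus one model per block-sized context.
        double global_m = 0.0, cond_bits = 0.0;
        for (double r : res) global_m += std::fabs(r);
        global_m /= n;
        for (int b = 0; b < n / block; b++) {
            double m = 0.0;
            for (int i = 0; i < block; i++) m += std::fabs(res[b * block + i]);
            cond_bits += block * laplacian_bits(m / block);
        }
        std::printf("global model     : %.2f bits/sample\n", laplacian_bits(global_m));
        std::printf("conditioned model: %.2f bits/sample\n", cond_bits / n);
        return 0;
    }

    In a real coder the conditioning context would of course have to be computable from already-decoded data, so the gain shown by this toy is optimistic.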
    Quote Originally Posted by Shelwien View Post
    > No, I don't think there are any legal issues nowadays any more I'm talking about a hybrid where we take data transformations from jpeg, but use packjpg for entropy coding. Won't there be still some copyright issues?
    For such a code it depends on the entropy coding then. There are surely some variants that are patented, see for example CABAC and friends from H.264. However, even at the risk of sounding like a broken record, it wouldn't be JPEG anymore.
    Quote Originally Posted by Shelwien View Post
    > to provide a migration path for the industry and the customers Actually packjpg is relatively good in that sense. It can be losslessy converted to jpeg, so as an intermediate solution we can store/transmit images in .pjg but convert to jpeg for processing.
    Just consider the typical workflow: I would take a picture with my camera - and then would need an additional tool to put it on the web? Well, we could have that already, actually: take the picture in JPEG 2000, then transcode to JPEG. For this purpose, the minimal loss caused by such transcoding would surely be irrelevant, and for one-time viewing it would be fine, as you keep the original anyhow. So what exactly is gained by packjpg? My opinion aside, I had the chance to talk to the StuffIt folks at the DCC a couple of years ago. The business model there was - not quite unreasonably - that it would at least be great for image archival. However, they had huge problems taking advantage of this, i.e. selling it to people. In other words, I don't believe that this is a convincing argument for the average user. It is an additional complication. Images should be viewable immediately by whatever is on the system. And what is on the system does speak JPEG, BMP and maybe TIFF. And that's it.
    Quote Originally Posted by Shelwien View Post
    > don't believe that raw compression performance is worth the money nowadays Well, with photos at max resolution, any SD cards always seem too small. Internet doesn't get any better either - in fact wireless/cell connections are getting larger share, so transfer speed/price only becomes worse. There're all the small devices with limited storage.
    Ok, so let's have a look. Let's say I take pictures in really good quality, and I can fit - for the sake of the argument - 100 of them on an SD card. Let's say I can compress 10% better. Now it's 110 images. Would I really care, as a customer, if I could go to the next drugstore and buy an SD card twice the size for little money? JPEG did, back then, indeed make a huge difference since it reduced the size of the images by a factor of more than two, typically a factor of ten. But we're beyond that point - the market niche is taken, and the return on investment for raw compression performance is minimal.
    Quote Originally Posted by Shelwien View Post
    And on other side, there're all kinds of filehostings for which saving 20% of space on 100TB of jpegs is not so useless.
    For them, yes, they could use some transparent compression - as long as it goes in and out as JPEG - fine. However, that's a pretty specialized small market, I'd say.
    Quote Originally Posted by Shelwien View Post
    > Neither does the camera industry pick the idea up Well, "raw" formats on cameras are usually hardware-specific, so its not all about locking the customers.
    Not quite so. Actually, what is sold as "raw" is hardly ever "unprocessed". You do not get the sensor outputs, as the vendors might have promised. Typically - but I do not know all the formats - quite some processing has already been done. What you get is typically still Bayer pattern images with higher bit depths, but not at all as unprocessed as you might believe. That, however, could be fairly well represented in a standard way. Actually, DNG does that.
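    For reference, "Bayer pattern images" here means a colour filter array mosaic with one colour sample per pixel position; a sketch (the RGGB layout below is just one common arrangement, not any particular camera's):

    Code:
    #include <cstdint>

    enum class CfaColor : uint8_t { R, G, B };

    // Colour of the sample at (x, y) for an RGGB Bayer mosaic:
    //   R G R G ...
    //   G B G B ...
    // "Raw" files typically store this mosaic at 12-14 bits per sample, after
    // some in-camera processing, rather than a fully demosaiced RGB image.
    static CfaColor bayer_color(int x, int y)
    {
        if ((y & 1) == 0)
            return ((x & 1) == 0) ? CfaColor::R : CfaColor::G;
        return ((x & 1) == 0) ? CfaColor::G : CfaColor::B;
    }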
    Quote Originally Posted by Shelwien View Post
    > I also tried a while ago to push H.264 I-frame compression as an > image compression format, though that also went nowhere Well, there's https://developers.google.com/speed/webp/ (though its not h264) http://encode.ru/threads/325-UCI-Image-Compression so maybe its just not too visible.
    I know WebP, though I don't get the point here. It compresses quite well at low bitrates, but loses quite a lot at higher quality, so it is certainly unsuitable for the semi-professional market. Besides, its complexity is overwhelming - much higher than JPEG 2000. I'm not quite sure whether I have had UCI in my hands yet, but I regularly pick up formats and run them through the "standard JPEG test suite" (JPEG as in - the JPEG group).

    While all those developments are quite interesting, I have quite some doubt whether anyone will pick them up. Microsoft pushed HDPhoto aka JPEG XR, and - I believe one can say that after three years - failed. Google pushed WebP - and how visible is it? My browser certainly does not speak WebP, and I don't miss it on the internet. So maybe we're talking about two different things: scientific development - which I surely support and consider interesting (and am doing myself) - and adoption in the digital photography market. I believe, at least after being in this business for quite a while, that anything but JPEG-compatible formats wouldn't have a chance there. At least this is my experience from the last ten years.

  20. #20
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    437
    Thanks
    1
    Thanked 96 Times in 57 Posts
    Quote Originally Posted by Piotr Tarsa View Post
    I've referred to that project because it's a very good (IMO) algorithm for reducing JPEG artifacts (one that removes most artifacts while preserving most of details). Some encoders blur the image before compression, probably that improves compression/quality ratio. What I had in mind is adding a little metadata telling how strong blur effect was applied (so decoder would know how strong deblurring filter should apply) and how strong artifacts should be expected. SA-DCT does well even without additional metadata, so having something like that in standard JPEG library should make such algorithm widespread, thus many programs that display JPEGs could offer visually nicer images.
    Oh, I see, so I didn't get the point. Yes, that certainly is interesting then. IOW, you get better quality by applying the metadata in the format, and standard quality without. Yes, indeed, thanks for the idea, that is definitely nice and I should have a look.
    Quote Originally Posted by Piotr Tarsa View Post
    Is lossless mode really backward compatible? JPEG uses RGB to YCbCr transform which itself is lossy, so how could you achieve losslessness?
    Grin. Thanks for the smart question. Yes, lossless is really lossless - PSNR = infinity. There are actually two methods for lossless compression:

    a) is an int-to-int DCT combined with the Adobe RGB marker. That is, the image is actually represented in RGB space, not in YCbCr space. However, it is a standard marker that is honoured by all implementations (and, actually, we even standardized that lately), so you're fine.

    b) is the residual coding method. Here we have a standard constrained YCbCr implementation with all constants explicitly spelled out, and a standard implementation of the DCT, also spelled out explicitly. You code the DCT output with standard JPEG Huffman, and the residual to the original image *in RGB space* in a side channel (see again the code). By applying the rigid backwards transformation and the colour transformation, using exactly the constants as defined by the code, and by then adding the residuals, you get back the original picture 1:1. Of course that means that you need to specify the transformations exactly, which removes some of the implementation freedom in the current JPEG. One could, for example, use the MPEG-2 specified DCT here (though mine is likely different).

    Anyhow, surprisingly, b) works better than a), though it is certainly less elegant from an engineering point of view.
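    A minimal sketch of method b) (all interfaces are invented for illustration; the point of the standardization work is precisely that the real encoder/decoder pair, colour transform and DCT are pinned down exactly):

    Code:
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct Image { int w = 0, h = 0; std::vector<uint8_t> rgb; };  // 3 bytes/pixel

    // Placeholders for the codec itself: a fully constrained encoder and a
    // bit-exact decoder using the same fixed YCbCr constants and DCT.
    std::vector<uint8_t> encode_constrained_jpeg(const Image& img);
    Image                decode_exact(const std::vector<uint8_t>& jpeg);

    struct LosslessFile {
        std::vector<uint8_t> jpeg;      // viewable by any legacy JPEG decoder
        std::vector<int16_t> residual;  // per-sample difference in RGB space,
                                        // carried in a side channel
    };

    LosslessFile encode_lossless(const Image& img)
    {
        LosslessFile f;
        f.jpeg = encode_constrained_jpeg(img);
        Image lossy = decode_exact(f.jpeg);          // exactly what decoders see
        f.residual.resize(img.rgb.size());
        for (size_t i = 0; i < img.rgb.size(); i++)
            f.residual[i] = int16_t(int(img.rgb[i]) - int(lossy.rgb[i]));
        return f;
    }

    Image decode_lossless(const LosslessFile& f)
    {
        Image img = decode_exact(f.jpeg);            // same exact inverse transforms
        for (size_t i = 0; i < img.rgb.size(); i++)  // add back the residual
            img.rgb[i] = uint8_t(int(img.rgb[i]) + int(f.residual[i]));
        return img;                                  // bit-exact original
    }

    A legacy viewer simply decodes the embedded JPEG and ignores the side channel, which is what makes the scheme backwards compatible.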

  21. #21
    Member Karhunen's Avatar
    Join Date
    Dec 2011
    Location
    USA
    Posts
    91
    Thanks
    2
    Thanked 1 Time in 1 Post
    I also wondered about an H.264 still image format, and since I couldn't find a standard that performed well && was consistent, I just used VirtualDubMod with the VFW h264 codec. I don't use lossless, since the CCD chip is lossy to begin with, but many similar images seem to compress better at a given quality factor and at decent speed. And I use ffmpeg to get back the original frames. Works for me. BTW thanks Bulat for FreeArc's wrapper for Packjpg.dll, which preserves file dates when I do want the originals back.

  22. #22
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    437
    Thanks
    1
    Thanked 96 Times in 57 Posts
    Quote Originally Posted by Karhunen View Post
    I also wondered about a(n) H264 still image format, and since I couldn't find a standard that performed well && was consistent, I just used Virtualdubmod with the VFW h264 codec.
    One of the problems I see here is the incompatibility between the licensing policies of MPEG and JPEG. H.264 typically requires access to patents which are managed by the MPEG LA; JPEG, however, always grants free access to its baseline standards. It thus also depends on MPEG whether such a standard would be feasible.
    Quote Originally Posted by Karhunen View Post
    I don't use lossless, since the CCD chip is lossy to begin with, but many similar images seem to compress better quality factor and at decent speed. And I use ffmpeg to getr back the original frames. Works for me. BTW thanks Bulat for FreeArc's wrapper for Packjpg.dll which preserves filedates when I do want the originals back.
    Then you're probably the exception. Most of the semi-professional photographers I talked to avoid loss at all cost - probably because they do not understand the impact of compression, and that they don't get the "original" anyhow. I believe that this is probably a market point for such a technology.

  23. #23
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by thorfdbg View Post
    One of the problems I see here is the incompatibility of the licensing policy of the MPEG and the JPEG. H.264 typically requires access to patents which are managed by the MPEG LA, JPEG however always grants free access to its baseline standards. It thus also depends on MPEG whether such a standard would be feasible.
    Webp?

  24. #24
    Member
    Join Date
    May 2007
    Location
    Poland
    Posts
    85
    Thanks
    8
    Thanked 3 Times in 3 Posts
    I would like DLI to become JPEG 2 or JPEG Next or w/e. It is so good.

  25. #25
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    437
    Thanks
    1
    Thanked 96 Times in 57 Posts
    Quote Originally Posted by m^2 View Post
    Webp?
    H.264 and JPEG have hardware support - there are vendors that provide equipment that speaks these formats. For WebP there is really nothing. Besides, I'm not really convinced by WebP. Quite good for low quality, but high complexity, more complex than JPEG 2000. Pretty bad for higher bitrates.

  26. #26
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    437
    Thanks
    1
    Thanked 96 Times in 57 Posts
    Quote Originally Posted by jethro View Post
    i would like DLI to become JPEG 2 or Jpeg Next or w/e. It is so good.
    Quite good, indeed, I agree. But again, see above - I don't see that vendors would jump on a new, incompatible format for "just" a compression improvement of probably 10% to 20%. They didn't do that for JPEG 2000, nor did they do it for JPEG XR. I know, 20% means a lot as far as compression science goes, but it is little as far as "consumer experience" goes. Half the size - that would make a difference. Before you mention it - yes, I know that this is unrealistic. (-: Besides, DLI is really quite demanding as far as CPU power is concerned, so it is probably nothing for embedded platforms - plus the authors would need to release it. Remember: JPEG means that baseline technology must be provided license-fee-free.

  27. #27
    Member
    Join Date
    May 2007
    Location
    Poland
    Posts
    85
    Thanks
    8
    Thanked 3 Times in 3 Posts
    Quote Originally Posted by thorfdbg View Post
    Quite good, indeed, I agree. But again, see above - I don't see that vendors would jump on a new incompatible format for "just" a compression improvement of probably 10% to 20%. They didn't do that for JPEG 2000, nor did they do it for JPEG XR. I know, 20% means much as far as compression science goes, but it is little as far as "consumer experience" goes. Half the size - this would make a difference. Before you mention - yes, I know that this is unrealistic. (-: Besides, DL is really quite demanding as far as the CPU power is concerned, so probably nothing for embedded platforms - plus the authors would need to release it. Remember: JPEG means that baseline technology must be provided license-fee-free.
    I imagine it is much better than a 20% improvement over JPEG. Check again. It is arguably the best lossy image codec, so being a bit more demanding than JPEG is not a surprise. Also, it is still probably not very optimized for speed and such. But of course you are right on the points you mentioned. The thing is, however, that JPEG 2000 and especially JPEG XR were hardly an improvement over JPEG, so when people see the actual visual improvement in picture quality with their own eyes they may start using the new format. See video codecs or music codecs.
    I think with all the research improvements in visual compression, the world may be ready for a new lossy picture codec.

  28. #28
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by thorfdbg View Post
    H.264 and JPEG have hardware support - there are vendors that provide equipment that speak such formats. For WebP - there is really nothing. Besides, I'm not really convinced of WebP. Quite good for low quality, but high complexity, more complex than JPEG 2000. Pretty bad for higher bitrates.
    Google does VP8 hardware. I think WebP is based on this?

  29. #29
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    437
    Thanks
    1
    Thanked 96 Times in 57 Posts
    Quote Originally Posted by jethro View Post
    I imagine it is much better than 20% of improvement over JPEG. Check again . It is arguably the best lossy image codec so being a bit more demanding than JPEG is not a surprise. Also it is still probably not very optimized for speed and such. But of course you are right on the points you mentioned. The thing is, however Jpeg 2k and especially JPEG-XR were hardly an improvement over JPEG, so when people see the actual visual improvement in picture quality with their own eyes they may start using the new format.
    It is really not that much better compared to JPEG 2000, at least according to my last test - getting a lot of improvement in this field is quite hard. JPEG 2000 actually was an improvement, not only in quality, but also in features, as it addressed many deficiencies JPEG had. "A bit more demanding" is a bit of a euphemism. (-: Anyhow, we have an open call for new technology at the JPEG, so if anyone wants to contribute, we welcome new technology.

  30. #30
    Member Karhunen's Avatar
    Join Date
    Dec 2011
    Location
    USA
    Posts
    91
    Thanks
    2
    Thanked 1 Time in 1 Post
    Anyone have comments on the comparative encode/decode speed of JPEG XR / JPEG 2000 / WebP? I find that JPEG XR/2000 encode quickly enough, but are slow to decode. The opposite is true of WebP; its encoders need a lot of work - I think support for SIMD is coming, but I'm not sure where I saw that.


