
Thread: JPEG XT new reference software, online test updated

  1. #1
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    437
    Thanks
    1
    Thanked 96 Times in 57 Posts

    JPEG XT new reference software, online test updated

    Hi folks,

    a new version of the reference software for JPEG XT is available for download at http://www.jpeg.org/jpegxt/software.html . This new software matches the latest "final draft" version of the XT specs and also includes a couple of new options and features to improve the coding performance. Most notably, you can select the quantization matrix and enable an optimized deadzone quantizer.
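    For readers who have not met the term: a deadzone quantizer simply widens the zero bin so that small, mostly noisy coefficients are dropped. A minimal sketch in C, not the reference encoder's actual code (the function name and the rounding parameter are purely illustrative):

        #include <math.h>

        /* Quantize one DCT coefficient with bucket size q taken from the
         * quantization matrix.  rounding = 0.5 gives plain rounding; smaller
         * values widen the zero bin ("deadzone") and drop small coefficients,
         * which usually improves the rate/distortion trade-off.              */
        static int quantize_deadzone(double coeff, double q, double rounding)
        {
            double t = fabs(coeff) / q + rounding;
            int    v = (int)t;               /* truncation = floor for t >= 0 */
            return (coeff < 0.0) ? -v : v;
        }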

    A further enhanced experimental version of the same software is available for testing at the jpeg online test facility at http://jpegonline.rus.uni-stuttgart.de/ . The version presented there adds some features from mozjpeg, most notably the trellis quantizer and a (completely different, though) encoder-side deblocking filter.

    I also added three new images, compound1, compound2 and otto2, which are good tests for the deringing filter. These images are mixed text/graphics content and are hence typical victims of the Gibbs ringing phenomenon, which is filtered away by the new option in the experimental software.

    Have fun testing!

  2. #2
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    645
    Thanks
    205
    Thanked 196 Times in 119 Posts
    hi Thomas,
    have you maybe considered upgrading Huffman coding with tANS (like zhuff, lzturbo, ZSTD, Apple LZFSE) or rANS (like LZA, CRAM 3.0, Oodle LZNA)?

  3. #3
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    437
    Thanks
    1
    Thanked 96 Times in 57 Posts
    Quote Originally Posted by Jarek View Post
    hi Thomas, have you maybe considered upgrading Huffman coding with tANS (like zhuff, lzturbo, ZSTD, Apple LZFSE) or rANS (like LZA, CRAM 3.0, Oodle LZNA)?
    I'm afraid not. One of the requirements (i.e. design goals) of JPEG XT is backwards compatibility (or rather, forwards compatibility, to be precise), which means that old software should be able to decode new images (though not necessarily with all features); another is to reuse existing JPEG technology as much as possible, to keep the cost of hardware implementations low. That pretty much restricts the choice of entropy coding to Huffman. However, if you are interested, there is a separate activity in JPEG, namely jpeg-innovations, where we look for advanced image codecs that go beyond the currently existing technology. If you want to, submit ideas there.

  4. #4
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    645
    Thanks
    205
    Thanked 196 Times in 119 Posts
    Thank you. Do you also have an initiative for developing standards without this restriction of forward compatibility?
    For example PackJPG claims to losslessly reduce JPEG size by ~24%.
    ANS-based compressors turn out to be a few times (3+) faster than Huffman-based ones at comparable compression ratios (see the compressors I've mentioned). We had a poster about ANS at PCS 2015; the proceedings will hopefully be out soon.

  5. #5
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    437
    Thanks
    1
    Thanked 96 Times in 57 Posts
    Quote Originally Posted by Jarek View Post
    Thank you. Do you also have an initiative for developing standards without this restriction of forward compatibility?
    Certainly. As said, we have jpeg-innovations, with a public mailing list at https://listserv.uni-stuttgart.de/ma...eg-innovations. There we currently look into compression of plenoptic images, holographic images, point clouds, and much more. Then we have an initiative looking at future image compression (Daala is a nice example of that) where you may want to contribute; the mailing list is at https://listserv.uni-stuttgart.de/ma...tinfo/jpeg-aic and requires review and a simple formal application for non-JPEG members. Then we have a more concrete initiative on low-latency coding (lightweight online compression of image data), named JPEG XS (extra speed), with a mailing list at https://listserv.uni-stuttgart.de/ma...stinfo/jpeg-xs, also requiring a simple formal application.
    Quote Originally Posted by Jarek View Post
    For example PackJPG claims to losslessly reduce JPEG size by ~24%.
    Yes, I know packJPG of course. I tested this probably five years ago. I agree that it performs nicely. It's probably not exactly new, but BPG and Daala are also two interesting candidates. All this fits more into jpeg-aic.
    Quote Originally Posted by Jarek View Post
    ANS-based compressors turn out to be a few times (3+) faster than Huffman-based ones at comparable compression ratios (see the compressors I've mentioned). We had a poster about ANS at PCS 2015; the proceedings will hopefully be out soon.
    Yes, I've read the paper. But to be honest, I don't buy into the speed advantage. If I had to realize a Huffman decoder (or encoder) in hardware, it is little more than a bitshifter and a ROM: decoding a Huffman symbol requires only a 16-bit LUT (the ROM), which returns a symbol/length pair, and a bitshifter that adjusts the input bitstream. At least, this is how length-limited Huffman works, and that is precisely what we have in JPEG. For hardware, this is about as trivial as it gets. In the end, you can only compare implementations (in software), and there are many other factors like cache locality or compiler optimization that come into play here. Thus, "faster" is really a matter of the platform - software, processor, compiler, implementation, hardware, technology...
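    Just to make that argument concrete, a sketch of such a table-driven decoder; the table layout and the bit-reader are invented for illustration, this is not the XT reference code:

        #include <stdint.h>

        /* One entry per 16-bit window of the bitstream: the decoded symbol and
         * the length of the code that produced it - the "ROM" of the argument. */
        struct lut_entry { uint8_t symbol; uint8_t length; };

        /* Decode one symbol: look up the next 16 bits, then let the bitshifter
         * advance the stream by the code length.  peek16()/skip() stand for
         * whatever bit-reader the surrounding codec provides.                  */
        static uint8_t huff_decode(const struct lut_entry lut[1 << 16],
                                   uint16_t (*peek16)(void), void (*skip)(int))
        {
            struct lut_entry e = lut[peek16()];
            skip(e.length);
            return e.symbol;
        }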

  6. The Following User Says Thank You to thorfdbg For This Useful Post:

    Jarek (2nd July 2015)

  7. #6
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    645
    Thanks
    205
    Thanked 196 Times in 119 Posts
    Regarding the comparison with standard approaches: indeed, the speed advantages I have mentioned apply to software implementations, which in many situations are sufficient also for image compression - there is now a real processor, like an ARM, in nearly every device.
    Let's think about the comparison for hardware implementation.
    The main advantage of ANS is using nearly exact probabilities - like range coding, in contrast to Huffman.
    The main disadvantage is backward encoding - the encoder (alternatively: the decoder) requires a buffer for a data frame (a few kilobytes).

    Now rANS vs range coding:
    - uses a single multiplication instead of two, and no division in the decoder,
    - does not have to rescale the range,
    - the state is one number instead of two (better for vectorization and interleaving),
    - is more appropriate for adaptive modification of probability distributions (see e.g. https://fgiesen.wordpress.com/2015/0...hmetic-coding/ ).
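    To make the first two points concrete, here is an illustrative rANS decode step (my own naming and constants, not taken from any particular codec) - note the single multiplication and the absence of any division:

        #include <stdint.h>

        #define PROB_BITS 12                 /* frequencies sum to 1 << PROB_BITS */
        #define RANS_LOW  (1u << 23)         /* renormalization threshold         */

        /* x is the single 32-bit decoder state; freq[]/cumfreq[] hold the
         * quantized symbol frequencies and their prefix sums; slot2sym[] maps a
         * slot in [0, 1<<PROB_BITS) back to its symbol; in is the byte stream
         * produced by the (backward-running) encoder.                           */
        static unsigned rans_decode_step(uint32_t *x,
                                         const uint16_t freq[], const uint16_t cumfreq[],
                                         const uint8_t slot2sym[], const uint8_t **in)
        {
            uint32_t slot = *x & ((1u << PROB_BITS) - 1);
            unsigned s    = slot2sym[slot];
            *x = freq[s] * (*x >> PROB_BITS) + slot - cumfreq[s]; /* one multiplication */
            while (*x < RANS_LOW)                                 /* byte-wise renorm   */
                *x = (*x << 8) | *(*in)++;
            return s;
        }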

    tANS vs Huffman:
    - a Huffman encoder requires O(n log n) sorting, which is costly for hardware implementations; tANS initialization of the coding tables is a cheap linear pass (a sketch follows at the end of this post),
    - tANS can have encryption built in through the choice of the coding tables.
    The decoding itself is very similar.
    tANS decoding step:
        t = decodingTable[x];
        produce_symbol(t.symbol);
        x = t.newX | readBitsFromStream(t.NumberOfBits);
    Huffman decoding is exactly the same (it can be seen as a special case), except that instead of storing t.newX, it is computed as
    t.newX = (x << t.NumberOfBits) & mask
    For example, in the standard case of L=2^11 states and an m=2^8 alphabet, t.NumberOfBits requires L*4 bits and t.symbol requires L*8 bits. Storing t.newX requires an additional L*11 bits - naively doubling the memory cost of the decoding tables: ~6 kB for tANS tables vs ~3 kB for Huffman.
    However, L=2^11 means at most 11-bit codes in Huffman terms, which is a severe restriction for an 8-bit alphabet (a more realistic Huffman limit here is 12 bits, which also means ~6 kB of decoding tables). tANS is more flexible here, and its precision usually allows a better compression ratio with a smaller number of states, and hence a smaller memory requirement for the decoding tables.
    The tANS encoding tables require <4 kB here.
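    And, as promised in the first point of the comparison, a minimal sketch of the linear table construction (an FSE-style spread; freq[] is assumed to be already quantized so that it sums to L):

        #include <stdint.h>

        /* Spread the symbols over the L states in one linear pass.  The step is
         * odd, hence coprime with the power-of-two L, so every state is visited
         * exactly once.  The resulting table[] is the t.symbol column of the
         * decoding table discussed above.                                       */
        static void tans_spread(uint8_t table[], const uint16_t freq[],
                                int nsymbols, int L)
        {
            int step = (L >> 1) + (L >> 3) + 3;   /* odd for L >= 16 */
            int pos  = 0;
            for (int s = 0; s < nsymbols; s++)
                for (int i = 0; i < freq[s]; i++) {
                    table[pos] = (uint8_t)s;
                    pos = (pos + step) & (L - 1);
                }
        }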
    Last edited by Jarek; 2nd July 2015 at 05:34.

  8. The Following User Says Thank You to Jarek For This Useful Post:

    Alexander Rhatushnyak (2nd July 2015)

  9. #7
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    437
    Thanks
    1
    Thanked 96 Times in 57 Posts
    Quote Originally Posted by Jarek View Post
    Let's think about the comparison for hardware implementation.
    The main advantage of ANS is using nearly exact probabilities - like range coding, in contrast to Huffman.
    The main disadvantage is backward encoding - the encoder (alternatively: the decoder) requires a buffer for a data frame (a few kilobytes).
    This means that you need a work buffer that is approximately as large as the encoded image. You usually do not want to do that for small and simple devices. The good part about sequential Huffman is that it is an online compression algorithm, i.e. you only need to go over the data once and can generate output as the input arrives. For JPEG XT and the enhanced bit-depth modes, this is unfortunately no longer possible, but that is already an extended application.

    Quote Originally Posted by Jarek View Post
    tANS vs Huffman:
    - a Huffman encoder requires O(n log n) sorting, which is costly for hardware implementations; tANS initialization of the coding tables is a cheap linear pass,
    In reality, hardware implementations almost always use non-adaptive Huffman. Adaptive Huffman would require a two-pass approach, and this is simply not done because it requires an additional buffer and an additional pass over the data. It's usually a plain simple non-adaptive static Huffman encoder, and that's dirt-cheap. Sure, adaptivity buys you an additional 10% compression efficiency, but in reality, nobody cares about that. JPEG is much more about image delivery and image representation than image compression nowadays. There are applications where you need better efficiency, but that's a different market. We had such markets for medical and professional movie production, and that's where JPEG 2000 found its applications, but even that is quite aged these days.

    Quote Originally Posted by Jarek View Post
    - tANS can have encryption built in through the choice of the coding tables.
    The decoding itself is very similar.
    tANS decoding step:
        t = decodingTable[x];
        produce_symbol(t.symbol);
        x = t.newX | readBitsFromStream(t.NumberOfBits);
    Huffman decoding is exactly the same (it can be seen as a special case), except that instead of storing t.newX, it is computed as
    t.newX = (x << t.NumberOfBits) & mask
    For example, in the standard case of L=2^11 states and an m=2^8 alphabet, t.NumberOfBits requires L*4 bits and t.symbol requires L*8 bits. Storing t.newX requires an additional L*11 bits - naively doubling the memory cost of the decoding tables: ~6 kB for tANS tables vs ~3 kB for Huffman.
    However, L=2^11 means at most 11-bit codes in Huffman terms, which is a severe restriction for an 8-bit alphabet (a more realistic Huffman limit here is 12 bits, which also means ~6 kB of decoding tables). tANS is more flexible here, and its precision usually allows a better compression ratio with a smaller number of states, and hence a smaller memory requirement for the decoding tables.
    The tANS encoding tables require <4 kB here.
    I don't have tables this large. It's really quite simple: you have a 256-entry lookup table indexed by the topmost 8 bits. If that can uniquely decode a symbol, do it. If not, use a second table for the lower bits. It's not rocket science, really. It's all in the XT reference implementation if you want to look it up. It's also faster this way since it ensures cache locality.
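    A hedged sketch of that two-level lookup (the field names and the bit-reader are invented here; the actual tables in the XT reference software differ in detail):

        #include <stdint.h>

        struct entry { uint8_t symbol; uint8_t length; };  /* length == 0: escape,
                                                              symbol = sub-table index */
        struct hufftab {
            struct entry top[256];           /* indexed by the topmost 8 bits           */
            struct entry (*low)[256];        /* sub-tables for codes longer than 8 bits */
        };

        static uint8_t huff_decode2(const struct hufftab *t,
                                    uint16_t (*peek16)(void), void (*skip)(int))
        {
            uint16_t bits = peek16();
            struct entry e = t->top[bits >> 8];     /* small, cache-friendly table  */
            if (e.length == 0)                      /* long code: resolve it in the */
                e = t->low[e.symbol][bits & 0xff];  /* sub-table for this prefix    */
            skip(e.length);
            return e.symbol;
        }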

    For hardware, I probably wouldn't bother about 4K vs. 6K if the additional cost of the 4K lookup were an additional buffer the size of the image on top.

    Anyhow, that's all about JPEG XT. As said, we are happy to welcome any new ideas for JPEG AIC. There, the competitors are HEVC / BPG and probably Daala. I wouldn't really want to soup up JPEG any further; for that, the overall design is just too aged. After all, the fixed 8x8 DCT is just such a plain simple transformation - what do you expect to gain with the smartest entropy coder in the world? If you want better compression, you'd better start from a fresh design.

  10. #8
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    645
    Thanks
    205
    Thanked 196 Times in 119 Posts
    Regarding the buffer: for the non-adaptive (or Markov) case it is not required - one can just encode the symbol sequence in the backward direction, which for image compression is just a matter of reversing the order of pixel scanning. There is no problem with using fixed coding/decoding tables, e.g. in ROM.
    In current LZ+ANS compressors the data is divided into ~10 kB frames anyway, also to update the probability distributions.
    For more advanced adaptive compressors like LZNA, the buffer stores the probability distributions of the symbols.

    Regarding reducing the size by e.g. 10% at the cost of a few kilobytes of coding tables (one step per symbol instead of your 2-step lookup): a single smartphone photo weighs a few megabytes, so we are talking about hundreds of kilobytes saved per picture - it seems worth the effort.
    Additionally, a smartphone has a powerful processor, so packJPG-scale savings of ~24% - even in software - might be an option worth considering; they can be obtained with an LZNA-like codec.
    I don't have the resources or the experience to develop a new codec alone; I'm just suggesting a natural direction for improvements, already used in many places.

  11. #9
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    645
    Thanks
    205
    Thanked 196 Times in 119 Posts
    [doubled]

  12. #10
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    437
    Thanks
    1
    Thanked 96 Times in 57 Posts
    Quote Originally Posted by Jarek View Post
    Regarding the buffer: for the non-adaptive (or Markov) case it is not required - one can just encode the symbol sequence in the backward direction, which for image compression is just a matter of reversing the order of pixel scanning.
    This is a big "just". In reality, you usually do not have this freedom in hardware applications. The image comes in in an order you do not control, and that's usually top to bottom. You can either reverse the direction with an input buffer or with an output buffer, but that's usually not practical - such a buffer does cost RAM, and even more so if the resolution is high. That's how the hardware is built. You usually do not have the picture somewhere in RAM that you can traverse as you like.

    Quote Originally Posted by Jarek View Post
    Regarding reducing the size by e.g. 10% at the cost of a few kilobytes of coding tables (one step per symbol instead of your 2-step lookup): a single smartphone photo weighs a few megabytes, so we are talking about hundreds of kilobytes saved per picture - it seems worth the effort.
    "Seems" is a nice word when all practical experience tells us that it is not. JPEG 2000 improved compression efficiency. Yet, there are no JPEG 2000 codecs around in the consumer market. But that's just my experience from 10 years of JPEG, so what do I know...

    In reality, what happened is that back then (around 2000) the browser manufacturer Opera asked whether JPEG 2000 would be better than JPEG, and said they would invest some work into it if it could compress images twice as well at the same complexity as JPEG. Now, go figure...

    One way or another, I'm only talking about my experience with standardization here. That shouldn't stop anyone from coming up with new ideas, and who am I to stop interesting research - quite the reverse: we're happy to take new ideas at jpeg-innovations, so if you want to contribute something, you're surely more than welcome. I posted the website where you can sign up for our mailing list; it's really open.

  13. #11
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    645
    Thanks
    205
    Thanked 196 Times in 119 Posts
    Regarding the order: sure, the encoder might receive pixels in some fixed order.
    However, I don't see a problem if the decoder produces them in reverse order, e.g. to fill the buffer used for displaying the picture on a device. No additional buffers are required here.

    Regarding improving the compression ratio, wavelet approaches like JPEG 2000 are a somewhat different matter.
    I'm talking about packJPG-like improvements: staying within the JPEG philosophy, but doing a better job at the coding level, e.g.:
    - replace Huffman with an accurate entropy coder (cheap),
    - exploit correlations between neighboring blocks, especially their DC coefficients (also cheap; sketched below),
    - use some adaptive model: modify probabilities according to the local situation (more expensive).
    We are talking about packJPG-scale 20-30% improvements - huge savings in storage and transmission at the cost of, e.g., a few kilobytes of coding tables.
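    To illustrate the second point (the names are made up, and real packJPG-style codecs use much more elaborate context models): baseline JPEG only differences each DC against the previous block in scan order, while one can instead predict it from both the left and the upper neighbour and code the residual:

        /* Predict the DC coefficient of block (x, y) from its decoded neighbours;
         * dc[] holds one DC value per block, blocks_per_row blocks per row.      */
        static int predict_dc(const int *dc, int x, int y, int blocks_per_row)
        {
            if (x == 0 && y == 0) return 0;
            if (y == 0) return dc[x - 1];                     /* only left exists  */
            if (x == 0) return dc[(y - 1) * blocks_per_row];  /* only above exists */
            int left  = dc[y * blocks_per_row + (x - 1)];
            int above = dc[(y - 1) * blocks_per_row + x];
            return (left + above) / 2;                        /* simple average    */
        }
        /* residual to encode: dc[y * blocks_per_row + x] - predict_dc(...) */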

    Sure, the standardization process is a big issue here. However, there is a real processor in nearly every device now, and people are used to regular (nearly invisible) updates - upgrading software compressors is much simpler now. E.g., Apple promises to reduce the system size from 4.6 to 1.3 GB in iOS 9 this fall, so nearly all their users will have most of their files compressed with LZFSE within a few months. If a software compressor becomes a standard, hardware will follow.

  14. The Following User Says Thank You to Jarek For This Useful Post:

    Alexander Rhatushnyak (2nd July 2015)

  15. #12
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    437
    Thanks
    1
    Thanked 96 Times in 57 Posts
    Quote Originally Posted by Jarek View Post
    Regarding the order: sure, the encoder might receive pixels in some fixed order.
    However, I don't see a problem if the decoder produces them in reverse order, e.g. to fill the buffer used for displaying the picture on a device. No additional buffers are required here.
    That again depends on the application. If you're only looking at typical PC-based viewer applications, then this is irrelevant since you need to buffer the image anyhow. It is not necessarily true for other applications, e.g. fax devices that print the image as the data comes in. It is even possible that the "image" is an infinite stripe of image data; this happens for example for satellite images. The image "height" is unlimited, it is a continuous stream of data. JPEG itself supports such "online coding" through the DNL marker (supported by the XT reference software; the IJG is too lazy to implement this correctly...).

    Quote Originally Posted by Jarek View Post
    Regarding improving the compression ratio, wavelet approaches like JPEG 2000 are a somewhat different matter.
    I'm talking about packJPG-like improvements: staying within the JPEG philosophy, but doing a better job at the coding level, e.g.:
    - replace Huffman with an accurate entropy coder (cheap),
    - exploit correlations between neighboring blocks, especially their DC coefficients (also cheap),
    - use some adaptive model: modify probabilities according to the local situation (more expensive).
    We are talking about packJPG-scale 20-30% improvements - huge savings in storage and transmission at the cost of, e.g., a few kilobytes of coding tables.
    Look, StuffIt tried to turn this into a business concept - they had a similar post-processor for JPEG. We met at a DCC a couple of years ago... they asked me about a potential business model for this. Well, did this StuffIt JPEG post-compressor sell well? I don't know, but I've never heard anything about it...

    Look, it's all pretty nice science, good research, but as long as "grandma cannot see an image", it is hard to bring such a codec to the market. That's all I can tell you. Either you're *a lot* better, you have a case where you need the bandwidth (video is such a case), and it is acceptable for your customers to invest in new hardware to decode your data - or you're rather limited to academic research.

    For packJPG and friends, there is a possible business case, namely for web services like Flickr or F*c*book that store huge amounts of images on their servers and could losslessly transcode between JPEG and an internal format. Still, it is quite an investment into their infrastructure, so whether they are willing to make it depends also on the compression performance and the complexity of the code. I cannot give you a good answer as to what is acceptable and what is not. I just don't see a pressing need for that at this point - storage is still too cheap, or image data too small.

    In either case, it is then not a mass product, and any proprietary internal format selected by those vendors will do, hence it is probably not even an issue for standardization. As long as the pictures go in and out of such a service in the form of JPEG, "grandma will be happy", and everything else can be closed-source proprietary.

  16. #13
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    645
    Thanks
    205
    Thanked 196 Times in 119 Posts
    I don't think I have ever used a fax or ever will, and the cost of a few kilobytes of buffer shouldn't undermine the budget of a space mission... The biggest producers and final receivers of photos are probably smartphones now - they should be the main focus, and for them the cost of reading symbols in backward order is irrelevant, as you say, while ratio improvements can substantially affect transmission and storage.
    And smartphones get frequent, invisible updates, so even grandma could see it.
    JPEG 2000 has also failed because its superiority is debatable. Just improving the coding level gives a pure 20-30% improvement in compression ratio, and ANS allows such an upgrade to be made nearly cost-free (or to reduce the cost of current arithmetic-coding-based compressors).
    But indeed, the successful introduction of codecs without forward compatibility can probably be done only by the big players, which unfortunately leads to incompatibility issues.

  17. #14
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    437
    Thanks
    1
    Thanked 96 Times in 57 Posts
    Quote Originally Posted by Jarek View Post
    JPEG 2000 has also failed because its superiority is debatable.
    According to whom? Not in the tests I've seen or made. Sure, it doesn't outperform HEVC, but that's a tad younger. Anyhow, you don't need to believe me: http://jpegonline.rus.uni-stuttgart.de exists and allows you to make the tests yourself. Or review the publications if you don't believe objective metrics (you probably should not, I agree, but at least they are an indicator, and subjective tests are expensive and time-consuming...).

    Quote Originally Posted by Jarek View Post
    Just improving the coding level gives a pure 20-30% improvement in compression ratio, and ANS allows such an upgrade to be made nearly cost-free (or to reduce the cost of current arithmetic-coding-based compressors).
    Just using arithmetic coding gives you a 10% improvement in coding efficiency in JPEG. It is even standardized. Does anyone use it? Not that I know of. Coding efficiency for images is really pretty irrelevant these days, at least most of the time. There are special applications where it is not, but I really have doubts that you can successfully market a new still-image format just by improving the efficiency. JPEG 2000 tried. Microsoft tried with HDPhoto aka JPEG XR. See the results?



    Quote Originally Posted by Jarek View Post
    But indeed, the successful introduction of codecs without forward compatibility can probably be done only by the big players, which unfortunately leads to incompatibility issues.
    How much bigger than Microsoft does it need to be?

  18. #15
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    645
    Thanks
    205
    Thanked 196 Times in 119 Posts
    So basically you are saying that we are doomed to use one of the first codecs till the end of the world?
    I doubt it, the times are changing. While Microsoft couldn't do it, Google, e.g. with WebP, is in a much better situation due to Android's dominance and frequent automatic updates: it could invisibly make it the default codec for the camera, possibly converting automatically when sending to an incompatible device.

    Regarding "coding efficiency for images is really pretty irrelevant these days", I also disagree - we are talking about huge savings of bandwidth. The big players also care about storage size: Apple's "app thinning", Android M, Windows 10 compressing its files ( http://blogs.windows.com/bloggingwin...act-footprint/ )... Smaller photo sizes also look good when fighting for customers.

  19. #16
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    437
    Thanks
    1
    Thanked 96 Times in 57 Posts
    Quote Originally Posted by Jarek View Post
    So basically you are saying that we are doomed to use one of the first codecs till the end of the world?
    As far as JPEG is concerned, I guess you'll have to work with it for a long time until either another image modality becomes more important (plenoptic images?) or the need for higher image precision becomes too pressing, or we find a codec that compresses *a lot* better, and not just a certain percentage. Or storage capacity stops growing by Moore's law.

    Quote Originally Posted by Jarek View Post
    I doubt it, the times are changing. While Microsoft couldn't do it, Google, e.g. with WebP, is in a much better situation due to Android's dominance and frequent automatic updates: it could invisibly make it the default codec for the camera, possibly converting automatically when sending to an incompatible device.
    This sounds like a bad idea. Images are omnipresent. Will Microsoft update my web browser? Hopefully not! Will Microsoft update my digital picture frame that decodes JPEG only? Hopefully not! Is WebP present in my browser? Nope. An entire ecosystem developed around JPEG, and that's not going away easily.

    It is not a problem of the technology or the science at all - it is a problem of the market, which is too persistent here. The advantages of new schemes (20% better, 30% better) are far lower than the improvements in storage capacity (50% more every two years), and unlike video, we don't have a bandwidth problem either (you do not send 60 8K JPEG images per second over the internet; MPEG does that, and uses new techniques for it!). Camera vendors rather prefer to lock their consumers in to their products by offering an "advanced" image format that is proprietary but can represent "higher bit depth" or "no loss". They call it "raw". It's of course mostly marketing ("raw" is only half as "raw" as you think), but it is a marketing instrument that prevents standardized codecs from entering - or rather, it shows that vendors have little interest in such solutions because they have no direct advantage in offering interoperable ones. It's sad, but true.


    Quote Originally Posted by Jarek View Post
    Regarding "coding efficiency for images is really pretty irrelevant these days", I also disagree - we are talking about huge savings of bandwidth.
    For whom? Whether I download my images from the web as JPEG or as something better is rather irrelevant; it's probably not even visible in the total bandwidth requirement. Most internet traffic, at least in terms of bandwidth, is video, not still images. Yes, improving video compression is important, because that is the bulk of the bandwidth being used. Everything else is a minor optimization.

    Again, I'm not saying that there is no need to research. But I wouldn't invest much time into a market niche that is already taken. The niche is defined by offline compression of low-dynamic-range, 601 color gamut, 8-bit resolution images. There are other image modalities that may become important, or other use cases we currently cannot foresee that may make better compression important. But, again, nothing has replaced JPEG in the last 25 years for consumer applications in the niche I defined above. JPEG is not even a good codec by today's standards, but it is apparently good enough for its typical use cases - or at least, it has been for the last 25 years. With storage prices going down as they used to, and storage capacity going up as it used to, I guess it will probably last at least as long as this trend continues and everybody is still happy with 8bpp 601-colorspace images.

  20. The Following User Says Thank You to thorfdbg For This Useful Post:

    dnd (5th July 2015)

  21. #17
    Member
    Join Date
    Nov 2014
    Location
    California
    Posts
    122
    Thanks
    36
    Thanked 33 Times in 24 Posts
    Bandwidth and storage capacity are not everything. In the mobile world bandwidth is an issue (which is getting addressed by each new generation of mobile standard) and latency is a bigger issue. Sending less information reduces latency but also limits the risk of re-sending packets (which creates more latency). There is a reason why the Google website main page is mostly empty.
    Mobile network latency has hardly improved in the last 10 years; it has mostly been mitigated by better caching on the servers and more concurrent code on the client, but it is not solved. Until we have better protocols, sending fewer bytes is one of the best workarounds.
    I do agree with the other comments: the market has taught us that a marginal improvement in compression does not justify the cost incurred by infrastructure changes.
    Last edited by hexagone; 5th July 2015 at 08:50.

  22. #18
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    437
    Thanks
    1
    Thanked 96 Times in 57 Posts
    Quote Originally Posted by hexagone View Post
    Bandwidth and storage capacity are not everything. In the mobile world bandwidth is an issue (which is getting addressed by each new generation of mobile standard) and latency is a bigger issue.
    Maybe. My question at this point would probably be how much of the latency is really due to JPEG, and how much is actually due to HTML, CSS and JavaScript on top. I would believe, but maybe I'm wrong, that most of the sluggishness of web pages on mobile platforms is because the device has to load so many different resources (an HTML file to begin with, a CSS file here, a JavaScript file there, two or three pictures...) and has to re-open a connection for each of them, i.e. has to go through a DNS lookup, create a socket, get the data...

    It would be an interesting experiment to take an average web page, e.g. the page of your favorite news site, and measure how much of the latency is really due to the images on the site and how much is just average overhead, over a high-latency interface like mobile. If compressing the images there 20% better gives a notably better user experience (however we define that), then I'm convinced.

    Just to tell you, we're currently building rich web applications for students here (a completely different project), and that's already like 200K of JavaScript that has to go over the line just for the interface. A 50K JPEG on top would probably go unnoticed, i.e. whether it is 50K or 40K.

  23. #19
    Member
    Join Date
    Nov 2014
    Location
    California
    Posts
    122
    Thanks
    36
    Thanked 33 Times in 24 Posts
    " If compressing the images there by 20% better gives a notable better user experience (for whatever we define this), then I'm convinced. "
    The latency is very much influenced by the quality of the network. It is a non-issue on wired networks and WiFi, but it is one on congested mobile networks.
    One big culprit is that the TCP protocol is not designed for this kind of scenario, where the underlying network layer is very unreliable. The re-sending of packets is a big contributor to latency on mobile networks (especially in congested urban areas with buildings scattering/reflecting signals). Until we have better transmission protocols, sending less data (whether it is HTML, JavaScript or images) reduces the risk of increasing the latency.
    As for your scenario, 200K + a 50K JPEG, I suspect it is atypical. Usually web sites (commerce web sites, social media, ...) have a lot more pictures than code. Until we have optimized network protocols, limiting the data to transmit and using caching/prefetching/concurrent connections is the usual workaround to hide the latency.
    Unrelated to mobile networks (focused on buffer bloat) but still a great read on network latency: http://queue.acm.org/detail.cfm?id=2071893

  24. The Following User Says Thank You to hexagone For This Useful Post:

    Jarek (6th July 2015)

  25. #20
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    437
    Thanks
    1
    Thanked 96 Times in 57 Posts
    Quote Originally Posted by hexagone View Post
    The latency is very much influenced by the quality of the network. It is a non-issue on wired networks and WiFi, but it is one on congested mobile networks.
    Yes, but that's the central point. To what extent can compressing the images better resolve that?

    Quote Originally Posted by hexagone View Post
    One big culprit is that the TCP protocol is not designed for this kind of scenario where the underlying network layer is very unreliable.
    Sure. But why is better compression then the right answer to this problem?

  26. #21
    Member
    Join Date
    Nov 2014
    Location
    California
    Posts
    122
    Thanks
    36
    Thanked 33 Times in 24 Posts
    "Sure. But why is then a better compression the right answer for this problem?"
    Because sending fewer packets reduces transmission errors and the need to re-send packets resulting in reduced latency (an extra round trip can stall processing of subsequent packets ...).
    Last edited by hexagone; 6th July 2015 at 07:35. Reason: grammar

  27. #22
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    645
    Thanks
    205
    Thanked 196 Times in 119 Posts
    I have found some stats for webpage content: http://www.webperformancetoday.com/2...-1795-kb-size/

    So images are above 50% - reducing them by 20% would mean >10% improvement in both bandwidth and latency.
    Other content can also be compressed (probably even better, which increases the relative importance of image size), as is done e.g. in Opera Turbo and Chrome's "data compression proxy" - it should become a standard.

    Quote Originally Posted by hexagone View Post
    The re-sending of packets is a big contributor to latency on mobile networks (especially in congested urban areas with buildings scattering/reflecting signals). Until we have better transmission protocols, sending less data (whether it is HTML, JavaScript or images) reduces the risk of increasing the latency.
    The need for re-sending packets can be removed by using fountain codes - the sender produces more packets than needed, such that any sufficiently large subset of them is enough for reconstruction.
    The "scattering/reflecting" problem suggests that the sender often doesn't know how badly individual packets will be damaged, i.e. how much redundancy should be applied to each of them. By combining error correction and reconstruction, we can remove this requirement, so that it is sufficient for only the receiver to know the approximate damage levels (Joint Reconstruction Codes).
    Fortunately and surprisingly, updating transmission protocols seems a bit simpler than for images ...

    ps. stats for mobile sites - 62% images: http://www.webperformancetoday.com/2...-pages-bigger/
    Last edited by Jarek; 6th July 2015 at 10:53.

  28. The Following User Says Thank You to Jarek For This Useful Post:

    hexagone (6th July 2015)

  29. #23
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    437
    Thanks
    1
    Thanked 96 Times in 57 Posts
    Quote Originally Posted by Jarek View Post
    I have found some stats for webpage content: http://www.webperformancetoday.com/2...-1795-kb-size/
    So images are above 50% - reducing them by 20% would mean >10% improvement in both bandwidth and latency.
    Other content can also be compressed (probably even better, which increases the relative importance of image size), as is done e.g. in Opera Turbo and Chrome's "data compression proxy" - it should become a standard.
    Thanks, but that's not quite what I was asking for. This shows the bandwidth requirements in bytes, but that does not really mean that the user experience will improve if we compress images better. My main concern is that it is likely the establishment of the connection that takes the time, not so much getting the actual data across. One could study this either by loading the same web page on a mobile device and on a desktop and comparing, or by installing Firefox with the Firebug debugger on top and measuring the latency there. Firebug has tools for that.

    Quote Originally Posted by Jarek View Post
    Fortunately and surprisingly, updating transmission protocols seems a bit simpler than for images ...
    It seems so because there are fewer implications. Other than that, better compression algorithms are already there; it's just that nobody picks them up.

  30. #24
    Member
    Join Date
    Nov 2014
    Location
    California
    Posts
    122
    Thanks
    36
    Thanked 33 Times in 24 Posts
    "Fortunately and surprisingly, updating transmission protocols seems a bit simpler than for images ..."
    Not at all. Replacing something like TCP is a gigantic endeavor given the number of people and devices using it. Changing image compression at the application level is way, way more realistic. Google is working on QUIC (https://en.wikipedia.org/wiki/QUIC) but I am not holding my breath...

    "but that does not really mean that if we compress images better that user experience will improve"
    Like I said, packet loss is a big contributor to latency in mobile network due to re-transmission need and stalling of local packet processing.

  31. #25
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    437
    Thanks
    1
    Thanked 96 Times in 57 Posts
    Quote Originally Posted by hexagone View Post
    Like I said, packet loss is a big contributor to latency in mobile networks, due to the need for re-transmission and the stalling of local packet processing.
    That seems plausible, but adding additional error resilience at the image compression level is not really going to help. It additionally requires a change in the communications layer that allows transmission of robust data formats over less reliable channels. In other words, adding robustness at the JPEG level is only beneficial if you have a way to signal that such images are to be transmitted over UDP and not TCP. Actually, such protocols exist, namely JPIP (ISO/IEC 15444-9). I still don't see them widely deployed.

  32. #26
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    645
    Thanks
    205
    Thanked 196 Times in 119 Posts
    Could you comment on JPEGmini?
    They claim to reduce JPEG 3-6 times with nearly no quality loss ...
    http://www.jpegmini.com/

  33. #27
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    437
    Thanks
    1
    Thanked 96 Times in 57 Posts
    Quote Originally Posted by Jarek View Post
    Could you comment on JPEGmini? They claim to reduce JPEG 3-6 times with nearly no quality loss ... http://www.jpegmini.com/
    Yes, I know this one. Actually, it is one of many utilities (or online services) that claim this. In reality, all these tools do is requantize the image, probably (or hopefully) using a perceptual model for the quantization matrix, giving you a target quality that is *just* high enough to be above the visual threshold. This is not rocket science; it rather relies on the fact that most images you take with a camera are under-compressed (i.e. use a very high quality setting). If you take an image that is already compressed to the visual threshold, then such a tool will give you nothing (except probably some image defects).
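    As an illustration of what "requantize" means here (a sketch under my own assumptions, certainly not JPEGmini's actual code; a full decode and re-encode at a lower quality setting amounts to the same thing plus a new DCT):

        /* coef[] holds the 64 quantized coefficients of one block; q_old[] is the
         * matrix they were coded with, q_new[] the coarser target matrix.        */
        static void requantize_block(short coef[64],
                                     const unsigned short q_old[64],
                                     const unsigned short q_new[64])
        {
            for (int i = 0; i < 64; i++) {
                long v = (long)coef[i] * q_old[i];       /* dequantize             */
                coef[i] = (short)((v >= 0 ? v + q_new[i] / 2
                                          : v - (long)q_new[i] / 2) / q_new[i]);
            }                                            /* round half away from 0 */
        }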

    Probably the earliest of such tools is DCTune (scientifically sound); you may find it and publications on it on the web.

    Anyhow, most (if not all) of these tools just move along the regular rate-distortion curve of JPEG, i.e. you reduce the rate, but you also reduce the quality. However, as long as you stay above the visual threshold, you don't see (much of) the quality drop.

    If you "just" want to improve compression performance (in the sense of "moving the rate-distortion curve", i.e. quality improvement at constant rate, or rate reduction at constant quality), then you do not gain much. You're invited to test some of these methods here:

    http://jpegonline.rus.uni-stuttgart.de/

  34. The Following User Says Thank You to thorfdbg For This Useful Post:

    Jarek (4th August 2015)

