
Thread: BCIF image compression program

  1. #1
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts

    BCIF image compression program

    I haven't seen it talked about here, but BCIF appears to be the best FOSS image coder.
    It's not really as good as some closed source competitors, but I think it's noteworthy anyway.
    Home page
    Benchmarks:
    http://www.researchandtechnology.net...benchmarks.php
    http://cdb.paradice-insight.us/
    http://encode.ru/threads/1222-FLIC-a...age-compressor

  2. #2
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,134
    Thanks
    179
    Thanked 921 Times in 469 Posts
    Technically paq is an opensource image compressor too, though.

  3. #3
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    I didn't say 'the strongest' for a reason :P

  4. #4
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,134
    Thanks
    179
    Thanked 921 Times in 469 Posts
    A quick test:

    Code:
    butterfly.bmp          921654
    butterfly.pnm          921615
    butterfly.png          461736
    butterfly.bcif         393086
    butterfly_bcif.rar     393043
    butterfly_bcif.paq8px  391475
    butterfly_bcif.zip     391974
    butterfly.flic         371769
    butterfly.bmp.paq8px   351957
    butterfly.bmf          349084
    
    butterfly.bcif:
    
    00000000:  42 43 49 46 01 00 00 00 │ 01 00 00 00 80 02 00 00
    00000010:  E0 01 00 00 74 12 00 00 │ 74 12 00 00 3E 0D B3 25
    00000020:  A3 33 D6 61 D5 FF 29 BD │ 2A FF FA 75 E9 FB FF D7
    00000030:  B5 AF 5F FF F5 FF EB A9 │ BF FE FF 7F FD FF D7 FA
    00000040:  1C AC F3 7F FD FF FF FF │ 5A FF D7 4B EB EB FA FA
    00000050:  5F 3F EB FF 97 7E FD AF │ FC EB FF E2 5A FF 7F FD
    00000060:  FF BF FE F5 AF 67 FD FA │ 3F FF F5 FF FF FA 97 D6
    00000070:  5F EB EB 6B 5D D7 03 92 │ 24 49 92 24 20 F9 FF 4B
    00000080:  AA EB 9F AF CF D7 FA 2F │ 08 84 24 49 92 24 49 92
    
    00001000:  32 53 CA 94 49 02 44 83 │ 20 84 08 22 BA 72 E9 92
    00001010:  01 32 20 23 23 03 BA 00 │ 00 00 00 00 00 00 00 00
    00001020:  00 00 00 00 00 00 00 00 │ 00 00 00 19 90 01 32 80
    00001030:  8E 90 81 B1 64 4A 32 A5 │ 94 24 40 B1 40 10 44 AA
    00001040:  0C 39 64 64 E8 08 00 80 │ 0C 00 00 00 00 00 00 00
    00001050:  00 00 00 00 00 00 00 00 │ 00 00 00 19 00 40 06 00
    00001060:  92 00 32 52 E8 1A 23 33 │ 64 A4 84 04 C0 B7 31 A8
    00001070:  1C 72 AC 1A 87 2E 5D 80 │ 56 00 00 C8 00 00 00 00
    00001080:  00 00 00 00 00 00 00 00 │ 00 00 00 00 00 00 00 00
    Conclusion - it's not a compressor, but a bitcode transform.
    Adding actual entropy coding to it could help.

  5. #5
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Interesting. I found a file which was highly redundant after compression and thought something was wrong, even sent it to the author.
    PCIF, which is the base for BCIF, uses entropy coding:
    http://www.researchandtechnology.net...ompression.php

  6. #6
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,134
    Thanks
    179
    Thanked 921 Times in 469 Posts
    BCIF actually kinda uses it too, but it's huffman-based if I'm not mistaken.
    CMs at least can cover that kind of redundancy by probability precision - it appears when match runs are so long
    that even short codes accumulate into runs.
    Anyway, it's a good sign that the model is inefficient.

  7. #7
    Member
    Join Date
    Mar 2011
    Location
    Italy
    Posts
    6
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Hello everyone, I found this thread and thought I could explain some things.

    First, and probably most important, BCIF was created to allow very fast decompression of images. This explains why some things can't be done: for example, context modeling that is symmetrical between encoding and decoding must be reduced to a minimum. For this reason I used simple context-adaptive Huffman coding for entropy coding, as arithmetic coding would cost too much decompression speed. Long runs are pre-processed with RLE.

    And in fact, I think BCIF reached its goal: to my knowledge, there is no other image compressor, closed or open source, that reaches BCIF's compression ratio with superior decompression speed. Since in many contexts (think, for example, of an image on the web) decompression is executed many more times than compression, speed-wise this is a great feature. Of course, it all depends on the use case: where we don't care about decompression speed, we could just use FLIC, or paq8 if we *really* don't care.

    Then: the first bytes of a BCIF file have nothing to do with compression, they are just a file header with encoder and image data (the first four bytes, for example, simply read 'BCIF' in ASCII). Further on in the file there are no such redundancies, also because otherwise it would be surprising for BCIF to be only 3% worse than BMF (FLIC benchmarks).

    Finally, it is possible that for very compressible images the BCIF files have redundancies, as factors that do not matter for other images do influence the compression ratio of these. I may think about how to solve the problem, but in fact this would mean modifying (and probably slightly slowing) the algorithm to optimize for a very small family of images that already compress very well.

    Well, I hope this explanation helped... if someone is interested in the details, I'll be glad to discuss them.

    Cheers,
    Stefano

  8. #8
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Hello, it's nice to have another specialist in here.
    I'd like to report to you that on my Pentium D 2.66, BCIF decompression is slower than BMF's. So is compression. I did only quick and dirty benchmarks, but the results are:
    Compression / decompression time (seconds)

                  Image1         Image2          Image3         Image4        Image5          Image6         Image7
    BCIF          7.843/1.640    34.531/7.796    17.296/3.718   0.234/0.046   18.156/3.906    17.328/3.765   57.406/12.765
    BMF           3.203/1.343    19.812/6.953    5.625/3.109    0.156/0.031   6.703/3.625     6.843/3.671    27.593/9.984
    Jasper        6.343/5.312    22.750/17.843   9.546/8.062    0.312/0.265   12.156/10.796   11.125/9.718   33.781/28.906

    Size (bytes):

                  Image1     Image2     Image3     Image4    Image5     Image6     Image7
    Uncompressed  11714190   48529974   24400566   196662    24187518   23786550   82608294
    BCIF          2570674    10521270   3881256    58647     7725664    7217574    12829120
    BMF           2620296    10671592   4252220    58356     7337040    6842660    13831068
    Jasper        3412906    12143802   4866658    59647     7955315    7190730    15118994
    The data set was assembled in a rush to somewhat reflect what I compress. The average compression ratio is slightly better with BCIF than BMF, but both compression and decompression are slower.
    Last edited by m^2; 10th March 2011 at 19:28.

  9. #9
    Member
    Join Date
    Mar 2011
    Location
    Italy
    Posts
    6
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Thanks, I'm happy to be part of the community too! In fact, encode.ru seems to be much more active than other compression forums.

    These results are quite unexpected to me; for example, they are in contrast with the FLIC benchmarks, which showed BCIF to be twice as fast as BMF in decompression (slower compression was, instead, expected). On the other hand, the superiority of BCIF in compression ratio on several images also surprised me; in some benchmarks I did a while ago this was a rare event. Maybe this is due to some 'fast' settings in BMF? I guess some more comparisons would be useful.

    Anyway, I hope we can agree that BCIF is the best FOSS image compressor... or did I miss something?

    So long,
    Stefano

  10. #10
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by StefanoBrocchi View Post
    These result are quite unexpected to me, for example they are in contrast with the benchmarks of FLIC that revealed BCIF to be twice as fast as BMF in decompression (slower compression was, instead, expected).
    I was just as surprised to see the FLIC results. ^^

    Quote Originally Posted by StefanoBrocchi View Post
    On the other hand, also the superiority of BCIF in compression ratio in several images surprised me, in some benchmarks I did some time ago this was a rare event. Maybe this is due to some 'fast' settings in BMF ? I guess that some more comparisons would be useful.
    Yeah, I would welcome them too.

    Quote Originally Posted by StefanoBrocchi View Post
    Anyway I hope I can agree that BCIF is the best FOSS compressor... or did I miss something ?
    Yeah, I definitely agree. I knew little about JPEG 2000 and was surprised at how it got roadkilled.

  11. #11
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,134
    Thanks
    179
    Thanked 921 Times in 469 Posts
    > Thanks, I'm happy to be part of the community too !

    Welcome!

    > In fact, encode.ru seems to be much more active than other compression forums.

    Like what? I'm only aware of forum.compression.ru, but it's unlikely that you meant that.

    > First, and probably most important, BCIF has been created in order
    > to allow a very fast decompression of images. This explains why some
    > things can't be done: for example, a context modeling that is
    > symmetrical during encoding and decoding must be reduced to the
    > minimum.

    So do you have a slow prototype with proper context modelling,
    which you're trying to optimize?

    > For this reason, for entropy coding I used a simple context-adaptive
    > Huffman coding, as arithmetic coding would weight too much on
    > decompression speed. Long runs are pre-processed with RLE.

    Imho it's a misunderstanding.
    1. Any bitcoding is more complicated than simple arithmetic coding.
    Sure, _static_ huffman decoding is faster than rangecoder decoding,
    about 4x maybe, but it's only true for static huffman and simple optimized
    coders. With a complex coder structure (and always with adaptive huffman)
    rangecoder is likely to be faster.
    2. You don't know the actual strength of your model without precise
    entropy estimation.
    3. It's not an axiom that one entropy coding method has to be used
    everywhere. You can use arithmetic where it helps, and bitcode where
    it doesn't have much effect - like for noise bits.
    4. Imho the right way is to build a model with AC backend first
    (because AC compression is very close to theoretical limit), so
    that you can concentrate on the model's performance, then apply speed
    optimizations (and bitcoding is one of these).

    Anyway, I'd suggest to at least look at that - http://encode.ru/threads/1153-Simple...angecoder-demo
    Many people see arithmetic coding as something complex simply because they've looked at wrong implementations.
    For example, AC in jpeg is both slow (at least on PC) and redundant compared to the rc above.
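
    For readers who haven't seen one, here is a minimal sketch of the kind of adaptive bitwise range coder being discussed - an LZMA-style binary coder with 12-bit probabilities and shift-based adaptation. This is not Shelwien's rc demo and not BCIF/FLIC code, just an illustration of how little machinery such a coder needs (names like BitModel are made up for the example).

    Code:
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // p0 = probability that the next bit is 0, in 1/4096 units, adapted after each bit.
    struct BitModel { uint16_t p0 = 2048; };

    struct RangeEncoder {
        std::vector<uint8_t> out;
        uint64_t low = 0;
        uint32_t range = 0xFFFFFFFFu;
        uint8_t  cache = 0;
        uint64_t pending = 1;            // bytes delayed until a possible carry is resolved

        void shiftLow() {
            if ((uint32_t)low < 0xFF000000u || (low >> 32) != 0) {
                uint8_t carry = (uint8_t)(low >> 32);
                do { out.push_back((uint8_t)(cache + carry)); cache = 0xFF; } while (--pending);
                cache = (uint8_t)(low >> 24);
            }
            ++pending;
            low = (uint32_t)low << 8;    // drop the byte that is now pending
        }
        void encodeBit(BitModel& m, int bit) {
            uint32_t bound = (range >> 12) * m.p0;
            if (!bit) { range = bound;                m.p0 += (4096 - m.p0) >> 5; }
            else      { low += bound; range -= bound; m.p0 -= m.p0 >> 5; }
            while (range < (1u << 24)) { range <<= 8; shiftLow(); }   // renormalize
        }
        void flush() { for (int i = 0; i < 5; ++i) shiftLow(); }
    };

    struct RangeDecoder {
        const uint8_t* in; size_t pos = 0;
        uint32_t range = 0xFFFFFFFFu, code = 0;

        explicit RangeDecoder(const uint8_t* data) : in(data) {
            for (int i = 0; i < 5; ++i) code = (code << 8) | in[pos++];  // first byte is always 0
        }
        int decodeBit(BitModel& m) {
            uint32_t bound = (range >> 12) * m.p0;
            int bit;
            if (code < bound) { bit = 0; range = bound;                 m.p0 += (4096 - m.p0) >> 5; }
            else              { bit = 1; code -= bound; range -= bound; m.p0 -= m.p0 >> 5; }
            while (range < (1u << 24)) { range <<= 8; code = (code << 8) | in[pos++]; }
            return bit;
        }
    };

    int main() {
        const int bits[] = {0,0,0,1,0,0,0,0,1,0,0,0,0,0,1,0};   // skewed toward 0
        const int n = sizeof(bits) / sizeof(bits[0]);

        RangeEncoder enc; BitModel em;
        for (int i = 0; i < n; ++i) enc.encodeBit(em, bits[i]);
        enc.flush();

        RangeDecoder dec(enc.out.data()); BitModel dm;
        bool ok = true;
        for (int i = 0; i < n; ++i) ok = ok && (dec.decodeBit(dm) == bits[i]);
        std::printf("%u bytes, round trip %s\n", (unsigned)enc.out.size(), ok ? "ok" : "FAILED");
    }

    An order-0 BitModel is enough for the round-trip test above; an image model would simply select a different BitModel per context before each encodeBit/decodeBit call.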

    > And in fact, I think BCIF reached its goal: to my knowledge, there
    > is no other image compressor that reaches the BCIF compression ratio
    > with superior decompression speed, nor closed or open source.

    Well, there's at least BMF.
    And even if you're right, atm it's relatively simple to make
    another codec with paq-like compression.
    Is it ok with you to know that it would take a week to make a better codec,
    and it's just that nobody is doing it for now?

    > Since in many contexts (think, for example, of an image on the web)
    > decompression is executed many more times than compression,
    > regarding speed this is a great feature.

    Afaik for images on the web decompression speed doesn't matter at all,
    because it takes time to render a complex page.
    Well, having the possibility of incremental decompression is a useful
    feature for an image codec, but it doesn't have to be that fast now -
    there're frequently videos on a page (flash ads etc), and these certainly
    take longer to decode than .png.

    > Of course, it all depends on the usage cases: where we don't care
    > about decompression speed, then we could just use FLIC, or paq8 if
    > we *really* don't care.

    Actually paq8's speed isn't the same as the speed of its bmp model.
    It could be made a few times faster if we built a standalone
    coder based on it.

    > Further on in the file there are no such redundancies, also because
    > otherwise it would be surprising for BCIF to be only a 3% worse than
    > BMF (FLIC benchmarks).

    Actually BMF is more or less a quick hack based on PPMII (afaik),
    and its results haven't changed much since ~1999,
    so I wouldn't be too happy even with exactly the same results.

    > Finally, it is possible that for very compressible images that the
    > BCIF files have redundancies, as factors that do not weight on other
    > images do influence the compression ratio of these ones.

    Well, for now the problem is that when _part_ of an image doesn't have
    much detail, the corresponding part of the BCIF output becomes redundant.

  12. #12
    Member
    Join Date
    Feb 2010
    Location
    Nordic
    Posts
    200
    Thanks
    41
    Thanked 36 Times in 12 Posts
    I wish there was something for losslessly compressing dds (DirectDraw Surface), or a translator that can make dds more compressible with LZMA.

  13. #13
    Member
    Join Date
    Mar 2011
    Location
    Italy
    Posts
    6
    Thanks
    0
    Thanked 0 Times in 0 Posts
    > Like what? I'm only aware of forum.compression.ru , but its unlikely that you meant that.

    I didn't have much feedback on the comp.compression group for example

    > So do you have a slow prototype with proper context modelling,
    > which you're trying to optimize?

    I'm not sure I understood the question. I try to gather information during compression that then is encoded in the compressed file, and is decoded and directly used during decompression. For example, filters used for the image and Huffman codes are computed in an adaptive way only during compression, and the decompressor only has to read them and apply them to invert filtering and symbol encoding.

    > Imho its a misunderstanding.
    > 1. Any bitcoding is more complicated than simple arithmetic coding.
    > Sure, _static_ huffman decoding is faster than rangecoder decoding,
    > about 4x maybe, but its only true for static huffman and simple optimized
    > coders. With a complex coder structure (and always with adaptive huffman)
    > rangecoder is likely to be faster.
    > 2. You don't know the actual strength of your model without precise
    > entropy estimation.
    > 3. Its not an axiom that one entropy coding method has to be used
    > everywhere. You can use arithmetic where it helps, and bitcode where
    > it doesn't have much effect - like for noise bits.
    > 4. Imho the right way is to build a model with AC backend first
    > (because AC compression is very close to theoretical limit), so
    > that you can concentrate of model's performance, then apply speed
    > optimizations (and bitcoding is one of these).

    > Anyway, I'd suggest to at least look at that - http://encode.ru/threads/1153-Simple...angecoder-demo
    > Many people see arithmetic coding as something complex simply because they've looked at wrong implementations.
    > For example, AC in jpeg is both slow (at least on PC) and redundant, comparing to rc above.

    In the algorithm I use a set of Huffman trees determined during compression (and that remain static during decoding). When the image is decompressed, a simple computation is done to determine which family of codes has been used, and then the static codes are applied. Thanks to lookup tables, in about 90% of cases this can be done by simply reading a value from an array; AC could hardly be faster than that. I agree that it's hard to know the strength of the model without precise (higher order) entropy estimation, but I guess it is one of the compromises necessary for speed. But I will look at the coder you suggested to evaluate the possibility of realizing points 3 and 4.
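
    For illustration only (this is not BCIF's actual table layout - I haven't checked the source): a K-bit decode LUT for static Huffman codes typically looks like the sketch below. Every code of length <= K fills 2^(K-len) slots, so one array read yields both the symbol and the number of bits to consume; codes longer than K - the remaining ~10% of cases - would fall through to a slower tree or secondary-table walk.

    Code:
    #include <cstddef>
    #include <cstdint>
    #include <utility>
    #include <vector>

    constexpr int K = 10;   // LUT covers all codes of up to 10 bits

    // One entry per possible K-bit window: the decoded symbol and the real code length.
    // length == 0 marks a code longer than K (slow path) or an unused prefix.
    struct LutEntry { uint16_t symbol; uint8_t length; };

    // codes[sym] = { MSB-first bit pattern, length in bits } for each symbol.
    std::vector<LutEntry> buildLut(const std::vector<std::pair<uint32_t, int>>& codes) {
        std::vector<LutEntry> lut(size_t(1) << K, LutEntry{0, 0});
        for (size_t sym = 0; sym < codes.size(); ++sym) {
            uint32_t code = codes[sym].first;
            int len = codes[sym].second;
            if (len == 0 || len > K) continue;
            uint32_t base = code << (K - len);          // left-align the code in the K-bit window
            for (uint32_t fill = 0; fill < (1u << (K - len)); ++fill)
                lut[base | fill] = LutEntry{ (uint16_t)sym, (uint8_t)len };
        }
        return lut;
    }

    // Minimal MSB-first bit reader, just enough to peek/consume.
    struct BitReader {
        const uint8_t* data; size_t size; size_t bitpos = 0;
        uint32_t peek(int n) const {
            uint32_t v = 0;
            for (int i = 0; i < n; ++i) {
                size_t p = bitpos + i;
                int bit = (p >> 3) < size ? (data[p >> 3] >> (7 - (p & 7))) & 1 : 0;
                v = (v << 1) | (uint32_t)bit;
            }
            return v;
        }
        void consume(int n) { bitpos += (size_t)n; }
    };

    // The "90% of cases" fast path: one table read gives symbol and length.
    int decodeSymbol(BitReader& br, const std::vector<LutEntry>& lut) {
        LutEntry e = lut[br.peek(K)];
        if (e.length) { br.consume(e.length); return e.symbol; }
        return -1;   // a real decoder would walk a tree / secondary table here
    }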

    > Well, there's at least BMF.
    > And even if you're right, atm its relatively simple to make
    > another codec with paq-like compression.
    > Is it ok with you to know that it takes a week to make a better codec,
    > just nobody does it for now?

    I'm not very convinced of this; I think paq would give better compression but at a slower speed. But if someone does it, they're welcome to - it would be a very good codec in my opinion.

    > Afaik for images on the web decompression speed doesn't matter at all,
    > because it takes time to render a complex page.
    > Well, having a possibility of incremental decompression is a useful
    > feature for image codec though, but it doesn't have to be that fast now -
    > there're frequently videos on a page (flash ads etc), and these certainly
    > take longer to decode than .png.

    This depends on the website and how many images it has... there are also other cases, such as a photo stored on disk: a user can bear with a longer encoding phase but expects to see the image instantly when he opens it.

    > Actually BMF is more or less a quick hack based on PPMII (afaik),
    > and its results didn't change much since ~1999,
    > so I'd not be so happy even with exactly same results.

    Apart from FLIC, are there algorithms that are both noticeably faster and more effective than BMF? I think BMF is a very good compromise between speed and compression ratio; if you know better ones I'm interested.

    > Well, for now the problem is that when _part_ of image doesn't have
    > much detail, the corresponding part of BCIF output becomes redundant.

    Right, in particular when the image has zones that are absolutely flat.

  14. #14
    Member Surfer's Avatar
    Join Date
    Mar 2009
    Location
    oren
    Posts
    203
    Thanks
    18
    Thanked 7 Times in 1 Post
    Quote Originally Posted by willvarfar View Post
    I wish there was something for losslessly compressing dds (direct-draw surface), or a translator that can make dds more compressable with LZMA.
    Not sure, but I suppose that is lossy http://developer.amd.com/gpu/compres...s/default.aspx

  15. #15
    Member
    Join Date
    Feb 2010
    Location
    Nordic
    Posts
    200
    Thanks
    41
    Thanked 36 Times in 12 Posts
    That creates DDS files; NVidia has a more popular tool that does the same, based on the open-source libsquish.

    DDS is a format used internally by all mainstream console and desktop GPUs. (Phone GPUs from Imagination use an equivalent but different PowerVR compression.)

    It's lossy compression, in that you give it your input texture - typically TGA - and it compresses it at a fixed bitrate. But finding the best parameters for compressing any image to this bitrate with minimal artifacts (especially in mipmaps) is demanding, so a game engine prefers to eat DDS files that have been prepared by tools like Compressonator rather than letting the GPU do its quick-and-basic squashing.

    So DDS is dominant in games, and now that games are increasingly distributed online rather than on CD/DVD, I feel the need to improve compression further if it's possible. (Glest mods can be 100s of MB.)

    Compressing the simplified DDS version of an image might be done in some cunning way such as lossless JPEG?

    Or simply rearranging it by channel or such so as to increase the visibility of redundancy to LZMA or such?

    Sorry to hijack the thread; I was just capitalising on a new joiner with experience in image compression saying 'hi'
    Last edited by willvarfar; 11th March 2011 at 00:58.

  16. #16
    Member
    Join Date
    Jun 2009
    Location
    Kraków, Poland
    Posts
    1,471
    Thanks
    26
    Thanked 120 Times in 94 Posts
    It's always entertaining to read some of Shelwien's ranting

    I wish there was something for losslessly compressing dds (direct-draw surface), or a translator that can make dds more compressable with LZMA.
    All of the GPU texture compression algorithms are lossy, as they have a fixed compression ratio. While the old compression formats (up to DirectX 9.0c or DirectX 10) are based on gradients and quantization and so are simple, the new compression formats introduced in DirectX 11 (named BC6H and BC7 in DirectX, or BPTC and BPTC_FLOAT in OpenGL terminology) are quite complicated. I think that with those new formats even making a PAQ model would be hard.

    It should be possible to do something like Precomp does for Deflate, but I think no lossless coder is suited for compressing such images (i.e. images that were first lossily compressed, then decompressed). Maybe it would be possible, but probably only for those simple gradient-based formats. Also, in those formats the alpha layer is coded separately, so we can deinterleave the alpha and color layers before compression.
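
    As a rough sketch of that deinterleaving idea for one of the simpler formats: a DXT5/BC3 texture stores each 4x4 pixel block as 8 bytes of alpha data followed by 8 bytes of colour data (two RGB565 endpoints plus 2-bit indices). Splitting those fields into separate streams before handing the data to LZMA is trivial and fully reversible; whether it actually helps compression would have to be measured.

    Code:
    #include <cstdint>
    #include <vector>

    // Streams produced from a raw DXT5/BC3 block payload (16 bytes per 4x4 block).
    struct Dxt5Streams {
        std::vector<uint8_t> alpha;      // 8-byte alpha blocks (2 endpoints + 3-bit indices)
        std::vector<uint8_t> endpoints;  // 4 bytes per block: two RGB565 colour endpoints
        std::vector<uint8_t> indices;    // 4 bytes per block: 16 x 2-bit colour indices
    };

    Dxt5Streams deinterleave(const std::vector<uint8_t>& blocks) {
        Dxt5Streams s;
        for (size_t off = 0; off + 16 <= blocks.size(); off += 16) {
            s.alpha.insert(s.alpha.end(),         blocks.begin() + off,      blocks.begin() + off + 8);
            s.endpoints.insert(s.endpoints.end(), blocks.begin() + off + 8,  blocks.begin() + off + 12);
            s.indices.insert(s.indices.end(),     blocks.begin() + off + 12, blocks.begin() + off + 16);
        }
        return s;   // an interleave() doing the reverse would restore the original data bit-exactly
    }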

    http://www.opengl.org/registry/specs...ssion_bptc.txt
    Look at it. It's freaking complicated.
    Last edited by Piotr Tarsa; 11th March 2011 at 01:33.

  17. #17
    Member
    Join Date
    Feb 2010
    Location
    Nordic
    Posts
    200
    Thanks
    41
    Thanked 36 Times in 12 Posts
    Funnily enough, Shelwien seems to have, a long time ago, brushed against xbmc's texture compression cache. That is, DDS textures that they recompress very successfully with LZO... (he just mentioned this on IRC)

    The link about the new texture formats is very interesting; I was not aware the goalposts had moved further recently.

    Real stats from a real image:
    262,162 texture_archer.tga
    104,164 texture_archer.tga.xz
    43,832 texture_archer.dds
    27,748 texture_archer.dds.xz
    Last edited by willvarfar; 11th March 2011 at 03:25.

  18. #18
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,134
    Thanks
    179
    Thanked 921 Times in 469 Posts
    Thanks for reminding me about that type of image, I'll try experimenting with these after finishing the current projects -
    it really seems to be popular in games and the like.
    Simple recompression would still be simple though (not like precomp, which decodes all data, but like lzmarec) -
    it would probably already work even if we just added a bitwise rangecoder and basic contextual stats (with value type
    and previous value bits as context) to a format parser.

    Code:
    <willvarfar> 
     this is 4x faster than RC, right?
    <Shelwien> 
     no, its just my more or less random estimation
    <willvarfar> 
     or you mean 4x if you have a fixed table and that table is encoded
     in the sourcecode?
     you said "Sure, _static_ huffman decoding is faster than rangecoder
     decoding, about 4x maybe, but its only true for static huffman and
     simple optimized coders"
     and his context-adaptive huffman is your static huffman, just like
     say jpeg or zip?
    <Shelwien> 
     yes
     i meant that its like that if we'd make a order0 byte coder or
     something like that
     but its much less certain with complex structured models
     as i said, real adaptive huffman is always slower than arithmetic
     (its simply more complicated) but static huffman is only faster
     when it can be properly optimized
     i mean, it can have lots more branches than rangecoder
    <willvarfar> 
     so was I right in saying you thought he was saying he had 'adaptive
     huffman' as in Vitter? and you answered in those terms?
    <Shelwien> 
     no, i thought that it was likely static (adaptive huffman is very
     hard to implement; its rare), but i was too lazy to check the
     source, so mentioned adaptive huffman too
     anyway, my main point is that choosing entropy coding type just
     because its "fast" is wrong. and the right way imho is to use AC
     first, and build a good model, then optimize the speed if necessary
    <willvarfar> 
     agree, you make a compelling case
     convinced me :)
    <Shelwien> 
     it may be possible to skip the first step in well-researched areas,
     like when making a standard LZ77 coder
     but lossless image coding is clearly not one of these
     because paq8 model only uses nearby points for prediction, and has
     no colorspace or whatever picture-specific elements, but its still
     very good (at least its compression ratio) when compared with other
     lossless image codecs

  19. #19
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,134
    Thanks
    179
    Thanked 921 Times in 469 Posts
    > I didn't have much feedback on the comp.compression group for example

    Only perpetuum mobile inventors get feedback there.
    I hoped that you know something unknown to me though :)

    > Thanks to lookup tables in about 90% of cases this can be done
    > by simply reading a value from an array; hardly AC could be faster
    > than that.

    Actually a static AC can be decoded the same way.
    A while ago Cyan posted an order0 coder built like that -
    its processing speed was ~100MB/s afair.
    And even adaptive bitwise coders (like rc_v0, but with buffered i/o)
    demonstrate ~15MB/s or so (fpaq0pv4B which is more optimized, is ~27MB/s).
    For images of reasonable size (like 1-3MB) it doesn't seem that much
    of a delay.

    Also an important point is that advanced codecs basically use
    both methods - they compress original data to static bitcode first,
    then apply arithmetic coding with appropriate contexts.
    Thus static bitcoding works as a speed optimization - it reduces
    the number of arithmetic coder calls, but what's more important,
    also the number of probability estimations, allowing the use of
    stronger probability estimation functions.
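
    As a toy illustration of that split (not how any particular codec does it): a non-negative residual can be decomposed into a short "bucket" - its bit length - which goes through the adaptive coder, plus raw low bits that are close to uniform ("noise") and can be emitted with a cheap static bitcode, so the expensive probability update runs once per value instead of once per bit.

    Code:
    #include <cstdint>
    #include <cstdio>

    // bucket = bit length of (v+1) minus 1; lowBits = the remaining bits after the implicit leading 1.
    struct BitSplit { int bucket; uint32_t lowBits; };

    BitSplit split(uint32_t v) {
        int n = 0;
        for (uint32_t t = v + 1; t > 1; t >>= 1) ++n;     // floor(log2(v+1))
        return { n, (v + 1) & ((1u << n) - 1) };          // drop the implicit leading 1-bit
    }

    uint32_t join(BitSplit s) {                            // inverse, for the decoder side
        return ((1u << s.bucket) | s.lowBits) - 1;
    }

    int main() {
        for (uint32_t v : {0u, 1u, 5u, 100u, 70000u}) {
            BitSplit s = split(v);
            std::printf("%u -> bucket %d, %d raw low bits (reconstructed: %u)\n",
                        v, s.bucket, s.bucket, join(s));
        }
    }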

    Bytewise rangecoding (like in ppmd) also remains a possibility though.

    > I agree that it's hard to know the strength of the model without
    > precise (higher order) entropy estimation,

    In structured models (ones with multiple types of symbols; unlike
    ones which see the data as a uniform string of bytes) the precision
    is more about using the right components with the right parameters
    ("components" are things like counters, mixers, probability mappings)

    > there are also other cases, as a photo stored on a disk: an user can
    > bare with a longer encoding phase but expects to see it instantly
    > when he opens it.

    My current USB flash drive has a read speed of 15MB/s.
    i.e. it doesn't matter to me if an image decoder can go faster than that.
    Also SD cards from my camera are much slower than that,
    around 1MB/s maybe.
    And the web is normally much slower.
    Either way it's not a problem for modern rangecoders.

    > Apart FLIC, are there algorithms that are both noticeably faster and
    > more effective than BMF ?

    Not sure, there's an interesting one there though -
    http://encode.ru/threads/1171-Lossle...ll=1#post23273
    Also MRP. http://itohws03.ee.noda.sut.ac.jp/~matsuda/mrp/

    > I think BMF is a very good compromise between speed and compression ratio,

    Note that it's fully based on arithmetic coding (afaik).
    Also afaik, it looks somewhat like this - http://compression.ru/ds/bcdr.rar

  20. #20
    Member
    Join Date
    Feb 2010
    Location
    Nordic
    Posts
    200
    Thanks
    41
    Thanked 36 Times in 12 Posts
    My test image is from here: http://megaglest.svn.sourceforge.net...archer/models/

    I empirically judge it to be representative of an in-game texture

    Updating my stats to show BCIF and other lossless compressors for the original texture - I had to strip the alpha channel and convert it to bmp:

    196662 texture_archer.bmp
    98233 lzma e texture_archer.bmp 1c -d25 -fb273 -mc9999999 -mfbt3 -mt1 -lc5 -lp0 -pb0
    93985 png (pngout)
    82879 glicbawls
    74035 paq8l
    69215 MRP (though as 768x256 grayscale)
    69091 bcif
    67828 texture_archer.bmp.bcif.zip (*ahem*)
    64842 flic
    59944 paq8px69
    59730 zpaq nsicbmp_j4 8
    57388 bmf2
    * thank-you Shelwien for computing most of these numbers

    VERY NICE BCIF!
    Last edited by willvarfar; 11th March 2011 at 11:59.

  21. #21
    Member
    Join Date
    Mar 2011
    Location
    Italy
    Posts
    6
    Thanks
    0
    Thanked 0 Times in 0 Posts
    I trimmed some answers to make the thread more readable...

    > Actually a static AC can be decoded the same way...

    I will check out these coders, but I don't know if I could actually get an advantage from a first-order static arithmetic coder; using RLE+Huffman I'm already really close to first-order entropy.

    > For images of reasonable size (like 1-3MB) [for adaptive bitwise coders] it doesn't seem that much
    > of a delay...
    > Either way [image decoder speed] its not a problem for modern rangecoders...

    I could make more examples to argue in favor of the importance of speed (decoding while multitasking, decompressing an image on disk, ...) but I suspect that discussing these motivations would create a long discussion and take us off topic. Of course, deciding that speed is not so important would remove the assumptions under which BCIF is built; in that case we could just avoid using any fast algorithm such as BCIF, J2K and FLIC in favor of more efficient but slower ones such as MRP and GRALIC.

    > Not sure, there's an interesting one there though -
    > http://encode.ru/threads/1171-Lossle...ll=1#post23273
    > Also MRP. http://itohws03.ee.noda.sut.ac.jp/~matsuda/mrp/

    GLICBAWLS does not behave well for color images; often it compresses similarly to PNG. And it is also on a whole other level of complexity with respect to BCIF - don't be tricked by the short source code, for every sample it applies least squares to something like 64 values (if I remember correctly). MRP compresses generally well, but is *hundreds* of times slower in compression, and several times slower in decompression as well. Unless time is ignored completely, it is hard to say that these coders are better than BCIF.

    (willvarfar)
    > My test image is from here: http://megaglest.svn.sourceforge.net...archer/models/...

    Thanks for the test! Could you also give us some information about execution times?

    > texture_archer.bmp.bcif.zip (*ahem*)

    Ok, I'll take a look at this. I guess it is still caused by the flat black zones... well, the good news is that in these cases there's room for improvement!

    > VERY NICE BCIF!

    Thanks !

  22. #22
    Member
    Join Date
    Mar 2011
    Location
    Italy
    Posts
    6
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Quote Originally Posted by m^2 View Post
    Hello, it's nice to have another specialist in here.
    I'd like to report to you that on my Pentium D 2.66, BCIF decompression is slower than BMF's...
    One doubt: you used the C (or Windows executable) version, didn't you? Because the Java version is in fact much slower, especially in decompression.

  23. #23
    Member
    Join Date
    Feb 2010
    Location
    Nordic
    Posts
    200
    Thanks
    41
    Thanked 36 Times in 12 Posts
    Could you also give us some information about execution times ?
    Afraid not, as I didn't measure, and Shelwien (who did most of those tests) didn't either.

    But I do clearly remember that, as it's a small image, they are all blindingly fast - certainly even PAQ finishes within a second - so timings are not meaningful and will be hidden in OS initialisation time as much as in the algorithms.

    Certainly if there was a DDS compressor then I would be super attentive to decompression times because, even if for each image it's a matter of milliseconds, for all the artwork in a game it can be a big deal.

    Have you considered using SOIL to load a wider variety of input formats rather than just 24-bit BMP? And will you consider adding alpha support?
    Last edited by willvarfar; 11th March 2011 at 13:28.

  24. #24
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,134
    Thanks
    179
    Thanked 921 Times in 469 Posts
    Q9450 @ 3.52Ghz, ramdrive
    Code:
    18874422 bmp                                              
     1473146 png
     1198829 3.344s 0.672s bcif
     1189002 bcif.zip
     1392876 0.984s 0.187s bmf2
      891208 2.937s 1.359s bmf2 -S

  25. #25
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by StefanoBrocchi View Post
    A doubt: you used the C (or windows executable) version didn't you ? Because the Java version is in fact much slower, expecially in decompression
    Windows exe

    I updated the results with MRP. It failed to compress most of the images because it requires dimensions to be multiples of 8. And it's good that it did, because it's extremely slow.
    I lost the compression timing on the 1st file because of a typo in the command line, and I don't intend to run it again. You can assume it took forever and then some.
    But nevertheless I like it. Decompression speed is good and it may have some uses when turned into something more generic.
    I also tested glicsomething. Its results shouldn't be taken with just a grain of salt but at least a spoonful, 'cause its stdin/stdout compression made it harder for me to measure timing; I did it lazily and measured wall time, and the machine was doing many other things in the meantime. I didn't test its decompression after I saw that it's fully symmetric.


    And later I learned that the PGM is not equivalent to the bitmap, so all the results are not comparable. Still, you can see them below; at least the timing is worth something. Just remember to triple the sizes for glicsomething and MRP because they got smaller files.

    Compression / decompression time (seconds)

                  Image1         Image2          Image3         Image4         Image5          Image6         Image7
    BCIF          7.843/1.640    34.531/7.796    17.296/3.718   0.234/0.046    18.156/3.906    17.328/3.765   57.406/12.765
    BMF           3.203/1.343    19.812/6.953    5.625/3.109    0.156/0.031    6.703/3.625     6.843/3.671    27.593/9.984
    Jasper        6.343/5.312    22.750/17.843   9.546/8.062    0.312/0.265    12.156/10.796   11.125/9.718   33.781/28.906
    MRP           *              *               ?/6.484        19.765/0.078   *               *              *
    glicbawls     291/305        2206            716            2.7            728             721            3755

    Size (bytes):

                  Image1     Image2     Image3     Image4    Image5     Image6     Image7
    Uncompressed  11714190   48529974   24400566   196662    24187518   23786550   82608294
    BCIF          2570674    10521270   3881256    58647     7725664    7217574    12829120
    BMF           2620296    10671592   4252220    58356     7337040    6842660    13831068
    Jasper        3412906    12143802   4866658    59647     7955315    7190730    15118994
    MRP           *          *          945081     16703     *          *          *
    glicbawls     968031     2999989    1002060    17191     1964399    2090545    3179613


    EDIT: I decided to list codec versions.

    BCIF 1.0 beta
    BMF 2.0
    Jasper 1.900.1
    MRP 0.5
    glicsomething_sh_v0
    Last edited by m^2; 11th March 2011 at 16:19.

  26. #26
    Member Alexander Rhatushnyak's Avatar
    Join Date
    Oct 2007
    Location
    Canada
    Posts
    232
    Thanks
    38
    Thanked 80 Times in 43 Posts
    Quote Originally Posted by m^2 View Post
    I'd like to report to you that on my Pentium D 2.66, BCIF decompression is slower than BMF's. So is compression.
    I guess that's because Pentium D is 64-bit, BCIF.exe has only 32-bit code, and BMF2 has a lot of SSE and/or MMX code.

    Quote Originally Posted by willvarfar View Post
    57388 bmf2
    That's probably BMF2-s, and what about BMF2 with no "-s" ?
    Last edited by Alexander Rhatushnyak; 11th March 2011 at 18:28.

    This newsgroup is dedicated to image compression:
    http://linkedin.com/groups/Image-Compression-3363256

  27. #27
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,134
    Thanks
    179
    Thanked 921 Times in 469 Posts
    > I guess that's because Pentium D is 64-bit, BCIF.exe has only 32-bit code, and BMF2 has a lot of SSE and/or MMX code.

    More likely it's an i/o overhead.

    > That's probably BMF2-s, and what about BMF2 with no "-s" ?

    70064

  28. #28
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by Shelwien View Post
    > I guess that's because Pentium D is 64-bit, BCIF.exe has only 32-bit code, and BMF2 has a lot of SSE and/or MMX code.

    More likely its an i/o overhead.
    No - since I did it on a busy computer, wall times would be very inaccurate, and I used them only in the case of glicsmth. The other times are kernel+user CPU time, not much different from pure user CPU time in this case.
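
    For reference - m^2 doesn't say which tool he used, so this is just a hypothetical little wrapper - getting kernel+user CPU time of a child process on Windows only takes a GetProcessTimes() call after the process exits (error handling mostly omitted):

    Code:
    #include <windows.h>
    #include <cstdio>

    // Convert a FILETIME (100 ns units) to seconds.
    static double toSeconds(const FILETIME& ft) {
        ULARGE_INTEGER u;
        u.LowPart = ft.dwLowDateTime;
        u.HighPart = ft.dwHighDateTime;
        return u.QuadPart / 1e7;
    }

    int main(int argc, char** argv) {
        if (argc < 2) { std::printf("usage: cputime <command line>\n"); return 1; }

        STARTUPINFOA si = { sizeof(si) };
        PROCESS_INFORMATION pi = {};
        // Note: CreateProcess may modify the command-line buffer, so it must be writable.
        if (!CreateProcessA(NULL, argv[1], NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
            std::printf("failed to start process\n");
            return 1;
        }
        WaitForSingleObject(pi.hProcess, INFINITE);

        FILETIME creationTime, exitTime, kernelTime, userTime;
        GetProcessTimes(pi.hProcess, &creationTime, &exitTime, &kernelTime, &userTime);
        std::printf("kernel %.3f s, user %.3f s, kernel+user %.3f s\n",
                    toSeconds(kernelTime), toSeconds(userTime),
                    toSeconds(kernelTime) + toSeconds(userTime));

        CloseHandle(pi.hProcess);
        CloseHandle(pi.hThread);
        return 0;
    }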

  29. #29
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Update:
    GraLIC, BMF -s, FLIC, PAQ8
    I didn't decompress the PAQ files.

    Compression / decompression time (seconds)
    Highlighted Pareto frontiers

                   Image1          Image2           Image3          Image4         Image5           Image6           Image7
    BCIF           7.843/1.640     34.531/7.796     17.296/3.718    0.234/0.046    18.156/3.906     17.328/3.765     57.406/12.765
    BMF            3.203/1.343     19.812/6.953     5.625/3.109     0.156/0.031    6.703/3.625      6.843/3.671      27.593/9.984
    BMF -s         50.656/42.656   209.953/152.109  90.062/80.218   3.062/2.171    119.546/105.656  113.234/107.406  248.578/197.640
    GraLIC         17.640/19.656   52.421/61.843    23.609/26.187   0.468/0.515    33.843/39.453    32.671/37.328    84.203/88.031
    Jasper         6.343/5.312     22.750/17.843    9.546/8.062     0.312/0.265    12.156/10.796    11.125/9.718     33.781/28.906
    PAQ8px_v69 -3  300             1322             677             6.359          774              817              2566
    FLIC           3.125/2.937     14.046/13.031    6.968/6.218     0.062/0.078    7.734/7.125      7.281/7.234      24.093/22.109

    Size (bytes):

                   Image1     Image2     Image3     Image4    Image5     Image6     Image7
    Uncompressed   11714190   48529974   24400566   196662    24187518   23786550   82608294
    BCIF           2570674    10521270   3881256    58647     7725664    7217574    12829120
    BMF            2620296    10671592   4252220    58356     7337040    6842660    13831068
    BMF -s         1984656    7577312    2817844    44768     6018768    5510612    8869500
    GraLIC         2318786    7690232    2903346    48042     6187443    5684389    8606256
    Jasper         3412906    12143802   4866658    59647     7955315    7190730    15118994
    PAQ8px_v69 -3  2163044    8106505    3060101    53847     6729588    5935642    9087336
    FLIC           2740765    9307354    3516399    54345     6731118    6272127    10522245

    BCIF 1.0 beta
    BMF 2.0
    Jasper 1.900.1
    MRP 0.5
    glicsomething_sh_v0
    GraLIC18d
    FLIC 1.3.demo


    Please note that FLIC/BCIF decompression speed is similar to Alexander's results. BMF is relatively much faster and JPEG 2000 much slower (I used Jasper and he used Kakadu).
    Last edited by m^2; 12th March 2011 at 00:36.

  30. #30
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,497
    Thanks
    733
    Thanked 659 Times in 354 Posts
    thanks for benchmarking it. please write your cpu specs. are all the compressors single-threaded?
