
Thread: BCM v0.04 is here! [!]

  1. #1
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,954
    Thanks
    359
    Thanked 332 Times in 131 Posts

BCM v0.04 is here! [!]

    OK, a new version of BCM is here!

    What's new:
    • Improved CM back end (Added linear interpolation to the SSE)
    • Changed block size to 64 MB
    • Some code optimizations
    • Some user interface improvements
    • Fixed a decoder bug that raised a false alert
    Quick comparison:

    calgary.tar
    BCM -> 784,072 bytes
    BLIZ -> 790,491 bytes
    BBB -> 798,705 bytes
    MCOMP -> 800,402 bytes
    Original -> 3,152,896 bytes

    book1
    BCM -> 210,642 bytes
    BLIZ -> 212,130 bytes
    BBB -> 213,162 bytes
    MCOMP -> 217,403 bytes
    Original -> 768,771 bytes

    world95.txt
    BCM -> 469,603 bytes
    BLIZ -> 474,891 bytes
    MCOMP -> 474,985 bytes
    BBB -> 475,408 bytes
    Original -> 2,988,578 bytes

    Attached Files

  2. #2
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    BOOKSTAR:
    v0.04 -> 9,171,992 bytes, ~30 s
    v0.03 -> 9,262,010 bytes, 27.844 s

  3. #3
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,134
    Thanks
    179
    Thanked 921 Times in 469 Posts
    Can you add some switches for disabling the BWT and the postcoder,
    or just make separate executables for that?
    If you do, we'd be able to compare your coder to
    http://ctxmodel.net/files/ST2rc/ and some others.

  4. #4
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,954
    Thanks
    359
    Thanked 332 Times in 131 Posts
    Quote Originally Posted by Shelwien View Post
    Can you add some switches for disabling the BWT and the postcoder,
    or just make separate executables for that?
    If you do, we'd be able to compare your coder to
    http://ctxmodel.net/files/ST2rc/ and some others.
    If we disable both the BWT and the postcoder, we will get a COPY command!
    If I have a spare minute I'll compile a standalone CM back end from BCM, so you'll be able to see that all your RCs are beaten... But I'm not really sure I should produce such a LEGO constructor kit... At the same time, I described my CM to you via ICQ - so you know all the details!
    All in all, BBB mistakenly uses too many APMs, instead of good counters tuned to each other as the main part of the coder.
    Your RC lacks SSE; yep, you use dynamic mixing, which is cool, but as you can see, replacing dynamic mixing with static mixing and adding SSE is better.
    That said, if we complete the square by adding dynamic mixing or a second SSE stage on top of the main SSE, we will get something crazy, but slow! Yep, I tried to avoid the BBB trap - an overloaded, heavy-duty CM that doesn't really work well in the end. My CM is quite simple, but efficient enough. I may keep adding bells and whistles to BCM to get even further... But I still want to keep BCM lightweight enough to be used in practice...
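    For readers new to SSE: a minimal sketch of an SSE stage with linear interpolation between adjacent probability buckets, in the spirit of the description above. This is an illustration only, not BCM's actual code; the bucket count, fixed-point scale and update rate are arbitrary choices.
    Code:
    // Minimal SSE/APM stage with linear interpolation between probability buckets.
    // Illustration only - not BCM's code; bucket count and update rate are guesses.
    #include <cstdint>
    #include <vector>

    class SSE {
        enum { kBuckets = 33 };              // buckets over the 12-bit range [0..4095]
        std::vector<uint16_t> t;             // refined probabilities, 12-bit fixed point
        int idx = 0;                         // remembered between refine() and update()
    public:
        explicit SSE(int contexts) : t(size_t(contexts) * kBuckets) {
            for (int c = 0; c < contexts; ++c)
                for (int i = 0; i < kBuckets; ++i)
                    t[size_t(c) * kBuckets + i] = uint16_t(i * 4095 / (kBuckets - 1)); // identity map
        }
        // p in [0..4095]: primary model's probability of bit==1; returns the refined p
        int refine(int p, int ctx) {
            int pos = p * (kBuckets - 1);    // bucket position with 12 fractional bits
            idx = ctx * kBuckets + (pos >> 12);
            int w = pos & 4095;              // interpolation weight of the upper bucket
            return (t[idx] * (4096 - w) + t[idx + 1] * w) >> 12;
        }
        // after coding the bit, pull both interpolated buckets toward the outcome
        void update(int bit, int rate = 5) {
            int target = bit ? 4095 : 0;
            t[idx]     = uint16_t(t[idx]     + ((target - int(t[idx]))     >> rate));
            t[idx + 1] = uint16_t(t[idx + 1] + ((target - int(t[idx + 1])) >> rate));
        }
    };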

  5. #5
    Expert
    Matt Mahoney's Avatar
    Join Date
    May 2008
    Location
    Melbourne, Florida, USA
    Posts
    3,255
    Thanks
    306
    Thanked 778 Times in 485 Posts

  6. #6
    Member
    Join Date
    Feb 2009
    Location
    USA
    Posts
    58
    Thanks
    0
    Thanked 0 Times in 0 Posts
    AMD Athlon 64 3000+ @ 2.2GHZ (single core)
    Testing with enwik8

    bcm002a - 23,761,415 bytes - compress: 138 sec, decompress: 54 sec
    bcm003 - 22,007,655 bytes - compress: 69 sec, decompress: 61 sec
    bcm004 - 21,450,604 bytes - compress: 76 sec, decompress: 66 sec

    Way to go! Here's a chart.
    Attached Thumbnails: bcm_progress.png

  7. #7
    Moderator

    Join Date
    May 2008
    Location
    Tristan da Cunha
    Posts
    2,034
    Thanks
    0
    Thanked 4 Times in 4 Posts


    Quote Originally Posted by encode View Post
    OK, a new version of BCM is here!

    What's new:
    • Improved CM back end (Added linear interpolation to the SSE)
    • Changed block size to 64 MB
    • Some code optimizations
    • Some user interface improvements
    • Fixed a decoder bug that raised a false alert
    Thanks Ilia!

  8. #8
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by Hahobas View Post
    AMD Athlon 64 3000+ @ 2.2GHZ (single core)
    Testing with enwik8

    bcm002a - 23,761,415 bytes - compress: 138 sec, decompress: 54 sec
    bcm003 - 22,007,655 bytes - compress: 69 sec, decompress: 61 sec
    bcm004 - 21,450,604 bytes - compress: 76 sec, decompress: 66 sec

    Way to go! Here's a chart.
    Why do you start the size chart at 20000000?
    That's seriously misleading.

  9. #9
    Member
    Join Date
    May 2008
    Location
    Germany
    Posts
    410
    Thanks
    37
    Thanked 60 Times in 37 Posts
    ---
    ..\PGM\bcm003 e db.dmp ..\RESULT\tdbbcm3
    bcm v0.03 by ilia muraviev
    encoding...
    ratio: 0.392 bpb
    done
    ---
    ..\PGM\bcm004 e db.dmp ..\RESULT\tdbbcm4
    bcm v0.04 by ilia muraviev
    encoding 65536k block...
    encoding 65536k block...
    encoding 65536k block...
    encoding 65536k block...
    encoding 65536k block...
    encoding 65536k block...
    encoding 65536k block...
    encoding 65536k block...
    encoding 65536k block...
    encoding 43312k block...
    ratio: 0.399 bpb
    done
    ---

    sorry!

    result:

    The test file db.dmp (an Oracle dump file) is 648331264 bytes.

    bcm003 ratio: 0.392 bpb compress to 31783496 bytes 260 s
    bcm004 ratio: 0.399 bpb compress to 32319001 bytes 280 s

    the test takes an unexpected turn:

    with my test file,
    bcm004 does not compress better than bcm003

    ---
    RINGS v.1.3 (FCM Fast Context Mixing) file compressor. Only for testing
    Copyright 2007 by Nania Francesco Antonio (Italy). All rights reserved.

    rings13 c compress to 26356464 bytes time= 20.77 s.
    ---
    Matt Mahoney:
    rings 1.3 uses 54 MB for compression and 47 MB for decompression
    ---
    ---
    --- BCM004 - Changed block size to 64 MB ---
    ---
    ---
    it would be interesting
    if we could test the new algorithm with other block sizes,
    maybe 32 MB / 48 MB / 256 MB

  10. #10
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,954
    Thanks
    359
    Thanked 332 Times in 131 Posts
    Well, RINGS is NOT a BWT-based compressor!

    As an example, all BWT-based compressors fail on english.dic:

    english.dic
    RINGS v.1.5c -> 566,326 bytes
    BCM -> 1,166,789 bytes
    BBB -> 1,170,489 bytes
    BLIZ -> 1,170,607 bytes
    MCOMP -> 1,190,740 bytes
    DARK -> 1,192,700 bytes
    SBC -> 1,195,254 bytes
    Original -> 4,067,439 bytes

    It's proof that RINGS v.1.5c is not a BWT-based file compressor.

    The same should apply to database compression. So do not compare apples and oranges!
    Such performance on such specific data does not demonstrate super compression...

  11. #11
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by encode View Post
    Well, RINGS is NOT a BWT-based compressor!

    As an example, all BWT-based compressors fail on english.dic:

    english.dic
    RINGS v.1.5c -> 566,326 bytes
    BCM -> 1,166,789 bytes
    BBB -> 1,170,489 bytes
    BLIZ -> 1,170,607 bytes
    MCOMP -> 1,190,740 bytes
    DARK -> 1,192,700 bytes
    SBC -> 1,195,254 bytes
    Original -> 4,067,439 bytes

    It's proof that RINGS v.1.5c is not a BWT-based file compressor.

    The same should apply to database compression. So do not compare apples and oranges!
    Such performance on such specific data does not demonstrate super compression...
    So what is it? When you run it, it says "fast context mixing".
    http://heartofcomp.altervista.org/ says "bwt+ari".
    What's up?

  12. #12
    Member
    Join Date
    May 2008
    Location
    Germany
    Posts
    410
    Thanks
    37
    Thanked 60 Times in 37 Posts
    @encode
    first i want to say:
    thank you for your work!
    the new bcm004 is a good compressor for several kinds of files, and it is fast too

    ---
    Matt Mahoney on http://www.cs.fit.edu/~mmahoney/compression/text.html
    ---
    rings 1.3 uses 54 MB for compression and 47 MB for decompression.

    rings 1.4c has an option (1-9) which selects memory usage
    Each increment doubles usage
    For option 9, compression uses 526 MB and decompression uses 789 MB.
    The program uses BWT.
    The transformed data is encoded using MTF (move to front),
    pre-Huffman coding followed by arithmetic coding.

    rings 1.5 was released Apr. 21, 2008.
    It improves compression and is symmetric with regard to memory usage.
    Options are like 1.4c.
    ---
    rings 1.3 uses method CM
    rings 1.4c uses method BWT+...
    rings 1.5c uses method BWT+... ???
    ---
    you are right about rings 1.3 - it seems to use CM and not BWT

    but why do you think rings 1.5c does not use BWT?

    Is the good compression of one special file
    really unimpeachable evidence that
    "RINGS v.1.5c is not a BWT-based file compressor"??

    I think only Nania Francesco knows how rings 1.5c works inside.

    best regards

  13. #13
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,954
    Thanks
    359
    Thanked 332 Times in 131 Posts
    I read some posts about RINGS at the old forum. Probably RINGS uses ST4 or something - ST4 is a "limited order BWT". Nania writes via a translator, so it's not always a trivial task to understand what he means. Anyway:
    Rings 1.0 - 1.4c use:
    Fast BWT->MTF->PRE-HUF->ARI

    PRE-HUF is:
    0 = bit 0
    1 = bits 100
    2 = 101
    3 = 1100, etc.

    Maybe I should try this PRE-HUF idea? Adding SSE and many contexts...

    Anyway, I was really impressed by RINGS myself. And as with Christian's programs, one question comes to mind - "How did he do that??"

  14. #14
    Programmer toffer's Avatar
    Join Date
    May 2008
    Location
    Erfurt, Germany
    Posts
    587
    Thanks
    0
    Thanked 0 Times in 0 Posts
    "Pre-Huff" is a very nice feature, since it can improve compression and boosts the coding speed extremely.

    With a byte-oriented CM output coding (flat decomposition) you *always* have to do 8 coding steps per symbol, and you have as many distinct symbols as the source emits.

    With MTF the number of distinct symbols may be reduced, since a context history is mostly dominated by the last seen symbol(s) 0, 1, 2, ..., so ~50-60% of the data is a "0" after the MTF stage. If the "0" symbol gets a 1-bit Huffman code, you can code 50-60% of the data with just one coding step.

    Some experience tells me:

    P(0) = 0.60 -> 0.74 bit ~ 1 bit (rounded up to be pessimistic)
    P(1) = 0.15 -> 2.74 bit ~ 3 bit
    P(2) = 0.05 -> 4.32 bit ~ 5 bit

    where P(s) is the probability of s (here s is the context rank, which should be similar to the MTF symbol distribution). As you see the symbols 0..2 cover ~80% of the whole data. The average number of coding steps per symbol (for 80% of the data) would be:

    1 * 0.60 + 3 * 0.15 + 5 * 0.05 = 1.3 coding steps

    Your approach uses 8 steps for all of the data. Hence one can see that the post coder can be accelerated by a factor of 4..6 and compression can be improved, too. But that depends on the applied decomposition and data.
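    To make the arithmetic above concrete, here is a small sketch that runs MTF over a buffer (standing in for the BWT output) and reports the average number of binary coding steps under the 1/3/5-step decomposition, with everything else falling back to the flat 8 steps. The data buffer and step costs are assumptions taken straight from the example above, nothing more.
    Code:
    // MTF over a BWT output plus expected coding steps per symbol, assuming
    // rank 0 costs 1 step, ranks 1 and 2 cost 3 and 5, and everything else 8.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<uint8_t> data = { /* fill with the BWT output of your file */ };
        if (data.empty()) return 0;

        std::vector<uint8_t> order(256);
        for (int i = 0; i < 256; ++i) order[i] = uint8_t(i);

        std::vector<uint64_t> rankCount(256, 0);
        for (uint8_t c : data) {
            int r = 0;
            while (order[r] != c) ++r;                    // current MTF rank of c
            for (int i = r; i > 0; --i) order[i] = order[i - 1];
            order[0] = c;                                 // move to front
            ++rankCount[r];
        }

        double n = double(data.size());
        double p0 = rankCount[0] / n;
        double p1 = rankCount[1] / n;
        double p2 = rankCount[2] / n;
        double steps = 1 * p0 + 3 * p1 + 5 * p2 + 8 * (1.0 - p0 - p1 - p2);
        std::printf("P(0)=%.2f P(1)=%.2f P(2)=%.2f  avg steps/symbol=%.2f (flat: 8)\n",
                    p0, p1, p2, steps);
        return 0;
    }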

  15. #15
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,954
    Thanks
    359
    Thanked 332 Times in 131 Posts
    But by itself MTF may destroy some correlations... Anyway, the idea is worth implementing.

  16. #16
    Programmer toffer's Avatar
    Join Date
    May 2008
    Location
    Erfurt, Germany
    Posts
    587
    Thanks
    0
    Thanked 0 Times in 0 Posts
    You don't need to use the same alphabet for contexts and coded symbols. By using the original alphabet for contexts and the MTF'd alphabet for coding the current symbols, you won't lose anything.
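    A toy sketch of that idea: the statistics are indexed by the original (un-MTFed) previous byte, while the modelled symbol is the MTF rank. Illustrative only; counts[] just stands in for whatever statistics a real coder keeps.
    Code:
    // Context alphabet = original bytes, coded alphabet = MTF ranks.
    #include <cstdint>
    #include <vector>

    void gather_stats(const std::vector<uint8_t>& data) {
        std::vector<uint8_t> order(256);
        for (int i = 0; i < 256; ++i) order[i] = uint8_t(i);
        // counts[ctx][rank]: statistics indexed by the un-MTFed context byte
        std::vector<std::vector<uint32_t>> counts(256, std::vector<uint32_t>(256, 0));

        uint8_t ctx = 0;
        for (uint8_t c : data) {
            int rank = 0;
            while (order[rank] != c) ++rank;              // MTF rank of the current byte
            for (int i = rank; i > 0; --i) order[i] = order[i - 1];
            order[0] = c;

            ++counts[ctx][rank];   // model the rank...
            ctx = c;               // ...but condition on the original byte
        }
    }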

  17. #17
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,954
    Thanks
    359
    Thanked 332 Times in 131 Posts
    Well, with my CM this surely doesn't work...

  18. #18
    Member
    Join Date
    Jun 2008
    Location
    Germany
    Posts
    369
    Thanks
    5
    Thanked 6 Times in 4 Posts
    Is there any chance of a GUI (or at least of a readme that lists all the commands...)?

  19. #19
    Programmer toffer's Avatar
    Join Date
    May 2008
    Location
    Erfurt, Germany
    Posts
    587
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Why shouldn't it work? It's pretty easy, actually...

  20. #20
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,954
    Thanks
    359
    Thanked 332 Times in 131 Posts
    Quote Originally Posted by JangoFatXL View Post
    Is there any chance of a GUI (or at least of a readme that lists all the commands...)?
    BCM has just two commands:
    e - encode (compress)
    d - decode (decompress)



    To encode 'book1' to 'book1.bcm':
    bcm e book1 book1.bcm

    To decode:
    bcm d book1.bcm book1

    Additionally, you may use the 'Send To' menu with a BAT file.
    Create a new TXT file and type:
    bcm e %1 %1.bcm

    Rename the file to, say, 'bcm.bat'.

    Copy this file to:
    C:\Users\<User Name>\AppData\Roaming\Microsoft\Windows\SendTo\
    (Under Windows Vista)

    Copy bcm.exe to:
    C:\Windows\

    Done!

    Well, I have some plans to make BCM open source, as well as a part of the PIM archiver! But only after I finish working on it!


  21. #21
    Tester

    Join Date
    May 2008
    Location
    St-Petersburg, Russia
    Posts
    182
    Thanks
    3
    Thanked 0 Times in 0 Posts
    I have some plans to make BCM open source, as well as a part of the PIM archiver!
    Wow! It will be great!

  22. #22
    Member
    Join Date
    Jun 2008
    Location
    Germany
    Posts
    369
    Thanks
    5
    Thanked 6 Times in 4 Posts
    thx

  23. #23
    Member
    Join Date
    Aug 2008
    Location
    Saint Petersburg, Russia
    Posts
    215
    Thanks
    0
    Thanked 0 Times in 0 Posts
    I'm actually impressed by BCM's performance. My previous tests showed NanoZip's domination among BWT compressors, but not this time...
    Code:
    aaz@rover:/media/data$ ls -laS reymont*
    -rwxrwxrwx 1 root root 6627202 2009-02-15 01:06 reymont
    -rwxrwxrwx 1 root root 1246230 2009-02-15 01:39 reymont.bz2
    -rwxrwxrwx 1 root root 1004263 2009-02-15 02:00 reymont.bwt.mcp
    -rwxrwxrwx 1 root root 1000560 2009-02-15 01:07 reymont.co.nz
    -rwxrwxrwx 1 root root  988205 2009-02-15 01:15 reymont.cO.nz
    -rwxrwxrwx 1 root root  983275 2009-02-15 02:31 reymont.bbb
    -rwxrwxrwx 1 root root  982045 2009-02-15 01:33 reymont.bcm
    -rwxrwxrwx 1 root root  970558 2009-02-15 01:19 reymont.cc.nz
    Right now I'm hard pressed to find any everyday BWT compressor (apparently nz -cc doesn't belong to that category) that beats BCM on this file.

    Though it needs some tuning for big files:
    Code:
    aaz@rover:/media/data$ ls -laS wine-1.1.15.*
    -rwxrwxrwx 1 root root 111175680 2009-02-15 02:20 wine-1.1.15.tar
    -rwxrwxrwx 1 root root  12865441 2009-02-15 02:26 wine-1.1.15.tar.co.m723m.nz
    -rwxrwxrwx 1 root root  12830801 2009-02-15 02:23 wine-1.1.15.tar.bcm
    -rwxrwxrwx 1 root root  12828612 2009-02-15 02:46 wine-1.1.15.tar.bwt.mcp
    -rwxrwxrwx 1 root root  12712151 2009-02-15 02:25 wine-1.1.15.tar.cO.m599m.nz
    -rwxrwxrwx 1 root root  12712151 2009-02-15 02:26 wine-1.1.15.tar.cO.m739m.nz
    -rwxrwxrwx 1 root root  12606927 2009-02-15 02:46 wine-1.1.15.tar.bbb
    I know BBB took way longer than the others to compress, but BCM also lost to mcomp and nz -cO.

  24. #24
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,954
    Thanks
    359
    Thanked 332 Times in 131 Posts
    It's important to test BCM against the others with the same block size. On binary data, for example, BBB with a smaller block size may beat BCM, but with the same block size it may not. Mostly, BCM is the strongest of all!
    BLIZ uses LZP preprocessing, which may really help in some cases - pht.psd, for example. NanoZip has heavy preprocessing in addition to multiple algorithm selection. I believe that even with '-co', NZ may in some cases choose something other than its BWT - my tests have shown that. At the same time, BCM has only a simple E8/E9 transformer.
    BWT, check out a small example of how NanoZip performs with and without preprocessing. To disable its preprocessing I XORed the test file with 128:

    book1:
    BCM -> 210,642 bytes
    NZ -> 196,190 bytes

    XORed book1:
    BCM -> 210,745 bytes
    NZ -> 214,693 bytes

    You can see that BCM has nearly the same performance on the XORed book1, while NanoZip showed the REAL performance of its BWT. BCM is MUCH stronger!

    Furthermore, v0.04 is not the final version of BCM. I'm already working on an even stronger CM back end - yep, it's more complex, but thanks to some optimizations and making it more cache efficient I hope I won't lose too much processing speed, and may even make my CM faster!
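    For reference, the XOR trick is trivial to reproduce; a minimal sketch follows (the program name and usage here are made up, it simply flips the top bit of every byte):
    Code:
    // XOR every byte of a file with 128 to defeat text detection/preprocessing.
    #include <cstdio>

    int main(int argc, char** argv) {
        if (argc != 3) { std::printf("usage: xor128 <infile> <outfile>\n"); return 1; }
        FILE* in  = std::fopen(argv[1], "rb");
        FILE* out = std::fopen(argv[2], "wb");
        if (!in || !out) { std::printf("cannot open files\n"); return 1; }
        int c;
        while ((c = std::fgetc(in)) != EOF)
            std::fputc(c ^ 128, out);
        std::fclose(in);
        std::fclose(out);
        return 0;
    }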


  25. #25
    Member
    Join Date
    Aug 2008
    Location
    Saint Petersburg, Russia
    Posts
    215
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Quote Originally Posted by encode View Post
    BWT, check out a small example of how NanoZip performs with and without preprocessing.
    "BWT" - a funny typo there
    Quote Originally Posted by encode View Post
    To disable its preprocessing I XORed the test file with 128:

    book1:
    BCM -> 210,642 bytes
    NZ -> 196,190 bytes

    XORed book1:
    BCM -> 210,745 bytes
    NZ -> 214,693 bytes

    You can see that BCM has nearly the same performance on the XORed book1, while NanoZip showed the REAL performance of its BWT. BCM is MUCH stronger!
    Good idea for testing the algos. Though... Come on, this time it's not much stronger, is it? Just four kilobytes. But the difference between the presence and absence of preprocessing in nz is obvious.
    Quote Originally Posted by encode View Post
    Furthermore, v0.04 is not the final version of BCM. I'm already working on an even stronger CM back end - yep, it's more complex, but thanks to some optimizations and making it more cache efficient I hope I won't lose too much processing speed, and may even make my CM faster!
    This is really important. BCM is quite fast (fast enough to be used in real life) and powerful indeed, while it doesn't even have any "tweaks" - it's just a compressor, not even an archiver. There's still a lot of room to grow and a lot to achieve; I think it might be very, very competitive in the future.

  26. #26
    Tester
    Black_Fox's Avatar
    Join Date
    May 2008
    Location
    [CZE] Czechia
    Posts
    471
    Thanks
    26
    Thanked 9 Times in 8 Posts
    Quote Originally Posted by encode View Post
    I XORed the test file with 128:
    (...)
    XORed book1:
    BCM -> 210,745 bytes
    NZ -> 214,693 bytes
    (...)
    NanoZip showed the REAL performance of its BWT
    You're lucky that Sami doesn't log in to his account anymore, otherwise a new flame war would probably begin.
    I am... Black_Fox... my discontinued benchmark
    "No one involved in computers would ever say that a certain amount of memory is enough for all time? I keep bumping into that silly quotation attributed to me that says 640K of memory is enough. There's never a citation; the quotation just floats like a rumor, repeated again and again." -- Bill Gates

  27. #27
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,954
    Thanks
    359
    Thanked 332 Times in 131 Posts
    Quote Originally Posted by Black_Fox View Post
    You're lucky that Sami doesn't log in to his account anymore, otherwise a new flame war would probably begin.
    It's not a flame, it's just a direct comparison of BWT-based coders.

    BTW, Sami said that his BWT is based on QLFC. So, check out some small paper on QLFC:
    Attached Files

  28. #28
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by encode View Post
    It's important to test BCM against the others with the same block size. On binary data, for example, BBB with a smaller block size may beat BCM, but with the same block size it may not. Mostly, BCM is the strongest of all!
    BLIZ uses LZP preprocessing, which may really help in some cases - pht.psd, for example. NanoZip has heavy preprocessing in addition to multiple algorithm selection. I believe that even with '-co', NZ may in some cases choose something other than its BWT - my tests have shown that. At the same time, BCM has only a simple E8/E9 transformer.
    BWT, check out a small example of how NanoZip performs with and without preprocessing. To disable its preprocessing I XORed the test file with 128:

    book1:
    BCM -> 210,642 bytes
    NZ -> 196,190 bytes

    XORed book1:
    BCM -> 210,745 bytes
    NZ -> 214,693 bytes

    You can see that BCM has nearly the same performance on the XORed book1, while NanoZip showed the REAL performance of its BWT. BCM is MUCH stronger!

    Furthermore, v0.04 is not the final version of BCM. I'm already working on an even stronger CM back end - yep, it's more complex, but thanks to some optimizations and making it more cache efficient I hope I won't lose too much processing speed, and may even make my CM faster!

    So maybe add LZP too? Or an unusual option: REP. I tested it a bit with grzip / ppmd and it works pretty well. In the end the output was usually slightly bigger than with LZP, but REP decompresses faster.
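    For readers who haven't seen LZP: a minimal byte-level sketch of the idea. A real implementation (BLIZ's, for instance) codes the flags and lengths with a model; the hash size, minimum match length and escape scheme below are arbitrary choices made just for illustration.
    Code:
    // Toy LZP: a hash of the previous 4 bytes predicts a position; if the bytes
    // there match long enough, emit an escape + length instead of the literals.
    #include <cstdint>
    #include <vector>

    std::vector<uint8_t> lzp_encode(const std::vector<uint8_t>& in) {
        const int HASH_BITS = 16, MIN_LEN = 4;
        std::vector<uint32_t> table(1u << HASH_BITS, 0);   // context hash -> last position
        std::vector<uint8_t> out;
        size_t i = 0;
        while (i < in.size()) {
            if (i >= 4) {
                uint32_t ctx = (uint32_t(in[i-4]) << 24) | (uint32_t(in[i-3]) << 16)
                             | (uint32_t(in[i-2]) << 8)  |  uint32_t(in[i-1]);
                uint32_t h = (ctx * 2654435761u) >> (32 - HASH_BITS);
                uint32_t pred = table[h];
                table[h] = uint32_t(i);
                if (pred) {
                    size_t len = 0;
                    while (i + len < in.size() && in[pred + len] == in[i + len] && len < 255)
                        ++len;
                    if (len >= MIN_LEN) {                  // match: escape byte + length
                        out.push_back(0xFF);
                        out.push_back(uint8_t(len));
                        i += len;
                        continue;
                    }
                }
            }
            if (in[i] == 0xFF) { out.push_back(0xFF); out.push_back(0); } // escaped literal
            else out.push_back(in[i]);
            ++i;
        }
        return out;   // the decoder rebuilds the same table, so no offsets are stored
    }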

  29. #29
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,954
    Thanks
    359
    Thanked 332 Times in 131 Posts
    LZP may hurt compression in some cases.

    The better way is a stronger CM! Currently I'm working really hard on a new CM - analyzing and optimizing each part of the model - this hardcore work may take from a few weeks to a few months...

  30. #30
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by encode View Post
    The better way is a stronger CM!
    Not for decompression speed.
    LZ should improve it, while a stronger CM worsens it (heuristic rule).


