
Thread: BIT Archiver

  1. #31
    Programmer osmanturan's Avatar
    Join Date
    May 2008
    Location
    Mersin, Turkiye
    Posts
    651
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Quote Originally Posted by Black_Fox View Post
    You're quite right in this, all files can either be compressed nicely straight away or are already compressed with ~gzip (with MP3 being not much compressible exception) which gets fixed by Precomp.
    I was very disappointed when I saw very close results compared to my previous implementation:

    Rank 44 - BIT 0.2b -> 13,085,734 bytes
    Rank 60 - BIT 0.1 -> 13,984,485 bytes

    I must do something about stationary and redundant files, because my current implementation's bit model is semi-stationary. When I copy&paste PAQ's match model into my code (of course with a changed bit model), the speed drops to about half of the current speed and compression sometimes gets even worse (it also does not help as expected). Do you have any ideas?

  2. #32
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,134
    Thanks
    179
    Thanked 921 Times in 469 Posts
    > I must do something about stationary and redundant files.

    1. For "redundant" (meaning highly compressible?) files it might be good
    to pay some attention to deterministic contexts. Though that's helpful for
    most common files. If your model is not byte-oriented (its probably binary?),
    then deterministic contexts alone (contexts always followed by a single byte
    value) can cause a significant redundancy.

    2. In general, stationary files are the hardest to compress, because they need
    too much precision. But in your case the model probably just is not very
    adaptive and is also tuned to non-stationary data (it is tuned to something even
    if you didn't tune it intentionally).

    > Because, my current implementation bit model is semi-stationary.

    It's best to leave the speed optimizations which reduce the model's precision
    until you have good compression, even if it's slow,
    because it's incomparably harder to improve the precision of a speed-optimized
    model than to do the reverse.

    > When I copy&paste PAQ's match model into my code (of course with changing bit model),
    > speeds slow down about half of the current speed and compression sometimes even worser
    > (also, does not help as expected).

    1. Are you sure that you're able to mix it in correctly?
    2. I don't think that PAQ's match model is any good.
    Did you try cutting it out of PAQ and testing it on its own?

    > Do you have any idea?

    You can get codelength measurements for your sample files with any good open-source compressor.
    Then it would be possible to sort your context types by relative redundancy and see which
    cases need attention first.
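
    As an illustration, codelength bookkeeping is just accumulating -log2(p) for every coded bit; here is a minimal sketch (hypothetical, not from any particular compressor):
    Code:
    #include <cmath>

    // Accumulate -log2(p) for every coded bit so that different context types
    // (or files) can be ranked by their relative redundancy.
    struct CodeLenMeter {
        double bits = 0.0;
        // p1 = the model's probability of bit==1 before the bit is coded
        void code(int bit, double p1) {
            double p = bit ? p1 : 1.0 - p1;
            bits -= std::log2(p);
        }
        double bytes() const { return bits / 8.0; }
    };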

  3. #33
    Programmer osmanturan's Avatar
    Join Date
    May 2008
    Location
    Mersin, Turkiye
    Posts
    651
    Thanks
    0
    Thanked 0 Times in 0 Posts
    > 1. For "redundant" (meaning highly compressible?) files it might be good
    > to pay some attention to deterministic contexts. Though that's helpful for
    > most common files. If your model is not byte-oriented (its probably
    > binary?), then deterministic contexts alone (contexts always followed by a
    > single byte value) can cause a significant redundancy.
    - Yes, I mean highly compressible files (e.g. fp.log).
    - Now I'm sure that my counter model is very poor (it's bit-oriented).

    > 2. In general, stationary files are hardest to compress, because they need
    > too much precision. But in your case the model probably just is not very
    > adaptive and also tuned to non-stationary data (it is tuned to something
    > even if you didn't tune it intentionally).
    I think my tuning only helps on non-stationary data. I must focus on a general handler for any type of data. Do you have any idea? (Surely, you have.)

    > Its best to leave the speed optimizations which reduce the model precision
    > until you have good compression, even if its slow.
    > Because its incomparably harder to improve the precision of speed-
    > optimized model than to do the reverse.
    Without releasing anything, I mainly focus on compression level; speed comes at the end, as you mention. It seems the main problem with my compressor is the counters.

    > 1. Are you sure that you're able to mix it in right?
    I'm sure that I can mix correctly. But when I copy&paste PAQ's match model, I only change the counter model (PAQ uses a statemap). So this seems to be another problem with my counter.

    > 2. I don't think that PAQ's match model is any good.
    I used it only for a test. It's too slow for me. I'm sure there must be something better out there.

    > Did you try cutting it off from PAQ and testing it like that?
    It's on my todo list

    Another question, about parallelism: how can we model or approximately simulate order-0 coding with multi-threading? You may think this is unnecessary, but I have another idea about it and I'm sure that you already know this.

  4. #34
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,134
    Thanks
    179
    Thanked 921 Times in 469 Posts
    Well, bitwise models are simple up to a certain level of compression.
    But the source model (the algorithm of generation) for most common
    data types is byte-oriented, so the difference shows at some point.

    But still, why don't you try testing frequencies?
    I mean, a counter which counts 0 and 1 occurrences like
    c0=c0*(1-wr)+1-bit, c1=c1*(1-wr)+bit, p0=(c0+d)/(c0+c1+2*d),
    with wr starting from 0 for stationary data.
    That's a good starting point imho, and it's not necessary
    to keep it slow (with division etc.) after that.
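
    A minimal sketch of that counter (illustrative only; the member names are mine):
    Code:
    // wr is the "forgetting" rate (0 for stationary data) and d is a small
    // regularizer that keeps p0 away from 0 and 1.
    struct FreqCounter {
        double c0 = 0, c1 = 0;   // decayed counts of 0s and 1s
        double wr = 0.0;         // raise above 0 for nonstationary data
        double d  = 1.0;

        double p0() const { return (c0 + d) / (c0 + c1 + 2 * d); }

        void update(int bit) {
            c0 = c0 * (1 - wr) + (1 - bit);
            c1 = c1 * (1 - wr) + bit;
        }
    };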

    > I think, my tuning only helps at non-stationary data.
    > I must focus on a general handler for this any type of data.
    > Do you have any idea? (Surely, you have )

    I think it's better to start writing a CM compressor with stationary samples
    (like book1 etc; not world95.txt, fp.log, or non-filtered enwik though).
    The model's behavior is less random with stationary data, and it's always
    possible to tune the parameters for faster adaptation after you reach
    a good compression level with stationary data, as that requires much better
    precision than nonstationary data.
    Also, BWT output is as good as stationary for such applications.

    > I'm sure that I can mix correctly. But, when copy&paste
    > PAQ's match model, I only change counter model (PAQ uses
    > statemap). So, this seems another problem for my counter.

    Again, in such case it might be useful to compare the performance
    of the match model alone in original version vs version with your
    counters.

    > > 2. I don't think that PAQ's match model is any good.
    > I'm sure that there must be better thing around anywhere.

    I think you could forget about it for now, as it's basically
    only a speed optimization. E.g. look at ppmonstr - it doesn't
    have a match model, but has good compression even at order 6.

    > > Did you try cutting it off from PAQ and testing it like that?
    > It's on my todo list

    To make it clear, I meant disabling the match model in paq and
    testing whether its compression becomes worse.

    > Another question about parallelism. How can we model or
    > approximately simulate the order-0 as multi-threading?

    The main problem is decoding - it's quite easy to thread
    the compression part as much as you want, because it can
    access the whole data.
    A multi-threaded implementation of sequential decoding
    might be sensible at some higher orders using speculative
    execution (that is, you start processing the updates for the
    most probable symbol before really testing whether your guess is right),
    because you would get 50-70% hits at high orders.
    But with order-0 I think the most reasonable approach would be to
    simply process the data blocks in different threads - it's
    even possible to avoid any compression degradation by compressing
    the frequency tables for each block.
    Then, another possible approach would be the one I talked about
    in the fpaq0pv4B thread... you can separate the bit sequences.
    That is, instead of sequentially compressing bits 7-0 of each
    byte, you can first compress all the bit-7's, then all the bit-6's using
    the already known bit-7's, etc. Like that it would be easy to do
    in parallel, and I think it won't affect the compression at
    order-0. But with this technique it would be really troublesome
    to keep a good compression level at higher orders.
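
    For illustration, a minimal sketch of that bit-plane reordering (not the actual fpaq0pv4B code):
    Code:
    #include <cstdint>
    #include <vector>

    // Split a block into 8 bit-plane streams: plane 0 holds bit7 of every byte,
    // plane 1 holds bit6, etc. Plane 0 can be coded first, plane 1 conditioned
    // on the already known bit7's, and so on - potentially one thread per plane.
    std::vector<std::vector<uint8_t>> split_bit_planes(const std::vector<uint8_t>& block) {
        std::vector<std::vector<uint8_t>> planes(8, std::vector<uint8_t>(block.size()));
        for (size_t i = 0; i < block.size(); ++i)
            for (int k = 0; k < 8; ++k)
                planes[k][i] = (block[i] >> (7 - k)) & 1;
        return planes;
    }

    // Decoder side: once all planes are decoded, each byte is rebuilt from its bits.
    std::vector<uint8_t> merge_bit_planes(const std::vector<std::vector<uint8_t>>& planes) {
        std::vector<uint8_t> block(planes[0].size(), 0);
        for (size_t i = 0; i < block.size(); ++i)
            for (int k = 0; k < 8; ++k)
                block[i] |= planes[k][i] << (7 - k);
        return block;
    }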

  5. #35
    Member
    Join Date
    Jun 2008
    Location
    USA
    Posts
    111
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Quote Originally Posted by Shelwien View Post
    Then, another possible approach would be the one I talked about
    in fpaq0pv4B thread...
    In order not to clog up another old thread, I'll ask here:

    What version of Intel's compiler did you use for fpaq0pv4b? Have you tried (or can you try) the latest version? Is it any better at working with AMD CPUs? Or did you intentionally disable compatibility for more speed? (10.1 for Linux is free for non-commercial use, right?) I just find it odd that Intel would even allow such a weird quirk, making you manually hack it to disable the bogus vendor ID check. (Maybe that was before AMD64 implemented SSE2? Maybe you can't upgrade because it's not free for Win32? I dunno, just curious....)

  6. #36
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,134
    Thanks
    179
    Thanked 921 Times in 469 Posts
    > What version of Intel's compiler did you use for fpaq0pv4b?

    10.1.021

    > Have you tried (or can you try) the latest version?

    Still no newer version, it seems.

    > Is it any better with working for AMD cpus?

    It has the GenuineIntel checks if you meant that.

    > Or did you intentionally disable compatibility for more speed?

    It's plain C++ and you can compile it yourself.

    > (10.1 for Linux is free for non-commercial use, right?)

    Whatever. I didn't check, but it probably has the same CPU detection code.
    You can look for the "GenuineIntel" string in the executable.

    > I just find it odd that Intel would even allow such a weird quirk,
    > making you manually hack it to disable the bogus vendor ID check.

    And I find it completely logical. They make their own compiler to
    sell more CPUs, because there's still no other C++ compiler able to
    perform automatic vectorization, and they also include their
    optimized math library with it. Now, why would they want to improve
    somebody else's sales as well?
    Actually, we're quite lucky, because they could include some real quirks
    which would slow down their programs on AMD CPUs in a non-patchable way
    (e.g. exploiting cache management or branch prediction implementation differences).

    > (Maybe that was before AMD64 implemented SSE2?

    There are actually _two_ checks - one for an executable with SSE2 code to run,
    and another one for the math.h functions to use SSE2 implementations.
    Anyway, it's easy to patch, so whatever.
    I have 6 computers here, but not a single AMD CPU, so I don't care anyway.

    > Maybe you can't upgrade because it's not free for Win32 ??

    I can't upgrade because they won't release any newer version.
    But believe me, nothing would change.

  7. #37
    Member
    Join Date
    Jun 2008
    Location
    USA
    Posts
    111
    Thanks
    0
    Thanked 0 Times in 0 Posts
    >> Or did you intentionally disable compatibility for more speed?
    >
    > Its plain C++ and you can compile it yourself.
    I meant in whatever compiler switches you used. -xK, perhaps? Try -axK instead. (Or whatever - I've never used it, just quoting what I heard.)

    >> I just find it odd that Intel would even allow such a weird quirk,
    >> making you manually hack it to disable the bogus vendor ID check.
    >
    >And I find it completely logical. They make their own compiler to
    >sell more CPUs, because there's still no other C++ compiler able to
    >perform the automatical vectorization
    Vector C? MSVC? (No idea, honestly.) Besides, Intel has contributed to GCC, so they aren't against collaboration. (And latest GCC turns on -ftree-vectorize if you use -O2 or so with -msse2 or whatever.)

    > and also they include their
    >optimized math library with it. Now, why would they want to improve
    > somebody else's sells as well?
    Because it's compatible, it's going to be sold no matter what, it's a trivial fix, and it makes them look dumb, especially since they charge high amounts for this "incompatible" compiler!

    > Actually, we're quite lucky, because they could include some real quirks,
    > which would slow down their programs on AMD cpus in a non-patchable way
    > (eg. using cache management or branch prediction implementation differences).
    If they want to optimize for Core 2, we'll let them. Many people do that for personal stuff, and that's not bad. It's just kinda silly to break optimization when it could work fine. But whatever. They have 80% of the market, and their compiler group apparently doesn't care.

    >> (Maybe that was before AMD64 implemented SSE2?
    >
    > There're actually _two_ checks - one for executable with SSE2 code
    > to run, and another one for math.h functions to use SSE2
    > implementations. Anyway its easy to patch, so whatever.
    > I have 6 computers here, but not a single AMD cpu, so don't care anyway.
    My AMD has SSE 1-3, so that's a lot of vectorization going to waste (plus 3dnow!, MMX, FPU, CMOVxx, etc.) Yet another reason that GCC is a good idea.

    >> Maybe you can't upgrade because it's not free for Win32 ??
    >
    > I can't upgrade because they won't release any newer version.
    > But believe me, nothing would change.
    In this case, you're allowed to do what you want. Hey, it's no huge deal. But if it doesn't run (for some), it doesn't run. I just don't see the "advantage" to that. I don't know of any other compiler that does that.
    Last edited by Rugxulo; 20th June 2008 at 04:43.

  8. #38
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,134
    Thanks
    179
    Thanked 921 Times in 469 Posts
    > > Its plain C++ and you can compile it yourself.
    >
    > I meant in whatever compiler switches you used. -xK, perhaps?

    xK is SSE1; I used xN for SSE2 (xW would work too).
    SSE1 didn't support integer vectors, so it doesn't help here.

    > Try -axK instead. (Or whatever, never used it, just quoting what I heard.)

    It's generally bad, because in such a case the compiler generates both
    versions of the code and switches between them - it's slow and the executable is bloated.

    > > no other C++ compiler able to perform the automatical vectorization
    >
    > Vector C? MSVC? (No idea, honestly.)

    VectorC yes, but its not C++.

    > Besides, Intel has contributed to GCC,
    > so they aren't against collaboration.
    > (And latest GCC turns on -ftree-vectorize if you use -O2 or so with -msse2 or whatever.)

    I don't know. They might get it right in time, but IntelC does it much better for now.
    Come back when GCC starts to vectorize code like
    Code:
    for( j=0; j<CNUM; j++ ) Rank[j] += ( Rank[j]<dv );
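
    (For reference, a hand-written SSE2 version of that loop - assuming CNUM is a multiple of 4 - would look roughly like the sketch below; an auto-vectorizer is expected to emit something equivalent on its own.)
    Code:
    #include <emmintrin.h>

    // Rank[j] += (Rank[j] < dv) for 32-bit ints, four elements at a time.
    // _mm_cmplt_epi32 yields -1 where the compare is true, so subtracting
    // the mask adds 1 exactly where Rank[j] < dv.
    void rank_update(int* Rank, int CNUM, int dv) {
        __m128i vdv = _mm_set1_epi32(dv);
        for (int j = 0; j < CNUM; j += 4) {
            __m128i r = _mm_loadu_si128((__m128i*)&Rank[j]);
            __m128i m = _mm_cmplt_epi32(r, vdv);
            r = _mm_sub_epi32(r, m);
            _mm_storeu_si128((__m128i*)&Rank[j], r);
        }
    }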
    > Because it's compatible, it's going to be sold no matter what,
    > it's a trivial fix, and it makes them look dumb,
    > especially since they charge high amounts for this "incompatible" compiler!

    It's worth it anyway, and they probably don't care whether they look dumb,
    because there are still enough even dumber people who use IntelC-compiled
    code in benchmarks.

    >> (Maybe that was before AMD64 implemented SSE2?
    >
    > There're actually _two_ checks - one for executable with SSE2 code
    > to run, and another one for math.h functions to use SSE2
    > implementations. Anyway its easy to patch, so whatever.
    > I have 6 computers here, but not a single AMD cpu, so don't care anyway.
    >
    > My AMD has SSE 1-3, so that's a lot of vectorization going to waste

    Just patch the GenuineIntel checks and it will be used.
    Though IntelC's developers intended for it to go to waste.
    http://www.ixbt.com/cpu/cpu-spec2k/pk/iccpatch.rar

    > (plus 3dnow!, MMX, FPU, CMOVxx, etc.) Yet another reason that GCC is a good idea.

    CMOVcc is used everywhere... it was present even on some 486s.
    And I also wonder why they discarded MMX support, because SSE2 can't process
    byte vectors. Well, I have older IntelC versions for that anyway.
    As for 3DNow! - it's for AMD to write their own compiler.

    > But if it doesn't run (for some), it doesn't run.
    > I just don't see the "advantage" to that.

    Intel has added the -ax switches for that, but this "compatibility" slows
    down the native code too. And imho it's better to see right away that
    something has to be patched, instead of wondering about Intel CPU superiority
    when looking at the benchmarks.

    > I don't know of any other compiler that does that.

    I don't like that either.
    But there's also no other compiler that generates 2x faster code than MSVC
    (10x faster than MSVC6) and 30% faster than GCC 4.3.0 (even without considering
    the cases where vectorization applies).

  9. #39
    Member
    Join Date
    Jun 2008
    Location
    USA
    Posts
    111
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Quote Originally Posted by Shelwien View Post
    > Try -axK instead. (Or whatever, never used it, just quoting what I heard.)

    Its generally bad, because in such a case compiler generates both
    versions of code and switches them - its slow and executable is bloated.
    What, never heard of .EXE compression?

    Quote Originally Posted by Shelwien
    > Because it's compatible, it's going to be sold no matter what,
    > it's a trivial fix, and it makes them look dumb,
    > especially since they charge high amounts for this "incompatible" compiler!

    Its worth it anyway, and they probably don't care whether they look dumb,
    because there still are enough even dumber people who use IntelC-compiled
    code in benchmarks.
    Right, and maybe AMD should tweak everything to use FDIV and other instructions known to have bugs in some older Intel processors. (Note to AMD: please don't.)

    Quote Originally Posted by Shelwien
    > (plus 3dnow!, MMX, FPU, CMOVxx, etc.) Yet another reason that GCC
    > is a good idea.

    CMOVcc is used everywhere... it was present even on some 486s.
    Uh ... no. At least, not on my 486 and early Pentium I. And DOSBox doesn't support it either (only emulates 486DX). Not even all PPros have it, that's why there's a CPUID check for it.

    Quote Originally Posted by Shelwien
    And I also wonder why they discarded the MMX support, because SSE2 can't process byte vectors. Well, I have older IntelC versions for that anyway.
    Dunno, maybe because it's less of a selling point these days? And people do claim (rightly or not) that MMX is deprecated and that SSE2 supersedes it. But MMX ties up the FPU registers, so maybe that's why? (Dunno, honestly.)

    Quote Originally Posted by Shelwien
    > I don't know of any other compiler that does that.

    I don't like that either.
    But there's also no other compiler that generates 2x faster code than MSVC
    (10x faster than MSVC6) and 30% faster than GCC 4.3.0. (even not considering the cases with applicable vectorization).
    I haven't benchmarked every compiler (CodeWarrior, Borland, etc.), especially the latest versions, so I dunno. I have heard, though (unconfirmed), that Intel's compiler is much slower at compiling than GCC (and both are slower than Digital Mars). Has your experience matched that?

  10. #40
    Tester
    Nania Francesco's Avatar
    Join Date
    May 2008
    Location
    Italy
    Posts
    1,565
    Thanks
    220
    Thanked 146 Times in 83 Posts

    MOC RESULTS! TEST PASSED!

    BIT 0.2B
    MOC test -> 405,549,829 bytes
    Compression time: 1465.427 sec.
    Decompression time: 1465.385 sec.

  11. #41
    Programmer osmanturan's Avatar
    Join Date
    May 2008
    Location
    Mersin, Turkiye
    Posts
    651
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Thanks a lot! I hope the next releases will have a rank in the top ten. BTW, are you still using TARred archives for testing compressors? I think this is a bad idea for evaluating a practical archiver, because most practical archivers (such as FreeArc) suffer from this kind of test. Also, I think small files in a TAR archive are a problem for adaptive compressors (such as PPM, context mixing, etc.), because their adaptation gets broken at file boundaries.

  12. #42
    Programmer osmanturan's Avatar
    Join Date
    May 2008
    Location
    Mersin, Turkiye
    Posts
    651
    Thanks
    0
    Thanked 0 Times in 0 Posts

    Arrow BIT 0.3

    Hi everyone,
    Here is the new version of my archiver: BIT 0.3. Changes are:

    - Some minor speed optimizations (some branches were removed, templates were implemented in the bit coding, hardware prefetching, etc.)
    - A coding mistake was fixed. It did not cause any data loss, but it did hurt compression (only the first bit of the order-1 model was actually modelled as order-1. What a funny mistake, isn't it?)
    - Some other small bugs were fixed.

    Overall, you should expect ~1.2% compression gain with 10-15% speed gain.

    Here is the link: http://www.osmanturan.com/bit03.zip

    Also, there is a profiling version. I have put some high-precision timing at critical points, and it shows timing information for those points. It's much slower due to all the timing computation (speed around 8 KB/sec). If you test the normal version alongside the profiling version, I will be very happy. Please post your test machine specifications along with the timing and processing speed. I wonder what the real bottleneck is; I suspect my counters.

    Here is the link: http://www.osmanturan.com/bit03p.zip

    Actually, the profiling version does not show the true statistics, because it disturbs the CPU's branch prediction. So we can only comment in general terms - we cannot pinpoint the real bottlenecks.
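
    For the curious: conceptually the instrumentation is just a scoped high-resolution timer around each stage that accumulates per-stage totals. A minimal sketch (not BIT's actual code) could look like this:
    Code:
    #include <chrono>

    // Accumulated wall-clock time for one profiled stage (e.g. "Mixer Updating").
    struct StageTimer {
        double total = 0.0;   // seconds spent in this stage so far
    };

    // RAII guard: construct at the start of a stage; elapsed time is added
    // to the stage's total when the guard goes out of scope.
    struct ScopedTimer {
        StageTimer& stage;
        std::chrono::steady_clock::time_point start;
        explicit ScopedTimer(StageTimer& s)
            : stage(s), start(std::chrono::steady_clock::now()) {}
        ~ScopedTimer() {
            std::chrono::duration<double> d = std::chrono::steady_clock::now() - start;
            stage.total += d.count();
        }
    };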

  13. #43
    Moderator

    Join Date
    May 2008
    Location
    Tristan da Cunha
    Posts
    2,034
    Thanks
    0
    Thanked 4 Times in 4 Posts

    Thumbs up

    Thanks Osman!

  14. #44
    Programmer osmanturan's Avatar
    Join Date
    May 2008
    Location
    Mersin, Turkiye
    Posts
    651
    Thanks
    0
    Thanked 0 Times in 0 Posts
    I nearly forgot: command-line options have changed. The "-mem" option is obsolete now, and a new "-p" option was added. Here is the best command for compressing ENWIK8:

    bit.exe a enwik8.bit -m lwcx -p best -files enwik8

    All of the parameters for "-p" option: fastest, fast, normal, good, best

  15. #45
    Programmer osmanturan's Avatar
    Join Date
    May 2008
    Location
    Mersin, Turkiye
    Posts
    651
    Thanks
    0
    Thanked 0 Times in 0 Posts

    Arrow Some small tests with BIT 0.3

    Test machine: Core2 Duo 2.2 GHz, 2 GB RAM, Vista Business x64 SP1

    Code:
    Calgary Corpus (TAR)
    751,514 bytes @ 503 KB/sec (6.344 Seconds)
    
    Valley.cmb
    8,908,444 bytes @ 526 KB/sec (36.904 Seconds)
    
    ENWIK8
    21,847,018 bytes @ 557 KB/sec (163.526 Seconds)
    
    SFC
    a10.jpg      -> 834,176 bytes
    AcroRd32.exe -> 1,414,206 bytes
    english.dic  -> 579,376 bytes
    FlashMX.pdf  -> 3,699,941 bytes
    FP.log       -> 576,981 bytes
    MSO97.dll    -> 1,773,039 bytes
    ohs.doc      -> 847,548 bytes
    rafale.bmp   -> 757,328 bytes
    vcfiu.hlp    -> 634,408 bytes
    world95.txt  -> 516,128 bytes
    
    Total Size of SFC: 11,633,131 bytes
    Total Time for SFC: 97.189 seconds

  16. #46
    Member
    Join Date
    May 2008
    Location
    Antwerp , country:Belgium , W.Europe
    Posts
    487
    Thanks
    1
    Thanked 3 Times in 3 Posts
    Results from bit3 and bit3p on calgary.tar:
    Code:
    timer bit3p a calgary_bit03p_best -m lwcx -p best -files calgar.tar
    
    Timer 3.01  Copyright (c) 2002-2003 Igor Pavlov  2003-07-10
    Bit Archiver v0.3p (Jul 31 2008 11:16:36)
    (c) 2007-2008, Osman Turan
    WARNING: PROFILING ENABLED!!! MUCH MORE SLOWER PROCESSING!!!
    USE THE OTHER VERSIONS FOR BENCHMARKING!!!
    Archive not found. Creating a new archive: calgary_bit03p_best
    Compressing: calgar.tar
    Processed:      3152896/     3152896 bytes (Speed:        4 KB/s)
    
    [ Profiling Results (Total:     323.03 seconds) ] -----------------------------
    Entropy Coding                                          42.88 seconds ( 13.3% )
    Counter Prediction                                      42.95 seconds ( 13.3% )
    Counter Updating                                        42.92 seconds ( 13.3% )
    Mixer Prediction                                        42.91 seconds ( 13.3% )
    Mixer Updating                                          42.89 seconds ( 13.3% )
    SSE                                                     42.90 seconds ( 13.3% )
    Context Updating                                        42.86 seconds ( 13.3% )
    Hash Class-1 Computation                                 5.36 seconds (  1.7% )
    Hash Class-2 Computation                                 5.36 seconds (  1.7% )
    Hash Location                                           12.01 seconds (  3.7% )
    
            Elapsed Time: 646.468 seconds
    Kernel Time  =   622.194 = 00:10:22.194 =  96%
    User Time    =    20.966 = 00:00:20.966 =   3%
    Process Time =   643.160 = 00:10:43.160 =  99%
    Global Time  =   646.484 = 00:10:46.484 = 100%
    
    timer bit3 a calgary_bit03_best -m lwcx -p best -files calgar.tar
    Timer 3.01  Copyright (c) 2002-2003 Igor Pavlov  2003-07-10
    Bit Archiver v0.3 (Jul 31 2008 11:11:11)
    (c) 2007-2008, Osman Turan
    WARNING: EXPERIMENTAL RELEASE. DO NOT USE FOR REAL BACKUPS!
    Archive not found. Creating a new archive: calgary_bit03_best
    Compressing: calgar.tar
    Processed:      3152896/     3152896 bytes (Speed:      526 KB/s)
    
            Elapsed Time: 5.975 seconds
    Kernel Time  =     0.436 = 00:00:00.436 =   7%
    User Time    =     5.335 = 00:00:05.335 =  88%
    Process Time =     5.772 = 00:00:05.772 =  95%
    Global Time  =     6.022 = 00:00:06.022 = 100%
    system : C2Q E6600 2.4 GHz / 4 GB RAM fsb533 /Vista Prem.

  17. #47
    Programmer osmanturan's Avatar
    Join Date
    May 2008
    Location
    Mersin, Turkiye
    Posts
    651
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Thanks pat357! Here are my profiling results on the Calgary corpus (Core2 Duo 2.2 GHz, 2 GB RAM, Vista Business x64 SP1):

    Code:
    [ Profiling Results (Total:     172.27 seconds) ] -----------------------------
    Entropy Coding                                          20.35 seconds ( 11.8% )
    Counter Prediction                                      22.01 seconds ( 12.8% )
    Counter Updating                                        20.60 seconds ( 12.0% )
    Mixer Prediction                                        20.63 seconds ( 12.0% )
    Mixer Updating                                          20.36 seconds ( 11.8% )
    SSE                                                     20.52 seconds ( 11.9% )
    Context Updating                                        20.04 seconds ( 11.6% )
    Hash Class-1 Computation                                 2.50 seconds (  1.5% )
    Hash Class-2 Computation                                 2.63 seconds (  1.5% )
    Hash Location                                           22.62 seconds ( 13.1% )
    
    
            Elapsed Time: 332.469 seconds
    It's better to explain the meaning of that table:
    Entropy Coding: Arithmetic coding + buffered I/O routines
    Counter Prediction: Semi-stationary counter prediction + quantization for mixing
    Counter Updating: Semi-stationary counter updating
    Mixer Prediction: Neural network mixing stage
    Mixer Updating: Neural network back-propagation part (updating weights)
    SSE: SSE stage (both prediction and updating)
    Context Updating: Locating bit models in hash entry
    Hash Class-1 Computation: Intermediate hash function computation (per nibble)
    Hash Class-2 Computation: Full hash function computation (per byte)
    Hash Location: Hash table collision resolving and locating the hash entry
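
    (For readers unfamiliar with the stages: "Mixer Prediction" and "Mixer Updating" refer to logistic mixing of the kind used in CM coders. A generic sketch - not BIT's actual mixer - is shown below; inputs are assumed to be strictly between 0 and 1.)
    Code:
    #include <cmath>
    #include <vector>

    // Generic PAQ-style logistic mixer, illustrative only:
    // p = squash( sum_i w[i] * stretch(p[i]) ), weights updated by a gradient step.
    struct Mixer {
        std::vector<double> w, st;   // weights and saved stretched inputs
        double lr;                   // learning rate
        Mixer(size_t n, double rate = 0.002) : w(n, 0.0), st(n), lr(rate) {}

        static double stretch(double p) { return std::log(p / (1.0 - p)); }
        static double squash(double x)  { return 1.0 / (1.0 + std::exp(-x)); }

        // "Mixer Prediction": combine the models' bit-1 probabilities.
        double predict(const std::vector<double>& p) {
            double dot = 0.0;
            for (size_t i = 0; i < w.size(); ++i) { st[i] = stretch(p[i]); dot += w[i] * st[i]; }
            return squash(dot);
        }

        // "Mixer Updating": w[i] += lr * (bit - p_mixed) * stretch(p[i]).
        void update(int bit, double p_mixed) {
            double err = bit - p_mixed;
            for (size_t i = 0; i < w.size(); ++i) w[i] += lr * err * st[i];
        }
    };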

    What's your opinion about this timing?

  18. #48
    Member
    Join Date
    May 2008
    Location
    Antwerp , country:Belgium , W.Europe
    Posts
    487
    Thanks
    1
    Thanked 3 Times in 3 Posts
    Quote Originally Posted by osmanturan View Post
    Thanks pat357! Here is my profiling results on calgary corpus (Core2 Duo 2.2 GHz, 2 GB RAM, Vista Business x64 SP1)

    [profiling results and stage explanations quoted above]

    What's your opinion about this timing?
    I'm puzzled as to why BIT is apparently running much slower on my system...
    Do you have a clue why this is? Could it be due to the "only 533 MHz" DRAM?
    My E6600 has L1 = 4*32 KB data + 4*32 KB instructions and L2 = 2*4096 KB; similar to (maybe even better than) your C2D, I guess...

    I did a second run; this time there were no other programs running:

    Code:
    [ Profiling Results (Total:     246.32 seconds) ] -----------------------------
    Entropy Coding                                          32.13 seconds ( 13.0% )
    Counter Prediction                                      33.83 seconds ( 13.7% )
    Counter Updating                                        32.68 seconds ( 13.3% )
    Mixer Prediction                                        32.64 seconds ( 13.3% )
    Mixer Updating                                          32.41 seconds ( 13.2% )
    SSE                                                     32.56 seconds ( 13.2% )
    Context Updating                                        32.08 seconds ( 13.0% )
    Hash Class-1 Computation                                 4.01 seconds (  1.6% )
    Hash Class-2 Computation                                 4.11 seconds (  1.7% )
    Hash Location                                            9.88 seconds (  4.0% )
            Elapsed Time: 488.143 seconds
    
    The numbers don't tell me much.
    The "Hash Location" takes 4% on my system, while 13.1% on yours.

  19. #49
    Moderator

    Join Date
    May 2008
    Location
    Tristan da Cunha
    Posts
    2,034
    Thanks
    0
    Thanked 4 Times in 4 Posts
    Test Machine: AMD Sempron 2400+, Windows XP SP2

    Code:
    bit3p a calgary_bit03p_best -m lwcx -p best -files calgar.tar
    
    Bit Archiver v0.3p (Jul 31 2008 11:16:36)
    (c) 2007-2008, Osman Turan
    
    WARNING: PROFILING ENABLED!!! MUCH MORE SLOWER PROCESSING!!!
    USE THE OTHER VERSIONS FOR BENCHMARKING!!!
    
    Archive not found. Creating a new archive: calgary_bit03p_best
    
    Compressing: calgar.tar
    Processed:      3152896/     3152896 bytes (Speed:        8 KB/s)
    
    [ Profiling Results (Total:     197.21 seconds) ] -----------------------------
    Entropy Coding                                          23.82 seconds ( 12.1% )
    Counter Prediction                                      30.77 seconds ( 15.6% )
    Counter Updating                                        24.56 seconds ( 12.5% )
    Mixer Prediction                                        24.57 seconds ( 12.5% )
    Mixer Updating                                          23.87 seconds ( 12.1% )
    SSE                                                     24.10 seconds ( 12.2% )
    Context Updating                                        23.15 seconds ( 11.7% )
    Hash Class-1 Computation                                 2.85 seconds (  1.4% )
    Hash Class-2 Computation                                 3.23 seconds (  1.6% )
    Hash Location                                           16.30 seconds (  8.3% )
    
    
            Elapsed Time: 383.735 seconds

    Code:
    bit3 a calgary_bit03_best -m lwcx -p best -files calgar.tar
    
    Bit Archiver v0.3 (Jul 31 2008 11:11:11)
    (c) 2007-2008, Osman Turan
    
    WARNING: EXPERIMENTAL RELEASE. DO NOT USE FOR REAL BACKUPS!
    
    Archive not found. Creating a new archive: calgary_bit03_best
    
    Compressing: calgar.tar
    Processed:      3152896/     3152896 bytes (Speed:      115 KB/s)
    
            Elapsed Time: 26.812 seconds

  20. #50
    Member
    Join Date
    May 2008
    Location
    England
    Posts
    325
    Thanks
    18
    Thanked 6 Times in 5 Posts
    Just doing a run on the corpus on my Athlon XP 3200+... but err, is this right? Processed: 19084399/ 4294967295 bytes ... seems... odd?

    Edit: I see the reason for this oddness - I copied pat's command line from above, and his file is called calgar.tar whereas mine was calgary.tar, so no clue what it was compressing on my system, so I broke out of it ;p

    Edit 2: It's insanely slow on my system, it's just done 200k after around 5 mins... it's a bit odd as I'd expect it to be at least as fast as on LP's system. Maybe I'll cancel it again and do a fresh reboot ;p
    Last edited by Intrinsic; 31st July 2008 at 21:43.

  21. #51
    Programmer osmanturan's Avatar
    Join Date
    May 2008
    Location
    Mersin, Turkiye
    Posts
    651
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Quote Originally Posted by Intrinsic View Post
    Just doing a run on the corpus on my Athlon XP 3200+...but err is this right? Processed: 19084399/ 4294967295 bytes ... seems...odd?

    Edit: i see the reason for this odd-ness, i copied pat's command line from above, and his file is called calgar.tar, whereas mine was calgary.tar so no clue what it was compressing on my system, so i broke out of it ;p
    I missed this bug, sorry. The compressor itself never checks whether the file to be compressed actually exists, so my file-size routine returns -1, which equals the maximum value of your unsigned long. It will be fixed. Thanks.
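
    (In other words, something like the hypothetical sketch below; the fix is simply to check for the file's existence and abort before starting compression.)
    Code:
    #include <cstdio>
    #include <cstdint>

    // Hypothetical illustration of the bug: a size routine returning -1 for a
    // missing file, stored in an unsigned 32-bit total, displays as 4294967295.
    uint32_t file_size_or_max(const char* path) {
        std::FILE* f = std::fopen(path, "rb");
        if (!f) return (uint32_t)-1;     // -1 wraps to 4294967295
        std::fseek(f, 0, SEEK_END);
        long size = std::ftell(f);
        std::fclose(f);
        return (uint32_t)size;
    }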

  22. #52
    Programmer osmanturan's Avatar
    Join Date
    May 2008
    Location
    Mersin, Turkiye
    Posts
    651
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Quote Originally Posted by Intrinsic View Post
    Edit 2: It's insanely slow on my system, it's just done 200k after around 5 mins...it's a bit odd as i'd expect it to be at least as fast as on LP's system. I maybe cancel it again and do a fresh reboot ;p
    The profiling version is extremely slow (due to 64-bit multiplication and double-precision timing). The compressor itself may have caused disk thrashing on your system if your system's free memory got low. I hope a reboot fixes the problem; if not, could you tell me whether there is disk thrashing or not? I have to point out the memory usage per profile before you try:

    best ~1 GB
    good ~512 MB
    normal ~256 MB
    fast ~128 MB
    fastest ~64 MB

    For example, when I run some other programs on my system (2 GB RAM), it causes some disk thrashing with the "-p best" option. But don't forget, I use Vista, which eats approximately 800 MB of RAM.

  23. #53
    Member
    Join Date
    May 2008
    Location
    England
    Posts
    325
    Thanks
    18
    Thanked 6 Times in 5 Posts
    It was the memory limit - I have 1 GB in this machine, so it was busy swapping stuff to the pagefile constantly ;p

    Here is just a quick test in fastest mode; gonna watch some TV now, so I will do a run with "good" later.

    Machine:
    AMD Athlon XP 3200+
    1Gb Kingston HyperX @ 2-3-3-7
    XP Corp SP2

    bit03 a calgary_bit03_fastest -m lwcx -p fastest -files calgar.tar

    Compressing: calgar.tar
    Processed: 3152896/ 3152896 bytes (Speed: 331 KB/s)

    Elapsed Time: 9.312 seconds


    bit03p a calgary_bit03_fastest -m lwcx -p fastest -files calgar.tar

    Compressing: calgar.tar
    Processed: 3152896/ 3152896 bytes (Speed: 6 KB/s)

    [ Profiling Results (Total: 244.84 seconds) ] -----------------------------
    Entropy Coding 31.80 seconds ( 13.0% )
    Counter Prediction 35.82 seconds ( 14.6% )
    Counter Updating 32.20 seconds ( 13.2% )
    Mixer Prediction 31.84 seconds ( 13.0% )
    Mixer Updating 31.85 seconds ( 13.0% )
    SSE 31.80 seconds ( 13.0% )
    Context Updating 31.33 seconds ( 12.8% )
    Hash Class-1 Computation 3.93 seconds ( 1.6% )
    Hash Class-2 Computation 4.10 seconds ( 1.7% )
    Hash Location 10.16 seconds ( 4.1% )

    Elapsed Time: 481.968 seconds

    GOOD:

    Compressing: calgar.tar
    Processed: 3152896/ 3152896 bytes (Speed: 279 KB/s)

    Elapsed Time: 11.015 seconds


    Compressing: calgar.tar
    Processed: 3152896/ 3152896 bytes (Speed: 6 KB/s)

    [ Profiling Results (Total: 245.88 seconds) ] -----------------------------
    Entropy Coding 31.76 seconds ( 12.9% )
    Counter Prediction 35.82 seconds ( 14.6% )
    Counter Updating 32.21 seconds ( 13.1% )
    Mixer Prediction 31.86 seconds ( 13.0% )
    Mixer Updating 31.85 seconds ( 13.0% )
    SSE 31.79 seconds ( 12.9% )
    Context Updating 31.32 seconds ( 12.7% )
    Hash Class-1 Computation 3.92 seconds ( 1.6% )
    Hash Class-2 Computation 4.10 seconds ( 1.7% )
    Hash Location 11.24 seconds ( 4.6% )

    Elapsed Time: 483.625 seconds

    Ran up CPU-Z as you did below; luckily I had already downloaded the latest version earlier this week as I was testing something else, heh.

    Number of cores 1 (max 1)
    Number of threads 1 (max 1)
    Name AMD Athlon XP
    Codename Barton
    Specification AMD Athlon(tm) XP 3200+
    Package Socket A (462)
    CPUID 6.A.0
    Extended CPUID 7.A
    Core Stepping
    Technology 0.13 um
    Core Speed 2205.0 MHz (11.0 x 200.5 MHz)
    Rated Bus speed 400.9 MHz
    Instructions sets MMX (+), 3DNow! (+), SSE
    L1 Data cache 64 KBytes, 2-way set associative, 64-byte line size
    L1 Instruction cache 64 KBytes, 2-way set associative, 64-byte line size
    L2 cache 512 KBytes, 16-way set associative, 64-byte line size
    Last edited by Intrinsic; 31st July 2008 at 22:51.

  24. #54
    Programmer osmanturan's Avatar
    Join Date
    May 2008
    Location
    Mersin, Turkiye
    Posts
    651
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Quote Originally Posted by pat357 View Post
    I'm puzzled as why BIT is apparently running much slower on my system...
    Do you maybe have a clue why this is ? Could this be due to the "only 533Mhz" DRAM ?
    My E6600 has L1 =4*32kb data + 4*32kB instructions and L2 = 2*4096kB; similar (maybe even better) compared to your C2D I guess...
    Interesting. My system is a laptop while yours is a desktop, so I would expect more speed on your system. Taking the timings into account, it seems your CPU spends much more time on simple integer arithmetic. For example, the hash class-1 function consists of only a couple of bit-wise operations, and the hash class-2 function consists of slightly heavier bit-wise operations plus a 1 KB lookup table. As you can see, even in such a simple case your system's processing times doubled. Really interesting. Cache sizes are the same for both of us, so it seems memory timing is the difference. IIRC, my system has Kingston 800 MHz RAM.
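
    (To give a rough idea of the relative costs, the sketch below is purely hypothetical - made-up constants, not BIT's actual hash functions - but it has the described shape: a cheap per-nibble class-1 mix and a heavier per-byte class-2 mix that also reads a 1 KB table.)
    Code:
    #include <cstdint>

    static uint32_t tab[256];   // 256 * 4 bytes = 1 KB lookup table, assumed filled at startup

    // "Class-1": a couple of bit-wise operations per nibble.
    inline uint32_t hash_class1(uint32_t h, uint32_t nibble) {
        return (h << 5) ^ (h >> 3) ^ (nibble * 0x9E3779B1u);
    }

    // "Class-2": a heavier per-byte mix that goes through the lookup table.
    inline uint32_t hash_class2(uint32_t h, uint32_t byte) {
        return (h * 0x2545F491u) ^ tab[(h ^ byte) & 0xFF];
    }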

  25. #55
    Programmer osmanturan's Avatar
    Join Date
    May 2008
    Location
    Mersin, Turkiye
    Posts
    651
    Thanks
    0
    Thanked 0 Times in 0 Posts
    To make things clearer, I have just downloaded the CPU-Z utility to give detailed information about my system. Here are the details:
    Code:
    Intel Mobile Core 2 Duo T7500 @ 2.2 GHz
    Multiplier: x11
    Bus Speed: 199.5 MHz
    Rated FSB: 789.0 MHz
    
    L1 Data: 2x32 KB (8-way set associative, 64-bytes line size)
    L1 Instr.: 2x32 KB (8-way set associative, 64-bytes line size)
    L2: 4096 KB (16-way set associative, 64-bytes line size)
    
    Chipset: Intel PM965
    
    Memory: DDR2, Dual, Symmetric (2xKingston)
    Max Bandwidth: PC2-5300 (333 MHz)
    DRAM Freq.: 332.5 MHz
    FSB:DRAM: 3:5
    CAS# Latency (CL): 5.0 clocks
    RAS# to CAS# Delay (tRCD): 5 clocks
    RAS# Precharge (tRP): 5 clocks
    Cycle Time: 15 clocks

  26. #56
    Moderator

    Join Date
    May 2008
    Location
    Tristan da Cunha
    Posts
    2,034
    Thanks
    0
    Thanked 4 Times in 4 Posts
    Test Machine:Intel Pentium 3 @750 MHz, 512 MB RAM, Windows 2000 Pro SP4

    Code:
    bit3p a calgary_bit03p_normal -m lwcx -p normal -files calgar.tar
    
    Bit Archiver v0.3p (Jul 31 2008 11:16:36)
    (c) 2007-2008, Osman Turan
    
    WARNING: PROFILING ENABLED!!! MUCH MORE SLOWER PROCESSING!!!
    USE THE OTHER VERSIONS FOR BENCHMARKING!!!
    
    Archive not found. Creating a new archive: calgary_bit03p_normal
    
    Compressing: calgar.tar
    Processed:      3152896/     3152896 bytes (Speed:        1 KB/s)
    
    [ Profiling Results (Total:    1303.56 seconds) ] -----------------------------
    Entropy Coding                                         169.36 seconds ( 13.0% )
    Counter Prediction                                     182.83 seconds ( 14.0% )
    Counter Updating                                       170.09 seconds ( 13.0% )
    Mixer Prediction                                       172.78 seconds ( 13.3% )
    Mixer Updating                                         169.01 seconds ( 13.0% )
    SSE                                                    172.05 seconds ( 13.2% )
    Context Updating                                       168.50 seconds ( 12.9% )
    Hash Class-1 Computation                                21.13 seconds (  1.6% )
    Hash Class-2 Computation                                21.80 seconds (  1.7% )
    Hash Location                                           56.00 seconds (  4.3% )
    
    
            Elapsed Time: 2575.173 seconds

    Code:
    bit3 a calgary_bit03_normal -m lwcx -p normal -files calgar.tar
    
    Bit Archiver v0.3 (Jul 31 2008 11:11:11)
    (c) 2007-2008, Osman Turan
    
    WARNING: EXPERIMENTAL RELEASE. DO NOT USE FOR REAL BACKUPS!
    
    Archive not found. Creating a new archive: calgary_bit03_normal
    
    Compressing: calgar.tar
    Processed:      3152896/     3152896 bytes (Speed:      104 KB/s)

  27. #57
    Tester
    Black_Fox's Avatar
    Join Date
    May 2008
    Location
    [CZE] Czechia
    Posts
    471
    Thanks
    26
    Thanked 9 Times in 8 Posts
    CPU: Athlon 64 X2 3800+ (2.0 GHz)
    RAM: 2 GB PC-5300 CL5
    WIN XP SP1 32-bit

    Code:
    C:\Documents and Settings\Black_Fox\Plocha\bit03p>bitp a bitcal -m LWCX -p best
    -files calgary.tar
    Bit Archiver v0.3p (Jul 31 2008 11:16:36)
    (c) 2007-2008, Osman Turan
    
    WARNING: PROFILING ENABLED!!! MUCH MORE SLOWER PROCESSING!!!
    USE THE OTHER VERSIONS FOR BENCHMARKING!!!
    
    Archive not found. Creating a new archive: bitcal
    
    Compressing: calgary.tar
    Processed:      3152896/     3152896 bytes (Speed:       42 KB/s)
    
    [ Profiling Results (Total:      41.31 seconds) ] -----------------------------
    Entropy Coding                                           4.09 seconds (  9.9% )
    Counter Prediction                                       8.74 seconds ( 21.2% )
    Counter Updating                                         4.85 seconds ( 11.7% )
    Mixer Prediction                                         4.69 seconds ( 11.4% )
    Mixer Updating                                           4.06 seconds (  9.8% )
    SSE                                                      4.75 seconds ( 11.5% )
    Context Updating                                         4.31 seconds ( 10.4% )
    Hash Class-1 Computation                                 0.41 seconds (  1.0% )
    Hash Class-2 Computation                                 1.01 seconds (  2.4% )
    Hash Location                                            4.42 seconds ( 10.7% )
    
    
            Elapsed Time: 73.266 seconds
    For this one I used my testset, tarred. It compressed to 13,416,632 bytes, while the sum of the single-file archives is a bit better.
    Code:
    C:\Documents and Settings\Black_Fox\Plocha\bit03p>bitp a bittes -m LWCX -p best
    -files mytestset.tar
    Bit Archiver v0.3p (Jul 31 2008 11:16:36)
    (c) 2007-2008, Osman Turan
    
    WARNING: PROFILING ENABLED!!! MUCH MORE SLOWER PROCESSING!!!
    USE THE OTHER VERSIONS FOR BENCHMARKING!!!
    
    Archive not found. Creating a new archive: bittes
    
    Compressing: mytestset.tar
    Processed:     30318592/    30318592 bytes (Speed:       41 KB/s)
    
    [ Profiling Results (Total:     428.70 seconds) ] -----------------------------
    Entropy Coding                                          40.58 seconds (  9.5% )
    Counter Prediction                                      91.07 seconds ( 21.2% )
    Counter Updating                                        53.06 seconds ( 12.4% )
    Mixer Prediction                                        53.77 seconds ( 12.5% )
    Mixer Updating                                          37.65 seconds (  8.8% )
    SSE                                                     51.00 seconds ( 11.9% )
    Context Updating                                        31.80 seconds (  7.4% )
    Hash Class-1 Computation                                 5.30 seconds (  1.2% )
    Hash Class-2 Computation                                10.39 seconds (  2.4% )
    Hash Location                                           54.08 seconds ( 12.6% )
    
    
            Elapsed Time: 707.219 seconds

  28. #58
    Programmer osmanturan's Avatar
    Join Date
    May 2008
    Location
    Mersin, Turkiye
    Posts
    651
    Thanks
    0
    Thanked 0 Times in 0 Posts

    Arrow BIT 0.4

    Here is another fresh release of BIT. There are some small compression-related tweaks; overall, there is a 0.7-1.2% compression gain. Also, the "non-existent file" bug has been fixed (thanks Intrinsic). Here are some tests for the "best" profile:

    Code:
    valley.cmb -> 8,672,996 bytes @ 502 KB/sec (38.927 Seconds)
    calgary.tar -> 739,428 bytes @ 472 KB/sec (6.712 Seconds)
    ENWIK8 -> 21,686,636 bytes @ 550 KB/sec (177.897 Seconds)
    Design2.tif -> 11,369,649 bytes @ 638 KB/sec (75.816 Seconds)
    Brosur1.tif -> 3,510,043 bytes @ 753 KB/sec (40.804 Seconds)
    
    SFC
    a10.jpg -> 832,508 bytes @ 222 KB/sec
    AcroRd32.exe -> 1,374,631 bytes @ 456 KB/sec
    english.dic -> 580,332 bytes @ 552 KB/sec
    FlashMX.pdf -> 3,683,584 bytes @ 400 KB/sec
    FP.log -> 580,971 bytes @ 727 KB/sec
    MSO97.dll -> 1,729,895 bytes @ 415 KB/sec
    ohs.doc -> 823,791 bytes @ 514 KB/sec
    rafale.bmp -> 762,084 bytes @ 512 KB/sec
    vcfiu.hlp -> 618,889 bytes @ 540 KB/sec
    world95.txt -> 510,941 bytes @ 430 KB/sec
    
    Total Size of SFC: 11,497,626 bytes (10,9 MB)
    Total Time for SFC: 99.051 Seconds
    Phew, I'm especially happy about the improvement on SFC - BIT has passed the 11 MB barrier with the latest releases. I hope I can reach the "10-10.5 MB @ 500 KB-1 MB/sec" target which is my goal for LWCX.

    Note that with this release I expect some speed improvement, especially on systems with small caches. My timing might be incorrect, because there was some disk thrashing during compression. A reboot did not fix the problem. (It's time to install my fresh Vista Ultimate x64+SP1 license instead of the current Vista Business x64 with separate SP1.)

    Thanks a lot for your contribution in testing my compressor. Thanks!

    Here is the link of the new release:
    http://www.osmanturan.com/bit04.zip

    Edit: spelling...

  29. #59
    Programmer osmanturan's Avatar
    Join Date
    May 2008
    Location
    Mersin, Turkiye
    Posts
    651
    Thanks
    0
    Thanked 0 Times in 0 Posts
    More tests with Shelwien's testset. Here are BIT 0.4's scores:
    Code:
    bookstar (35,594,240 bytes)
    9,182,623 bytes @ 514 KB/sec (67.705 seconds)
    
    cyg_bin (52,459,520 bytes)
    12,799,524 bytes @ 527 KB/sec (97.282 seconds)
    
    cyg_lib (95,436,800 bytes)
    16,236,713 bytes @ 617 KB/sec (151.087 seconds)
    
    enwik8 (100,000,000 bytes)
    21,686,636 bytes @ 551 KB/sec (177.140 seconds)
    
    finn_lst (31,851,425 bytes)
    4,526,729 bytes @ 608 KB/sec (51.215 seconds)
    
    gits2op_mkv (64,368,429 bytes)
    64,639,087 bytes @ 409 KB/sec (153.802 seconds)
    
    Total Size: 129,071,312 bytes
    Total Time: 702.428 Seconds
    There is still some disk thrashing, so again, the compressor must get faster. BTW, I like the new timing scores: 528 KB/sec average speed with a 33% compression ratio on this testset!

  30. #60
    Tester
    Black_Fox's Avatar
    Join Date
    May 2008
    Location
    [CZE] Czechia
    Posts
    471
    Thanks
    26
    Thanked 9 Times in 8 Posts
    Thank you, tested. That's a very decent improvement in just one day!
