Am I missing something? See this PDF.
Why does it use the codes 00, 01, 10 when using 0, 10, 11 would compress better?
I'm just a designer, but this bit choice seems awful: it takes 5 bits instead of 6!
Once you see it, it's almost a joke: someone with a math degree not spotting the obvious.
Let's have a laugh!
But seriously, maybe some good soul here can tell me why this doesn't work, even with 5 bits instead of 6.
Anyway, I'm a newbie here, and I'm offering to test files for free. If you're working on some DOS executable, or one for Windows XP 32-bit, do contact me here. As you know better than anyone, I've never been on forums before, but it's a privilege to be on this one (there are some brilliant minds here).
Fast or slow, good or bad compression ratio, I'll test it on different file formats and extensions,
and post results if you'd like a benchmark to appear in your topic (I test versions too).
And I have a lot of free time. Don't ask me why!
Last edited by toi007; 28th June 2011 at 20:11.
Reason: still get no response
That is a good point. If they had done any benchmarks then they would have seen that compression is worse. In any case, normal canonical Huffman decoding without big tables is O(n) anyway, the same as ternary decoding (not O(log n) like the paper claims). You read bits until the code has a value in the right range for that length. I don't see how their algorithm is more efficient. You still have to read the code and detect when you've reached the end, which isn't even mentioned in their algorithms.
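The canonical decoding loop described above ("read bits until the code has a value in the right range for that length") can be sketched in a few lines. This is a minimal illustration, not the paper's algorithm; the symbol numbering and the example code lengths are made up:

```python
def canonical_codes(lengths):
    """Assign canonical Huffman codes (as ints) from per-symbol code lengths."""
    order = sorted(range(len(lengths)), key=lambda s: (lengths[s], s))
    codes, code, prev = {}, 0, 0
    for s in order:
        code <<= lengths[s] - prev   # extend the code to the new length
        codes[s] = (code, lengths[s])
        prev = lengths[s]
        code += 1                    # next code of the same length
    return codes

def decode(bits, lengths):
    """Decode a list of bits: accumulate bits until the code value falls
    in the valid range for its current length, then emit that symbol."""
    codes = canonical_codes(lengths)
    first_code, syms = {}, {}
    for s, (c, length) in sorted(codes.items(), key=lambda kv: kv[1]):
        first_code.setdefault(length, c)     # smallest code of this length
        syms.setdefault(length, []).append(s)
    out, code, length = [], 0, 0
    for b in bits:
        code = (code << 1) | b
        length += 1
        if length in first_code and 0 <= code - first_code[length] < len(syms[length]):
            out.append(syms[length][code - first_code[length]])
            code, length = 0, 0
    return out

# Symbols 0, 1, 2 with lengths 1, 2, 2 get the canonical codes 0, 10, 11.
print(decode([0, 1, 0, 1, 1, 0], [1, 2, 2]))  # [0, 1, 2, 0]
```

No length-indexed table lookup is needed beyond the per-length first code and symbol list, which is why this runs in time proportional to the number of bits read.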
Also, even with your fix, ternary coding still gives worse compression. Suppose you could encode base 3 with each symbol taking the same space (say, 3 level memory or something) or you encoded each ternary code ideally using log(3)/log(2) = 1.585 bits. Even then, a ternary code implicitly rounds each probability to a power of 1/3 instead of 1/2 which would increase the average case rounding error.
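The rounding argument is easy to check numerically. The distribution below is a made-up example chosen so that a binary Huffman code fits it exactly; even an ideally packed ternary code (one trit per symbol at log2(3) bits each) comes out worse:

```python
import math

# A source that binary codes fit exactly: probabilities 1/2, 1/4, 1/4.
probs = [0.5, 0.25, 0.25]

entropy = -sum(p * math.log2(p) for p in probs)         # 1.5 bits/symbol
binary  = sum(p * L for p, L in zip(probs, [1, 2, 2]))  # codes 0, 10, 11 -> 1.5
# One ternary digit per symbol, each trit ideally packed into log2(3) bits:
ternary = 1 * math.log2(3)                              # ~1.585 bits/symbol

print(entropy, binary, ternary)
```

The binary code matches the entropy exactly, while the ternary code pays the cost of rounding every probability to a power of 1/3.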
Amazingly, I do understand you, though my math must be off. As a tester I can see that using the same number of 0, 10, and 11 codes gives a bad distribution: I know a well-compressed file would use 0 about 50% of the time, and 10 and 11 about 25% each. So I do understand something, in my own special way!
It's good to have such good professors!
I'll stick to the testing, don't worry.
The paper doesn't say how to build a ternary Huffman tree. It just describes how to decode using it, and claims it is faster (without testing). Of course using codes 00, 01, 10 is bad because you need 2 bits to code a symbol with probability 1/3. Using 0, 10, 11 would average 1.667 bits per symbol. The best you can do is log(3)/log(2) = 1.585 bits, maybe using arithmetic coding. Still this is worse than a regular Huffman code because a ternary tree forces probabilities to be rounded to 1/3, 1/9, 1/27... instead of 1/2, 1/4, 1/8... before coding.
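The numbers in the post above can be reproduced with a few lines of arithmetic, assuming three equally likely symbols:

```python
import math

p = 1 / 3  # three equally likely symbols

fixed  = sum(p * L for L in [2, 2, 2])  # codes 00, 01, 10 -> 2.0 bits/symbol
prefix = sum(p * L for L in [1, 2, 2])  # codes 0, 10, 11  -> ~1.667 bits/symbol
ideal  = math.log2(3)                   # ~1.585 bits/symbol (e.g. arithmetic coding)

print(fixed, prefix, ideal)
```

So 0, 10, 11 beats the paper's 00, 01, 10, but both fall short of the log(3)/log(2) bound that an arithmetic coder could approach.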
We just have to pay attention to Matt; it's highly recommended!
Except for those who already know all this, of course.
This is real teaching!