Been thinking about this lately. A file can be represented as a large number (for example, by reading its bytes or its Base64 encoding as digits), right? With that in mind, I have two ideas that might be useful for compression.
1) Say a large number X is made up of smaller numbers x1, x2, x3, ..., xn
e.g. X=2510202550100, where x1=2, x2=5, x3=10, x4=20, x5=25, x6=50, x7=100
Now, all of those smaller numbers are factors of 100, so X could then simply be represented as 100. Of course, issues arise when the smaller numbers aren't in ascending/descending order, or when not all of the factors of 100 are included. But other than that, what do you guys think of this idea?
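Here's a rough sketch of what decompression and compression could look like under this scheme (the names `decode` and `encode` are my own, and `encode` is just a brute-force search over small candidates, not a practical algorithm):

```python
def divisors(n):
    """All factors of n in ascending order."""
    return [d for d in range(1, n + 1) if n % d == 0]

def decode(n):
    """Reconstruct X by concatenating the divisors of n in ascending order."""
    return int("".join(str(d) for d in divisors(n)))

def encode(x):
    """Brute-force: find a small n whose divisor-concatenation equals x."""
    for n in range(1, 1001):
        if decode(n) == x:
            return n
    return None

print(decode(100))  # 124510202550100
```

Note this reconstructs 124510202550100, not the X=2510202550100 from the example, because the divisors 1 and 4 get included too, which illustrates the "not all factors included" caveat: once some factors are dropped, the compressed form 100 no longer determines X.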
2) Say a number Y is a perfect power of another number.
e.g. Y=10460353203, but Y=3^21, so Y could simply be represented as 3^21, or just the pair (3, 21).
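Detecting perfect powers is actually one of the easy cases: you don't need full factorisation, just a root check for each candidate exponent. A minimal sketch (my own illustration, not a tuned implementation):

```python
import math

def as_perfect_power(y):
    """Return (base, exp) with base**exp == y and exp as large as possible,
    or None if y is not a perfect power."""
    # The exponent can't exceed log2(y), since the base is at least 2.
    for exp in range(int(math.log2(y)), 1, -1):
        base = round(y ** (1 / exp))
        # Check the neighbours too, to guard against float rounding error.
        for b in (base - 1, base, base + 1):
            if b > 1 and b ** exp == y:
                return (b, exp)
    return None

print(as_perfect_power(10460353203))  # (3, 21)
```

Since there are only about log2(Y) exponents to try, this runs in essentially no time even for very large Y, so idea 2 doesn't actually depend on fast factorisation. The harder problem is that most numbers simply aren't perfect powers.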
The main issue with both of these ideas is that there's no known algorithm that can factorise an arbitrary number almost instantly, so factorising a huge number can take a huge amount of time. I've been thinking about this and came up with another idea: until someone comes up with a super-fast factorising algorithm, why not use a predefined list of factors that a compression algorithm can look up? That way the factors are already known, and all the compression algorithm has to do is search the list and pick the appropriate factors.
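For the perfect-power version, the predefined list could be a table generated offline and shipped with the compressor. A toy sketch of that idea (the table bounds and the name `POWER_TABLE` are my own assumptions):

```python
# Precompute number -> (base, exp) for all perfect powers below a cutoff.
# setdefault keeps the smallest base, i.e. the largest exponent (64 -> (2, 6)).
POWER_TABLE = {}
for base in range(2, 1000):
    val, exp = base * base, 2
    while val < 10 ** 15:
        POWER_TABLE.setdefault(val, (base, exp))
        val *= base
        exp += 1

def compress(y):
    """Return a (base, exp) pair if y is a known perfect power, else y itself."""
    return POWER_TABLE.get(y, y)

print(compress(10460353203))  # (3, 21)
```

The catch is the trade-off the table creates: it only covers numbers that were precomputed, and storing (or transmitting) a big enough table eats into whatever space the compression saves.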
So, what do you guys think? Is it doable using current processing technologies?
I'm busy doing some informal research to see if a quick factorising algorithm exists. I like to think of it as a fun puzzle that I must solve in my lifetime. I'm strange in that regard - I find unsolved physics/maths problems and try to solve them. If I can solve at least one of them, I'll have left my mark on humanity.