
Thread: FRIENDLY OPEN LETTER TO M. MAHONEY : in the same spirit as Bohr and Einstein's

  1. #1
    Member, London (LawCounsels)

    FRIENDLY OPEN LETTER TO M. MAHONEY : in the same spirit as Bohr and Einstein's

    Hi Matt,

    Among the proven, technically competent, respected long-time experts of comp.compression I have established a respected status through several earlier 'breakthrough', first-of-their-kind-in-the-world works introduced to the forum. Among them I can safely count the illustrious Thomas Richter / James Dow Allen / David Scott / Sportman.

    I can assure you that I, too, had earlier been similarly annoyed by the many previous 'ill-conceived', groundless, completely wild claims circulated (without any new supporting 'proofs').

    But as we all know, time does not stand still; what was once 'bible' truth gives way to mankind's new breakthrough discoveries. At one time 'time travel' had zero probability; now, after Einstein and the new works of the likes of recent Nobel laureates, no longer.

    I am now 100% satisfied that, with the new 'breakthrough' theory in place, I can immediately confirm that it has completely overturned the centuries-old 'pigeonhole' principle beyond any doubt ... and this is not so difficult!

    Perhaps we can agree to come to some private, confidential arrangement, whereby the work would now be confidentially peer reviewed by you ... Of course you may then announce publicly whether you indeed find it true (and we can proceed to collaborate and jointly claim various $1M Clay prizes) or 'groundless, the same as many before'.

    Let me know how to proceed : )

    Warm Regards,
    LawCounsels

  2. #2
    Member, Kraków, Poland
    Good luck, new Einstein!

  3. #3
    Expert Matt Mahoney, Melbourne, Florida, USA
    Science advances on open publication and independent replication of results. That doesn't mean you can't get rich from your discovery. That's what patents are for.

  4. #4
    Member, London (LawCounsels)
    THIS IS TO INTRODUCE THE 'BREAKTHROUGH' :

    Stephen Wolfram's A NEW KIND OF SCIENCE : simple basic rules CREATE COMPLEXITY. He stumbled upon simple rules (a small number of bits of data) generating RANDOM numbers (a very large number of data bits) that satisfied every single possible test for RANDOMNESS.

    ... This occupied him tremendously; he eventually wrote "ORIGIN OF RANDOMNESS", in which he cryptically hypothesized various conclusions ...

    He was held back because he was only able to generate RANDOM numbers from simple basic rules BUT could not start with the RANDOM numbers and reduce them back to simple basic rules.

    WE HAVE SUCCEEDED IN STARTING WITH THE RANDOM NUMBERS THEMSELVES (a very large number of data bits) AND REDUCING THEM BACK TO A MUCH SMALLER NUMBER OF DATA BITS .... AND THIS WORKS FOR ANY POSSIBLE SERIES OF RANDOM NUMBERS!!!!

    The new future of data: inventor and implementor recruiting now. Contact LawCo...@aol.com IMMEDIATELY if you are fluent in C#, C++ or C, with basic combinatorics index ranking/unranking and, optionally, data compression basics. Experienced CFOs etc., collaborators and research scientists are also welcome to contact LawCo...@aol.com.

    URL : https://app.box.com/s/qisaye2da410jzbebdzk

  5. #5
    Programmer Bulat Ziganshin, Uzbekistan
    If you can squeeze any 2-bit data into a single bit, it will be a great math breakthrough.
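    A minimal brute-force sketch in Python makes that point concrete: every candidate mapping from the four 2-bit values to the two 1-bit values must send two inputs to the same output, so no lossless 2-bit-to-1-bit compressor can exist.

```python
from itertools import product

# Enumerate every possible "compressor": a function from the four 2-bit
# values to the two 1-bit values, and check whether any one is invertible.
inputs = ["00", "01", "10", "11"]
outputs = ["0", "1"]

lossless_found = False
for mapping in product(outputs, repeat=len(inputs)):  # all 2^4 = 16 candidates
    codes = dict(zip(inputs, mapping))
    if len(set(codes.values())) == len(inputs):       # invertible only if all codes differ
        lossless_found = True

print(lossless_found)  # False: at least two of the four inputs always collide
```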

  6. #6
    Member Bloax, Dreamland
    Bulat: Are you implying that 0 and 1 can't be used to store either of 0, 1, 2 and 3 - and that 0, 1, 2 and 3 can't store either of 0, 1, 2, 3, 4, 5, 6 and 7? surely you ought to know better than that!1!1eleven

  7. #7
    Member, London (LawCounsels)
    Quote Originally Posted by Bulat Ziganshin View Post
    If you can squeeze any 2-bit data into a single bit, it will be a great math breakthrough.
    I'm afraid this is impossible!

    There is some fundamental mathematical requirement/limitation that the compressed bitstring be of a certain minimum fundamental size (just as Wolfram's basic rules require certain minimum storage, and even a black hole's compressed information must at minimum be proportional to its spherical surface area, transforming a 3-dimensional volume's worth of information dropped into it into a 2-dimensional amount).

  8. #8
    Member, London (LawCounsels)
    COINCIDENTALLY: PHYSICISTS ALSO DISCUSS A QUANTUM PIGEONHOLE PRINCIPLE, FROM THEIR QUANTUM PERSPECTIVE

    http://www.google.com/url?q=http%3A%...rAkTfvopgdE9sQ

  9. #9
    Member, London (LawCounsels)
    >>http://phys.org/news/2014-07-physici...principle.html

    Those physicists will be even more mightily astounded to now find out and realise that the traditional 'binary to binary direct mapping' pigeonhole had in fact been a complete illusion in the first place (Svengali-like, it retarded the advancement of the world's sciences).

    ... There was no need to resort to seeking a quantum solution : )

  10. #10
    Member Bloax, Dreamland
    Well how do you stuff four pigeons into two holes without there being two pigeons in either of the holes? Because if you can't do that then you can't stuff eight pigeons into four holes either, nor can you stuff 16 pigeons into 8 holes (continue ad nauseam).

  11. #11
    Member, London (LawCounsels)
    Quote Originally Posted by Bloax View Post
    Well how do you stuff four pigeons into two holes without there being two pigeons in either of the holes? Because if you can't do that then you can't stuff eight pigeons into four holes either, nor can you stuff 16 pigeons into 8 holes (continue ad nauseam).
    BUT, so to speak, Wolfram's basic rules (or a black hole's transformation of volume information into a spherical-area quantity) can SUPERCODE-represent any sufficiently large random input, e.g. 1 Mbyte, into a much smaller, e.g. 200 Kbyte, equivalent compressed representation!

    If you have 2^500,000 pairs of 2-bit information units, you can always group these together into a 1 Mbyte random input file, then proceed to SUPERCODE it into a smaller, e.g. 200 Kbyte, compressed information representation : )

    NOTE : top mathematicians have always known the pigeonhole principle was never any kind of mathematical disproof of random compressibility ... from the beginning it was like a very good common-sense housewife's story/parable which, having stood the test of time, acquired mythical truth status (because no one had succeeded in proving otherwise before).

  12. #12
    Member, London (LawCounsels)
    *If you have 8 * 512K pairs of 2-bit information units*
    ........

  13. #13
    Member, Bothell, Washington, USA (Kennon Conrad)
    Quote Originally Posted by LawCounsels View Post
    BUT, so to speak, Wolfram's basic rules (or a black hole's transformation of volume information into a spherical-area quantity) can SUPERCODE-represent any sufficiently large random input, e.g. 1 Mbyte, into a much smaller, e.g. 200 Kbyte, equivalent compressed representation!

    If you have 2^500,000 pairs of 2-bit information units, you can always group these together into a 1 Mbyte random input file, then proceed to SUPERCODE it into a smaller, e.g. 200 Kbyte, compressed information representation : )

    NOTE : top mathematicians have always known the pigeonhole principle was never any kind of mathematical disproof of random compressibility ... from the beginning it was like a very good common-sense housewife's story/parable which, having stood the test of time, acquired mythical truth status (because no one had succeeded in proving otherwise before).

    *If you have 8 * 512K pairs of 2-bit information units*
    ........
    Ah, of course, SUPERCODE! What an amazing concept! If only we had thought of that sooner......

    Has it occurred to you that a compressor that consistently fits the 2^8,000,000 equally likely unique combinations of 4,000,000 pairs of random bits into only e.g. 1,600,000 bits would defy basic laws of compression?
    Last edited by Kennon Conrad; 27th July 2014 at 21:13.
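    To put rough numbers on that counting argument, here is a minimal Python sketch (big integers only; the 1,600,000-bit output size is taken from the post above and treated as fixed-length):

```python
# Compare the number of distinct 8,000,000-bit inputs with the number of
# distinct 1,600,000-bit outputs a fixed-length "supercoder" could emit.
input_bits = 8_000_000
output_bits = 1_600_000

num_inputs = 2 ** input_bits
num_outputs = 2 ** output_bits

# num_inputs / num_outputs = 2**(input_bits - output_bits), so all but a
# 2**-6,400,000 fraction of inputs cannot receive a unique short codeword.
shortfall = (num_inputs // num_outputs).bit_length() - 1
print(shortfall)  # 6400000
```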

  14. #14
    Member, London (LawCounsels)
    In reply to Kennon Conrad :

    I will first digress and post here the proof of another researcher (one fast gaining a reputation) of how he provably overcame the illusory 'pigeonhole' myth:

    [ This is his particular solution; there can be various different solutions, not only this one! RE: making a rare enough bitstring even RARER! ]

    [ From comp.compression : Jacko ]

    > >> If you can squeeze any 2-bit data into a single bit, it will be a great math breakthrough.

    it would

    > I'm afraid this is impossible!

    yes, but not for the exact reason you may suggest.

    > There is some fundamental mathematical requirement/limitation that the compressed bitstring be of a certain minimum fundamental size (just as Wolfram's basic rules require certain minimum storage, and even a black hole's compressed information must at minimum be proportional to its spherical surface area, transforming a 3-dimensional volume's worth of information dropped into it into a 2-dimensional amount).

    This is more like it. A possibility of a rare enough event which can be made rarer, so that over time this level of rarity can be detected does limit the minimum information size of a single stream.

    http://phys.org/news/2014-07-physici...principle.html

    Just for fun, discover the 3 things in two boxes with none being doubled up. I'm sure the experiment will be done. Non-commutative field algebra is not your standard binary to binary direct mapping.

  15. #15
    Member, London (LawCounsels)
    In reply to Kennon Conrad :

    Moreover, the so-called 'Supercoded' compressed bitstring DOES NOT need to be of one fixed particular size, e.g. 1,600,000 bits ... the 'Supercoded' compressed bitstring can be of variable length, from e.g. 1,600,000 bits up to e.g. close to 8 Mbits (slightly less)!

    IT IS EASY ENOUGH TO SEE CLEARLY NOW THAT the 'pigeonhole' wove its illusions and falsely imposed its own unasked-for, biased-coin (heads I win, tails I win) restriction that the compressed bitstring ALWAYS be of an exact, particular, fixed size!!!!

    [ ... there really is a lot more than the reasoning shown above : ) ]
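    For reference, a counting check on the variable-length idea: there are only 2^n - 1 bitstrings strictly shorter than n bits, still fewer than the 2^n possible n-bit inputs. A minimal Python sketch of that identity (a small n stands in for 8,000,000):

```python
# Count every bitstring strictly shorter than n bits and compare with the
# number of distinct n-bit inputs.
def count_shorter(n: int) -> int:
    return sum(2 ** k for k in range(n))  # lengths 0 .. n-1

n = 20  # small stand-in; the identity 2**n - 1 < 2**n holds for any n
assert count_shorter(n) == 2 ** n - 1
print(2 ** n, count_shorter(n))  # 1048576 inputs, only 1048575 shorter outputs
```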

  16. #16
    Member, London (LawCounsels)
    >>Ah, of course, SUPERCODE! What an amazing concept! If only we had thought of that sooner......

    >>Has it occurred to you that a compressor that consistently fits the 2^8,000,000 equally likely unique combinations of 4,000,000 pairs of random bits into only e.g. 1,600,000 bits would defy basic laws of compression?


    A BLACK HOLE HAS ALREADY BEEN PROVEN BY BEKENSTEIN / LEONARD SUSSKIND (of holographic-universe fame) et al. to compress a 3-dimensional volume's worth of information INTO a much smaller spherical-area quantity of information

    ... proving the 'pigeonhole' fundamentally illusory and false

  17. #17
    Programmer Bulat Ziganshin, Uzbekistan
    If you have 2^500,000 pairs of 2-bit information units, you can always group these together into a 1 Mbyte random input file, then proceed to SUPERCODE it into a smaller, e.g. 200 Kbyte, compressed information representation : )
    What is the minimum data size that can be squeezed with a guarantee? I.e., is it enough to have 3, 4, 5... bits to reduce them by 1 bit with a guarantee?

  18. #18
    Member, London (LawCounsels)
    Quote Originally Posted by Bulat Ziganshin View Post
    What is the minimum data size that can be squeezed with a guarantee? I.e., is it enough to have 3, 4, 5... bits to reduce them by 1 bit with a guarantee?
    (I'm sure you will certainly appreciate professional integrity/courtesy here.)

    All I am allowed to say at this time, as best I can, is:

    If you have e.g. 8,000,000 bits of random input, you are already very highly certain you can 'squeeze' out an 8-bit saving (actually a paltry one-millionth of the bits as savings).

    .. If you have e.g. 10 times as much random input at full size, THEN you are already closest to 'GUARANTEED' (which is what you sought).

    This may look like a small 8-bit saving per iteration (you had thought just e.g. a few bits of random input might guarantee a 1-bit saving?); however, the resulting 10 * 8,000,000 bits, now 8 bits smaller, can be further processed and iterated upon, and easily ends up as some 2 * 8,000,000 bits of 'Supercoded' compressed information representation.

  19. #19
    Member biject.bwts, Texas
    Quote Originally Posted by LawCounsels View Post
    COINCIDENTALLY: PHYSICISTS ALSO DISCUSS A QUANTUM PIGEONHOLE PRINCIPLE, FROM THEIR QUANTUM PERSPECTIVE

    http://www.google.com/url?q=http%3A%...rAkTfvopgdE9sQ
    I read the paper at http://arxiv.org/pdf/1407.3194v1.pdf. I hope it was reviewed by sharper people than those who reviewed my stuff, and I hope it's not the same crowd that worships at the altar of Al Gore. But quantum bits are not the same as binary bits, which are exactly one or zero; I am under the impression that quantum bits, in my view, carry more than one bit of information. They are not black and white but various dynamic shades of gray.

  20. #20
    Programmer Bulat Ziganshin, Uzbekistan
    Quote Originally Posted by LawCounsels View Post
    however, the resulting 10 * 8,000,000 bits, now 8 bits smaller, can be further processed and iterated upon, and easily ends up as some 2 * 8,000,000 bits of 'Supercoded' compressed information representation.
    Can it be further reduced to 2*8,000,000 - 1 bits? Do you have a problem understanding the meaning of the word "minimum"?

  21. #21
    Member, London (LawCounsels)
    >>Can it be further reduced to 2*8,000,000 - 1 bits? Do you have a problem understanding the meaning of the word "minimum"?

    The minimum unit of bits saved each time is 2 bits (not 1 bit)!

    YES, it can be further reduced to 2*8,000,000 - 2 bits, and continuously so, BUT each time the chance of successfully attaining the minimum 2-bit unit saving gets less and less (UNTIL it is no longer worthwhile to continue, when the chance of success drops below 50%).

    IT IS A CONTINUOUSLY SMALLER CHANCE WITH SMALLER AND SMALLER ITERATION INPUT BIT-LENGTHS, AND THE CHANCE CONTINUOUSLY INCREASES WITH LARGER AND LARGER ITERATION INPUT BIT-LENGTHS (nearer and nearer to 'GUARANTEED').

    SO YOU CAN SEE THERE IS NO ABSOLUTE BIT-LENGTH AT WHICH IT IS 'ABSOLUTELY GUARANTEED', ONLY A CONTINUOUSLY HIGHER AND HIGHER CHANCE OF SUCCESS (continuously approaching nearest to 'GUARANTEED').

    ON THE OTHER HAND, THE ITERATION'S INPUT BIT-LENGTH MUST BE A VERY MINIMUM OF, say, 1,000,000 bits (I am withholding the real actual number, for the usual trade-secret reason) ... and yes, it cannot work if the iteration's input becomes 1,000,000 - 1 bits.

    [ Long before you even get to an iteration input bit-length of 1,000,000 bits, it has already long become not worthwhile to continue, at around e.g. a 4,000,000-bit length, when the chance of success already drops below 50%. ]

  22. #22
    Member, London (LawCounsels)
    Quote Originally Posted by LawCounsels View Post
    >>Can it be further reduced to 2*8,000,000 - 1 bits? Do you have a problem understanding the meaning of the word "minimum"?

    The minimum unit of bits saved each time is 2 bits (not 1 bit)!

    YES, it can be further reduced to 2*8,000,000 - 2 bits, and continuously so, BUT each time the chance of successfully attaining the minimum 2-bit unit saving gets less and less (UNTIL it is no longer worthwhile to continue, when the chance of success drops below 50%).

    IT IS A CONTINUOUSLY SMALLER CHANCE WITH SMALLER AND SMALLER ITERATION INPUT BIT-LENGTHS, AND THE CHANCE CONTINUOUSLY INCREASES WITH LARGER AND LARGER ITERATION INPUT BIT-LENGTHS (nearer and nearer to 'GUARANTEED').

    SO YOU CAN SEE THERE IS NO ABSOLUTE BIT-LENGTH AT WHICH IT IS 'ABSOLUTELY GUARANTEED', ONLY A CONTINUOUSLY HIGHER AND HIGHER CHANCE OF SUCCESS (continuously approaching nearest to 'GUARANTEED').

    ON THE OTHER HAND, THE ITERATION'S INPUT BIT-LENGTH MUST BE A VERY MINIMUM OF, say, 1,000,000 bits (I am withholding the real actual number, for the usual trade-secret reason) ... and yes, it cannot work if the iteration's input becomes 1,000,000 - 1 bits.

    [ Long before you even get to an iteration input bit-length of 1,000,000 bits, it has already long become not worthwhile to continue, at around e.g. a 4,000,000-bit length, when the chance of success already drops below 50%. ]
    The more astute readers of this forum would undoubtedly now be thinking:

    that perhaps a sequence of random numbers, if sufficiently long, will invariably have some kind of 'inherent' weakness, some fundamental thing no one has realised so far, which renders it compressible/reducible! (Yes, such readers would also have been weaned off the 'pigeonhole' parable.)

    In fact, Mark Nelson himself has already said AMillionRandomDigits can likely be compressed smaller by 1 byte!

  23. #23
    Member, Kenya (SoraK05)
    http://encode.ru/threads/2006-Hello-...e-an-algorithm
    I've done some general research on compression and determined that there are general strategies to suit specific data arrangements, and that prevalence (repetition) is generally required.
    One can look at specific patterns of 2-bit combinations for their prevalence in a row, and locate specific sections of data to process individually for a more optimal compression, where all sections have their common/unique strings accounted for in a more effective compression tree for distribution. This can also, in general, assist in detecting compressible/incompressible areas; one can get specific variable-length strings for the least frequent strings in a code tree, so as to have the smallest file from the least relevant strings, repetition in a row, internal repetition and such, with a reasonable expectation of approaching the potential 50% compression from prevalent data (generally close).


    I looked at the notion of compressing data based on treating it as a specific decimal number and applying an algorithm to it. While this may be a generally suitable approach, with an algorithm based on looking at multiples of 2 and some means to account for runs and repeated sequences, I also looked at the notion of hashing.

    If you want to shrink data by a factor, such as with hashing, where you have a length of data and you use less data to represent a specific chunk, this will need to be reflected in the executable which will host the data.
    I.e. you are essentially taking a hash of a specific length of data, putting it into the header, and having some extra data which in general will cumulatively equal the original length of data, as there is no algorithm to compress/shrink from prevalence. This means you have a portion of a specific string of data embedded in the executable, and if you are dealing with large amounts of data you can use this hashed header information repeatedly for chunks of data.

    It is appropriate to determine the exact strings which can be found in a file (as in the heavy-scan algorithm in the PDF), and perhaps a means to treat 'incompressible' dispersed and balanced bit strings by fake-decoding them and then encoding with a stronger algorithm (the concept does exist of taking data which looks compressed, attempting to fake-uncompress it, and recompressing it in a stronger fashion). This approach, together with the other aspects mentioned, should give a generally more specific and compact result than having a static hash of data in an executable and using it for chunks of data, which requires a fair number of chunks to break even in a reasonable amount, and where there is a lot of repetition or long runs in a row that compression option is weak.



    EDIT:
    It is worth noting that I also did research into the equation nCr, which looks at the prevalence of 0/1 bits: on either side, the further the count is away from 50/50, the fewer possible combinations/arrangements of 0s and 1s there are.
    I.e. you can have 256 bits of which 20 are 0 and 236 are 1, and with nCr there are 280437550101996454288136030400 possible arrangements; one can use this as a reference to identify the specific string by its index. That index is roughly 12 bytes (about 98 bits). When the split is more like 128 0s and 128 1s, the number is almost 256 bits, and the range where compression like this is acceptable covers only about 30% of the total combinations, counting both the 0-heavy and the 1-heavy sides.
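    A quick check of that arithmetic, as a minimal Python sketch (a plain combinatorial rank over C(256, 20) is assumed; the exact ranking scheme is not specified in the post):

```python
import math

n, k = 256, 20                    # a 256-bit string containing exactly 20 zero bits
count = math.comb(n, k)           # number of distinct such strings
index_bits = count.bit_length()   # bits needed for a rank in the range [0, count)

print(count)       # on the order of 2.8e29, the figure quoted above
print(index_bits)  # 98 bits, i.e. a little over 12 bytes (plus the cost of storing k itself)
```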

    However, when nCr is performed on very, very large data, the saving is very small relative to the size, and it requires extreme computation.



    It is most suitable to treat the specific data for what it is and use the repetition it has in order to compress it.
    Last edited by SoraK05; 28th July 2014 at 23:24.

  24. #24
    Member, London (LawCounsels)
    [ From comp.compression ]

    > >> Originally Posted by Bloax
    >
    > >> Well how do you stuff four pigeons into two holes without there being two pigeons in either of the holes? Because if you can't do that then you can't stuff eight pigeons into four holes either, nor can you stuff 16 pigeons into 8 holes (continue ad nauseam).

    [ posting by Jacko ]

    It's not stuffing a greater number of pigeons into a lesser number of holes that is important. It's stuffing a large number of pigeons into a virtual hole structure created in an abstract representation from a set of holes which constantly move into different organizations in an infinite reversible sequence.

  25. #25
    Member, London (LawCounsels)
    >[ From comp.compression posting by Jacko ]


    > >> Originally Posted by Bloax
    >
    > >> Well how do you stuff four pigeons into two holes without there being two pigeons in either of the holes? Because if you can't do that then you can't stuff eight pigeons into four holes either, nor can you stuff 16 pigeons into 8 holes (continue ad nauseam).

    It's not stuffing a greater number of pigeons into a lesser number of holes that is important. It's stuffing a large number of pigeons into a virtual hole structure created in an abstract representation from a set of holes which constantly move into different organizations in an infinite reversible sequence.

  26. #26
    Member, Kraków, Poland
    Many years ago I thought that the size of the entire universe of possible 3-minute MP3s was on the order of millions (i.e. under a billion), so that by iterating through some number generator it would be possible to generate every possible MP3 containing a 3-minute song. Much later I realized how dumb that was.

  27. #27
    Member, Kenya (SoraK05)
    http://www.masmforum.com/board/index.php?topic=13454.0
    I posted on that forum before learning of this one and was referred to this thread about super compression.

    As I mentioned, you require the data to refer to at some point, and it will ultimately be 'hashed', in a sense, into the header/exe (as exaggerated by the BARF compressor).
    Tools that have schemes for specific data types are perhaps automating a structure for detecting specific expected patterns, but beyond this you are putting data into the header/exe.

    Converting a string of bits to something that makes it more compressible requires data to be stored for that reference, and this is going along the lines of being a hash.
    Having data to convert 1MB to 200KB means 256^(1024*1024) possibilities being represented by 256^(1024*200) possibilities. This is the same as representing 100% of the data in 20% of its space. Without some data stored externally, such as in a header, this is generally impossible. In general the expectation for maximum compression is to accommodate a 50/50 target of prevalence, where you have data and its compressed/represented version over a range of data of reasonable length, with the benefit that, where there is a lot of repetition in common use, it can reduce the total size.


    EDIT:
    To demonstrate what I mean: if you have a file and look at its data as one decimal number, accounting for any 0s at the beginning before the first 1 bit, then to use this number you will need all the bits the file consumes. If you decide to take 256^1024 for a 1KB file as the length, and then use a number to represent the rest, you pretty much end up with a file of the exact same length, just rewritten, once general expectations/options are accounted for; otherwise, depending on the number/content, you can gain compression (ideally with the 1s as close to the far right/end as possible).
    You can rewrite a file in various ways that use, in general, the same amount of data; where you do this to make the data more compressible you can gain compression, depending on the data. This amounts to shuffling/moving data and retaining a record of that movement for calculation. It allows breaking even with repetition/prevalence when rewriting that specific repetitive/prevalent data.

    Where you have many, many 0 bits, you can use nCr, for example, which guarantees less total data. If the majority of the data consists of something like 8 byte patterns plus a couple more, the fact that only 10 of the 256 possible byte values are used cuts down space; the prevalence of those 8 patterns then means you can write them once and give each a representation of very little data, like 2-5 bits instead of a static 4 bits, breaking even by rewriting the repetitive data in fewer bits along with its data-storage info.
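    A small Python sketch of that kind of break-even estimate (the byte values and counts below are made up purely for illustration):

```python
import math
from collections import Counter

# Hypothetical data: only 10 of the 256 possible byte values occur,
# and 8 of them dominate.
counts = Counter({0x41: 300, 0x42: 250, 0x43: 200, 0x44: 150,
                  0x45: 100, 0x46: 80, 0x47: 60, 0x48: 40,
                  0x49: 10, 0x4A: 10})
total = sum(counts.values())

# Shannon lower bound for these frequencies, versus a static 4-bit code
# (enough for 10 distinct symbols) and the original 8 bits per byte.
entropy = -sum(c / total * math.log2(c / total) for c in counts.values())
print(f"entropy  ~ {entropy:.2f} bits/symbol")
print(f"static   = {math.ceil(math.log2(len(counts)))} bits/symbol")
print(f"original = 8 bits/symbol")
# A real scheme must also store the table of used byte values (the
# 'data-storage info' mentioned above) before it breaks even.
```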

    Prevalence is in general required in order to rewrite the repetitive data in fewer bits, along with its database entry, and to break even, provided this process results in a smaller file when combined with any other data around it.


    1MB as 200KB means storing something like 80% of the data in the header and then using 20% of the data; otherwise you have something like each increment of data resulting in iterations equivalent to adding 10 instead of 1, which is not specific enough to cater for the entire range. This can work if your expected data values are always a multiple of 10, and in a way it is doing something similar to shrinking the 10 byte patterns above from 256 options down to even a static 4 bits each instead of 8.
    Last edited by SoraK05; 29th July 2014 at 17:48.

  28. #28
    Member, London (LawCounsels)
    Yes, it is far more profitable/useful to do it the other way: take the 3-minute MP3 music, THEN try to make a particular Number Generator unique to it which is smaller in size.

    This may also at first sight seem like a hard, P=NP-intractable problem, which Wolfram was not able to solve earlier (the world's P=NP experts are at this time divided on whether P=NP).

    Solved now!

  29. #29
    Member biject.bwts, Texas
    Quote Originally Posted by LawCounsels View Post
    The more astute readers of this forum would undoubtedly now be thinking:

    that perhaps a sequence of random numbers, if sufficiently long, will invariably have some kind of 'inherent' weakness, some fundamental thing no one has realised so far, which renders it compressible/reducible! (Yes, such readers would also have been weaned off the 'pigeonhole' parable.)

    In fact, Mark Nelson himself has already said AMillionRandomDigits can likely be compressed smaller by 1 byte!
    Yes, I am sure the so-called one million random digit file can be compressed by a byte or more. I have a gut feeling that if you do enough bijective transforms, for example BWTS enough times, and then do something like my bijective LZW, it will compress to a smaller size. However, that's not fair, since you have to take into account the number of times the BWTS was done. The number of BWTS iterations would likely be so great that most computers could never do the number needed before they break down, since BWTS just creates a new permutation of the data each time until the starting one is recreated again after many, many iterations.

  30. #30
    Member, Bothell, Washington, USA (Kennon Conrad)
    Quote Originally Posted by biject.bwts View Post
    Yes, I am sure the so-called one million random digit file can be compressed by a byte or more.
    We could get it down to 1 bit if allowed to customize the compressor and decompressor.
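    A minimal sketch of what that joke means in practice (Python; the local file name is hypothetical): if the decompressor is allowed to contain the target data itself, the transmitted 'compressed' form can be a single token, which is exactly why a fair comparison has to count the size of the decompressor as well.

```python
# Cheating "decompressor" for one specific file: the data lives inside the
# decompressor, so the transmitted form can be a single agreed-upon token.
ORIGINAL = open("AMillionRandomDigits.bin", "rb").read()  # hypothetical local copy

def compress(data: bytes) -> bytes:
    assert data == ORIGINAL      # only handles this one file
    return b"\x01"               # one token ("1 bit", give or take framing)

def decompress(token: bytes) -> bytes:
    assert token == b"\x01"
    return ORIGINAL              # the information was never really compressed

assert decompress(compress(ORIGINAL)) == ORIGINAL
```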
