
Thread: Justification for data compression to be very foundation of AI

  1. #1
    Member
    Join Date
    Apr 2012
    Location
    London
    Posts
    239
    Thanks
    12
    Thanked 0 Times in 0 Posts

    Justification for data compression to be very foundation of AI

This is often said to be so, but no one has seen data compression doing what neural networks do - recognising cats and dogs, playing Go... except for shrinking redundancy to make files smaller,


or showing some vague similarity to that.

Is there more to it that really deserves this claim?

  2. #2
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,134
    Thanks
    179
    Thanked 921 Times in 469 Posts
Statistical compression and AI basically do the same thing - build a model of the data, which allows it to predict what happens next, recognize, or generate.

1) Recognition / similarity measure: if csize(sample1) + csize(sample2) > K*csize(sample1+sample2) for some K slightly above 1, then sample1 and sample2 are similar - compressing them together saves space compared to compressing them separately (there is a small sketch of this at the end of this post).
Finding the most similar sample in a set lets you recognize stuff.
    2) Generation: feeding random data to a decoder with a trained model.
    For example, here generated text starts after "===": http://nishi.dreamhosters.com/u/pmj_gen.txt
    3) Prediction is what statistical models in compressors do naturally.
Arguably, they are better at this than NNs, since NNs don't set any records in compression.

Making a lossy AI model with a subjective quality metric is actually simpler than writing a compressor -
the tasks are basically the same, but compression has many more restrictions (like processing speed).
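
Here is a small sketch of point 1) above, using zlib's compressed size as csize() and the usual normalized-compression-distance form of the comparison instead of a fixed threshold K; the reference samples and labels are made up for the example:
Code:
# Compression-based similarity / recognition sketch (assumes zlib as the compressor).
import zlib

def csize(data: bytes) -> int:
    """Compressed size of data in bytes."""
    return len(zlib.compress(data, 9))

def ncd(a: bytes, b: bytes) -> float:
    """Normalized compression distance: smaller means more similar."""
    ca, cb, cab = csize(a), csize(b), csize(a + b)
    return (cab - min(ca, cb)) / max(ca, cb)

def recognize(sample: bytes, labelled: dict) -> str:
    """Return the label of the reference sample most similar to `sample`."""
    return min(labelled, key=lambda label: ncd(sample, labelled[label]))

refs = {
    "english": b"the quick brown fox jumps over the lazy dog " * 20,
    "digits":  b"3141592653589793238462643383279502884197 " * 20,
}
print(recognize(b"a lazy dog sleeps while the quick brown fox jumps around", refs))  # expected: english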

  3. The Following 2 Users Say Thank You to Shelwien For This Useful Post:

    Gotty (5th April 2019),Hakan Abbas (5th April 2019)

  4. #3
    Member
    Join Date
    Apr 2012
    Location
    London
    Posts
    239
    Thanks
    12
    Thanked 0 Times in 0 Posts
We have yet to see data compression driving the next advancements in AI / general AI, which it should if, as claimed, it is the very foundation of intelligence.

It's hard to envisage data compression as the driving factor behind an intelligent brain.

  5. #4
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,134
    Thanks
    179
    Thanked 921 Times in 469 Posts
It's not hard, since it's basically the same thing.
    They just use different terminology when talking about compression, like "maximum likelihood" or "minimum description length".

    Also http://www.hutter1.net/ai/uaibook.htm#approx
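
Maybe a concrete (if trivial) illustration of why the terms coincide helps: under an ideal entropy coder the code length of a message is the negative log2-likelihood the model assigns to it, so maximizing likelihood and minimizing description length are the same optimization. A toy sketch (the model and message are invented for the example):
Code:
# Code length under an ideal entropy coder equals negative log-likelihood,
# so the best predictor is the best compressor.
import math

model   = {"a": 0.5, "b": 0.25, "c": 0.25}     # a predictive model
message = "aabac"

log_likelihood = sum(math.log2(model[s]) for s in message)
code_length    = sum(-math.log2(model[s]) for s in message)

print(f"log2-likelihood: {log_likelihood:.2f}")        # -7.00
print(f"ideal code size: {code_length:.2f} bits")      #  7.00 (same magnitude)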

  6. #5
    Member
    Join Date
    Apr 2012
    Location
    London
    Posts
    239
    Thanks
    12
    Thanked 0 Times in 0 Posts
It's a bit like saying chemistry is the foundation of all intelligent lifeforms on Earth, while our present mastery of chemistry is still centuries away from producing an entire elephant from scratch.

There is a very vast gulf before data compressionists can showcase a replicated intelligent brain... there is not yet any data compression method that points the way to replicating this in practice.

Even when the Hutter challenge result is next improved by a further 90%, it doesn't really get us anywhere nearer.

  7. #6
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,134
    Thanks
    179
    Thanked 921 Times in 469 Posts
Replicating the human brain is another thing... it's not really a task for computer science.
    And solving various AI problems is quite doable... what slows it down is not our inability to build models, but rather copyright.

For example, testing and training translation software requires access to lots of text samples, preferably the same text in multiple languages,
preferably properly aligned and with useful markup. Can we use professional translations of modern fiction for this?
In theory, yes - one can find plenty of pirated books online. But on the other hand, those can't be openly used in academic papers or commercial software.

    Same with everything else really. And what's worse - a lot of common knowledge is not really digitized anywhere at all,
    so the only way to collect it would be to build some kind of android platform and let it communicate with random people.
I'm pretty sure that some companies have already started doing this with their robots (call-center bots, for one).
    But do you think they would open-source their data?

> Even when the Hutter challenge result is next improved by a further 90%, it doesn't really get us anywhere nearer.

The Hutter challenge unfortunately has lots of problems with its target data and rules, so I agree -
    manually adding more handlers for structures found in enwik8 (xml,html,wiki,url,utf8...) does improve compression,
    but doesn't make any progress for AI purposes.

    But I do think that a similar contest with a target file which doesn't consist mostly of markup languages
(maybe even plaintext extracted from enwik would do), without the 1 GB RAM limit (which only lets paq participate),
    and maybe with some other minor changes, could be really helpful for AI research.

  8. #7
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    645
    Thanks
    205
    Thanked 196 Times in 119 Posts
A basic neural network architecture is the autoencoder ( https://en.wikipedia.org/wiki/Autoencoder ) - it "compresses" (usually lossily) higher-dimensional data into a lower-dimensional "latent space", from which we can reconstruct a distorted version of the original, e.g. an image.
From the other side, there is the "information bottleneck" view of neural networks: lower layers just extract features from the statistics, and higher layers operate on these features: https://arxiv.org/pdf/1703.00810
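
A minimal sketch of the "compress to a latent space, then reconstruct" idea in plain numpy (no deep-learning framework): a linear autoencoder trained by gradient descent. The data, dimensions and learning rate are all invented for the example:
Code:
# Linear autoencoder: encode 20-dimensional samples into a 3-dimensional
# latent code (the lossy "compressed" form), then reconstruct.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 500, 20, 3                       # samples, input dim, latent dim

# Data that actually lives near a 3-dimensional subspace, plus a little noise.
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, d)) + 0.01 * rng.normal(size=(n, d))

We = 0.1 * rng.normal(size=(k, d))         # encoder weights: d -> k
Wd = 0.1 * rng.normal(size=(d, k))         # decoder weights: k -> d

lr = 0.01
for step in range(3000):
    Z    = X @ We.T                        # latent codes   (n, k)
    Xhat = Z @ Wd.T                        # reconstruction (n, d)
    G    = 2.0 * (Xhat - X) / n            # gradient of mean squared error w.r.t. Xhat
    dWd  = G.T @ Z                         # backprop into the decoder
    dWe  = (G @ Wd).T @ X                  # backprop into the encoder
    Wd  -= lr * dWd
    We  -= lr * dWe

mse = np.mean((X - (X @ We.T) @ Wd.T) ** 2)
print("data variance:", X.var(), " reconstruction MSE:", mse)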

  9. The Following 3 Users Say Thank You to Jarek For This Useful Post:

    compgt (6th April 2019),Cyan (5th April 2019),Hakan Abbas (5th April 2019)

  10. #8
    Member
    Join Date
    Sep 2018
    Location
    Philippines
    Posts
    29
    Thanks
    12
    Thanked 0 Times in 0 Posts
    Quote Originally Posted by LawCounsels View Post
We have yet to see data compression driving the next advancements in AI / general AI, which it should if, as claimed, it is the very foundation of intelligence.

It's hard to envisage data compression as the driving factor behind an intelligent brain.
The thing that immediately comes to mind about the AI/compression relation is chatbot technology, because both operate on text and characters.

In general AI terms, the input symbols of compression are "stimuli". How the hidden layers of neural nets connect to each other plays the same role as "context" does in data compression.

  11. #9
    Member
    Join Date
    Apr 2012
    Location
    London
    Posts
    239
    Thanks
    12
    Thanked 0 Times in 0 Posts
Would you ask a data compression expert to build AlphaGo? We don't think of DeepMind as a bunch of traditional data compression guys.

Someone able to shrink a file by 90% would still not help much in building AlphaGo.

  12. #10
    Member
    Join Date
    Sep 2018
    Location
    Philippines
    Posts
    29
    Thanks
    12
    Thanked 0 Times in 0 Posts
    Quote Originally Posted by LawCounsels View Post
Would you ask a data compression expert to build AlphaGo? We don't think of DeepMind as a bunch of traditional data compression guys.

Someone able to shrink a file by 90% would still not help much in building AlphaGo.

I haven't seen the AlphaGo code, but as far as I know AlphaGo is based on finding better "strategies". These strategies are like the forward-context symbols of BWT. Strategies ultimately turn into symbols if decomposed into a series of steps.

Data compression considers just one finite-size input or data source. You predict based on the past data fragment to achieve a minimum description length. But AI is about having many inputs and finding patterns, correlations, and rules in that input data.

So you're right that compression is somewhat more limited than an AI algorithm - unless those symbols translate into strategies and somehow their compressed state is equivalent to the efficiency (running time) of the strategies.

Oh wait, a compression algorithm is built to accept many inputs too, of varying sizes. So a compression algorithm is one kind of AI strategy. AlphaGo, in a sense, can compress.

    Given limited resources or storage media, it is "intelligent" to compress.

  13. #11
    Member Gotty's Avatar
    Join Date
    Oct 2017
    Location
    Hungary
    Posts
    343
    Thanks
    235
    Thanked 226 Times in 123 Posts
    Intelligence (https://en.wikipedia.org/wiki/Theory..._intelligences) has many sides:

    1. musical-rhythmic,
    2. visual-spatial,
    3. verbal-linguistic,
    4. logical-mathematical,
    5. bodily-kinesthetic,
    6. interpersonal,
    7. intrapersonal,
    8. naturalistic

    Human intelligence covers the entire spectrum because of both hardwired instincts and learnt abilities. The emphasis is on the latter: we can learn.
An agent with Strong Artificial Intelligence also needs to cover the entire spectrum. That's why it is hard to get there.

Please note that a calculator usually beats us in arithmetic intelligence. A GPS device may beat us at specific tasks in spatial intelligence. They do not learn, however. They are hardwired, like our instincts. They are intelligent, but in a very limited sense. It is not really the artificial intelligence we are looking for.
    So what are we looking for?

    https://en.wikipedia.org/wiki/Intelligence

    Chapter: Artificial intelligence
    Quote: "Among the traits that researchers hope machines will exhibit are reasoning, knowledge, planning, learning, communication, perception, and the ability to move and to manipulate objects."

    Can we find those traits in compression software?

    If you consider gzip, it may have little to do with the above. Does gzip learn? It gathers and uses information as it compresses a file - so it learns in a limited sense. Does it reason? Well... not really.

    On the other hand, I'd say that a context mixing compression software using a neural network *learns* and *reasons* in the classical sense.

A context mixing compressor is very similar to our brain: it perceives sensory input (the bits of a file), it remembers, learns, reasons, and acts (it outputs a prediction of what the next bit will be).
    So do we have a link between intelligence and data compression? Yes we do.
To achieve better compression you need to enhance the above abilities of the compression software, especially learning and reasoning.

Lately I have been following the rise of AlphaGo, AlphaZero and AlphaStar. How do they work? They have a neural network. On YouTube you can find a video of AlphaStar playing games (and beating the top human players): there is a real-time visualization of its neural network, showing how it predicts the outcome of the game from the current situation and how it decides what to do next in the game.
By watching the real-time prediction of the outcome, I couldn't help but link it to paq. It is the same! A probability between 0 and 1. Watch the video. It's worth it.
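
To make the paq link concrete, here is a toy sketch of that kind of predictor: two trivial bit models each give a probability that the next bit is 1, and a small logistic mixer (essentially a one-layer neural network, as in paq) combines them and learns online. Everything is heavily simplified and the test data is made up for the example:
Code:
# Toy paq-style bit predictor with online logistic mixing.
import math

def stretch(p): return math.log(p / (1 - p))       # probability -> logit
def squash(x):  return 1 / (1 + math.exp(-x))      # logit -> probability

class BitModel:
    """Adaptive probability that the next bit is 1, one estimate per context."""
    def __init__(self):
        self.p = {}
    def predict(self, ctx):
        return self.p.get(ctx, 0.5)
    def update(self, ctx, bit):
        p = self.p.get(ctx, 0.5)
        self.p[ctx] = p + 0.05 * (bit - p)          # move the estimate toward the bit

order0, prev1 = BitModel(), BitModel()              # order-0 model, previous-bit model
weights = [0.0, 0.0]                                 # mixer weights (logit domain)
lr, prev_bit, total_bits = 0.01, 0, 0.0

data = b"abracadabra " * 50                          # invented test data
for byte in data:
    for i in range(7, -1, -1):
        bit = (byte >> i) & 1
        inputs = [stretch(order0.predict(0)), stretch(prev1.predict(prev_bit))]
        p = squash(sum(w * x for w, x in zip(weights, inputs)))   # mixed prediction
        total_bits += -math.log2(p if bit else 1 - p)             # ideal code length
        err = bit - p                                             # logistic-loss gradient
        weights = [w + lr * err * x for w, x in zip(weights, inputs)]
        order0.update(0, bit)
        prev1.update(prev_bit, bit)
        prev_bit = bit

print(f"{len(data) * 8} input bits -> {total_bits:.0f} bits if entropy coded")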
    Last edited by Gotty; 6th April 2019 at 18:30. Reason: Fixed link to youtube video

  14. The Following 3 Users Say Thank You to Gotty For This Useful Post:

    compgt (6th April 2019),Gonzalo Muñoz (6th April 2019),snowcat (9th April 2019)

  15. #12
    Member
    Join Date
    Apr 2012
    Location
    London
    Posts
    239
    Thanks
    12
    Thanked 0 Times in 0 Posts
Maybe a convincing case can be made that AlphaGo and paq are the same in giving a number between 0 and 1... but paq works only on a given, fixed input, whereas AlphaGo's input is unpredictable and not fixed, depending on the opponent's choice of moves at each turn, and it still does well.

Would it be possible for paq to do similarly well when the input is not given and fixed? You may say paq can be adaptive, but in Go one needs to play at one's best on every single move, not be 'slow' to do so.

Also, AI and paq may take the same input (a board position), but the AI makes a decision (0 or 1), whereas paq wants a smaller representation of the board.

  16. #13
    Member Gotty's Avatar
    Join Date
    Oct 2017
    Location
    Hungary
    Posts
    343
    Thanks
    235
    Thanked 226 Times in 123 Posts
    Quote Originally Posted by LawCounsels View Post
Would you ask a data compression expert to build AlphaGo? We don't think of DeepMind as a bunch of traditional data compression guys.

Someone able to shrink a file by 90% would still not help much in building AlphaGo.
    I think a context mixing / neural network data compression expert would not be completely lost in the DeepMind team.

  17. #14
    Member Gotty's Avatar
    Join Date
    Oct 2017
    Location
    Hungary
    Posts
    343
    Thanks
    235
    Thanked 226 Times in 123 Posts
    Quote Originally Posted by LawCounsels View Post
Maybe a convincing case can be made that AlphaGo and paq are the same in giving a number between 0 and 1... but paq works only on a given, fixed input, whereas AlphaGo's input is unpredictable and not fixed, depending on the opponent's choice of moves at each turn, and it still does well.

Would it be possible for paq to do similarly well when the input is not given and fixed?
Their output is strikingly similar, and that is what I wanted to emphasize. But it's not just the output - the whole learning and reasoning process is similar.

Paq and AlphaGo/AlphaZero/AlphaStar are built for different purposes (data compression vs. Go, Chess, StarCraft), but they all exhibit traits of Artificial Intelligence. None of them is general, though.

Paq does not have a "fixed input". It can process *any* file. The number of possible files is, well, more than astronomical.
    But AlphaGo/AlphaZero/AlphaStar do not have a general domain either: they can play go/chess/starcraft only.
    Which domain is larger? General data compression or playing strategic games? (It does not really matter because both domains are inconceivably huge.)

  18. #15
    Member
    Join Date
    Apr 2012
    Location
    London
    Posts
    239
    Thanks
    12
    Thanked 0 Times in 0 Posts
Theoretically, an unsupervised self-learning AI can do Go and data compression and cats and dogs, etc.

It seems clear here that data compression is just one small part of the things an AI can do.

  19. #16
    Member
    Join Date
    Apr 2012
    Location
    London
    Posts
    239
    Thanks
    12
    Thanked 0 Times in 0 Posts
    >>Paq does not have a "fixed input". It can process *any* file.

The input file possibilities are more than astronomical, BUT the input is given and fixed at the start.

Would it be possible for paq to do similarly well when the input is not given and fixed? You may say paq can be adaptive, but in Go one needs to play at one's best on every single move, not be 'slow' to do so.

  20. #17
    Member Gotty's Avatar
    Join Date
    Oct 2017
    Location
    Hungary
    Posts
    343
    Thanks
    235
    Thanked 226 Times in 123 Posts
    Quote Originally Posted by LawCounsels View Post
It seems clear here that data compression is just one small part of the things an AI can do.
    Yes.
Artificial Intelligence has many target domains. Data compression is one of them.


And I agree with you that "data compression to be very foundation of AI" is not true. Where does this statement come from?


A data compression program that exhibits traits of Intelligence can be considered an Artificial Intelligence agent in this particular domain.
I would not consider data compression to be the foundation of AI. It is just one manifestation. Shelwien put it very elegantly earlier:
    Quote Originally Posted by Shelwien View Post
Statistical compression and AI basically do the same thing - build a model of the data, which allows it to predict what happens next, recognize, or generate.

  21. #18
    Member
    Join Date
    Apr 2012
    Location
    London
    Posts
    239
    Thanks
    12
    Thanked 0 Times in 0 Posts
    >>
And I agree with you that "data compression to be very foundation of AI" is not true. Where does this statement come from?

Among them, the Hutter prize website.

  22. #19
    Member Gotty's Avatar
    Join Date
    Oct 2017
    Location
    Hungary
    Posts
    343
    Thanks
    235
    Thanked 226 Times in 123 Posts
    Maybe this one?
    On the rationale page: "In 2000, Hutter [21,22] proved that finding the optimal behavior of a rational agent is equivalent to compressing its observations."
    (Emphasis by me.)

Read it this way: building the perfect AI agent (perfect = one that behaves optimally) requires the agent to predict what comes/happens next optimally. Building the perfect data compression program requires the program to predict what comes next in the input optimally. It is the same. See? The foundation is the same.

    I would not say that data compression is the foundation of Artificial Intelligence. But to build an optimally behaving AI agent you'll have to use the same tools as in data compression.

    Here is the fix: "[LawCounsels:] data compression to be very foundation of optimally behaving AI" (addition by me)
    Last edited by Gotty; 5th April 2019 at 18:47. Reason: Deletion

  23. The Following User Says Thank You to Gotty For This Useful Post:

    Shelwien (5th April 2019)

  24. #20
    Member
    Join Date
    Apr 2012
    Location
    London
    Posts
    239
    Thanks
    12
    Thanked 0 Times in 0 Posts
    It also says Compression is Equivalent to General Intelligence

    ( one and the same )

    http://mattmahoney.net/dc/rationale.html

Maybe it is meant in the sense of Max Tegmark's (MIT) explanation of the tie-in between NNs and the physical universe?

    "The Holographic Principle: Why Deep Learning Works"
Deep Learning networks are also tensor networks. Deep Learning networks are not as uniform as a MERA network; however, they exhibit similar entanglements. As information flows from input to output in either a fully connected network or a convolutional network, the information is similarly entangled.
    Last edited by LawCounsels; 5th April 2019 at 18:59.

  25. #21
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,134
    Thanks
    179
    Thanked 921 Times in 469 Posts
Would you agree with "prediction is the foundation of AI"?
Well, statistical data compression is prediction + Shannon entropy as the metric to determine prediction quality.
It's certainly just Hutter's theory that best data compression = best AI.
    But personally I do agree with it.
I also think that the recent (2011+) breakthrough in "deep learning" was caused by more active use of log-based functions,
which basically turn the prediction quality metric into a function of entropy.
Compared to Euclidean distance (or worse approximations), which was more popular before, it's certainly a breakthrough.

Btw, LZ compression can't really be discarded either.
It's inconvenient that LZ doesn't provide symbol probability distributions explicitly.
    But there's a general solution for this:
    1. Loop through several bits of possible compressed data (enough for LZ token - maybe 32 bits or so)
    2. Attempt LZ decoding from this data
    3. See the first decoded symbol and its probability (determined via 2^-codelength from bits consumed by LZ)
    4. Restore LZ state to before decoding attempt
    5. Sum probabilities by decoded symbol
    6. We have our probability distribution
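
Here is a rough sketch of the idea in steps 1-6, with a toy static prefix code standing in for the LZ decoder (a real implementation would save and restore the LZ state around each decoding attempt); the code table and bit lengths are invented, the point is only the 2^-codelength weighting:
Code:
# Recover a symbol probability distribution from a decoder that only consumes bits.
from collections import defaultdict

# toy "compressed format": a static prefix code (codeword -> symbol)
code = {"0": "a", "10": "b", "110": "c", "111": "d"}

def decode_first_symbol(bits):
    """Try to decode one symbol from a bit string; return (symbol, bits used) or None."""
    for cw, sym in code.items():
        if bits.startswith(cw):
            return sym, len(cw)
    return None

# 1-2) loop through every possible short block of "compressed" bits and attempt decoding
# 3-5) weight each decoded symbol by 2^-codelength and sum
dist = defaultdict(float)
max_len = 3                                  # enough bits for any single codeword here
for n in range(2 ** max_len):
    bits = format(n, f"0{max_len}b")
    decoded = decode_first_symbol(bits)
    if decoded is None:
        continue
    sym, used = decoded
    # each codeword of length `used` matches 2^(max_len-used) of the enumerated strings,
    # so dividing by 2^max_len leaves exactly 2^-used per codeword
    dist[sym] += 1.0 / (2 ** max_len)

print(dict(dist))   # {'a': 0.5, 'b': 0.25, 'c': 0.125, 'd': 0.125} - the implied model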

  26. #22
    Member
    Join Date
    Sep 2008
    Location
    France
    Posts
    856
    Thanks
    447
    Thanked 254 Times in 103 Posts
    Quote Originally Posted by Gotty View Post
    I think a context mixing / neural network data compression expert would not be completely lost in the DeepMind team.
    There actually is one ...

  27. The Following User Says Thank You to Cyan For This Useful Post:

    Gotty (5th April 2019)

  28. #23
    Member
    Join Date
    Aug 2016
    Location
    USA
    Posts
    41
    Thanks
    9
    Thanked 16 Times in 11 Posts
The recent breakthroughs in deep learning (apart from compute power + training set sizes) are actually from using fewer log-based functions and more rectified linear units (log-based only on the output layer in most cases), skip connections to fight vanishing gradients, and other such tweaks. Another big idea was using unsupervised layer-by-layer training (as autoencoders) to help train deeper networks (for which Bengio is getting a Turing Award), even though that's no longer done either. Autoencoders, as was already mentioned, are very "adjacent" to lossy compression.

  29. The Following 3 Users Say Thank You to Stefan Atev For This Useful Post:

    Cyan (5th April 2019),Gotty (5th April 2019),Shelwien (5th April 2019)

  30. #24
    Member Gotty's Avatar
    Join Date
    Oct 2017
    Location
    Hungary
    Posts
    343
    Thanks
    235
    Thanked 226 Times in 123 Posts
    Quote Originally Posted by LawCounsels View Post
    It also says Compression is Equivalent to General Intelligence

    ( one and the same )

    http://mattmahoney.net/dc/rationale.html
    Yes, that is the title of the chapter where I got my quote, too.
    In that chapter it clearly talks about an optimal agent. See:

    "In 2000, Hutter [21,22] proved that finding the optimal behavior of a rational agent is equivalent to compressing its observations."
    "In addition, the environment outputs a reward signal (a number) to the agent, and the agent's goal is to maximize the accumulated reward."
    "What Hutter proved is that the optimal behavior of an agent is to guess that the environment is controlled by the shortest program that is consistent with all of the interaction observed so far."

What happens when you don't want your AI to be optimal?
It is true that if you don't want to build an optimally behaving Intelligent Agent, you can still use the tools from data compression: finding patterns, generalization, prediction. Of course the agent will not use the output (the probability of what comes next) to compress data, but to predict the most reasonable outcome in order to act accordingly. This implies that the compression software emits predictions or is guided by predictions! If it does not use predictions, if it is "mechanical" (RLE, LZW, etc.), then we can't consider it intelligent.

When data compression is "mechanical" (RLE, LZW, etc.) it's like a chess engine that is "programmed" what to do. It may work extremely well in its domain, but it's not really intelligent. The more reasoning and heuristics you add to your compression program (neural network, context mixing), the more intelligent it becomes. Stockfish, the current top chess engine, has a lot of sophisticated algorithms: position analysis, excellent pruning, heuristically found weights, but it is still mechanical from our point of view. AlphaZero is the first chess engine that is truly Intelligent. It learnt chess on its own and beat Stockfish instantly.

    In that sense I would fix the title:
    "Optimal Compression is Equivalent to optimally behaving General Intelligence" (in the strongest sense)
    or
    "Prediction based compression is Equivalent to Intelligence" (disallowing mechanical compressors and allowing different levels of intelligence, not just AGI)

  31. #25
    Member Gotty's Avatar
    Join Date
    Oct 2017
    Location
    Hungary
    Posts
    343
    Thanks
    235
    Thanked 226 Times in 123 Posts
    Quote Originally Posted by Shelwien View Post
I also think that the recent (2011+) breakthrough in "deep learning" was caused by more active use of log-based functions,
which basically turn the prediction quality metric into a function of entropy.
Compared to Euclidean distance (or worse approximations), which was more popular before, it's certainly a breakthrough.
    How true! Well put.

    Quote Originally Posted by Shelwien View Post
Btw, LZ compression can't really be discarded either.
I agree with you. I consider the intelligence level of such compressors to be like the intelligence of a plant. They are quite "mechanical". They do not exhibit many traits of intelligence. I'd personally discard them from the game, but I have no problem if they stay. Borderline cases.

  32. #26
    Member
    Join Date
    Apr 2012
    Location
    London
    Posts
    239
    Thanks
    12
    Thanked 0 Times in 0 Posts
The expert community's observations differ from the pro-compression assertions here: breakthroughs in predictive data compression do not drive the rapid pace of AI advancements.

  33. #27
    Member Gotty's Avatar
    Join Date
    Oct 2017
    Location
    Hungary
    Posts
    343
    Thanks
    235
    Thanked 226 Times in 123 Posts
    Quote Originally Posted by Shelwien View Post
    manually adding more handlers for structures found in enwik8 (xml,html,wiki,url,utf8...) does improve compression,
    but doesn't make any progress for AI purposes.
Yepp. Adding more handlers requires your intelligence. It does not advance AI. When a compression program recognizes those patterns so that you don't have to add any handlers, then that is intelligent.

  34. #28
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,134
    Thanks
    179
    Thanked 921 Times in 469 Posts
We don't really have anything completely unknown to the AI community,
nor do we have models with 2x better compression than what NNs can provide.
    Code:
    31,255,092 p5  
    25,377,998 p6  
    24,714,219 p12 
    20,488,816 lstm-compress v1
    20,494,577 lstm-compress v2
    20,318,653 lstm-compress v3
    20,307,295 bwmonstr 0.02 (BWT)
    20,356,097 glza 0.10.1 (LZ78?)
    19,963,105 ash 04a (bytewise CM)
    19,055,092 ppmonstr vJ (PPM)
    So yeah, here we have a model unique to data compression beating lstm-compress by (1-19055092/20318653)*100 = 6.22%.
    Do you think this would impress an AI expert?
They're too used to lossy processing, so they'd keep ignoring data compression research until we got something like a 50% advantage (which is impossible),
or until they stopped making any progress on their own and had to start branching off (also unlikely, as there are always visible improvements from using more hardware).

    Data compression really has some things that AI research could use:
1. SSE/APM = secondary statistics (see the sketch at the end of this post)
    2. entropy-based parsing optimization
    3. entropy as quality metric
    4. model verification via data compression (its easy to make a mistake with "future information" otherwise)
    5. some speed optimizations

    But there's nothing impressive enough to beat all competition once used.
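
To illustrate point 1 of the list above, here is a rough sketch of an SSE/APM stage (secondary estimation, roughly in the paq style): the primary prediction is quantized into interpolated bins indexed by a context, and the bin values learn how the prediction should be corrected. The bin count, learning rate and the over-confident demo model are invented for the example:
Code:
# SSE/APM sketch: refine a primary bit prediction with a learned (context, bin) table.
import math, random

class APM:
    def __init__(self, contexts, bins=33, rate=0.02):
        self.bins, self.rate = bins, rate
        # refined probability per (context, bin), initialized to the identity map
        self.t = [[0.0001 + 0.9998 * b / (bins - 1) for b in range(bins)]
                  for _ in range(contexts)]

    def refine(self, ctx, p):
        """Quantize p into two neighbouring bins and return the interpolated refined p."""
        pos = p * (self.bins - 1)
        lo = min(int(pos), self.bins - 2)
        w = pos - lo
        self.last = (ctx, lo, w)
        return (1 - w) * self.t[ctx][lo] + w * self.t[ctx][lo + 1]

    def update(self, bit):
        """Move the two bins used by the last refine() toward the observed bit."""
        ctx, lo, w = self.last
        self.t[ctx][lo]     += self.rate * (1 - w) * (bit - self.t[ctx][lo])
        self.t[ctx][lo + 1] += self.rate * w       * (bit - self.t[ctx][lo + 1])

# Demo: a deliberately over-confident primary model; the APM learns to correct it.
random.seed(1)
apm = APM(contexts=1)
raw_bits = refined_bits = 0.0
for _ in range(20000):
    bit = 1 if random.random() < 0.7 else 0    # the true probability of a 1 is 0.7
    p_primary = 0.95                           # the primary model always claims 0.95
    p_refined = apm.refine(0, p_primary)
    raw_bits     += -math.log2(p_primary if bit else 1 - p_primary)
    refined_bits += -math.log2(p_refined if bit else 1 - p_refined)
    apm.update(bit)

print(f"primary model: {raw_bits:.0f} bits, after SSE/APM: {refined_bits:.0f} bits")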

  35. #29
    Member Gotty's Avatar
    Join Date
    Oct 2017
    Location
    Hungary
    Posts
    343
    Thanks
    235
    Thanked 226 Times in 123 Posts
    Quote Originally Posted by LawCounsels View Post
The expert community's observations differ from the pro-compression assertions here: breakthroughs in predictive data compression do not drive the rapid pace of AI advancements.
    Oh, where have you been? You missed the breaking news!:

    Quote Originally Posted by Blindtech View Post
    We made an awesome breakthrough tonight which is key to the decompression side of things. Finishing and tweaking the compressor this week. On current test we were able to compress a 5.14gb pst file to 13 bytes.
    (Don't take me seriously. And especially don't take that seriously.)

It's difficult to compare the paces. There may not be a lot of breakthroughs, but there are certainly some. Meanwhile there is solid and constant progress with a very good metric: file size.

  36. The Following User Says Thank You to Gotty For This Useful Post:

    LawCounsels (5th April 2019)

  37. #30
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,134
    Thanks
    179
    Thanked 921 Times in 469 Posts
    @Gotty:
> When a compression program recognizes those patterns so that you don't have to add any handlers, then that is intelligent.

That's blocked by the enwik8 size and the decoder-size inclusion rule, and also the time limit.
We can't let the decoder discover structures in already-decoded data and learn from them, since that would add too much code.
For example, I've seen a parser of the English language with a 20M+ exe size (and that's not counting various dictionaries, which expanded to 400M+).
