# Thread: The lag-based compression algorithm (worth 1B\$!)

1. ## The lag-based compression algorithm (worth 1B\$!)

Ok, here is a funny thing I have been thinking about (the "worth 1B\$" is a joke guys!).
We know that it is not possible to compress 2 bits into 1 or zero (!) bits in a static memory system; it does not make sense. But we can do it when transferring data over a network, with a lossless compression algorithm.

Here is the trick.

First, remember that any common network transfer over TCP/IP begins with a connection opening and ends with a connection closure. This will matter later on.

Now, if you want to send 2 bits to your receiver while sending just one bit, or none at all, over the network, you can 'encode' the extra bit into another dimension of information, which is... time.

Let's assume that sender and receiver both know the algorithm. They can agree on the following, where t is a specific 'blank' of time, i.e. a deliberate lag in the connection. An example of t: if your usual latency to the receiver is 30 ms, you can set t to 60 ms.

Step 1: connection opens

Step 2: send the bit (according to the following example schema)

00 => 0 + t (1 bit is sent)
01 => t + 1 (1 bit is sent)
10 => 1 + t (1 bit is sent)
11 => t + t (0 bits are sent; t + 0 would also have worked here, in which case 1 bit is sent)

Step 3: connection closes.

By analyzing the time between events, the receiver can now properly rebuild the 2 bits of information, while only 1 bit, or none at all, has been consumed in the network statistics!
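The schema above can be sketched as a toy round-trip. This is a minimal simulation with synthetic timestamps, no real sockets or sleeps; the function names, the `(time, bit)` event format, and the tolerance `EPS` are my own illustration, not something from the thread:

```python
# Toy simulation of the timing schema: synthetic timestamps stand in
# for real network events. T, EPS, and the event format are
# illustrative choices, not part of the original post.

T = 0.060        # the agreed lag t (60 ms, per the example in the post)
EPS = T / 2      # decoding tolerance for "was there a lag here?"

def encode(two_bits, t_open=0.0):
    """Return ([(timestamp, bit)], t_close) for one 2-bit value."""
    if two_bits == "00":                    # 0 + t: send '0', then lag
        return [(t_open, 0)], t_open + T
    if two_bits == "01":                    # t + 1: lag, then send '1'
        return [(t_open + T, 1)], t_open + T
    if two_bits == "10":                    # 1 + t: send '1', then lag
        return [(t_open, 1)], t_open + T
    if two_bits == "11":                    # t + t: pure silence, 0 bits sent
        return [], t_open + 2 * T
    raise ValueError(two_bits)

def decode(events, t_open, t_close):
    """Rebuild the 2 bits from the bit value and where the lag sits."""
    if not events:                          # ~2t of silence => '11'
        return "11"
    (t_bit, bit), = events
    if bit == 0:
        return "00"
    lag_before = t_bit - t_open
    return "01" if lag_before > EPS else "10"

# Round-trip all four 2-bit values.
for v in ("00", "01", "10", "11"):
    events, t_close = encode(v)
    assert decode(events, 0.0, t_close) == v
```

In a real deployment the decoder would have to compare measured gaps against EPS under jitter, which is exactly the practical objection raised later in the thread.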

Well, it's not really compression, because you consume 'something' (time) to save something else (data), so this does not break the pigeonhole principle. However, from a pure bandwidth perspective, it does save something.

Let's go a bit further.

We could imagine a system with hyper-precise clocks, for example quantum clocks able to produce billions of ticks per second or more. You could transfer large numbers by starting the receiving clock and measuring the number of ticks elapsed before a very short impulse (sent by the emitter) is received.

It's as if you had a computer enumerating all possible numbers, with the sender telling the receiving computer when to stop. The last number the receiving computer has computed is the data that was meant to be transferred.
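The tick-counting idea can be modelled in a few lines. Everything here is hypothetical: the tick rate, the function names, and the assumption of perfectly synchronized, drift-free clocks are all illustrative:

```python
# Toy model of "count ticks until a stop impulse arrives".
# Assumes both clocks are perfectly synchronized and noise-free.

TICKS_PER_SECOND = 1_000_000_000  # the post's "billions of ticks per second"

def send(value):
    """The sender times its stop impulse: it fires after `value` ticks."""
    return value / TICKS_PER_SECOND   # seconds after connection open

def receive(impulse_time):
    """The receiver counts ticks from connection open to the impulse."""
    return round(impulse_time * TICKS_PER_SECOND)

assert receive(send(123_456_789)) == 123_456_789
```

Note the catch: transmitting the value n takes time proportional to n, so the transfer time grows exponentially in the bit length of the payload, which is the lag-schema trade-off pushed to its extreme.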

2. wrong place mate.
http://encode.ru/forums/3-Off-Topic
Also, latency induced by a wireless connection or a variable ping could hurt this system to the point where it's no longer practical. Hilarious joke btw.

3. It should work within the bounds of the Shannon-Hartley theorem.

4. Originally Posted by Matt Mahoney
It should work within the bounds of the Shannon-Hartley theorem.
I am not exactly sure what you meant, but I did read the SH theorem, thank you for the link!

My theoretical example is abstract and is not really meant to be taken as a real-life example (it makes no allowance for noise on the wires). In real life you would also have to consider that most network packets are padded (to the next 32-bit boundary under the current TCP and UDP standards). This could be used, however, to encode pure data-transfer protocols (between two devices), disregarding any known transfer protocol.

The example is meant to show that even if we can't compress random data beyond a certain threshold, we can do something close to it by transforming raw data into another kind of data (here, from electrical impulses to time). There is a real economy of data, and a practical 2:1 ratio. It is not 'absolute compression', because you do have to spend extra information (time) for the information you cut off; BUT, from a perspective that considers compression purely as reducing the quantity of data, it is compression applied to a stream of data (and it only works with streams of data).

What I mean is that if some people care about transmitting less information, while latency does not matter to them, this would work. The trick is replacing data with something else, something we care less about (well, I do not think you should care about wasted time in your life, but let's admit we want to, here).

Let's have some more fun. In order to save space, you can try to hide your data inside something else. The BARF joke algorithm did that: it transforms a set of bytes into 0 bytes, but stores the data in the filenames. Depending on your filesystem, it could actually be considered compression: if the filesystem forces a fixed-length array for every filename on disk, then storing some data in the filename really would save you some space, because no matter what you do, that space will be used by your filesystem.

Let's take another example with Internet Protocols, and most specifically IPv6.

IPv6 addresses are coded on 16 bytes (128 bits). When you are allocated IPv6 address space, you are granted a fixed 64-bit prefix, and the lower 64 bits of the address are free for you to use as you intend. Now, no matter what IP address you choose, all your network packets will carry a full 128-bit address (the friendly address "::1" will be encoded in the network packets as "0000:0000:0000:0000:0000:0000:0000:0001").

As you are free to use those 64 bits, you can encode arbitrary data into the address itself. This means you can send any 8 bytes of data with zero-payload network packets, which would actually save you some space in network transfers. And there are other fields in network packets that might be used as well (I think nearly 4 additional bytes could easily be gained by re-encoding data into the mandatory fields of network packets).
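Packing 8 payload bytes into the host half of an IPv6 address can be sketched with Python's `ipaddress` module. The /64 prefix below is the `2001:db8::/64` documentation range, chosen purely for illustration; actually emitting packets with such source addresses would need raw-socket plumbing not shown here:

```python
import ipaddress

# Upper 64 bits: the 2001:db8::/64 documentation prefix (illustrative).
PREFIX = 0x2001_0db8_0000_0000

def encode_addr(payload: bytes) -> ipaddress.IPv6Address:
    """Pack up to 8 bytes into the lower 64 bits of an address."""
    assert len(payload) <= 8
    low = int.from_bytes(payload.ljust(8, b"\x00"), "big")
    return ipaddress.IPv6Address((PREFIX << 64) | low)

def decode_addr(addr: ipaddress.IPv6Address) -> bytes:
    """Recover the payload from the interface-identifier half."""
    return (int(addr) & (2**64 - 1)).to_bytes(8, "big")

addr = encode_addr(b"8 bytes!")
assert decode_addr(addr) == b"8 bytes!"
```

This is essentially a covert channel rather than compression: the 128 address bits are transmitted either way, so the trick only "saves" bytes that the packet format forces you to send anyway.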

But is all this compression or just bullshit data reencoding?

According to Wikipedia, "in digital signal processing, data compression, source coding, or bit-rate reduction involves encoding information using fewer bits than the original representation". A network packet containing data is itself a set of data; 're-encoding' that data somehow to use fewer bits is highly feasible, even for a purely random set of data.

5. I think that it will be hard to find a good example where a time based information transmission (or whatever we should call your idea) can be practically used.

The latency allows only a small amount of information to be stored. You could store more if you had a straight connection between 2 computers (no wireless lan or the unreliable internet). Then you could give the packet a latency of 53 ms to transmit the value "53" for example.

But a packet is usually several bytes in size even if you transmit 0 bits of payload, so you would also need a different protocol to reduce the overhead. Maybe something where you can really send just a single bit with no overhead at all. But what would that be good for? Two computers directly connected and not transmitting any data, while they could be using the whole bandwidth of the connection, just so they can use their watches to decode the information? Waiting and not sending could also be counted as sending 100% of the time, while still not getting more information over the wire.

If we are talking about a network with other senders and receivers and limited bandwidth, then saving something has value. But then we also have more interference and can't use the latency as efficiently.

The best example I came up with is the following: sometimes it's an issue how much energy is needed to change the voltage on a wire. Sending data over a wire needs power, and the more often I change the voltage, the more power I need. So some protocols count "not changing the voltage" as a 0 and "changing the voltage" as a 1, or something like this. I think USB works this way. USB 2.0 has 2 differential data wires; a zero is not indicated by pulling the wires down to 0 volts but by doing nothing, and a 1 is indicated by changing the voltage. But it has been a long time since I last looked at the specification, so my description might not be accurate.
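The signalling idea being described is NRZI encoding, which can be sketched in a few lines. Note that in USB's actual convention it is a 0 that is signalled by a level change and a 1 by no change, the reverse of the description above; USB's bit stuffing after six consecutive 1s is omitted here, and the function names are my own:

```python
# Minimal NRZI codec sketch (USB convention: 0 = toggle the line,
# 1 = leave the line level unchanged). Bit stuffing is omitted.

def nrzi_encode(bits, initial_level=1):
    levels, level = [], initial_level
    for b in bits:
        if b == 0:            # a 0 toggles the wire level
            level ^= 1
        levels.append(level)  # a 1 leaves the level as-is
    return levels

def nrzi_decode(levels, initial_level=1):
    bits, prev = [], initial_level
    for lv in levels:
        bits.append(1 if lv == prev else 0)
        prev = lv
    return bits

data = [1, 0, 1, 1, 0, 0, 1]
assert nrzi_decode(nrzi_encode(data)) == data
```

The energy argument then applies to whichever symbol causes a transition: long runs of the "no change" symbol cost almost nothing on the wire, which is why such runs also need bit stuffing to keep the clocks synchronized.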

6. Actually I think it's very interesting, but not transferring via time; instead, using something more predictable such as a system's cycle rate. So essentially a program that uses its system's cycles, e.g. 2 cycles = 1 + 2 bits, 3 cycles = 1 + 4 bits, etc. We can call it Cycle Compression :-P

7. You just transformed binary (0, 1) into ternary (0, 1, lag).
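This observation can be quantified: a symbol drawn from a three-letter alphabet carries log2(3) ≈ 1.585 bits rather than 1, which is where the apparent saving comes from. A quick check:

```python
import math

# A slot that can hold one of 3 symbols (0, 1, or lag) carries
# log2(3) bits of information instead of 1 bit.
bits_per_ternary_symbol = math.log2(3)
assert 1.584 < bits_per_ternary_symbol < 1.586
```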

8. The Shannon-Hartley theorem says that if you have a noise free communication channel then you can transmit information infinitely fast. For example, if you could measure time or voltage or some other real quantity with infinite precision, then you could encode an infinite number of digits as a real number and transmit it.

Of course the real world is not like this. Ultimately you are limited by quantum mechanics, which puts a finite bound on the information capacity of the observable universe. Any proposal to store more than 2.95 × 10^122 bits would exceed the Bekenstein bound of the Hubble radius. Seth Lloyd has an interesting paper on this topic.
