Thread: Lexicographic transaction ordering (LTOR)

  1. #1
    Member SolidComp's Avatar
    Join Date
    Jun 2015
    Location
    USA
    Posts
    222
    Thanks
    89
    Thanked 46 Times in 30 Posts

    Lexicographic transaction ordering (LTOR)

    Hi all -- I just read about the Xthinner project. It uses something called lexicographic transaction ordering (LTOR) to get better than 99% compression of Bitcoin blocks. Has anyone heard of LTOR? I haven't. It sounds like it could be a form of delta encoding, but otherwise I'm not sure what it is.
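
    My tentative reading of the name is that the block's transactions are simply sorted by txid, something like the sketch below (made-up txids, and this may not be exactly what Xthinner does):

    Code:
    import hashlib

    # Hypothetical raw transactions -- placeholder byte strings for illustration.
    raw_txs = [b"tx-alpha", b"tx-bravo", b"tx-charlie", b"tx-delta"]

    # Each transaction is identified by its hash (txid).
    txids = [hashlib.sha256(tx).hexdigest() for tx in raw_txs]

    # Lexicographic transaction ordering: sort the block's transactions by
    # txid, so the order is canonical and never has to be transmitted.
    ltor_block = sorted(txids)

    for txid in ltor_block:
        print(txid)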

  2. #2
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    645
    Thanks
    205
    Thanked 196 Times in 119 Posts
    I briefly looked at https://medium.com/@j_73307/benefits...r-8d5b77cc2ab0
    and it is about distinguishing transactions - producing short unique identifiers for them.
    The theoretical limit for this problem is ~1.44 bits per element (transaction), but it is really tough to approach.
    ~2.07 bits per element is said to be practical:
    https://cstheory.stackexchange.com/q...-using-less-th
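
    To give a feel for the numbers (a rough sketch, not the actual Xthinner encoding): once the txids are sorted, each one can be distinguished from its neighbours by a short prefix, and counting those prefix bits for random hashes already lands around log2(N) bits per element, far above the ~1.44 bits/element limit (the log2(e) lower bound), which is why reaching even ~2.07 bits needs much more sophisticated structures.

    Code:
    import hashlib, os

    # Hypothetical txids: random 32-byte hashes, hex-encoded and sorted (LTOR-style).
    txids = sorted(hashlib.sha256(os.urandom(32)).hexdigest() for _ in range(2000))

    def distinguishing_nibbles(i, ids):
        # Hex digits needed so ids[i] differs from both sorted neighbours.
        need = 1
        for j in (i - 1, i + 1):
            if 0 <= j < len(ids):
                k = 0
                while k < len(ids[i]) and ids[i][k] == ids[j][k]:
                    k += 1
                need = max(need, k + 1)
        return need

    total = sum(distinguishing_nibbles(i, txids) for i in range(len(txids)))
    print("average bits per txid:", 4 * total / len(txids))  # roughly log2(N) plus a few bits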

  3. #3
    Member
    Join Date
    Dec 2017
    Location
    china
    Posts
    8
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Most anti-ASIC blockchain mining algorithms (Ethash/Grin) require huge DDR storage and bandwidth. Are there any good ways to compress that data?

  4. #4
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    645
    Thanks
    205
    Thanked 196 Times in 119 Posts
    blue, I don't know the details, but "anti-ASIC" should depend on resources you cannot put into an ASIC, like large memory requirements ( https://en.wikipedia.org/wiki/Proof-of-space ) - a long sequence to which you need random access.
    Such a sequence is deterministic, but designed to be practically impossible to calculate in a smarter way, e.g. a sequence of hashes of previous values.
    In this case, compressing such a pseudorandom sequence for fast random access should be practically impossible in a properly designed cryptocurrency, which is not that difficult to get right.
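
    A toy illustration of that kind of construction (just a sketch, not Ethash/Grin or any real scheme): fill a large buffer with a hash chain, then have every work round read a pseudorandom entry from it, so evaluation needs the whole incompressible buffer resident in memory.

    Code:
    import hashlib

    # Toy memory-hard scheme (illustration only, not any real coin's algorithm).
    N = 1 << 16  # number of 32-byte entries; real schemes use gigabytes

    def build_dataset(seed: bytes):
        # Hash chain: deterministic, but with no known shortcut for jumping
        # to entry i without computing the chain up to i.
        data = [hashlib.sha256(seed).digest()]
        for _ in range(N - 1):
            data.append(hashlib.sha256(data[-1]).digest())
        return data

    def prove_work(dataset, nonce: bytes, rounds: int = 64):
        # Each round reads a pseudorandom entry, so fast evaluation needs the
        # whole pseudorandom (hence practically incompressible) buffer in memory.
        acc = hashlib.sha256(nonce).digest()
        for _ in range(rounds):
            idx = int.from_bytes(acc[:4], "little") % N
            acc = hashlib.sha256(acc + dataset[idx]).digest()
        return acc

    dataset = build_dataset(b"epoch seed")
    print(prove_work(dataset, b"nonce-123").hex())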

  5. #5
    Member
    Join Date
    Dec 2017
    Location
    china
    Posts
    8
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Jarek, you are right. Random access to the pseudorandom data dominates processing. Modern GPUs usually have huge memory bandwidth and compute capability, yet these algorithms are always starved for memory bandwidth while the arithmetic units mostly sit idle. So that is my interest: would there be some way to save memory bandwidth by using the arithmetic units more heavily instead? Compression is the first thing that comes to mind.

  6. #6
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    645
    Thanks
    205
    Thanked 196 Times in 119 Posts
    blue, it was done by design - to avoid domination of the cryptocurrency by hardware manufacturers.
    And it can be done right: e.g. with a sequence of hashes of previous values, if a new hash is calculated only from the previous value, you could use arithmetic units to optimize the process ... however, if e.g. a new hash is calculated based on randomly accessed values from the entire past, there are rather no ways to optimize (... unless there is some complex dependency hidden by the designers) - you just have to rely on memory.
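
    A toy contrast of the two cases (illustration only): when each value depends only on its predecessor, O(1) memory suffices and arithmetic can replace storage; when each value also mixes in a pseudorandomly chosen earlier entry, the whole history must stay addressable and memory wins.

    Code:
    import hashlib

    def chain_only(seed: bytes, steps: int) -> bytes:
        # Each value depends only on the previous one: O(1) memory, so the
        # work can be done entirely in arithmetic units / ASICs.
        h = hashlib.sha256(seed).digest()
        for _ in range(steps):
            h = hashlib.sha256(h).digest()
        return h

    def random_access(seed: bytes, steps: int) -> bytes:
        # Each value also mixes in a pseudorandomly chosen *earlier* value,
        # so the entire past must remain addressable: memory dominates.
        hist = [hashlib.sha256(seed).digest()]
        for _ in range(steps):
            idx = int.from_bytes(hist[-1][:4], "little") % len(hist)
            hist.append(hashlib.sha256(hist[-1] + hist[idx]).digest())
        return hist[-1]

    print(chain_only(b"seed", 1000).hex())
    print(random_access(b"seed", 1000).hex())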
