* A note on compression: Big Data is heavily (and losslessly) compressed - typically 2x to 4x better than GZIP on disk (YMMV) - and can be accessed like a Java array (a giant, greater-than-4-billion-element distributed Java array). H2O guarantees that if the data is accessed linearly, the access time will match what you can get out of C or Fortran - i.e., it is memory-bandwidth bound, not CPU bound. You can access the array (for both reads and writes) in any order, of course, but you get strong speed guarantees only for in-order access. You can do pretty much anything to an H2O array that you can do with a Java array, although due to size/scale you'll probably want to access it in a blatantly parallel style.
* A note on decompression: the data is decompressed Just-In-Time, strictly in CPU registers, in the hot inner loops - and THIS IS FASTER than decompressing beforehand, because most algorithms are memory-bandwidth bound. Moving a 32-byte cache line of compressed data into CPU registers delivers more data per cache miss than moving four 8-byte doubles. Decompression typically takes 2-4 instructions of shift/scale/add per element and is well covered by the cache-miss costs. As an example, adding a billion compressed doubles (e.g., to compute the average) takes 12ms on a 32-core (dual-socket) Intel E5-2670 - an achieved bandwidth of over 600Gb/sec - hugely faster than all other Big Data solutions. Accounting for compression, H2O achieved 83Gb/sec out of the theoretical maximum of 100Gb/sec on this hardware.
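To make the shift/scale/add idea concrete, here is a minimal sketch (not H2O's actual code; the class and method names are made up for illustration): a column of doubles is stored as 1-byte offsets plus a per-chunk scale and bias, and the hot loop decompresses each element in registers as it sums, so memory traffic is 1 byte per element instead of 8.

```java
public class JitDecompressSketch {
    // Sum a "compressed" column: each element is one byte, decoded
    // just-in-time as (unsignedByte * scale + bias). The decode is a
    // widen + multiply + add, all in registers - no temporary array
    // of doubles is ever materialized.
    static double sumCompressed(byte[] packed, double scale, double bias) {
        double sum = 0.0;
        for (int i = 0; i < packed.length; i++) {
            // (packed[i] & 0xFF) widens the byte to an unsigned int
            sum += (packed[i] & 0xFF) * scale + bias;
        }
        return sum;
    }

    public static void main(String[] args) {
        // Encode the values 0.0, 0.5, 1.0, ..., 127.5 as bytes with
        // scale = 0.5 and bias = 0.0 - an 8x size reduction.
        byte[] packed = new byte[256];
        for (int i = 0; i < 256; i++) packed[i] = (byte) i;
        System.out.println(sumCompressed(packed, 0.5, 0.0)); // prints 16320.0
    }
}
```

The point of the layout is that the loop body stays a couple of cheap ALU instructions, so the cost is dominated by streaming the (small) compressed bytes through the cache, not by the decode itself.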
What got me interested is that they had to write the decompression code themselves, because no existing library seemed to serve them well enough; the output interface is very different from what basically everyone else does.