I got this rough idea by observing this forum: combine compression and encryption to counter cryptanalysis, especially watermarking attacks.
This would be based on encryption at the filesystem (or device) level, not the file level.
When you put a file on such a filesystem, it is first compressed, even if the result is bigger than the original, and then encrypted. This should throw all watermarking attacks out of the window, provided the compression stage can deal with big files: even if the result is bigger, the compression changes the data and (more importantly) moves all offsets within the file, so the encryption stage receives input that does not repeat itself at distances aligned to filesystem block boundaries. In the best case, the poisoned file would shrink to something that is no longer recognizable even before the encryption stage comes into play.
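As a small illustration of that point, here is a sketch (my own toy example, not part of the proposal) using Python's zlib: a "watermarked" file made of identical block-aligned sectors collapses under compression, so the repetition never reaches the encryption stage. The block size of 512 is an assumed filesystem block size.

```python
import zlib

BLOCK = 512  # assumed filesystem block size

# A "watermarked" file: the same 512-byte pattern repeated at block-aligned
# offsets, which an attacker could later try to spot in the ciphertext.
pattern = bytes(range(256)) * 2          # 512 bytes
plain = pattern * 8                      # 8 identical blocks

# Without compression, every filesystem block holds identical plaintext.
raw_blocks = [plain[i:i + BLOCK] for i in range(0, len(plain), BLOCK)]
print("identical raw blocks:", len(set(raw_blocks)) == 1)

# After compression the repetition is encoded as back-references, so the
# data handed to the encryption stage no longer repeats block-for-block,
# and all offsets have moved.
packed = zlib.compress(plain)
print("original size:", len(plain), "compressed size:", len(packed))
```

Of course, an attacker who knows the compressor could try to craft input that survives compression with some structure intact, which is part of what I am asking about.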
With such an underlying compression stage, simple initialization-vector schemes like "IV = block number" would be safe.
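For context, this is the attack that predictable IVs enable without the compression stage: with CBC-style whitening and IV = sector number, an attacker who controls file contents can XOR the predictable IV into the plaintext of two sectors so both sectors feed the same value into the cipher, producing identical ciphertext — a detectable watermark. A toy sketch (the keyed HMAC here is only a stand-in for a real block cipher; equal inputs giving equal outputs is the only property the attack needs):

```python
import hmac
import hashlib

KEY = b"demo key"
SECTOR = 16  # toy: one 16-byte cipher block per sector

def toy_encrypt_block(data: bytes) -> bytes:
    # Stand-in for a real block cipher under a fixed key:
    # deterministic, so equal inputs yield equal outputs.
    return hmac.new(KEY, data, hashlib.sha256).digest()[:SECTOR]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def iv_for(sector: int) -> bytes:
    return sector.to_bytes(SECTOR, "big")  # IV = sector number

def encrypt_sector(sector: int, plain: bytes) -> bytes:
    # CBC-style: XOR the (predictable) IV into the plaintext, then encrypt.
    return toy_encrypt_block(xor(plain, iv_for(sector)))

# The attacker picks sector contents that cancel the predictable IVs,
# so both sectors present the same value to the cipher:
watermark = b"WATERMARK_BLOCK!"           # 16 bytes
p1 = xor(watermark, iv_for(1))
p2 = xor(watermark, iv_for(2))

c1 = encrypt_sector(1, p1)
c2 = encrypt_sector(2, p2)
print("distinct plaintexts, identical ciphertexts:", p1 != p2 and c1 == c2)
```

The claim above is that the attacker loses this control once an unpredictable compression stage sits between the file contents and the sector plaintexts.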
Can anybody see a flaw in this?