A year ago I was involved in a project that dealt with compressing a few hundred ISO images. Since ISOs tend to be treated as a single file by archivers, they are not handled as efficiently as they could be.
My idea is to parse the images. Every image comes with a TOC, which an archiver can use to virtually handle all the files inside it separately. That way all the normal benefits, such as sorting and grouping similar files together, as well as being able to apply filters to certain file types, become available and should boost compressibility quite a bit.
Parsing an ISO is also not too complicated. The archiver could take the file names from the TOC and later, when extracting, use the TOC to append the files in the correct order after the ISO header. The excess space in the last sector of each file could be treated as part of the file itself (meaning each stored file's size would be a multiple of the sector size, e.g. 2048 bytes). This might be slightly less efficient than regenerating the padding algorithmically, but in the rare cases where this space is not zeroed out it would keep the archiver from being lossy.
Of course this gets more complex as the image format gets more complex, but most of the formats are not just similar to ISO, they are direct derivatives of it, such as cdi, nrg, bin and many more. In any case, ISO itself would be easy to handle, and since it's the most widespread format, a lot of people could benefit from such a feature. Back then I had planned to write an external tool for this job, but I quickly gave up on the idea since my programming skills were too rudimentary, and anything I could have written wouldn't have been up to standard and would have been far too slow to be usable.
Anyway, that's my idea. Tell me what you think about it.
Edit: This is the 10,000th post in this subforum. Woohoo!