Hello! I was referred here with this question because you people are very intelligent. ^_^
I am preparing an archive containing about 3GB of BMP files, many of which are slight modifications of a small number of base BMPs. However, since this archive is going to be part of a public build, I can't risk a dictionary larger than 800MB, because some of my users will only have 1GB of RAM. That dictionary is smaller than the archive itself, so matches between similar files that end up far apart in the stream could be lost. Would I get better compression if I split the archive into six 500MB parts and forced an 800MB dictionary on each part? I would split them so that each base BMP is grouped together with its modified variants, allowing for the best possible compression.
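For what it's worth, here is a rough sketch of how I'm thinking about the grouping step, written as Python pseudocode rather than any real archiver feature. It assumes (hypothetically) that variants of a base BMP share a filename prefix before the first underscore; the `plan_parts` helper and that naming convention are mine, not part of any tool:

```python
# Sketch: assign BMP variant groups to archive parts under a size cap,
# keeping all variants of one base image in the same part so the
# compressor's dictionary can exploit their similarity.
# Assumption: variants share the filename prefix before the first "_",
# e.g. "castle_night.bmp" belongs to the "castle" group.
import os
from collections import defaultdict

PART_CAP = 500 * 1024 * 1024  # target: 500 MB per part

def plan_parts(files, cap=PART_CAP):
    """files: list of (path, size_in_bytes). Returns a list of parts,
    each part being a list of paths."""
    groups = defaultdict(list)
    for path, size in files:
        base = os.path.basename(path).split("_")[0]  # variant-group key
        groups[base].append((path, size))

    parts, current, current_size = [], [], 0
    # First-fit over whole groups, so a group never straddles two parts.
    # (A single group larger than the cap still becomes one oversized part.)
    for base, members in sorted(groups.items()):
        group_size = sum(s for _, s in members)
        if current and current_size + group_size > cap:
            parts.append(current)
            current, current_size = [], 0
        current.extend(p for p, _ in members)
        current_size += group_size
    if current:
        parts.append(current)
    return parts
```

So for example, a 300MB `castle_day.bmp` plus a 150MB `castle_night.bmp` would land in one part together, and an unrelated 200MB file would start the next part rather than pushing the first over 500MB.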
I'm looking at using CCMx (setting 6, for 786MB of memory) as the compressor, because it compresses 12-15% better than 7zip (ultra) and decompresses in about 20 minutes, which is fine by me. FreeArc (-mx) performs consistently worse than 7zip (by about 3-5%), while PAQ is out of the question because it's far too slow to be practical.
Thank you for your time!