One example of this is a simple garbage collector, which allocates all the available memory, garbage collects most of it, then allocates whatever it reclaims all over again. If your GC does that with more memory than fits in RAM, you are screwed. LRU will tend to ensure you take page faults at least at the rate of memory allocation, because you reuse more memory in each cycle than fits in RAM, and no replacement policy can help a whole lot. (Even if you get the GC to reuse memory in a more stacklike order, you're still usually going to be paging unacceptably.)
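To see why LRU is pessimal for this kind of cyclic reuse, here's a minimal fault-counting sketch (not how any real kernel implements it; the trace and frame counts are made up for illustration). With a loop over 100 pages and only 90 frames, LRU always evicts exactly the page that will be needed soonest, so every single access faults:

```python
from collections import OrderedDict

def lru_faults(trace, frames):
    """Count page faults for an LRU policy over a page-access trace."""
    cache = OrderedDict()
    faults = 0
    for page in trace:
        if page in cache:
            cache.move_to_end(page)          # mark as most recently used
        else:
            faults += 1
            if len(cache) >= frames:
                cache.popitem(last=False)    # evict the least recently used page
            cache[page] = None
    return faults

# A GC-like cyclic trace: touch pages 0..99 over and over, the way a
# collector that re-walks and reallocates the whole heap each cycle would.
trace = list(range(100)) * 10

print(lru_faults(trace, frames=90))    # faults on every one of the 1000 accesses
print(lru_faults(trace, frames=100))   # everything fits: only 100 cold-start faults
```

The point is the cliff: with frames >= loop size you fault only once per page ever, and one frame fewer makes you fault on every access, because LRU's "least recently used" page in a cycle is precisely the next one the loop will touch.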
What you want is for your replacement policy to have a general idea of the large-scale locality properties of each process it deals with, knowing or learning whether a program (or a memory-intensive phase of one) tends to access a lot of memory in a loopy fashion, in a stacky back-and-forth fashion, apparently randomly (as in a program dominated by an enormous hash table without a strong hot/cold skew), or in a quasi-statically biased way that frequency-based replacement would work well for.
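One conventional way to get at those large-scale properties is reuse distance: for each access, how many distinct pages were touched since the last access to the same page. Here's a minimal sketch (quadratic and purely illustrative, not a real detector): loopy traces produce distances clustered near the loop size, stacky traces skew toward small distances, and random traces spread out.

```python
def reuse_distances(trace):
    """For each non-first access, the number of distinct pages touched
    since the previous access to the same page."""
    last_seen = {}
    dists = []
    for i, page in enumerate(trace):
        if page in last_seen:
            # Distinct pages accessed strictly between the two uses.
            dists.append(len(set(trace[last_seen[page] + 1 : i])))
        last_seen[page] = i
    return dists

# Loopy: every reuse distance equals the loop size minus one.
print(reuse_distances([0, 1, 2, 0, 1, 2, 0, 1, 2]))  # [2, 2, 2, 2, 2, 2]

# Stacky back-and-forth: reuse distances are mostly small.
print(reuse_distances([0, 1, 2, 2, 1, 0]))           # [0, 1, 2]
```

A policy (or a profiler feeding one) could look at where this histogram masses: a spike at one large value says "loop of about that many pages, LRU will thrash below that size"; mass near zero says "stacklike, LRU is fine"; a flat spread says "effectively random, expect misses proportional to the uncached fraction."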
(Frequency-based replacement usually doesn't work well at all, but there are a few things it does work well for, like programs whose locality is dominated by a huge hash table with very hot and not-hot parts at the VM page granularity. Most don't, though, because if you have hot entries scattered pseudo-randomly through an array, it's likely that most pages will contain at least one hot entry. From the point of view of virtual memory, a typical hash table is usually pretty hot all over if any of it is.)
In hard cases, like programs that loop (or don't) through more than a memory-full of data, you want to keep information between runs of the program, so you notice things like whether the program usually just reads a bunch of data once and exits without iterating over it again, or typically goes back through the data two or three times or more.
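A minimal sketch of that cross-run memory might look like the following. Everything here is hypothetical (the file name, the keying by program name, and the "passes over the data" summary are all invented for illustration): each run records how many times it swept its data, and the next run consults the average to decide, say, whether to favor LRU or an MRU-like policy for that process.

```python
import json, os

HINTS_PATH = "replacement_hints.json"  # hypothetical location for per-program hints

def _load_hints():
    if os.path.exists(HINTS_PATH):
        with open(HINTS_PATH) as f:
            return json.load(f)
    return {}

def record_run(program, passes_over_data):
    """After a run, remember how many times this program swept its data."""
    hints = _load_hints()
    entry = hints.setdefault(program, {"runs": 0, "total_passes": 0})
    entry["runs"] += 1
    entry["total_passes"] += passes_over_data
    with open(HINTS_PATH, "w") as f:
        json.dump(hints, f)

def expected_passes(program):
    """Average passes over the data in past runs, or None with no history."""
    entry = _load_hints().get(program)
    if not entry or entry["runs"] == 0:
        return None  # no history: fall back to a default policy
    return entry["total_passes"] / entry["runs"]
```

A replacement policy consulting this would treat `expected_passes(...) == 1` ("reads once and exits") as a cue to recycle those pages aggressively, while 2 or more passes means the data is coming back and is worth keeping resident if it fits.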