@georgefst opened this issue 11 months ago
> As was answered in many other comments, the speedup is not necessarily small.
Fair enough. My hunch is that the trade-off is worth it, but admittedly I'd have to collect some stats to be sure. If we did do per-package caching like #9360, then I'd expect incremental calls to always be significantly faster (except in an environment with implausibly frequent cache invalidation). But as has been discussed in various linked threads, this isn't easy, and #9422 is mostly an adequate substitute.
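To make that concrete, here is a rough Haskell sketch of the per-package idea; it is not cabal's actual code. Results are cached keyed by package name, so only packages we haven't seen before cost an external call. The `pkg-config --modversion` invocation is just a stand-in for whatever query the batch call wraps, `queryVersionCached` is a hypothetical name, and a real implementation along the lines of #9360 would presumably persist the cache on disk rather than in an `IORef`.

```haskell
import qualified Data.Map.Strict as Map
import Data.IORef (IORef, newIORef, readIORef, modifyIORef')
import System.Process (readProcess)
import Control.Exception (SomeException, try)

-- Cache of query results, keyed per package (illustrative; in-memory only).
type Cache = IORef (Map.Map String (Maybe String))

newCache :: IO Cache
newCache = newIORef Map.empty

-- Look up one package, consulting the cache first so repeated or
-- incremental invocations only pay for packages we haven't seen yet.
queryVersionCached :: Cache -> String -> IO (Maybe String)
queryVersionCached cacheRef pkg = do
  cache <- readIORef cacheRef
  case Map.lookup pkg cache of
    Just hit -> pure hit  -- cache hit: no external process spawned
    Nothing  -> do
      -- One package per call: a broken package costs us a single failed
      -- query instead of invalidating a whole batched result.
      r <- try (readProcess "pkg-config" ["--modversion", pkg] "")
             :: IO (Either SomeException String)
      let version = either (const Nothing) (Just . takeWhile (/= '\n')) r
      modifyIORef' cacheRef (Map.insert pkg version)
      pure version
```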
Could we consider just dropping the batch query completely? This was asked in https://github.com/haskell/cabal/pull/9134#issuecomment-1645351542 and https://github.com/haskell/cabal/pull/9134#issuecomment-1791857630. AFAICT all it gives us is a fairly small speedup on some systems, while doubling the time for anyone with any broken packages (which is common: #8930) and adding complexity.
(It would be even better if we cached results on a per-package basis, as in #9360.)
Originally posted by @georgefst in https://github.com/haskell/cabal/issues/9391#issuecomment-1816366109
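For context on the "doubling the time" point above, the usual shape of a batched query with a per-package fallback looks roughly like the sketch below (again illustrative, not the actual cabal code; `queryAll` and the `pkg-config --modversion` call are stand-ins). When any package in the batch is broken, the single batched call fails and every package is then queried individually, so the broken-package path pays for both passes.

```haskell
import System.Process (readProcess)
import Control.Exception (SomeException, try)

-- One package per call; a failure only affects this package.
queryOne :: String -> IO (Maybe String)
queryOne pkg = do
  r <- try (readProcess "pkg-config" ["--modversion", pkg] "")
         :: IO (Either SomeException String)
  pure (either (const Nothing) (Just . takeWhile (/= '\n')) r)

-- Batch first, then fall back per package if the batch fails.
queryAll :: [String] -> IO [Maybe String]
queryAll pkgs = do
  batched <- try (readProcess "pkg-config" ("--modversion" : pkgs) "")
               :: IO (Either SomeException String)
  case batched of
    -- Happy path: one process spawn, one line of output per package.
    Right out -> pure (map Just (lines out))
    -- Any broken package fails the whole batch; the fallback then
    -- repeats all the work package by package.
    Left _    -> mapM queryOne pkgs
```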