To compress the chunk lists in the entries of the files cache, the ChunkIndex needs to actually have entries for all of those chunks.
But that is not always the case: e.g. if the last archive was deleted and the repo was compacted, some chunks might be gone from the repo and from the index.
But if the client still has the files cache that was built for that archive, some files in it might refer to chunks that don't exist any more. So I guess we need to drop those files from the files cache in that case. Most other files in the files cache will usually still be OK, since they were unchanged and other archives in the repo still reference their chunks.
Guess that is still faster than invalidating the files cache and rebuilding it from the latest archive (of that series) we have in the repo (and that rebuild only works if the user names archives the series way anyway).
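
A minimal sketch of that pruning step, assuming the files cache can be treated as a mapping from path to a list of chunk ids and the ChunkIndex as a set-like container of the chunk ids still present in the repo (both are simplifications for illustration, not borg's actual internal API):

```python
def prune_files_cache(files_cache: dict[str, list[bytes]],
                      chunk_index: set[bytes]) -> dict[str, list[bytes]]:
    """Keep only files cache entries whose chunks all still exist.

    files_cache: hypothetical mapping path -> list of chunk ids
    chunk_index: hypothetical set of chunk ids known to the ChunkIndex
    """
    kept = {}
    for path, chunk_ids in files_cache.items():
        if all(cid in chunk_index for cid in chunk_ids):
            # all referenced chunks exist, entry stays usable (and compressible)
            kept[path] = chunk_ids
        # else: some chunk vanished (e.g. last archive deleted + repo
        # compacted), so this entry gets dropped instead of compressed
    return kept
```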