mungojam closed this issue 4 years ago
I've now read the documentation more carefully and I realise that the local cache is fine and won't re-download packages even if they have no-store set.
Therefore the only remaining benefit is CloudFront caching, which is of limited benefit in practice for us.
I'm going to close this as I don't think the need is great enough.
That's correct; the no-store would be metadata on the file in S3, used outside of Sleet.
Sleet itself keeps a local cache for the operation it is doing. Unless a package is being removed or updated, the nupkg/nuspec will be left alone on S3.
Sleet doesn't do any check to see whether the package being pushed is the same as the one already on the feed, so if you are continually uploading the same packages you might benefit from additional checks/caching.
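For illustration, such a check might look something like this. This is a hypothetical sketch, not something Sleet does today; the method name and the assumption that the feed stores a base64-encoded SHA-512 hash per package (the way NuGet catalog entries do) are mine:

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

// Hypothetical pre-push check: returns true when the local .nupkg is
// byte-identical to the copy already on the feed, so the upload can be
// skipped entirely.
static bool IsAlreadyOnFeed(string localNupkgPath, string feedPackageHashBase64)
{
    using var stream = File.OpenRead(localNupkgPath);
    using var sha512 = SHA512.Create();
    var localHash = Convert.ToBase64String(sha512.ComputeHash(stream));
    return localHash == feedPackageHashBase64;
}
```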
Sleet disables caching for all uploads, including nupkg and nuspec files, even though in general those should be immutable:
https://github.com/emgarten/Sleet/blob/d19f2eaf4ae02e6ab1262462312342d490afaa62/src/SleetLib/FileSystem/AmazonS3FileSystemAbstraction.cs#L138
Could we add an option to specify the cache duration for these file types?
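For illustration, here is a sketch of what such an option could drive when writing to S3 with the AWS SDK for .NET. The helper name and the exact max-age value are placeholders of mine, not current Sleet behavior:

```csharp
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

// Sketch only: upload a feed file with a Cache-Control header chosen by
// file type, instead of no-store for everything.
static async Task UploadWithCachePolicyAsync(
    IAmazonS3 client, string bucket, string key, string filePath)
{
    // nupkg/nuspec files never change for a given package version, so a
    // long cache lifetime is safe; index files must stay uncached so feed
    // updates are visible immediately.
    var isImmutable = key.EndsWith(".nupkg") || key.EndsWith(".nuspec");

    var request = new PutObjectRequest
    {
        BucketName = bucket,
        Key = key,
        FilePath = filePath,
        Headers =
        {
            CacheControl = isImmutable
                ? "public, max-age=2592000, immutable"
                : "no-store"
        }
    };

    await client.PutObjectAsync(request);
}
```

The max-age of 30 days is arbitrary; since a published package version never changes, even a much longer value would be safe.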
~~I understand that with .NET Core 3 there is no longer a cache of all downloaded packages. Instead there is an HTTP cache, a temporary cache and a global packages cache.~~
Also, even with local caches, specifying a longer cache duration lets us take advantage of CloudFront caching for these files without caching the index files at all.
I haven't looked into the Azure treatment as I'm not too familiar with it yet.