jonashaag opened this issue 2 years ago
There are two mirroring modes: full mirror and proxy.
In the proxy mode, we forward the repodata.json from upstream, but we download the requested packages and cache them on the server.
Streaming repodata.json is almost always a bad idea; it's much bigger than the gzip-compressed one. We pre-compute the .gz file, but you can also configure nginx to do the compression on the fly.
This is where the .gz and .bz2 files are created:
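The linked code isn't reproduced here, but the pre-computation amounts to roughly the following sketch (the paths and function name are illustrative; quetz goes through its pkgstore abstraction rather than raw filesystem paths):

```python
import bz2
import gzip
from pathlib import Path

def precompute_compressed_repodata(subdir: Path) -> None:
    """Write repodata.json.gz and repodata.json.bz2 next to repodata.json.

    Illustrative sketch only: quetz writes these through its pkgstore
    abstraction, not via direct filesystem paths like this.
    """
    raw = (subdir / "repodata.json").read_bytes()
    # A pre-computed .gz lets nginx/clients skip on-the-fly compression.
    (subdir / "repodata.json.gz").write_bytes(gzip.compress(raw, compresslevel=9))
    # conda clients may also request the .bz2 variant.
    (subdir / "repodata.json.bz2").write_bytes(bz2.compress(raw, compresslevel=9))
```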
Thanks!
> cache them on the server.

server = pkgstore? What do you think about an additional layer of caching/storage for remote pkgstores like S3?
> This is where the .gz and .bz2 files are created:

It doesn't seem to be used for repodata.json though.
> What do you think about an additional layer of caching/storage for remote pkgstores like S3?
I don't see the point? What would be the point? Do you think that for very small files the redirect is the bottleneck?
> Do you think that for very small files the redirect is the bottleneck?
Yes. Will do some benchmarking on this.
Just FYI, the pre-authentication is usually computed completely on the quetz side (typically a signature over the request metadata, computed with the authentication token), so no extra round-trip is needed.
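For illustration, this is how it works with boto3 for S3: the pre-signed URL is computed locally from the credentials and the request metadata, with no request to S3 (bucket and key names here are made up):

```python
import boto3

s3 = boto3.client("s3")

# The signing happens entirely client-side: boto3 builds the canonical
# request and signs it with the secret key. No network round-trip to S3.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-quetz-pkgstore", "Key": "mychannel/linux-64/repodata.json"},
    ExpiresIn=3600,  # URL validity in seconds
)
print(url)  # a client can GET this URL directly
```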
I am interested in seeing a benchmark. In general, I'd like to avoid as much as possible having Python "touch" static files. Static files should be routed through nginx or S3 / GCS etc.
Here are a bunch of timings with a GCS pkgstore, from a home WiFi connection (each the fastest of ~10 tries):

- Package from Quetz via redirect (curl -L, 1 HTTP + 1 HTTPS request): 460 ms
- repodata.json from Quetz (180 K uncompressed, streamed from GCS): 600 ms

So, GCS overhead seems to be ~250 ms, and the overhead from the redirect roundtrip seems to be another 110 ms. (Not sure how curl needs 110 ms to process the redirect?!) So there is a "budget" of 380 ms for Quetz to serve packages directly without a redirect. (270 ms if you ignore the 110 ms spent in curl.)

Also, the way repodata.json is served from GCS (Quetz streaming it to the client) seems really slow.
Edit: GCS bucket was in US while I'm in EU. I did another test with GCS in EU, this removes ~ 150 ms. I also tried adding Cloud CDN in front, which shaves off another 50 ms. So the budget shrinks to 230/180 ms. (Or 120/70 ms with the curl overhead removed.)
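Roughly how one can reproduce this kind of measurement in Python (the URL is a placeholder; each try opens a fresh connection, like repeated curl -L invocations):

```python
import time

import requests

URL = "https://quetz.example.com/get/mychannel/noarch/repodata.json"  # placeholder

def fastest(url: str, tries: int = 10) -> float:
    """Fastest wall-clock time over several tries, like repeated curl -L runs."""
    best = float("inf")
    for _ in range(tries):
        start = time.perf_counter()
        response = requests.get(url, allow_redirects=True)  # follow the redirect
        response.raise_for_status()
        best = min(best, time.perf_counter() - start)
    return best

print(f"fastest of 10 tries: {fastest(URL) * 1000:.0f} ms")
```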
> It doesn't seem to be used for repodata.json though.

Never mind, it's done with add_temp_static_file. Looks like we aren't running the task correctly.
It's quite possible that there are issues with the proxy mode... I would have to look into it more deeply.
Interesting findings re. timing.
Tbh I am less concerned about small files than about large files, and that's where I think they really should not be served through Python. One could argue that small files can be served through Python and everything else redirected, but it seems like a complication.
If you have 5 (or more!) parallel downloads, this should only give a tiny hit on the overall picture ...
For S3 we already have support in powerloader btw (to natively pre-sign URLs on the client side): https://github.com/mamba-org/powerloader/blob/effe2b7e1f555616e4e4c877648658d1e6c89ded/src/mirrors/s3.cpp#L239-L244
For GCS the algorithm looks extremely similar, so we could also add support for gcs:// mirrors.
https://cloud.google.com/storage/docs/access-control/signing-urls-manually
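With the official Python client this signing also happens locally, e.g. (bucket and object names are made up; this is the same V4 algorithm the page above describes):

```python
from datetime import timedelta

from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-quetz-pkgstore").blob("mychannel/linux-64/repodata.json")

# V4 signing is computed locally from the service-account key; there is
# no round-trip to GCS when generating the URL.
url = blob.generate_signed_url(
    version="v4",
    expiration=timedelta(minutes=15),
    method="GET",
)
print(url)
```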
That would remove the initial redirect roundtrip -- but force you to distribute S3 / GCS credentials to users.
> Why don't we use redirects for repodata.json{,.gz}?
> If you have 5 (or more!) parallel downloads, this should only give a tiny hit on the overall picture ...
Likely. Will do some testing on this as well :)
IIUC the mirroring code correctly, downloading and caching works as follows (assuming you're using a GCP backend):

Special case repodata.json (sketched in code below):

- Try to get repodata.json.gz from the pkgstore and stream that to the client.
- If no .gz exists, stream repodata.json from the pkgstore to the client.

Questions:

- Where are the repodata.json.gz files coming from?
- Why don't we use redirects for repodata.json{,.gz}?
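To make the special case concrete, here is a minimal FastAPI sketch of the flow as I read it (a local directory stands in for the pkgstore; none of this is quetz's actual code):

```python
from pathlib import Path

from fastapi import FastAPI, HTTPException
from fastapi.responses import StreamingResponse

app = FastAPI()
PKGSTORE = Path("/tmp/pkgstore")  # stand-in for quetz's pkgstore abstraction

@app.get("/get/{channel}/{subdir}/repodata.json")
def repodata(channel: str, subdir: str) -> StreamingResponse:
    # 1. Prefer the pre-compressed repodata.json.gz if the pkgstore has it.
    gz = PKGSTORE / channel / subdir / "repodata.json.gz"
    if gz.exists():
        return StreamingResponse(
            gz.open("rb"),
            media_type="application/json",
            # The client's HTTP library decompresses this transparently.
            headers={"Content-Encoding": "gzip"},
        )
    # 2. Otherwise stream the uncompressed repodata.json.
    plain = PKGSTORE / channel / subdir / "repodata.json"
    if plain.exists():
        return StreamingResponse(plain.open("rb"), media_type="application/json")
    raise HTTPException(status_code=404, detail="repodata.json not found")
```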