openSUSE / zypper

World's most powerful command line package manager
http://en.opensuse.org/Portal:Zypper

RFE: Better use of connection pooling and background tasks #407

Open · Firstyear opened this issue 3 years ago

Firstyear commented 3 years ago

During a zypper refresh --force the following behaviours are observed:

This is for a single repository; when there are multiple repositories, this behaviour is repeated for each one. In an environment with high latency, such as Australia (which can be simulated with sudo tc qdisc add dev eth0 root netem delay 350ms), this takes a large amount of time, sometimes more than 30s on a fresh Tumbleweed machine.

A number of improvements are possible:

- Re-use and pool connections, especially to unique locations, to reduce connection opens and TLS handshakes.
- Download all content before doing any CPU work, for all repositories.
- Process repository metadata with multiple threads to improve parallelism.

These changes will greatly improve latency and the user experience in high-latency areas, but will also improve the experience in low-latency areas. They will also help mirror operators, due to reduced connection counts and durations.

Reference: https://github.com/Firstyear/opensuse-proxy-cache/blob/main/README.md#potential-areas-of-improvement-for-zypper

bzeller commented 3 years ago

> Re-use and pool connections, especially to unique locations, to reduce connection opens and TLS handshakes.

So, as mentioned in #409, the connection pooling is done by libcurl; we do not have much influence on it. libcurl keeps an internal list of connections that it recycles, and drops them if they are not used often enough.
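
For illustration, that reuse is tied to the handle you keep alive; a minimal libcurl sketch (placeholder URLs, not zypper code):

#include <curl/curl.h>

// Minimal sketch of libcurl's connection reuse: requests made through the
// same easy handle share its connection cache, so the second transfer to
// the same host can skip the TCP connect and TLS handshake.
int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    const char *urls[] = {
        "https://download.example.org/repodata/repomd.xml",     // placeholder
        "https://download.example.org/repodata/primary.xml.gz", // placeholder
    };
    for (const char *url : urls) {
        curl_easy_setopt(curl, CURLOPT_URL, url);
        curl_easy_perform(curl);   // reuses the cached connection if possible
    }
    curl_easy_cleanup(curl);       // this is what drops the cached connections
    curl_global_cleanup();
    return 0;
}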

> Download all content before doing any CPU work, for all repositories.

This does not make much sense; currently we do that on a per-repository basis: first download all the data, then calculate the metadata caches etc. However, our roadmap now contains the plan to take an async approach, i.e. download multiple repositories asynchronously and, as soon as data is available, start the CPU work on it. That means we do the work as early as possible and in parallel.
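
A rough sketch of that pipelining idea with plain std::async (downloadMetadata and buildSolvCache are hypothetical stand-ins for the real libzypp steps):

#include <future>
#include <string>
#include <vector>

// Hypothetical stand-ins for the real download and solv-cache-build steps.
std::string downloadMetadata(const std::string &repo) { return repo + ".xml"; }
void buildSolvCache(const std::string &data) { (void)data; }

int main() {
    std::vector<std::string> repos = {"repo-oss", "repo-update"};
    std::vector<std::future<void>> pipelines;
    for (const auto &repo : repos) {
        // Each repo runs its download -> cache-build pipeline independently,
        // so CPU work starts as soon as that repo's data is available.
        pipelines.emplace_back(std::async(std::launch::async, [repo] {
            buildSolvCache(downloadMetadata(repo));
        }));
    }
    for (auto &p : pipelines)
        p.get();   // wait for all pipelines to finish
}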

> Process repository metadata with multiple threads to improve parallelism.

See the previous point; there is no need to use threads for it, because the metadata is processed by a small external process that generates the solv caches (repo2solv). So we can just execute it in parallel for each repodata set that has finished downloading.
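
Roughly like this (a fork/exec sketch; redirecting repo2solv's stdout into the .solv file is omitted, and the cache paths are placeholders):

#include <string>
#include <sys/wait.h>
#include <unistd.h>
#include <vector>

// Sketch: one repo2solv child process per finished repodata directory,
// all running in parallel, reaped at the end. Output handling omitted.
int main() {
    std::vector<std::string> repodirs = {
        "/var/cache/zypp/raw/repo-oss",      // placeholder cache paths
        "/var/cache/zypp/raw/repo-update",
    };
    std::vector<pid_t> children;
    for (const auto &dir : repodirs) {
        pid_t pid = fork();
        if (pid == 0) {                      // child: become repo2solv
            execlp("repo2solv", "repo2solv", dir.c_str(), (char *)nullptr);
            _exit(127);                      // exec failed
        }
        children.push_back(pid);
    }
    for (pid_t pid : children)
        waitpid(pid, nullptr, 0);            // wait for all children
    return 0;
}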

Firstyear commented 3 years ago

>> Re-use and pool connections, especially to unique locations, to reduce connection opens and TLS handshakes.
>
> So, as mentioned in #409, the connection pooling is done by libcurl; we do not have much influence on it. libcurl keeps an internal list of connections that it recycles, and drops them if they are not used often enough.

Evidence shows that this isn't currently working, though.

>> Download all content before doing any CPU work, for all repositories.
>
> This does not make much sense; currently we do that on a per-repository basis: first download all the data, then calculate the metadata caches etc. However, our roadmap now contains the plan to take an async approach, i.e. download multiple repositories asynchronously and, as soon as data is available, start the CPU work on it. That means we do the work as early as possible and in parallel.

>> Process repository metadata with multiple threads to improve parallelism.
>
> See the previous point; there is no need to use threads for it, because the metadata is processed by a small external process that generates the solv caches (repo2solv). So we can just execute it in parallel for each repodata set that has finished downloading.

So long as the async behaviour is planned, then that's good. :)

bzeller commented 3 years ago

>> So, as mentioned in #409, the connection pooling is done by libcurl; we do not have much influence on it. libcurl keeps an internal list of connections that it recycles, and drops them if they are not used often enough.
>
> Evidence shows that this isn't currently working, though.

That is highly dependent on the list of mirrors that is returned for your files, I guess. IIRC libcurl only keeps about 10 open connections, and drops one when a new one comes in. If the mirrors you get for the files are in a bad order, you almost never recycle connections. Having just one mirror list for each repo would probably improve that.
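
If the cache simply turns out to be too small for the number of mirrors, libcurl would let us enlarge it on the multi handle; a sketch, not current zypper code:

#include <curl/curl.h>

// Sketch: enlarge libcurl's connection cache so connections to more
// mirrors survive between transfers instead of being evicted.
int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURLM *multi = curl_multi_init();
    curl_multi_setopt(multi, CURLMOPT_MAXCONNECTS, 32L);           // cache size
    curl_multi_setopt(multi, CURLMOPT_MAX_TOTAL_CONNECTIONS, 16L); // hard cap
    // ... add easy handles and drive transfers as usual ...
    curl_multi_cleanup(multi);
    curl_global_cleanup();
    return 0;
}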

>> See the previous point; there is no need to use threads for it, because the metadata is processed by a small external process that generates the solv caches (repo2solv). So we can just execute it in parallel for each repodata set that has finished downloading.
>
> So long as the async behaviour is planned, then that's good. :)

:) It's all about improving speed atm. :)

Firstyear commented 3 years ago

Well, if it's all about speed, then getting things to work in high-latency areas is probably a really good test for you. :)

Firstyear commented 3 years ago

> That is highly dependent on the list of mirrors that is returned for your files, I guess. IIRC libcurl only keeps about 10 open connections, and drops one when a new one comes in. If the mirrors you get for the files are in a bad order, you almost never recycle connections. Having just one mirror list for each repo would probably improve that.

But ... it's not? Even during a metadata refresh, we can see libcurl continuously dropping and opening connections to dl.o.o, so something is wrong here?

bzeller commented 3 years ago

We will look into it once we get back to the code. But if you have experience in C++, feel free to poke around in the code as well. We are happy to accept contributions. ;)

Firstyear commented 3 years ago

I have no experience with C++, so I'm not able to contribute besides reports, analysis and ideas. Sorry about that.

bzeller commented 3 years ago

> I have no experience with C++, so I'm not able to contribute besides reports, analysis and ideas. Sorry about that.

No worries, that is already a big help for us! I just thought it was worth a try :)

peterdd commented 6 months ago

Maybe related:

I have problems using zypper over a slow mobile connection (throttled by the mobile provider to ~10 KB/s), via a personal phone hotspot:

Requesting the file manually with curl tells me the file is around 1864k, but judging from the network traffic, zypper seems to have downloaded much more (15 MB+) within 30 min and still has not successfully downloaded http://download.opensuse.org/update/leap/15.5/sle/repodata/09e29141fcee3c18c54560ad62262de5a251be10394908ba2f9d051c1fd59080-deltainfo.xml.gz

My assumption is that zypper tries to download this file over and over but never finishes it due to the slow connection?

The observed network traffic stopped when I aborted the zypper command, so it really is zypper/curl.

tennis:/home/vagrant # zypper if cronie
Retrieving repository 'Update repository of openSUSE Backports' metadata ..............................................................................................[done]
Building repository 'Update repository of openSUSE Backports' cache ...................................................................................................[done]
Retrieving repository 'Update repository with updates from SUSE Linux Enterprise 15' metadata ........................................................................[error]
Repository 'Update repository with updates from SUSE Linux Enterprise 15' is invalid.
[repo-sle-update|http://download.opensuse.org/update/leap/15.5/sle/] Valid metadata not found at specified URL
History:
 - Timeout exceeded when accessing 'http://download.opensuse.org/update/leap/15.5/sle/repodata/dc4309ed85be5347524bbe0e5b08625a5762d2d1a0afd71b0d065ea2c4c90d47-primary.xml.gz'.

Please check if the URIs defined for this repository are pointing to a valid repository.
Warning: Skipping repository 'Update repository with updates from SUSE Linux Enterprise 15' because of the above error.
Retrieving repository 'Update Repository (Non-Oss)' metadata ..........................................................................................................[done]
Building repository 'Update Repository (Non-Oss)' cache ...............................................................................................................[done]
Some of the repositories have not been refreshed because of an error.
Loading repository data...
Reading installed packages...

Information for package cronie:
-------------------------------
Repository     : Main Repository
...

Using curl with the -C option downloaded the file without problems and could continue if the download was interrupted.

tennis:/home/vagrant # curl -L -C - http://download.opensuse.org/update/leap/15.5/sle/repodata/09e29141fcee3c18c54560ad62262de5a251be10394908ba2f9d051c1fd59080-deltainfo.xml.gz -o xml.gz
** Resuming transfer from byte position 303104
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  3 1568k    3 48969    0     0   8708      0  0:03:04  0:00:05  0:02:59  7616^C
tennis:/home/vagrant # curl -# -L -C - http://download.opensuse.org/update/leap/15.5/sle/repodata/09e29141fcee3c18c54560ad62262de5a251be10394908ba2f9d051c1fd59080-deltainfo.xml.gz -o xml.gz
#############################################                                                                                                                           27.4%^C
tennis:/home/vagrant # zypper if cronie
Retrieving repository 'Update repository with updates from SUSE Linux Enterprise 15' metadata ----------------------------------------------------------------------------[\]^C
Trying to exit gracefully...
tennis:/home/vagrant # curl -# -L -C - http://download.opensuse.org/update/leap/15.5/sle/repodata/09e29141fcee3c18c54560ad62262de5a251be10394908ba2f9d051c1fd59080-deltainfo.xml.gz -o xml.gz
############################################################                                                                                                            36.3%^C
tennis:/home/vagrant # curl -L -C - http://download.opensuse.org/update/leap/15.5/sle/repodata/09e29141fcee3c18c54560ad62262de5a251be10394908ba2f9d051c1fd59080-deltainfo.xml.gz -o xml.gz
** Resuming transfer from byte position 692224
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 1188k  100 1188k    0     0   7795      0  0:02:36  0:02:36 --:--:--  8475
tennis:/home/vagrant #
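
For reference, curl's -C - corresponds to libcurl's CURLOPT_RESUME_FROM_LARGE; a minimal sketch of the same resume logic (output file name is a placeholder):

#include <curl/curl.h>
#include <cstdio>
#include <sys/stat.h>

// Sketch of what `curl -C -` does: resume from the size of the partial
// file already on disk, appending the rest to it.
int main() {
    const char *path = "xml.gz";               // placeholder output file
    struct stat st;
    curl_off_t offset = (stat(path, &st) == 0) ? st.st_size : 0;

    FILE *out = std::fopen(path, "ab");        // append to the partial file
    CURL *curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL,
        "http://download.opensuse.org/update/leap/15.5/sle/repodata/"
        "09e29141fcee3c18c54560ad62262de5a251be10394908ba2f9d051c1fd59080-deltainfo.xml.gz");
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
    curl_easy_setopt(curl, CURLOPT_RESUME_FROM_LARGE, offset);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, out); // default callback writes to FILE*
    CURLcode rc = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    std::fclose(out);
    return rc == CURLE_OK ? 0 : 1;
}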

peterdd commented 6 months ago

Never mind; looking at http://download.opensuse.org/update/leap/15.5/sle/repodata/ I learned it probably also tries to download *-primary.xml.gz, which is currently 75 MB.

This really sucks, as zypper does not show the sizes of the required repo metadata files and wastes the time of users who do not have a fast, unlimited internet connection everywhere. :-(

peterdd commented 6 months ago

I got a workaround using the --no-refresh option:

zypper --no-refresh in cronie
Loading repository data...
Reading installed packages...
Resolving package dependencies...

The following 3 NEW packages are going to be installed:
  cron cronie mailx

3 new packages to install.
Overall download size: 454.8 KiB. Already cached: 0 B. After the operation, additional 862.2 KiB will be used.

Backend:  classic_rpmtrans
Continue? [y/n/v/...? shows all options] (y): y
Retrieving: mailx-12.5-3.3.1.x86_64 (Main Repository)                                                                                                    (1/3), 320.4 KiB    
Retrieving: mailx-12.5-3.3.1.x86_64.rpm ...................................................................................................................[done (7.1 KiB/s)]
Retrieving: cron-4.2-150400.84.3.1.x86_64 (Main Repository)                                                                                              (2/3),  17.5 KiB    
Retrieving: cron-4.2-150400.84.3.1.x86_64.rpm .............................................................................................................[done (5.5 KiB/s)]
Retrieving: cronie-1.5.7-150400.84.3.1.x86_64 (Main Repository)                                                                                          (3/3), 117.0 KiB    
Retrieving: cronie-1.5.7-150400.84.3.1.x86_64.rpm .........................................................................................................[done (7.3 KiB/s)]

Checking for file conflicts: ..........................................................................................................................................[done]
(1/3) Installing: mailx-12.5-3.3.1.x86_64 .............................................................................................................................[done]
/etc/cron.d/: cannot verify root:root 0755 - not listed in /etc/permissions
/etc/cron.daily/: cannot verify root:root 0755 - not listed in /etc/permissions
/etc/cron.hourly/: cannot verify root:root 0755 - not listed in /etc/permissions
/etc/cron.monthly/: cannot verify root:root 0755 - not listed in /etc/permissions
/etc/cron.weekly/: cannot verify root:root 0755 - not listed in /etc/permissions
(2/3) Installing: cron-4.2-150400.84.3.1.x86_64 .......................................................................................................................[done]
setting /usr/bin/crontab to root:trusted 4755. (wrong permissions 4750)
Updating /etc/sysconfig/cron ...
Created symlink /etc/systemd/system/multi-user.target.wants/cron.service -> /usr/lib/systemd/system/cron.service.
(3/3) Installing: cronie-1.5.7-150400.84.3.1.x86_64 ...................................................................................................................[done]

The download of the rpm files has a progress meter. So my wish is to have this progress meter and file size info also for the repo metadata refresh that is done at the start of a zypper command.

I have now learned that http://download.opensuse.org/update/leap/15.5/sle/repodata/repomd.xml also contains info about the repo metadata file sizes:

...
<data type="deltainfo">
  <checksum type="sha256">09e29141fcee3c18c54560ad62262de5a251be10394908ba2f9d051c1fd59080</checksum>
  <open-checksum type="sha256">86e08f33d94cbf87ea2aa63014f03b899778e087e850ff9a65b67450a23d543a</open-checksum>
  <location href="repodata/09e29141fcee3c18c54560ad62262de5a251be10394908ba2f9d051c1fd59080-deltainfo.xml.gz"/>
  <timestamp>1715931738</timestamp>
  <size>1909744</size>
  <open-size>9913671</open-size>
</data>
...

So this info could also be used by the zypper CLI to show to the user, right?
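
A rough sketch of the idea, just string-matching a local copy of repomd.xml (a real implementation would of course use a proper XML parser):

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

// Sketch: pull the <location href>/<size> pairs out of a local repomd.xml
// so the total metadata download size could be shown up front.
int main() {
    std::ifstream in("repomd.xml");          // assumes a local copy of the file
    std::stringstream ss;
    ss << in.rdbuf();
    std::string xml = ss.str();

    long long total = 0;
    std::string::size_type pos = 0;
    while ((pos = xml.find("<location href=\"", pos)) != std::string::npos) {
        auto start = pos + 16;                       // skip '<location href="'
        auto end = xml.find('"', start);
        auto spos = xml.find("<size>", end);         // compressed size follows
        auto send = xml.find("</size>", spos);
        if (end == std::string::npos || spos == std::string::npos ||
            send == std::string::npos)
            break;                                   // malformed input, bail out

        std::string href = xml.substr(start, end - start);
        long long size = std::stoll(xml.substr(spos + 6, send - spos - 6));
        std::cout << href << " : " << size << " bytes\n";
        total += size;
        pos = send;
    }
    std::cout << "total metadata download: " << total << " bytes\n";
}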

mlandres commented 6 months ago

> So this info could also be used by the zypper CLI to show to the user, right?

Like with zypper -v:

Specified repositories: repo-oss 
Checking whether to refresh metadata for repo-oss
Retrieving: repomd.xml ...........................................................................................[done]
Retrieving: repomd.xml ...........................................................................................[done]
Retrieving: media ......................................................................................[done (137 B/s)]
Retrieving: repomd.xml.asc .............................................................................[done (481 B/s)]
Retrieving: repomd.xml.key .............................................................................[done (988 B/s)]
Retrieving: repomd.xml ...........................................................................................[done]
Retrieving: dc75aa050244909e11a545c03bb7301c475e6131331108c7367d0b06aa426733-appdata.xml.gz .........[done (10.3 MiB/s)]
Retrieving: e0cfd9eadccc8b78842af98cb78c02608cb7f844e8f89bc21fce8cb3588a2cf9-appdata-icons.tar.gz ...[done (21.2 MiB/s)]
Retrieving: 0b97b2b2579518d546e80eeb47331f9a258a715159a5d58bcf70e49e5215be0f-license.tar.gz ......................[done]
Retrieving: ce05d60302efda01fd0719deefcf0e233eb930e2224b5ed0d7bc4b5f81c3adfd-primary.xml.gz .........[done (15.0 MiB/s)]
Retrieving: d9f6fdf009970a045435a2a3606fa002ff17f0736f7a34c9b8c2b07486513e89-susedata.xml.gz ........[done (10.1 MiB/s)]
Retrieving: 0a9c21b7766a15f24812930aebf8250c8bfcfa5cda95d9dafe030a7955fbde27-susedata.de.xml.gz ..................[done]
Retrieving repository 'repo-oss' metadata ........................................................................[done]
Building repository 'repo-oss' cache .............................................................................[done]
Specified repositories have been refreshed.