jneo8 opened this issue 10 months ago
I don't know enough about apt to know why it's choosing the backports version. I'm on Jammy (22.04), but I don't have the backports one showing up, so I can't reproduce this.
$ apt-cache policy freeipmi-tools
freeipmi-tools:
  Installed: (none)
  Candidate: 1.6.9-2ubuntu0.22.04.2
  Version table:
     1.6.9-2ubuntu0.22.04.2 500
        500 http://gb.archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages
     1.6.9-2 500
        500 http://gb.archive.ubuntu.com/ubuntu jammy/main amd64 Packages
From this comment, it looks like when you run a regular apt-get install it doesn't choose the backports one? Any idea why?
I notice from the apt.py code that it adds the argument --option=Dpkg::Options::=--force-confold to the apt-get install command. What does that do, and does that change this behaviour in manual tests?
Additionally, we're not actually capturing the error message here, I think because we're including e.output in the exception message instead of e.stderr, and of course stderr is where the error actually lives. See also.
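A minimal sketch of that difference, using a stand-in failing command rather than apt-get itself (apt-get writes its diagnostics to stderr, so reporting only e.output loses the message):

```python
import subprocess
import sys

def run_and_capture(cmd):
    """Run cmd; on failure return (stdout, stderr) from the exception."""
    try:
        subprocess.run(cmd, check=True, capture_output=True, text=True)
        return None
    except subprocess.CalledProcessError as e:
        # e.output is stdout only; the diagnostic text lives in e.stderr.
        return e.output, e.stderr

# Stand-in for a failing apt-get call: the error goes to stderr, stdout is empty.
out, err = run_and_capture(
    [sys.executable, "-c", "import sys; sys.stderr.write('E: boom\\n'); sys.exit(1)"]
)
```

Formatting the exception with e.output here would produce an empty string, while e.stderr carries the actual error text.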
I don't know enough about apt to know why it's choosing the backports version. I'm on Jammy (22.04), but I don't have the backports one showing up, so I can't reproduce this.
You should be able to reproduce this on focal (maybe using multipass).
As far as I know, the version comes from the function from_apt_cache: the backport version is the first one shown in the output of the command apt-cache show freeipmi-tools, and this function just returns the first version it gets.
The behavior is different from the command line apt install freeipmi-tools.
If we don't intend to install the backport package, it looks like a bug in this library.
I notice from the apt.py code that it adds the argument --option=Dpkg::Options::=--force-confold to the apt-get install command. What does that do, and does that change this behaviour in manual tests?
--force-confold will automatically keep an old conf file when it's been modified, without prompting. It shouldn't ever impact package choice.
You can do this by changing your install.py to:
import apt

repositories = apt.RepositoryMapping()
for repo in repositories:
    if repo.release.endswith("-backports"):
        repositories.disable(repo)

apt.add_package("freeipmi-tools", update_cache=False)
I reproduced the original issue on focal and after adjusting as above, it successfully installed 1.6.4-3ubuntu1.1.
@jneo8 does this cover what you need?
By the way, the default behaviour is that it'll run through the output of apt-cache show and pick the first one that matches the arch, and the version, if specified (if not specified, then only the arch needs to match).
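For illustration, here is a toy model of that first-match rule (not the library's actual code; the stanza ordering mimics the focal apt-cache show output discussed in this thread):

```python
# Toy model of "return the first stanza matching arch (and version, if given)".
# On focal, apt-cache show prints the backports stanza first, so a first-match
# parser returns the backport version even though apt itself would not pick it.
stanzas = [
    {"Version": "1.6.9-2~bpo20.04.1", "Architecture": "amd64"},  # focal-backports
    {"Version": "1.6.4-3ubuntu1.1", "Architecture": "amd64"},    # focal-updates
]

def first_match(stanzas, arch="amd64", version=None):
    for s in stanzas:
        if s["Architecture"] == arch and (version is None or s["Version"] == version):
            return s["Version"]
    return None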
This changes /etc/apt/sources.list, and I don't want to cause another apt-related bug because of this in production.
Instead of disabling the backports repository, is it possible to just let apt have a chance to install the Candidate version?
# It's the apt-cache policy output on focal now
$ sudo apt-cache policy freeipmi-tools
freeipmi-tools:
  Installed: (none)
  Candidate: 1.6.4-3ubuntu1.1
  Version table:
     1.6.9-2~bpo20.04.1 100
        100 http://archive.ubuntu.com/ubuntu focal-backports/main amd64 Packages
     1.6.4-3ubuntu1.1 500
        500 http://archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages
     1.6.4-3ubuntu1 500
        500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages
Below is my workaround to get the Candidate version first and pass it to apt.add_package:
"""Apt helper module for missing features in operator_libs_linux."""
import re
from subprocess import PIPE, CalledProcessError, check_output
from typing import Optional
from charms.operator_libs_linux.v0 import apt
def get_candidate_version(package: str) -> Optional[str]:
"""Get candiate version of package from apt-cache."""
try:
output = check_output(
["apt-cache", "policy", package], stderr=PIPE, universal_newlines=True
)
except CalledProcessError as e:
raise apt.PackageError(f"Could not list packages in apt-cache: {e.output}") from None
lines = [line.strip() for line in output.strip().split("\n")]
for line in lines:
candidate_matcher = re.compile(r"^Candidate:\s(?P<version>(.*))")
matches = candidate_matcher.search(line)
if matches:
return matches.groupdict().get("version")
raise apt.PackageError(f"Could not find candidate version package in apt-cache: {output}")
def add_pkg_with_candidate_version(pkg: str) -> None:
"""Install package with apt-cache candidate version."""
version = get_candidate_version(pkg)
apt.add_package(pkg, version=version, update_cache=False)
I would like to contribute to the apt lib to provide a similar feature to this.
I think I would rather not - this looks quite convoluted.
I'd suggest instead just specifying the exact version that you want to install, which I think the lib already supports? That way a given revision of the charm will be tied to a known good version in the archive.
It does mean the charm will need bumping when the version changes in the archive, but that could be automated quite trivially 😀
Hi @jnsgruk ,
I believe there are two issues here:
1. The apt lib doesn't choose the same version as the apt command in this case. Should the apt lib support the same behavior as the apt command?
2. The version is 1.6.9-2ubuntu0.22.04.2 for jammy and 1.6.4-3ubuntu1.1 for focal, and they are both candidate versions. When referring to "specifying the exact version," are you suggesting we should use the version details from apt-cache policy or directly hard-code the version number in the charm for different series?

I'm suggesting the latter, that you hard code the versions.
This is relatively common in our product charms like MySQL/PostgreSQL, where we pin a given revision of a charm to a revision of the underlying snap to ensure compatibility.
In reality if you have some automation to bump versions and release to charmhub, this is quite low overhead and will give you more confidence that the things going into production have been tested together.
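As a sketch of that pinning approach (the version strings are the ones quoted in this thread; the series-detection mechanism and the exact add_package call are left to the charm):

```python
# Hypothetical per-series pin table for freeipmi-tools. These versions come
# from this thread and would need bumping whenever the archive changes.
PINNED_VERSIONS = {
    "focal": "1.6.4-3ubuntu1.1",
    "jammy": "1.6.9-2ubuntu0.22.04.2",
}

def pinned_version(series: str) -> str:
    """Return the pinned version for an Ubuntu series, or fail loudly."""
    try:
        return PINNED_VERSIONS[series]
    except KeyError:
        raise ValueError(f"no pinned version for series {series!r}") from None

# In the charm this would then be passed along, e.g.:
# apt.add_package("freeipmi-tools", version=pinned_version(series))
```

Failing loudly for an unknown series keeps an untested series/version combination from reaching production silently.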
Shouldn't the apt lib install_package have the same behavior as the apt install command?
Making sure they have the same behavior will reduce the difficulty for users of this library.
Without this, install_package will suddenly give you a different version once a backports package is released.
While pinning a revision to a snap makes sense, pinning to a deb package not so much.
Consider this scenario,
In this case, if the hardware-observer charm were to pin the deb package, a dist-upgrade would be gated on a charm upgrade. Which means that the operator would need to either upgrade the hardware observer charm to a revision that matches the candidate version found in the apt cache policy or selectively only upgrade every other package...
This seems to increase operational complexity by quite a lot, not only to an operator but also to tooling around these operations.
To that end, regardless of the approach chosen by hardware observer, it still seems that apt.add_package should be getting the candidate package as interpreted by apt based on its pin priority policy. If you look at the example above posted by @jneo8, it's not about divergent behavior between apt.add_package and the apt CLI. The fact that add_package chose a package with lower priority is incorrect.
From https://wiki.debian.org/AptConfiguration,
If the packages have the same priority, the package with a higher version number (most recent) wins. If packages have different priorities, the one with the higher priority wins.
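That rule can be sketched as a small selection function. This is a simplification, not apt's real implementation: real apt uses full dpkg version comparison, which the numeric-aware key below only approximates, and it ignores pin-file subtleties.

```python
import re

def version_key(v: str):
    # Crude stand-in for dpkg --compare-versions: split into numeric and
    # non-numeric runs so "1.10" sorts after "1.9". Not a full Debian comparator.
    return [int(t) if t.isdigit() else t for t in re.findall(r"\d+|\D+", v)]

def pick(candidates):
    """candidates: (priority, version) pairs; higher priority wins, then version."""
    return max(candidates, key=lambda c: (c[0], version_key(c[1])))

# The focal table from earlier in this thread: the backport is newer but has
# priority 100, so the focal-updates build at priority 500 should win.
table = [
    (100, "1.6.9-2~bpo20.04.1"),
    (500, "1.6.4-3ubuntu1.1"),
    (500, "1.6.4-3ubuntu1"),
]
```

Under this rule the backport never wins against the priority-500 entries, which matches the Candidate line apt-cache policy reports on focal.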
Yes, I agree that apt.add_package should install the same version as apt-get install / apt install, and it seems like a bug that it doesn't. Looking closer at this library, I think it's trying to be way too clever, and I don't understand a number of the design choices. For example, why doesn't apt.add_package() simply call apt-get install directly and use its logic?
In any case, I think an immediate workaround for @jneo8 is to do what @jnsgruk suggested and pin the version. However, @nishant-dash has some good points about why that's probably not a good idea in general.
I'd say we should make add_package follow the priority logic of the actual apt-get install command and choose the right package. I don't know how much work that is, but our team (@tonyandrewmeyer and I) probably can't spend time on this right now.
An alternative would be to rewrite this package with a v1 version, vastly simplifying it to basically just calling the apt commands and relying on their logic.
An alternative would be to rewrite this package with a v1 version, vastly simplifying it to basically just calling the apt commands and relying on their logic.
This is a good idea. I was never a fan of the complexity this library introduced. I don't know how much work it'd be, but could be a possible "blue" or a job for next cycle if too much.
freeipmi-tools failed to install on focal
Related: https://github.com/canonical/hardware-observer-operator/issues/128
Reproduce: install.py
The apt lib chooses the backports version to install instead of the focal-updates one.
One way for us to resolve this issue is to try to catch the target version from the command, but it would be nicer if the apt lib could support it.