Closed pmderodat closed 4 years ago
Well, in the particular case of alr search, that's a plain bug; native packages should simply be unavailable on an unknown platform (in the current implementation) instead of breaking the search. I've never tested outside Ubuntu/Debian though, which I guess is why you're seeing this.
TL;DR: yes, something like that would be practical. We should think ahead a bit more.
Long version: I decided to start with the most stringent approach, a whitelist, because if a package is listed, it should be safe to use. Also, it's what OPAM does (I didn't check others): https://github.com/ocaml/opam-repository/blob/master/packages/curses/curses.1.0.3/opam.
The problem I see once we move away from a 'tested and known to work' approach is that we risk ending up with seemingly working packages that are broken in practice, and that wouldn't be very Ada-like. Detection in a portable way won't be hassle-free in some cases either.
In general I agree with your suggestions though, as long as we keep a way of having known good native versions (some people may want to be strict about what gets used). In the current impl, a new origin or native kind could do the trick, or we could revamp the native part. This is something to think about.
The problem you point out about maintenance burden is certainly there with the current method. Continuous integration may help there by detecting outdated native packages. It can also help in your suggested approach by detecting broken combinations. In a way this is already happening (e.g. https://github.com/alire-project/alire/blob/master/index/native/alire-index-make.ads) since the platform package is not a fixed version in some cases (ugly).
I think this whole issue, one way or other, will inevitably cause frustration. If going permissive, things will break. If not, things will require platform-specific maintenance.
On windows one can rely on file downloads, for example, though that's not as friendly as a linux package manager. Detecting availability would certainly ease matters.
In any case this is an issue where we should tread carefully, and probably bring in more expertise. The first step to me would be fixing the breakage on unknown platforms while we decide a course of action.
I'm thinking of use cases for this. I see the obvious one of command-line tools, which can provide their version when run. Any others?
Correct me if I’m wrong, but I don’t think this is what opam does: I just checked on my system, opam happily accepts to build curses even though I don’t have the necessary installs (so the build fails). As I said, I think that’s fine (at least by default). I have the feeling that we need to let users decide which mode they want to use: strict (more checks, less freedom to do things) or lax (fewer checks, but you can do more). In this lax mode, the database could still be used to provide suggestions about native packages to install to users.
In general, I have the feeling that the “tested and known to work” approach is really an ideal, not something we can actually reach. For instance, Alire currently doesn’t check the version of native packages, so a tested package could fail to build with versions that precede the ones that were tested. Then even if we added version constraints to native packages (which is naturally going to add even more maintenance burden), I’m sure we could find some other mismatch that can introduce build (or runtime) failures.
Anyway, I agree that frustration will be there no matter what, so let’s make sure it’s as low as possible for everyone. ;-)
I don’t understand when you say:
Continuous integration may help there in detecting outdated native packages. It also can help in your suggested approach by detecting broken combinations.
How so? Having some kind of build farm with all supported OS available? And checking what exactly? Native packages that are outdated because no longer available in newer distribution versions? What kind of broken combinations do you have in mind?
Also:
On windows one can rely on file downloads, for example
How would that work? Are you suggesting to assume that a dependency is available because we find some file in the user’s download directory? That doesn’t sound reliable.
In any case this is an issue where we should tread carefully, and probably bring in more expertise.
I’m all for it, but I’m really not sure how to do that: do you have specific people in mind we should invite to this discussion?
I'm thinking of use cases for this. I see the obvious one of command-line tools, which can provide their version when run.
I don’t get it: what are you referring to?
Correct me if I’m wrong, but I don’t think this is what opam does: I just checked on my system, opam happily accepts to build curses even though I don’t have the necessary installs (so the build fails)
I didn't go that far, so thanks for pointing that out; I only checked the index format I linked to. So it seems they use that info only when available and operate in a lax mode like you describe.
As I said, I think that’s fine (at least by default).
I'm not a fan of failing by default. But that is a matter of preference that doesn't touch the core issue, on which I agree with you.
I have the feeling that we need to let users decide which mode they want to use: strict (more checks, less freedom to do things) or lax (fewer checks, but you can do more). In this lax mode, the database could still be used to provide suggestions about native packages to install to users.
Agreed. As long as release dependencies are listed at a minimum as 'externally provided' so we can keep track of those.
For instance, Alire currently doesn’t check the version of native packages, so a tested package could fail to build with versions that precede the ones that were tested.
It does when possible, in the sense that some native packages have major versions in their package name. But your point stands for the ones that do not, which is something that made me very uneasy with the current impl. So this refactoring would be a good moment to address that too.
Then even if we added version constraints to native packages (which is naturally going to add even more maintenance burden), I’m sure we could find some other mismatch that can introduce build (or runtime) failures.
It's like that now, and I agree that it's not ideal. For me it's enough if we leave contributors the option to be as precise as reasonably possible. The maintenance burden should be distributed among release maintainers anyway, and it's up to them whether their projects die from abandonment or bit rot.
How so? Having some kind of build farm with all supported OS available? And checking what exactly? Native packages that are outdated because no longer available in newer distribution versions?
That's it, and it's already happening. There are Docker images for distro+version combinations (currently Debian/Ubuntu) in the CI config, and releases are built during testing. So, when native packages change versions, they become unavailable on that platform (because the native package name changed) and their dependents fail to resolve. So at least we are aware of outdated/missing versions. (Output is e.g. https://github.com/alire-project/alr/blob/b354982d3d6214df917ebaf77ca580e325d0d85e/status/gnat-ubuntu-lts.md). The alternative is having a whole bunch of untestable releases, which we are going to have anyway for distros without a maintainer.
What kind of broken combinations do you have in mind?
Not sure what you mean. CI attempts to build the latest release of every project, and reports either success, unresolvable dependencies, or broken compilation. And before you point it out: that won't scale with Shippable's free tier as we add more distros/versions ;-).
On windows one can rely on file downloads, for example
How would that work? Are you suggesting to assume that a dependency is available because we find some file in the user’s download directory? That doesn’t sound reliable.
No, I was thinking of something like the "source download" that you added. One of the recognized file formats could be .exe/.msi installers that install some dependency. Again, this should only be done if the user wants it, and if the thing is not already available, so it still requires some care.
do you have specific people in mind we should invite to this discussion?
I was thinking of the people in AdaCore interested in this project. Fabien, Raphaël. Sorry for the vagueness.
I'm thinking of use cases for this. I see the obvious one of command-line tools, that can be able to provide their version by running them.
I don’t get it: what are you referring to?
Let's take the case of make. Ubuntu, for example, has it under an unversioned package of the same name. Still, you can check that it's available by running make -v, which will additionally give you the version (it is also shown by apt policy make). So the minimal detection recipe you alluded to can identify both availability and a precise version usable for resolution.
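The detection recipe above could be sketched roughly as follows (a hypothetical illustration, not Alire's actual mechanism; the helper names and the version regex are my own assumptions):

```python
# Hypothetical sketch of tool autodetection: probe a command-line tool
# by running it and extracting its version from the output.
import re
import shutil
import subprocess
from typing import Optional

def parse_version(output: str) -> Optional[str]:
    """Extract the first dotted version number from a tool's output."""
    match = re.search(r"\d+\.\d+(?:\.\d+)?", output)
    return match.group(0) if match else None

def detect_tool(tool: str) -> Optional[str]:
    """Return the tool's version if it is on PATH, None otherwise."""
    if shutil.which(tool) is None:
        return None
    out = subprocess.run([tool, "--version"],
                         capture_output=True, text=True).stdout
    return parse_version(out)

# e.g. GNU make prints something like "GNU Make 4.3" on its first line:
print(parse_version("GNU Make 4.3\nBuilt for x86_64-pc-linux-gnu"))  # -> 4.3
```

The parsed version could then feed directly into dependency resolution, just like a versioned platform package name does today.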
I was asking if you have other autodetection cases in mind that don't fit the scenario I described. Your example of ncurses makes me think you're referring to libraries rather than tools. For those, the test would have to be a build using the library, and versions won't be so easy to get.
So I agree we must do something about this, and we can start discussing implementation. But I don't have time right now for more. I know, I write too much :)
Adding two links for future reference:
https://github.com/rust-lang/cargo/issues/1240 Brief discussion about the same problem in cargo.
https://crates.io/crates/libudev A native-library crate. It seems it's a simple wrapper that delegates how to install/detect outside of cargo. Unclear how the native version is related to the cargo version (both 0.2.0 and 0.1.2 depend on the same platform package).
I also played a bit with opam depext, a dedicated command that calls the platform package manager to install whatever is needed by an opam package. It has an interesting discussion too:
https://github.com/ocaml/opam-depext
https://github.com/ocaml/opam/blob/master/doc/design/depexts-plugins
I did my test with opam install lambda-term in an ubuntu:latest docker; lambda-term depends on the native m4 package.
The opam approach brings up an interesting point, which is having a separate, per-platform source of equivalences between opam names and platform package names; that simplifies delegating maintenance. In essence, it allows going forward with resolution and, on user demand, checking/installing the OS packages needed for a particular project (something we could extend by deferring custom installations to the user).
For our use case, that might be a way of knowing that a dependency is needed while deferring how to install it and what versions are available, both for alire index population and for the user experience. It also fits well with the "relaxed" mode we talked about previously (since, worst case, we could do nothing).
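A depext-style equivalence table could look something like this sketch (all package names, platform keys, and the lookup helper are illustrative assumptions, not Alire's or opam's actual format):

```python
# Hypothetical per-platform equivalence table, in the spirit of opam
# depext: map an index-level dependency name to the package name used
# by each platform's package manager. All entries are illustrative.
EXTERNALS = {
    "make": {"debian": "make", "ubuntu": "make", "arch": "make"},
    "ncurses": {"debian": "libncurses-dev", "ubuntu": "libncurses-dev",
                "arch": "ncurses"},
}

def platform_package(dependency: str, platform: str):
    """Return the platform's package name for a dependency, or None
    when no equivalence is known (worst case, per the relaxed mode:
    do nothing and defer installation to the user)."""
    return EXTERNALS.get(dependency, {}).get(platform)

print(platform_package("ncurses", "arch"))     # -> ncurses
print(platform_package("ncurses", "windows"))  # -> None
```

Keeping this table separate from the index itself is what would let per-platform maintainers own their column without touching release descriptions.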
I will try to come up with a proposal that integrates all the ideas we have been discussing in the near future.
After #204 and #222 the original reason for this issue should be addressed, but the discussion here went further, into how to sanely use a platform's native packages and their package managers. That remains an open issue, hence the issue name change.
Fixed in #303 #319
As of today, the handling of native packages only works on Debian or Ubuntu distributions. For instance, on my Arch Linux box, I’m getting a lot of crashes such as:
I don’t think that requiring users to go through all packages in the index to complete package names each time they use a new distributions is a good position. Besides, the current formalism will not apply on package manager-less systems such as Windows. And this forces users to install software through their package managers, while some may want to do manual install from sources (to have recent versions, for instance).
For these reasons, I think the current mechanism to handle native package dependencies should be replaced. What about this instead: assume the dependency is available, and check availability only when trying to install a package that depends on it, or when looking for available packages?
Instead of using a never-exhaustive database of distribution-specific package names (which can vary over time, by the way!), maybe we could use the same trick as configure scripts: try to build a very simple program using the dependency, demonstrating that it is installed (no matter how). We could still have alr, as an optional feature, use the incomplete database to suggest which packages to install on a given distribution, though…
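The configure-script trick could be sketched like this (a rough illustration assuming a C toolchain on PATH; the helper and the ncurses probe program are hypothetical, not part of any tool):

```python
# Hypothetical sketch of the configure-script trick: try to build a
# tiny program that uses the dependency; a successful compile shows it
# is installed, no matter how it got there.
import os
import shutil
import subprocess
import tempfile

def dependency_compiles(probe_source: str, link_flags=()) -> bool:
    """Return True if a C compiler can build the probe program."""
    cc = shutil.which("cc") or shutil.which("gcc")
    if cc is None:
        return False  # no compiler available, so we cannot probe
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "probe.c")
        with open(src, "w") as f:
            f.write(probe_source)
        result = subprocess.run(
            [cc, src, "-o", os.path.join(tmp, "probe"), *link_flags],
            capture_output=True)
        return result.returncode == 0

# Probe for ncurses (header and library), echoing the curses example:
NCURSES_PROBE = "#include <curses.h>\nint main(void) { return 0; }\n"
# dependency_compiles(NCURSES_PROBE, link_flags=("-lncurses",))
```

The same probe could be run lazily, only when resolving a package that actually declares the dependency.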
What do you think?