Closed aindilis closed 2 years ago
Maybe something with the singularity host is broken? This was definitely working before
Yeah, cloud.apptainer.org is down. Is there another way temporarily to source the files it provides?
@jdekarske Any chance you could host the .sif elsewhere? Doesn't seem to pull cleanly anymore.
Install worked fine for me on my branch before I pulled from upstream. (ca https://github.com/jdekarske/planutils/commit/cbe16f4105a03fbd750045a7ad14b586281bc6bf). I see the same error reported on a freshly built docker image. Maybe a newer singularity version has changed something? I recall never getting the image verified or whatever they call it.
The image is actually hosted on this domain https://cloud.sylabs.io/library/jdekarske/default/smtplan
The image pulls fine for me: `singularity pull library://jdekarske/default/smtplan:latest`
Definition is here: https://github.com/KCL-Planning/SMTPlan/pull/14
Any insight on the Singularity > Apptainer update, @FlorianPommerening ?
Once the dust settles, I do intend to get some regression test stuff implemented so we'd be warned about this...
For reference, I was using Singularity 3.8.1. I wasn't aware of the Singularity EOL. Is there a new registry for images?
@haz can you use github registry and CI for hosting images?
@jdekarske No clue, tbh...it's why I reached out to Florian. I know Basel sets things up well for downward. Theirs still works (for now?) and is at a different hub: https://github.com/AI-Planning/planutils/blob/main/planutils/packages/downward/install
I have not heard of cloud.apptainer.org so far. For Fast Downward, we used to use Singularity hub, but they switched to read-only, so now you cannot add new images. The old ones are still hosted; that is why you still see Fast Downward working. But for the last two releases (counting the one last week) we couldn't upload there anymore. Now we just host the sif on the wiki (https://www.fast-downward.org/Releases/22.06 or, if you want a direct link: https://www.fast-downward.org/Releases/22.06?action=AttachFile&do=get&target=fast-downward.sif). All the `singularity pull` or `apptainer pull` stuff is not really doing much. A call to `wget` is enough to get the image.
The downside is that you do not have a nice link anymore, so

```shell
singularity pull aibasel/downward:22.06
./downward.sif
```

becomes

```shell
wget -O fast-downward.sif "https://www.fast-downward.org/Releases/22.06?action=AttachFile&do=get&target=fast-downward.sif"
chmod u+x ./fast-downward.sif
./fast-downward.sif
```
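One side note on the `wget` line: the URL contains `&`, and an unquoted ampersand makes the shell background the command and treat the rest as separate commands, so the URL must be quoted. A minimal sketch of the direct-download route (URL taken from Florian's comment above; the network calls are commented out here):

```shell
set -e
# Quote the URL: unquoted '&' characters would split this into
# several shell commands and background the wget.
URL='https://www.fast-downward.org/Releases/22.06?action=AttachFile&do=get&target=fast-downward.sif'
# wget -O fast-downward.sif "$URL"   # download the ready-made image
# chmod u+x fast-downward.sif        # sif images are directly executable
echo "$URL"
```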
Feels like we should have a formal way to host...is there anything on the GitHub artifact side of things where we could host elements? I don't think the link aesthetics play a role (it's all hidden in the install script), but the link you show is FD-specific.
Not sure I understand the question. If you can host files on GitHub, then yes, this would work to replace Singularity hub. Not sure how happy GitHub is about this use case, but I guess planutils will not create enough traffic that they would care about this.
Hrmz...maybe there is new hub functionality: https://apptainer.org/docs/user/main/library_api.html
Do you mean this one (https://cloud.sylabs.io/library)? The others seem to be either paid enterprise solutions or source code to run something on your own servers. As for the first one, I don't really understand their account model. You seem to get 11GB of storage and 500 build minutes (per month? per year? or overall?). It might be a possible solution, but I'm not sure we need a service like this at all. It doesn't do much more than host the files and download them. We can do that with any server (like the GitHub solution).
@maltehelmert mentioned (I think?) that there may be a way with the new apptainer setup to be closer to the docker end of things. From the site...
...so do you think it's possible just to have docker images set up, host them on dockerhub (under the `aiplanning/` org space), and then pull from there?
Are you saying we can store sif at dockerhub?
Not as such, but Apptainer/Singularity recipe files are no longer needed; there is now a way to directly build images from Docker containers. The apptainer command-line tool has a one-line invocation that downloads from docker hub and builds the sif file. The advantage is that you only need to host on docker hub. One drawback is that it might take more time than just downloading a ready-made sif file, so we're still offering both distribution modes (docker hub and a downloadable sif file) for Fast Downward. But perhaps it's quick and that's not a concern; it should be measured. I heard this from @FlorianPommerening, so hopefully he can tell us more.
I think I figured it out without Florian: `apptainer pull downward.sif docker://aibasel/downward:latest`
(You can omit `downward.sif` and it will choose the name automatically as `downward_latest.sif`.)
It does take a minute or two of building locally, so it's a tradeoff about whether this should be recommended over downloading a sif or not. But if you do it multiple times, the output is cached.
Yes, you can pull from `docker://` as Malte said. I think this would be a nice solution. The additional build times seem OK for planutils because it is a one-time installation.
The amount this simplifies things outweighs the build time for sure! If a nice sif hub comes along, we can start to re-orient the packages that way, but as an interim I think it's perfect to just point dockerhub way.
Thank you all!
Next question, stemming partially from the discussion on security: why don't we ask packages to include their Dockerfile? It won't be used as part of the interaction with planutils, but those of us maintaining things can build / push to the `aiplanning/...` space. Then, any new change can be confirmed (rather than random pulls that may change without our knowledge). We can have trusted orgs, such as `aibasel`, to sidestep it all for simplicity.
Thoughts?
It would be a bit more work accepting new packages, but for users it might give a more trustworthy impression. However, since we are not going to audit the code that goes into the Docker images, we cannot really guarantee that the resulting docker image will not delete all your files. It's a bit like PyPI: everyone can upload there, so users have to trust the individual package authors. With the docker containers under our care, we could guarantee that nothing changes without us knowing. But that would also mean that new releases of planners will not easily show up in planutils. So I'm not completely convinced.
Also intersects with the hope to regression test things for the existing suite of tools...
Ok, ok, deal with your own docker files ;). We'll have to put our faith in those bringing the planner/package to the project.
@jdekarske Long story short -- can you (1) get a Dockerfile set up for smtplan; (2) throw it up on dockerhub; and (3) change the install script to grab it from there?
I think even if we went and validated Dockerfiles (and the source code that they build), many projects will include external source code that someone could perform a switcheroo on at any time. I'm not talking about apt install from standard Ubuntu sources; of course, if we start distrusting that, it's a bottomless pit. But, for example, our build process might include something like wgetting soplex, so why trust the soplex guys more than the planner developers that put stuff on dockerhub?
Well, the original idea is we'd build and push to the `aiplanning/*` dockerhub space, and then it doesn't get re-built until we do it again. But that adds more to "our" plates...yet another reason to avoid ;)
I think I'm with @maltehelmert here. I don't think planutils needs to guarantee that the images are good, but there should at least be a trail of where they were built. If I were to manually build and push a change to the smtplan image and it breaks things, you'd have to track me down to potentially fix it. Unless someone had the prior image handy, you wouldn't be able to revert to a working version.
I'd propose planutils builds images that have a dockerfile included and host them on the github container registry. We've been using this workflow for a while and haven't had many issues. (I'd be happy to add the workflow in this repo)
Maybe for more active planners (not 3-year-old smtplan, which hasn't merged my PR :disappointed: ), they can manage their own images.
If not, I'll just create a new repo with only the dockerfile so there is at least some sort of tracking on it.
I think preventing things from breaking with new versions of a docker image is much easier: we just put a version identifier in the planutils recipe (e.g., pull `downward:22.06` instead of `downward:latest`). We have to update the versions from time to time, but changing `22.06` to `23.06` is less effort than rebuilding the images and pushing them. Then the docker images can live wherever the planner authors want, as long as that registry supports versions.
Dockerhub does and I assume the Github registry does as well (this is the first time I heard about it).
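Florian's version-pinning approach might look something like this in a planutils-style install script (the package name and tag here are just illustrative, and the actual pull is commented out since it needs network access):

```shell
set -e
# Pin an explicit docker tag so an upstream "latest" update
# cannot silently change what planutils installs.
VERSION="22.06"                               # bumped manually via PR
IMAGE="docker://aibasel/downward:${VERSION}"
# apptainer pull downward.sif "${IMAGE}"      # real pull needs network
echo "${IMAGE}"
```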
Aye, I like that idea. And merging PRs can be the gatekeeping to make sure that a pull is versioned. I assume the apptainer access can hit a particular versioned dockerhub build?
Dockerhub uses tags, like `22.06` and `latest` in Florian's example. These tags can move -- for example, we could upload a new version (say, `22.06.1`) and change the `22.06` tag to refer to that instead in the future. There is no way around that, but I think it's by design that it's possible to push bugfix releases this way, and I think it's a feature for us rather than a bug. It's up to planner developers to make sure they follow best practices for this.
By analogy, we want `sudo apt install python3.9` to install the latest bugfix release of version 3.9 of Python, and would use this kind of command in scripts or installation instructions for something using Python 3.9, rather than asking people to download, say, 3.9.1 specifically.
@jdekarske I think for now we're going to try and keep it simple and responsibility left with package maintainers. Doesn't mean we can't transition into a planutils-maintained docker registry down the line, but the main devs here are stretched pretty thin.
To that end, any chance you could do a docker build of your smtplan fork and update the `install` script to pull things directly from dockerhub?
@aindilis : Can you have another go? Things should be working since the last version.
@haz Yes, it works great - thanks!
Thanks all for talking through it! Marking as closed.
I stumbled across the same issue and was happy to see how quickly you guys solved it. Unfortunately, I still have the problem with the latest version. I pulled the latest version and set it up as described in the Readme. When I then try to install smtplan, I get:
```
About to install the following packages: smtplan (148M)
  Proceed? [Y/n] Y
Installing smtplan...
FATAL:   Unable to get library client configuration: remote has no library client
Error installing smtplan. Rolling back changes...
rm: cannot remove 'smtplan.sif': No such file or directory
```
Am I missing something here?
Can you confirm that you are updated? In the container: `pip freeze`. On your host: `docker images | grep planutils`.
May be an issue with native -vs- docker usage... @aljoshakoecher , as @jdekarske points out, let us know some more details about your setup and we should be able to help you out.
The problem was in fact on my side; I was not updated... `pip freeze` returned v0.7.5 for planutils -- which is obviously not the latest version. After running docker with both the `--pull` and `--no-cache` arguments, I am now updated and everything is working.
Thank you very much!
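For anyone hitting the same stale-image problem, the rebuild step presumably looks like the following (the image tag `planutils` is an assumption here; the exact invocation depends on how you originally built the container):

```shell
# Hypothetical rebuild of a locally built planutils image:
#   --pull      re-download the base image from the registry
#   --no-cache  ignore cached layers so updated packages are reinstalled
CMD="docker build --pull --no-cache -t planutils ."
# The real build needs docker and network access, so only echo it here.
echo "$CMD"
```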
Can we then decide on a centralized repository to allow people to use for that purpose? Obviously, not forcing anyone :)
Didn't we decide against a repository of sif files? I thought the plan was to build sif files directly from dockerhub (or wherever the planner authors decide to host their docker images). Or did you mean that we should decide on a centralized repository for the Docker images? I don't think this fits the model either: we discussed that for now we don't want to have the Dockerfiles under our control and leave them under the control of the planner authors. Having a centralized Docker registry would mean that someone would have to organize it and decide when to update it.
Yep, I think it's kinda gone to the level of "up to the planutils package maintainer to host it some place".
That said, we could probably offer something under `ai-planning` as an org if folks want to host there. @FlorianPommerening: what's the process of getting an apptainer library up and going? Anything that naturally ties in with GitHub orgs?
I've never done that. We just host the compiled image files by uploading them to our wiki.
That is Singularity hub, but it no longer accepts new submissions. The current alternative would be to compile the Singularity hub code yourself and host the whole system yourself; that is the part I meant with "I've never done that".
We probably should change the installation at some point because the line you quoted will get the 20.06 release. To get the 22.06 release, you'd do

```shell
wget "https://www.fast-downward.org/Releases/22.06?action=AttachFile&do=get&target=fast-downward.sif"
```
Was getting some serious deja vu, and then decided to scroll up and read the conversation from the summer on this issue. I think we're already well on our way to a solution fully discussed :P.
@FlorianPommerening : I think the downward package (and likely others) just need a redirect to the dockerhub (or similar) version, now that apptainer handles it fine.
Yes, that would also work. Using `wget` should be quicker in general because it doesn't have to convert from docker to singularity. But the difference is small, so if going through Docker makes things more uniform, that works as well.
However, going to https://cloud.apptainer.org/auth/tokens times out, as does any other URL from that domain.