I had a call last night with @tgamblin, the creator of Spack at Lawrence Livermore National Laboratory, to talk about IPFS and how Spack might be able to use it. We also had Todd on the Manifest podcast last year: https://manifest.fm/11

From a categorization point of view, Spack is very similar to Homebrew: the default registry is a git repository of Formulas (they call them Specs) that reference external URLs for source code to build from.
My initial thoughts were that it would be easy to replicate the experiment I did in Homebrew (https://github.com/protocol/package-managers/issues/12) for Spack Specs, which Todd agreed would be a good thing to try. He was also interested in storing and indexing the compiled binaries in IPFS; they generate many more of these per spec than Homebrew does, due to the wide matrix of build options and compilers they support for different supercomputer setups.
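To make the binary-indexing idea concrete, here is a minimal sketch in Python. The spec strings loosely follow Spack's `name@version%compiler` shape, but the entries and CIDs are invented placeholders, not Spack's actual format:

```python
# Hypothetical index from fully-resolved build specs to IPFS CIDs.
# Spec strings and CIDs below are made up for illustration.
BINARY_INDEX = {
    "zlib@1.2.11%gcc@8.2.0 arch=linux-rhel7-x86_64": "QmSpecExampleCid1",
    "zlib@1.2.11%intel@19.0.0 arch=linux-rhel7-x86_64": "QmSpecExampleCid2",
}


def gateway_url(spec, gateway="https://ipfs.io"):
    """Resolve a concrete spec to an IPFS gateway URL for its binary."""
    cid = BINARY_INDEX[spec]
    return "{}/ipfs/{}".format(gateway, cid)
```

The point is that the wide build matrix just becomes more index entries; since the binaries are addressed by content, any node, mirror, or gateway can serve them.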
They have a strong interest in reproducibility from a scientific point of view, but also from a practical one, as they often build for 1 million or more cores at a time.
There's an interesting aspect in that their supercomputer networks are cut off from the outside world, although he said that wasn't a primary area of interest. IPFS Cluster may also be of interest in the future.
One point he brought up that I didn't have an answer for was the security aspects (which ports IPFS talks on, and so on), as they handle highly confidential software and everything must be approved by their network admins.
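For when that review happens: an IPFS node's listen addresses are all declared in one place in its config. As a reference point, the go-ipfs defaults look roughly like this (exact entries vary by version, so check `ipfs config show` on the actual build):

```json
"Addresses": {
  "Swarm": [
    "/ip4/0.0.0.0/tcp/4001",
    "/ip6/::/tcp/4001"
  ],
  "API": "/ip4/127.0.0.1/tcp/5001",
  "Gateway": "/ip4/127.0.0.1/tcp/8080"
}
```

The swarm port (4001) is the only one meant to be reachable from outside; the API and gateway bind to localhost by default, and all three can be changed or firewalled.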
He also mentioned the Andrew File System (https://en.wikipedia.org/wiki/Andrew_File_System) as something from years gone by that sounded similar to unixfs:

> The Andrew File System (AFS) is a distributed file system which uses a set of trusted servers to present a homogeneous, location-transparent file name space to all the client workstations. It was developed by Carnegie Mellon University as part of the Andrew Project. Originally named "Vice", AFS is named after Andrew Carnegie and Andrew Mellon. Its primary use is in distributed computing.
Outcomes from the call:
- Todd is going to read up more on IPFS
- I'm going to send over the details of the Homebrew experiment and how I'd add support for making binaries available via IPFS as well
- We're going to arrange another follow-up call
- I'm going to send a pull request to add a spec for installing IPFS to the Spack repository
I also think it'd be very helpful to have an overview document of the parts of IPFS and how they apply to package management, both to get people up to speed before calls and for them to distribute to other team members.