MatrixAI / Forge-Package-Archiving

Apache License 2.0

Roadmap #2

Closed ghost closed 2 years ago

ghost commented 6 years ago

Discussion of nix-prefetch-ipfs

We briefly discussed a nix-expression rewrite system similar to that of the gx package manager for Go. What it would do is rewrite source addresses (sources required to build the Nix package) from a standard URL to an IPFS address where that content is hosted. For example: "https://git.kernel.org/torvalds/t/linux-4.15-rc8.tar.gz" -> "ipfs/QmeF59wRCygGMrEJbLdYS1CyJmA2XowaR9oRX2VJty7nCR", which is a content address of the tarball using the multihash scheme. IPFS itself provides a specific facility to store tar archives, `ipfs tar add`, which parses tarballs into a Merkle DAG structure. We are not yet sure what goes on behind the scenes to parse a tarball into that structure.
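As a rough illustration of the rewrite step (this is a sketch, not the actual tool; the mapping table and function name are hypothetical, and a real nix-prefetch-ipfs would build the mapping by adding the fetched source to IPFS and recording the resulting multihash):

```python
# Hypothetical mapping from upstream source URLs to IPFS content addresses.
URL_TO_IPFS = {
    "https://git.kernel.org/torvalds/t/linux-4.15-rc8.tar.gz":
        "ipfs/QmeF59wRCygGMrEJbLdYS1CyJmA2XowaR9oRX2VJty7nCR",
}

def rewrite_nix_expression(expr: str) -> str:
    """Replace known source URLs in a Nix expression with IPFS addresses."""
    for url, ipfs_path in URL_TO_IPFS.items():
        expr = expr.replace(url, ipfs_path)
    return expr

expr = 'src = fetchurl { url = "https://git.kernel.org/torvalds/t/linux-4.15-rc8.tar.gz"; };'
print(rewrite_nix_expression(expr))
```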

The nix-prefetch-ipfs tool would thus do the following:

More about Forge Package Archiving

There are two things in NixOS that can be pushed to IPFS: build outputs (compiled packages from the binary cache) and build inputs (the sources used to build packages via a Nix expression). Forge Package Archiving looks to move build inputs into IPFS.

HydraCI, TravisCI

We need to look at systems that do continuous integration so we can see how to successfully propagate updates to automatons in the Matrix network. This may also be useful to Forge Package Archiving, as HydraCI continuously builds packages.

Identifying Automatons

Every automaton will have several identifiers:

Transferring Automatons between Machines

We distinguished between two types of automatons.

Nature of packages

A package is a wrapper around some independently created program. Packages should be able to wrap any program (they are language-agnostic). By "program" we specifically mean the program source, not compiled versions of it. To facilitate scalable architectures, Matrix must handle compilation of the program source for the target machine.
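To make the idea concrete (every name here is a hypothetical illustration, not part of any Matrix API), a package could be modelled as a thin, language-agnostic wrapper around program source:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Package:
    """Hypothetical sketch: a language-agnostic wrapper around program source."""
    name: str
    source_url: str    # where the program source lives
    source_hash: str   # content address of the source (e.g. an IPFS multihash)
    build_command: str # how to compile the source

    def build_for(self, target_arch: str) -> str:
        # In a real system this would invoke the build for the target machine;
        # here we only describe the action.
        return f"compile {self.name} from {self.source_hash} for {target_arch}"

pkg = Package(
    name="linux",
    source_url="https://git.kernel.org/torvalds/t/linux-4.15-rc8.tar.gz",
    source_hash="QmeF59wRCygGMrEJbLdYS1CyJmA2XowaR9oRX2VJty7nCR",
    build_command="make",
)
print(pkg.build_for("x86_64"))
```

The key design point is that the wrapper stores only the source and a recipe; compilation happens per target machine, which is what lets the same package run across heterogeneous hardware.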

Container

After the source program has been wrapped in a Matrix package, we refer to the compiled version of that package as a container. One of the core problems we face with containers is ensuring that the traditional communication paradigms available to the original source program remain available after it has been wrapped in a package and run as a container on several different machines, with different architectures and perhaps different network protocols available. We essentially need to be able to tunnel protocols and expose low-level primitives like sockets to the program. This may require even more sophisticated low-level integration, such as Linux kernel modules that enable this to occur.

Service Dependencies

We talked about how service dependencies can be managed in the swarm. The central idea is that a service running on an automaton can pull service dependencies into the network by contacting a centralised orchestrator responsible for fetching the necessary dependencies. Alternatively, pulling in service dependencies could be embedded into the control plane of the network itself (which is the same as there being a service that represents operations on the network).
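The orchestrator idea can be sketched minimally (all names here are hypothetical): requesting a service recursively pulls its dependencies into the running set first.

```python
# Hypothetical sketch of a centralised orchestrator that pulls in
# service dependencies transitively; names are illustrative only.
class Orchestrator:
    def __init__(self, dependency_graph):
        # Maps a service name to the list of services it depends on.
        self.dependency_graph = dependency_graph
        self.running = set()

    def pull(self, service):
        """Start a service, recursively starting its dependencies first."""
        if service in self.running:
            return
        for dep in self.dependency_graph.get(service, []):
            self.pull(dep)
        self.running.add(service)

graph = {"web": ["db", "cache"], "db": ["storage"], "cache": [], "storage": []}
orch = Orchestrator(graph)
orch.pull("web")
print(sorted(orch.running))  # → ['cache', 'db', 'storage', 'web']
```

Embedding this into the network control plane instead would mean the `pull` step is triggered by the network itself rather than by an explicit orchestrator process.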

Global Swarm

We envision a global swarm available to anyone using the MatrixAI network. This way, service provision can be shared across the whole network peer-to-peer: anybody running a service may be able to provide that service to another user participating in the network. This is beneficial because, by supporting a virtual network layer over a variety of different networking protocols, the range of devices that can communicate on this network will be far greater than the current status quo.

Action Plan

Timeline

CMCDragonkai commented 2 years ago

No longer relevant.