DiegoRBaquero closed this issue 7 years ago.
Thanks for the comment, Diego. There's a lot of thinking to be done around edge cases here, so I appreciate the opportunity to think on this case a little harder.
Right now, `peer-npm daemon` acts like an npm registry -- it provides the same HTTP API as https://registry.npmjs.org. `peer-npm install`, for example, is actually wrapping the stock `npm` command, running `npm install --registry=http://localhost:9000` instead.
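For illustration, that wrapping can be as small as re-spawning the stock `npm` binary with the registry flag pointed at the local daemon. A minimal sketch (not peer-npm's actual source; the file name and helper are made up):

```js
// wrap-npm.js -- illustrative sketch, not peer-npm's actual implementation.
// Forwards an npm subcommand to the stock npm binary, but points it at the
// local peer-npm daemon instead of the public registry.
const { spawn } = require('child_process')

const REGISTRY = 'http://localhost:9000' // where `peer-npm daemon` listens

function runNpm (args) {
  // e.g. runNpm(['install', 'left-pad']) becomes:
  //   npm install left-pad --registry=http://localhost:9000
  const child = spawn('npm', [...args, `--registry=${REGISTRY}`], {
    stdio: 'inherit'
  })
  child.on('exit', code => process.exit(code))
}

runNpm(process.argv.slice(2))
```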
It'd be possible to use a special field for swarm dependencies, though it means extra complexity if we wanted to still use vanilla `npm` under the hood. Something like (roughly sketched below):

1. copy `package.json` to `_package.json`
2. rewrite `package.json` so that `dependencies` only contains the `swarmDependencies` entries
3. run vanilla `npm install` against the rewritten `package.json`
4. merge `package.json` and `_package.json`; write back the original `package.json`
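A rough sketch of that shuffle in Node, assuming a `swarmDependencies` field and the local daemon registry from above (file handling and error cases are simplified; this is not peer-npm's actual code):

```js
// swarm-install.js -- illustrative sketch of the package.json shuffle above.
const fs = require('fs')
const { execSync } = require('child_process')

const pkg = JSON.parse(fs.readFileSync('package.json', 'utf8'))

// 1. stash the original manifest
fs.writeFileSync('_package.json', JSON.stringify(pkg, null, 2))

// 2. make dependencies contain only the swarm entries so vanilla npm fetches them
const temp = { ...pkg, dependencies: pkg.swarmDependencies || {} }
fs.writeFileSync('package.json', JSON.stringify(temp, null, 2))

try {
  // 3. let vanilla npm install them through the local daemon
  execSync('npm install --registry=http://localhost:9000', { stdio: 'inherit' })
} finally {
  // 4. restore the original manifest
  fs.writeFileSync('package.json', fs.readFileSync('_package.json'))
  fs.unlinkSync('_package.json')
}
```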
In short, I think this'd be fragile. Here's the approach I'm taking right now:
When a package is requested, ask the peer driver (could be IPFS, dat, webtorrent, local fs, etc) whether a given package (by name) is from the peer network. For packages of the form `foo_PUBLICKEY`, this is easily recognized. If the driver says "yes, this is a peer-net package", use the peer network for fetching. Otherwise, proxy the request to the npm registry. This lets all the deps stay in one place, but seamlessly lets swarm dependencies be first-class citizens.
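A sketch of that dispatch as a tiny registry proxy; the driver interface (`isPeerPackage`, `fetchMetadata`) and the key-matching regex are assumptions for illustration, not peer-npm's actual API:

```js
// registry-dispatch.js -- sketch of the name-based dispatch described above.
const http = require('http')
const https = require('https')

// Hypothetical peer driver: recognizes names of the form foo_PUBLICKEY.
const driver = {
  isPeerPackage: (name) => /_[0-9a-f]{64}$/i.test(name),
  fetchMetadata: async (name) => {
    // A real driver would resolve this over IPFS, dat, webtorrent, etc.
    return { name, 'dist-tags': {}, versions: {} }
  }
}

http.createServer(async (req, res) => {
  const name = decodeURIComponent(req.url.slice(1).split('/')[0])

  if (driver.isPeerPackage(name)) {
    // peer-net package: serve its metadata from the peer network
    res.setHeader('content-type', 'application/json')
    res.end(JSON.stringify(await driver.fetchMetadata(name)))
  } else {
    // anything else: proxy the request straight to the public npm registry
    https.get('https://registry.npmjs.org' + req.url, (upstream) => {
      res.writeHead(upstream.statusCode, upstream.headers)
      upstream.pipe(res)
    })
  }
}).listen(9000)
```

Pointing `npm install --registry=http://localhost:9000` at a proxy like this is what lets both kinds of dependencies live in one `dependencies` list.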
I ended up doing this after all. Good idea!
Good to know :)
Instead of using the "dependencies" field in package.json, why not use "swarmDependencies" or something like that? Then `npm install` wouldn't try to install npm packages that don't exist, and it doesn't pose a security risk. Cool idea btw.
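For illustration, a `package.json` using such a field might look like this (names and versions are made up; `foo_PUBLICKEY` stands in for a real swarm package name):

```json
{
  "name": "my-app",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.0.0"
  },
  "swarmDependencies": {
    "foo_PUBLICKEY": "^1.0.0"
  }
}
```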