Closed parkan closed 8 years ago
Hmm kind of tempted to steal this from pip
Ok so how about this
```
<translator-name>
|-- translator.py
|-- README.md
|-- __init__.py
|-- sample
|   `-- <something.json>
`-- tests            # optional
<translator-name-1>
<translator-name-2>
...
requirements.txt     # includes mediachain-client
setup.py             # includes 'test' and 'publish-translators'
```
`python setup.py publish-translators` runs the test suite against all translators and validates output.

@yusefnapora how does this scheme sound?
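For concreteness, a minimal sketch of the kind of per-translator layout check the `test` command could run against the tree above. All names here (`REQUIRED_FILES`, `validate_layout`) are hypothetical and not from mediachain-client; real validation would presumably also execute each translator against its `sample` fixtures and check the output.

```python
import os

# Hypothetical: files every translator directory must contain,
# per the proposed layout (translator.py, README.md, __init__.py,
# plus a sample/ directory with at least one JSON fixture).
REQUIRED_FILES = ["translator.py", "README.md", "__init__.py"]

def validate_layout(translator_dir):
    """Return a list of layout problems for one translator directory."""
    problems = []
    for name in REQUIRED_FILES:
        if not os.path.isfile(os.path.join(translator_dir, name)):
            problems.append("missing " + name)
    sample_dir = os.path.join(translator_dir, "sample")
    if not os.path.isdir(sample_dir):
        problems.append("missing sample/ directory")
    elif not any(f.endswith(".json") for f in os.listdir(sample_dir)):
        problems.append("sample/ has no .json fixture")
    return problems
```

A custom setuptools command (via `cmdclass` in setup.py) could loop this over every translator directory and fail the build on a non-empty result.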
I think I like it 😄
when you publish a translator, you're copying its source tree to ipfs? and then the client will pull from ipfs based on the version...
my only concern is discoverability, I guess. How do we get the list of all published translators?
yeah, internally it runs `ipfs add $translator_name`
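In Python that publish step might be wrapped roughly like this. This is a sketch: `ipfs add -r -Q` is the real CLI (recursive add, print only the root hash), but the function names and the `translators/` root directory are assumptions, not the actual mediachain code.

```python
import subprocess

def publish_cmd(translator_name, translators_root="translators"):
    """Build the `ipfs add` invocation for one translator's source tree.
    -r adds the directory recursively; -Q prints only the final root hash."""
    return ["ipfs", "add", "-r", "-Q",
            "{}/{}".format(translators_root, translator_name)]

def publish(translator_name):
    """Run the command and return the resulting IPFS multihash.
    Requires a running IPFS daemon; hypothetical wrapper."""
    out = subprocess.check_output(publish_cmd(translator_name))
    return out.decode().strip()
```

The returned multihash is what the client would later use to fetch a pinned, content-addressed version of the translator.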
the centralized way this works is via PRs on GitHub: Travis runs CI on the PR and publishes to IPFS (which we need to catch and repin somewhere)
the cool part is that we don't actually need to accept the PR for the translator to become available, so just opening the PR makes it available; we can merge if we want it in a central index
can also add a "latest" IPNS pointer that gets updated on merge to master etc
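The "latest" pointer update could be scripted the same way. `ipfs name publish` is the real CLI command for repointing a node's IPNS name; the wrapper names below are made up for illustration.

```python
import subprocess

def latest_cmd(root_hash):
    """Build the IPNS update command: repoint this node's IPNS name
    at the newly published translator tree (would run on merge to master)."""
    return ["ipfs", "name", "publish", "/ipfs/" + root_hash]

def update_latest(root_hash):
    # Hypothetical wrapper; requires a running IPFS daemon with a key.
    subprocess.check_call(latest_cmd(root_hash))
```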
right on. this sounds like it'll work then
OK, so obviously the biggest issue here is (malicious) RCE when running somebody else's translator code. This can't really be mitigated until we figure out the DSL (and restricted imports).
Gonna think about how to frame this.
Organize and version translators in some way. One version may be `<translator name>/<IPFS multihash>`, another may be `<translator name>/<version>`. For now they will probably live in a GitHub repo.