mversic opened 2 months ago
We also need `executor.wasm` when deploying iroha. Currently the deployment downloads genesis and executor using direct links from GitHub. Our docker-compose files also rely on the existence of `executor.wasm` implicitly through `genesis.json`.
Is the end user supposed to build/download the executor on their own? If we build and publish it as an artifact in CI, is it possible to reference it from the compose file?
GitHub artifacts are tricky to deal with... We can't simply get an artifact by its name like `executor`. We either have to specify an artifact id, which is not known in the compose file, or get the list of artifacts with the same name and pick the latest, which is not very convenient either. Also, I'm not sure artifacts are suitable for long-term storage, since every artifact has a retention timer associated with it.
Is it possible to run a script on the host that builds the wasm before starting the container on `docker-compose up`?

edit: seems not
We can:
- persist artifacts somewhere publicly available
- put the executor in the docker image itself (not a big fan of this)
I proposed a public endpoint such as AWS S3, but not everybody involved in custom genesis creation even has an AWS account, so a lot of new routine work would follow. Additional secrets would also have to be added to CI, which is always better to avoid if possible.
As far as we're concerned, others can store `executor.wasm` in their repo and not change their workflow. I don't see why them not having access to AWS prevents us from doing this. This is my preferred option atm.
> Is it possible to run a script on the host that builds the wasm before starting the container on `docker-compose up`?
>
> edit: seems not
Deployment (CD) should be free of building anything.
We can try to push executors to a dedicated external repository, but that is a rough option too. An FTP server would be suitable here, but it would have to be configured from scratch, so that's additional effort and time spent.
> I proposed a public endpoint such as AWS S3, but not everybody involved in custom genesis creation even has an AWS account, so a lot of new routine work would follow. Additional secrets would also have to be added to CI, which is always better to avoid if possible.
>
> As far as we're concerned, others can store `executor.wasm` in their repo and not change their workflow. I don't see why them not having access to AWS prevents us from doing this. This is my preferred option atm.
In this case we have to update all deployments to download the executor either from S3 or from git. Also, should community users of the docker-compose files download executors manually before running `docker-compose up`?
What we have currently also feels wrong. Docker pulls the `iroha2-dev` image, but that image references a locally built (and versioned by git) `executor.wasm`. If I check out one of the previous commits it might break.
I'm thinking there are 3 reasonable choices here:
- `executor.wasm` to be built before running docker-compose
The 2nd approach is fine, except that it's hard to edit `executor.wasm` if it's baked into the image and not in an external volume shared with the host OS. Can we provide some escape hatch for this? The problem is how to make changes to the executor and have them reflected in all containers in a simple way.
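One conceivable escape hatch, sketched here as an assumption rather than an existing setup, is a bind mount that shadows the baked-in executor: the compose file maps a host-side `executor.wasm` over the path inside the image, so editing the host file and restarting the containers updates every peer at once. The service name and both paths below are hypothetical:

```yaml
services:
  iroha0:
    image: hyperledger/iroha2:dev
    volumes:
      # Hypothetical paths: shadow the executor baked into the image
      # with a host-side copy that can be rebuilt and edited freely.
      - ./executor.wasm:/config/executor.wasm:ro
```

The same `volumes:` entry would be repeated for each peer service, so a single host-side rebuild is visible to all containers after a restart.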
What if we do the following:
I want to draw everyone's attention to what happened to me when publishing the RC21.3 release. I've been looking into why this CI job fails. All of a sudden Iroha fails to boot from `docker-compose.yml`, and it's unrelated to the changes in the PR. What happened is that we merged this PR, and `docker-compose.yml` depends on the `hyperledger/iroha:dev` image instead of building the one from the current repository.
IMO all docker images should build themselves locally. This should be the default, and if someone wants to, they can change it to download the docker image instead. If we want to offer a predefined network referencing a static docker image, we should do that in the documentation repository, not here. These images are for development, not for deployment.
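A minimal sketch of that default (paths and tags are illustrative, not the actual files): when a compose service has both `build:` and `image:`, docker-compose builds from the checked-out repository and uses `image:` only as the local tag, so nothing is pulled from a registry unless someone deliberately removes the `build:` key:

```yaml
services:
  iroha0:
    # Built from the current checkout by default; the tag below is
    # only a local name for the result. Anyone preferring a registry
    # image can delete `build:` and let compose pull `image:` instead.
    build:
      context: .
      dockerfile: Dockerfile
    image: hyperledger/iroha2:dev
```

This makes "test what you checked out" the default behavior and turns "test a published image" into an explicit, visible edit.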
I agree @mversic, there would be no good reason to pull the last published dev image and test it in the release workflow. I also think a test by pulling an image would duplicate a previous test that builds the image.
My experience in #4411 led me to think about the modularity of not only the executor but also genesis and swarm, and about the consistency between them. The following is a suggested flow, starting from a build hook under the root, injecting configs into builders, and ending with the placed artifacts:
```mermaid
graph LR
    classDef artifact fill:#8c4351,fill-opacity:0.5
    classDef env_val fill:#166775,fill-opacity:0.5
    classDef not_planned opacity:0.5
    %% legend
    artifact(artifact):::artifact
    env_val(environment variable):::env_val
    %% build tools
    root_hook(/build.rs) --> executor_builder(iroha_wasm_builder_cli)
    root_hook --> genesis_builder(kagami genesis)
    root_hook --> schema_builder(kagami schema):::not_planned
    root_hook --> swarm_builder(iroha_swarm)
    %% config injection
    base_seed(BASE_KEYPAIR_SEED):::env_val --> genesis_pub_key(GENESIS_PUBLIC_KEY):::env_val
    base_seed .-> swarm_builder
    genesis_pub_key .-> genesis_builder
    executor_path(EXECUTOR_PATH):::env_val .-> executor_builder
    executor_path .-> genesis_builder
    %% artifacts
    executor_builder --> executor(executor.wasm):::artifact
    genesis_builder --> genesis(genesis.json):::artifact
    swarm_builder --> swarm(docker-compose.*.yml):::artifact
```
> I agree @mversic, there would be no good reason to pull the last published dev image and test it in the release workflow. I also think a test by pulling an image would duplicate a previous test that builds the image.
If you mean the step `Test docker-compose.yml before pushing`, the image from DockerHub shouldn't be pulled there. We use the `load: true` setting while building iroha with the appropriate tags beforehand. It means we don't have to build/pull the iroha image from scratch again during the docker-compose tests, because the docker-compose command is able to find these tags locally.
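For reference, a sketch of how that pattern looks with `docker/build-push-action` (the step names and tag are illustrative, not the actual workflow): `load: true` exports the built image into the runner's local docker daemon instead of pushing it, so a later compose invocation resolves the tag locally:

```yaml
jobs:
  test-compose:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build iroha image into the local daemon
        uses: docker/build-push-action@v5
        with:
          context: .
          tags: hyperledger/iroha2:dev   # illustrative tag
          load: true   # export to the runner's local images, no push
      - name: Test docker-compose.yml
        # compose finds hyperledger/iroha2:dev locally, nothing is pulled
        run: docker compose up --wait && docker compose down
```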
@BAStos525 I mean the corresponding step of the release workflow would ignore the `iroha:TAG` built locally and pull the `iroha:dev` hardcoded in `docker-compose.yml`.
> @BAStos525 I mean the corresponding step of the release workflow would ignore the `iroha:TAG` built locally and pull the `iroha:dev` hardcoded in `docker-compose.yml`.
Right. For now, only manually changing the docker-compose files between merges to the stable branch is available. Compose files are managed by swarm.
Currently, every developer builds the executor on their own machine and commits it into git. Then this executor is used in tests. This creates too many unnecessary git conflicts and prevents me from merging backports directly into the `main` branch. What was the reason for doing this in the first place? We should build the executor in CI every time before running tests, and not track `executor.wasm` in git at all.
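A sketch of what that could look like as a CI step. The builder invocation below is an assumption (the thread mentions `iroha_wasm_builder_cli` as the build tool, but its exact arguments and the crate path are hypothetical); the point is only that the wasm is produced fresh on every run and `executor.wasm` gets added to `.gitignore`:

```yaml
# .github/workflows/tests.yml (illustrative fragment)
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build executor.wasm (hypothetical builder invocation)
        run: |
          # Assumed command; substitute the project's actual wasm
          # builder (iroha_wasm_builder_cli) and crate/output paths.
          cargo run --bin iroha_wasm_builder_cli -- build ./default_executor \
            --out-file ./configs/executor.wasm
      - name: Run tests against the freshly built executor
        run: cargo test
```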