Closed aklenik closed 2 years ago
For 1, I wonder if it's down to this?

```yaml
push:
  branches: [ main ]
```
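For reference, a trigger block that also runs on pull requests could be sketched roughly like this (a minimal sketch, not the repo's actual workflow):

```yaml
# Sketch: run on pushes to main and on PRs targeting main.
# Note: for a PR from a fork, GitHub runs the workflow definition
# that already exists on the target branch, with a read-only token
# by default, which is why brand-new workflows don't fire for forks.
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
```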
I just based it off the starter actions you can generate, and I was also trying to work out how to actually get the GitHub Action to run. It appears that it can't be triggered from a PR opened from a fork; it only worked when the branch was on the main repo itself (needed Ry's help for that), which makes it difficult for non-maintainers to contribute to this.
For 2, we can parallelize more, but I guess it depends on whether GitHub Actions imposes any limits on this (given that Fabric seems to be having all sorts of issues with Azure Pipelines right now due to parallelisation, with builds not working). Currently the builds only test Node chaincode, but we will need the benchmarks to test all chaincode languages to ensure each chaincode implementation is complete and correct. It would definitely be good to use file changes to drive different builds.
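A sketch of what "parallelize by chaincode language" plus file-change filtering could look like (the paths, language list, and script name are assumptions for illustration, not anything this repo currently has):

```yaml
on:
  pull_request:
    paths:
      - 'benchmarks/**'   # assumed path: only run when benchmark content changes

jobs:
  benchmark:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false    # let the other language builds finish even if one fails
      matrix:
        chaincode-language: [ node, go, java ]   # assumed language set
    steps:
      - uses: actions/checkout@v4
      # hypothetical driver script taking the language as an argument
      - run: ./scripts/run-benchmark.sh ${{ matrix.chaincode-language }}
```

Each matrix entry becomes its own job, so the languages run in parallel up to whatever concurrency limit the GitHub plan allows.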
> It appears that it can't come from a PR request from your local repo and only worked if it was a branch on the main repo itself (needed Ry's help for that) which kind of makes it difficult for non-maintainers to contribute to this.
I don't really understand this. Isn't this an issue only for the initial creation of a workflow (since the workflow is not picked up yet for the intended target branch)? Or do I misunderstand the issue?
(Btw, have you seen this? https://github.com/nektos/act Looks promising if we want to "complicate" things in the workflow and test it quickly.)
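If we do try act, the basic usage is roughly the following (the job id at the end is an assumption; it depends on what the workflow defines):

```sh
# Run the default (push) event locally against .github/workflows
act

# Simulate a pull_request event instead
act pull_request

# List the jobs act detects
act -l

# Run a single job by its id (job id here is hypothetical)
act -j build
```

It runs the jobs in local Docker containers, so it needs Docker available and won't perfectly reproduce the hosted runners, but it's good for quick iteration on workflow syntax.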
To summarise:
On the question of the chaincodes supported by this repo, we have the following for Fabric:
fixed-asset and fixed-asset-base exist to benchmark a Fabric network topology independently of a specific chaincode implementation (i.e. they let you benchmark Fabric itself), which is really handy when looking for deployment bottlenecks, determining the theoretical max TPS, and helping to performance-tune the Fabric code itself. These should remain, and in fact need to be enhanced, so that requires another set of issues.
The others are samples in a similar vein to fabric-samples, so we could look to drop those and create new benchmarks based on some of the Fabric samples, for example the token implementations. I see this as a separate issue from this one, however; it would still be optimised to run only when a change to the benchmarks is made (but we would need to rely on a nightly build to see if something changed in fabric-samples that would warrant our attention).
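The nightly check mentioned above could be sketched as a scheduled workflow (the cron time and the job body are assumptions; only the fabric-samples repo name is real):

```yaml
on:
  schedule:
    - cron: '0 2 * * *'   # assumed: every night at 02:00 UTC
  workflow_dispatch:       # also allow manual runs for debugging

jobs:
  check-fabric-samples:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          repository: hyperledger/fabric-samples   # pull the upstream samples
      # ...run the relevant benchmarks against the latest samples here,
      # so upstream changes that would affect us surface overnight
```

Scheduled workflows only run against the default branch, which fits this use case since it's a drift check rather than a PR gate.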
The other thing I would note about fabric samples is that
The implementation of this issue should also include a demonstration by the implementer of the solution working, given that different types of PR would be required to show it working.
The following enhancements should be considered to lower the execution time of CI tests: