ignite / cli

Ignite is a CLI tool and hub designed for constructing Proof of Stake blockchains built with the Cosmos SDK
https://ignite.com

speed up build commands: e.g. `c build/serve` #2383

Closed ilgooz closed 2 years ago

ilgooz commented 2 years ago

Learn more about the bounty on Ignite's Discord.

It takes quite a bit of time for Ignite CLI to *rebuild and start a blockchain during development when you use the `c serve` command.

`c serve`, `c build`, `generate`, `scaffold`, and some other related commands share the same logic under the hood. Improving the speed of the build process will make all these commands faster.

*rebuild includes steps like: 1) running `go mod tidy/download/build/install` and 2) analyzing the chain's source code and generating TS/Vuex/Dart/Go code for the chain itself and imported 3rd party modules.

Step 2 above is a lot slower when `--proto-all-modules` is enabled.

We need to analyze the build process and find out the ways to speed things up.

We can possibly cache proto code generation per module, but we need to be careful about the cache invalidation logic; it's quite tricky to get right.

We can use goroutines more efficiently if there is room for improvement in that area.

We can speed up the code analysis process if there is room for improvement in that area.

We might skip running `go mod tidy/download` when possible, as long as skipping them won't break the build process.

Ideally, our solution will skip code generation for cached modules. Since 3rd party modules cannot be modified, we would never need to regenerate code for them, and we would regenerate code for custom modules only when their proto API changes. In any case, if the generated code is deleted or edited for any of the modules, we need to invalidate the cache and generate code for them again.

On my machine (M1, 2020), running `go build` on the chain's main pkg takes around 8s, running `c build` on the chain takes around 87s (when codegen is enabled in config.yml), and running `c build --proto-all-modules` takes around 256s (when codegen is enabled in config.yml). Ideally it should complete in close to 8s, or a bit more when code generation is needed for some of the modules.

```
ilker>cmd/marsd $ time go build
go build  8.72s user 1.94s system 136% cpu 7.783 total
ilker>local_test/mars $ time ignite c build
Cosmos SDK's version is: stargate - v0.44.5

🛠️  Building proto...
📦 Installing dependencies...
🛠️  Building the blockchain...
🗃  Installed. Use with: marsd
ignite c build  87.47s user 22.01s system 162% cpu 1:07.51 total
ilker>local_test/mars $ time ignite c build --proto-all-modules
Cosmos SDK's version is: stargate - v0.44.5

🛠️  Building proto...
📦 Installing dependencies...
🛠️  Building the blockchain...
🗃  Installed. Use with: marsd
ignite c build --proto-all-modules  256.09s user 39.81s system 296% cpu 1:39.70 total
ilker>cmd/marsd $ time ignite s chain github.com/test/mars

⭐️ Successfully created a new blockchain 'mars'.
👉 Get started with the following commands:

 % cd mars
 % ignite chain serve

Documentation: https://docs.ignite.com
ignite s chain github.com/test/mars  57.95s user 16.53s system 132% cpu 56.224 total
```

Please see the backlog here: https://github.com/ignite-hq/cli/issues/1996, https://github.com/ignite-hq/cli/issues/1997, https://github.com/ignite-hq/cli/issues/1285, https://github.com/ignite-hq/cli/pull/2023.

We may need to use implementation ideas from the referenced links above, especially the ones related to caching code generation per module.

Make sure that the file watcher in the `c serve` command won't get re-triggered while files are being generated or modified during code generation, because this might restart the code generation process over and over again. The rebuild process for `c serve` should only run when the source code is modified manually by the user.

Make sure that you have the following added to your chain's config.yml while testing (as these will enable code generation for OpenAPI and TS/Vuex):

```yaml
client:
  openapi:
    path: "docs/static/openapi.yml"
  vuex:
    path: "vue/src/store"
```

ilgooz commented 2 years ago

Hey @bjaanes!

You have been assigned to this issue, thank you for participating in our bounty program, we will be very excited to see your solution! 🎈

gjermundgaraba commented 2 years ago

Excellent! Excited to work on this (i.e. I accept).

gjermundgaraba commented 2 years ago

Alright, I've gotten a pretty good overview of the main bottlenecks and how to fix them. One thing that has become clear very quickly is the need for a caching "system" that can be easily accessed from multiple places. So what I want to ask about is how you would prefer this to be designed.

For context, the main way this is done today is by saving checksums as text files in a folder that is only accessible if you have the services/chain/chain.go import. In other words, it is only accessible at the service layer.

1: Current chain context

There are multiple times I've found myself looking for a simple way to look up info about the current chain we're processing, so I was wondering if there is any design or thoughts on standardizing this so there is a chain metadata object of some kind that can be passed around to look up this stuff. Another option is of course to start pegging stuff onto the context as values, but you quickly lose out on type checking, refactoring, etc., so I'm not sure that is the best option? In this chain context, I would imagine there would be a way to access a standardized caching solution.

2: Caching design

As mentioned, the current design only saves checksums as text files, which makes it a bit difficult to use across the project. I am not entirely sure what the best solution is, but it should certainly be easy to namespace different caches without having to manually go and check if anyone is already using the keys you want to use. I am also not sure if files are what we want to use, or if a simple key-value store of some kind would be better.

Let me know your thoughts on this.

ilgooz commented 2 years ago

There are multiple times I've found myself looking for a simple way to look up info about the current chain we're processing

Can you please elaborate this to help me understand what's the problem?

Another option is of course to start pegging stuff to the context as values

I really don't prefer using context to pass data around; it is pretty ambiguous. But maybe it is better now that generics have been introduced, I haven't checked for a while.

I am also not sure if files are what we want to use, or if a simple key-value store of some kind would be better.

We can definitely use a key-value storage, and in the future we can also refactor the existing code to use this storage.

gjermundgaraba commented 2 years ago

Can you please elaborate this to help me understand what's the problem?

Specifically, one thing I need to do is find the cache folder (which currently, if I am not mistaken, depends on service/chain because it uses the chain-id in the folder name) when scaffolding. So what I did now as a quick workaround is a `chain.New(appPath)` right after scaffolding to get the info I need. This creates a dependency between service/scaffolding and service/chain, which I am not sure is what we want.

Perhaps this is better left for another issue, but it would be nice to have a common "chain" object that can be passed around so that you can always find things like the current chain-id, appPath, etc. A more standardized approach, essentially. Perhaps something that could be instantiated in the cmd layer and passed down to the services as context? I see, for instance, that the Scaffolder object does some of this (just not all). Maybe a common approach might lead to better separation of concerns.

I really don't prefer using context to pass data around, it is pretty ambiguous. But maybe they are better after generics introduced, haven't checked it for a while.

Agreed. I checked and it is not really the case.

We can definitely use a key value storage, and in the future we can also refactor the existing code to use this storage.

Great. It's what I've been mostly working with so far, and the results are pretty fantastic. Even the first builds and serves after scaffolding are really fast, because scaffolding starts caching things right away. This should hopefully lead to a better onboarding experience.