Hi @serejke,
The recommended way is actually to fork this repository and re-run the tasks with the target network of your preference. The NPM package's goal is mostly to interact with existing canonical deployments.
> Our current option is to fork the repo and adjust it for our needs, but I see a downside to this: Balancer is in active development, and we would need to integrate changes into our fork, making audits trickier.
You can find a longer explanation in the DEPLOYING readme. To address this concern in particular, note that this repository and the build-info files are completely decoupled from the sources in the monorepo. For example, the build info for the Vault corresponds to the source files that were audited before its deployment. Even if the sources changed afterwards in the monorepo, the build-info files stored here remain the same, so if you redeploy them to other networks you will still be deploying the same version that we use.
If you need a chain that is not listed in the chains list, as you mention, you should add it there and make sure the endpoint settings are present in your local Hardhat networks JSON. I'd recommend going through the pre-requisites in the DEPLOYING readme for reference.
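For illustration, that local networks file could look roughly like this, assuming it follows the usual Hardhat HTTP network shape under a top-level `networks` key (the network name, RPC URL, and chain ID below are placeholders):

```json
{
  "networks": {
    "mychain": {
      "url": "https://rpc.mychain.example",
      "chainId": 12345
    }
  }
}
```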
As a side note, if you want to verify the contracts on a custom block explorer not supported by Hardhat (see the supported ones here), you should add the info to the custom networks in our Hardhat config file (see example here).
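As a rough sketch, a custom explorer entry can be declared via the `customChains` option of `@nomiclabs/hardhat-etherscan` v3; all values below are placeholders for your chain and its Etherscan-compatible explorer:

```ts
// hardhat.config.ts (fragment)
import '@nomiclabs/hardhat-etherscan';

export default {
  etherscan: {
    apiKey: { mychain: 'YOUR_EXPLORER_API_KEY' },
    customChains: [
      {
        network: 'mychain',
        chainId: 12345,
        urls: {
          // Endpoints of the explorer's Etherscan-compatible verification API.
          apiURL: 'https://api.explorer.mychain.example/api',
          browserURL: 'https://explorer.mychain.example',
        },
      },
    ],
  },
};
```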
Please let me know if that addresses your concerns, or if you need more info :)
Hi @jubeira, thanks for the response,
It makes great sense to "snapshot" exact versions (ABIs) of smart contracts in this repo.
Maybe I intuitively expected that the per-chain-per-task outputs of this repository would be decoupled from the tasks themselves and stored at, for example, `<root>/deployments/<chain>.json`, with `"task-name": <outputs>` keys. For example, `deployments/mainnet.json`:

```json
{
  "00000000-tokens": {
    "BAL": "0xba100000625a3754423978a60c9317c58a424e3D",
    "WETH": "0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2"
  },
  "20210418-vault": {
    "Vault": "0xBA12222222228d8Ba445958a75a0704d566BF2C8",
    "BalancerHelpers": "0x5aDDCCa35b7A0D07C74063c48700C8590E87864E",
    "ProtocolFeesCollector": "0xce88686553686DA562CE7Cea497CE749DA109f9F"
  }
}
```
This way, the tasks folder can be chain-agnostic and packaged into an NPM package with `version = git-commit-hash`, and the outputs can be stored in the custom chain's repo.
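As a sketch of how such a layout could be consumed (hypothetical: `deployments/<chain>.json` does not exist in the actual repo, this only illustrates the layout suggested above):

```ts
import { readFileSync } from 'fs';

// Per-chain outputs file: task ID -> contract name -> deployed address.
type ChainDeployments = Record<string, Record<string, string>>;

// Hypothetical lookup against the proposed deployments/<chain>.json file.
function getDeployedAddress(chain: string, task: string, contract: string): string {
  const outputs: ChainDeployments = JSON.parse(
    readFileSync(`deployments/${chain}.json`, 'utf8')
  );
  return outputs[task][contract];
}

// getDeployedAddress('mainnet', '20210418-vault', 'Vault')
// => '0xBA12222222228d8Ba445958a75a0704d566BF2C8'
```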
Just sharing my thoughts! I see that forking on GitHub achieves the same thing.
Thanks for the input.
> Maybe I intuitively expected that the per-chain-per-task outputs of this repository would be decoupled from the tasks themselves and stored at, for example, `<root>/deployments/<chain>.json`, with `"task-name": <outputs>` keys.
Well, the outputs are stored inside the tasks for historical reasons at this point. This was part of the monorepo until recently, and since the tasks (including outputs) were mostly for our internal reference, it made sense.
What you mention in your example actually exists under `addresses` (see the mainnet example). This file is built by iterating through all the outputs for easier reference. You could argue that it's a bit redundant to have it both in the task outputs and in a separate address index, but so far we've found it useful for different purposes.
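Conceptually, building that index is a small script along these lines (a simplified sketch only; the actual build script and directory layout in the repo may differ):

```ts
import { existsSync, readdirSync, readFileSync, writeFileSync } from 'fs';
import { join } from 'path';

// Collect every task's output for the given network into a single
// addresses/<network>.json index keyed by task ID.
function buildAddressIndex(network: string): void {
  const index: Record<string, Record<string, string>> = {};
  for (const task of readdirSync('tasks')) {
    const outputFile = join('tasks', task, 'output', `${network}.json`);
    if (existsSync(outputFile)) {
      index[task] = JSON.parse(readFileSync(outputFile, 'utf8'));
    }
  }
  writeFileSync(join('addresses', `${network}.json`), JSON.stringify(index, null, 2));
}
```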
Yes, this is exactly what I expected. I personally prefer to have all chain-related outputs in one file rather than many files.
Feel free to close this ticket, or transform it for Balancer's team needs. Thanks!
Hey, I'm wondering about the recommended way to deploy Balancer contracts to a custom EVM chain.
Specifically, we need to deploy the following tasks:
We'd like to run `npm install @balancer-labs/v2-deployments` and execute the tasks on our chain. Currently, this is not possible because: 1) the `@balancer-labs/v2-deployments` NPM package only contains output addresses and does not include the source code of the tasks; 2) `const Networks` lists a predefined set of supported chains.

Our current option is to fork the repo and adjust it for our needs, but I see a downside to this: Balancer is in active development, and we would need to integrate changes into our fork, making audits trickier.
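For context, address lookups via the package work roughly like this (assumed API shape based on the package README; double-check the exact exports), which is why it covers canonical deployments but can't deploy to a new chain:

```ts
import { getBalancerContractAddress } from '@balancer-labs/v2-deployments';

async function main(): Promise<void> {
  // Resolves a deployed address for a supported network only; the package
  // ships outputs and ABIs, not the deployment task sources, so it cannot
  // be used to deploy to an unlisted chain. (Assumed export; verify
  // against the package README.)
  const vault = await getBalancerContractAddress('20210418-vault', 'Vault', 'mainnet');
  console.log(vault); // 0xBA12222222228d8Ba445958a75a0704d566BF2C8
}

main();
```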