dmihal opened this issue 3 years ago
This is a must, especially for forking.
In case anyone ends up here, the way to do this right now is to use environment variables and configure the network programmatically. So, say you want the Hardhat network to be non-forked by default but to fork in some scenarios. Then you could do something like this:
const hardhat = {
  // your basic hardhat config
}

if (process.env.FORK === "true") {
  hardhat.forking = {
    // your forking config
  }
}

module.exports = {
  networks: {
    hardhat,
    // other networks
  }
}
Then you can do hh run myLocalScript.js and FORK=true hh run myForkScript.js.
We know this is not ideal, but it should at least help as a workaround.
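For concreteness, the forking block itself can also be filled in from environment variables; FORK_URL and FORK_BLOCK_NUMBER below are just illustrative variable names, not Hardhat conventions:

if (process.env.FORK === "true") {
  hardhat.forking = {
    url: process.env.FORK_URL, // illustrative env var holding the RPC endpoint to fork from
    blockNumber: process.env.FORK_BLOCK_NUMBER
      ? parseInt(process.env.FORK_BLOCK_NUMBER, 10) // pin the fork to a block if one is given
      : undefined,
  }
}

This would be invoked as, e.g., FORK=true FORK_URL=... hh run myForkScript.js.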
My use case is to test migrations that run on several networks, typically an L1/L2 setup. With hardhat-deploy, there is the concept of companionNetworks, so the config could look like this:
module.exports = {
  networks: {
    hardhat: {
      hardfork: 'london',
      companionNetworks: {
        L2: 'hardhat_L2'
      }
    },
    hardhat_L2: {
      type: 'hardhat', // for example
      hardfork: 'berlin',
      blockGasLimit: 30000000
    }
  }
}
This implies that several instances of the Hardhat EVM would run in parallel; I'm not sure whether there are specific restrictions on that.
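For reference, this is roughly how a hardhat-deploy script could consume a companion network declared like that (today the companion entry would have to point at an external node, since a second in-process Hardhat network is exactly what this issue asks for). The contract names and the deployer named account below are illustrative and assume namedAccounts is configured:

import { HardhatRuntimeEnvironment } from 'hardhat/types';
import { DeployFunction } from 'hardhat-deploy/types';

const func: DeployFunction = async (hre: HardhatRuntimeEnvironment) => {
  const { deployer } = await hre.getNamedAccounts();

  // deploy to the primary network (the L1 "hardhat" entry in the config above)
  await hre.deployments.deploy('L1Bridge', { from: deployer, log: true });

  // hardhat-deploy exposes the companion network registered under the key "L2"
  const l2 = hre.companionNetworks['L2'];
  const { deployer: l2Deployer } = await l2.getNamedAccounts();
  await l2.deployments.deploy('L2Bridge', { from: l2Deployer, log: true });
};

export default func;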
@nataouze I think this issue is specifically about having networks with a name other than hardhat that are also Hardhat networks. What you are suggesting (running multiple Hardhat networks in parallel) might be an interesting feature request, but we should have a different issue for that. (Fair warning: that seems like a hard thing for us to implement, so don't expect support for it soon :sweat_smile:)
I've bumped into this issue as well. I just want to add some color for my particular use case and will also share my workaround.
My application needs to use multiple endpoints and sometimes even switch between networks. It also needs to reset my node back to the latest block at different points in time. My workaround looks like this:
import { config, network } from "hardhat";

interface RequestForkingParams {
  forking: {
    url?: string;        // copied in from the static config; deleted below
    jsonRpcUrl?: string; // the key hardhat_reset actually expects
    blockNumber?: number;
    accounts?: string[];
  };
}

async function reset(networkName: keyof typeof config.networks) {
  const original = config.networks[networkName];
  // copy the hardhat network args as the base for the reset params
  const params = { ...config.networks.hardhat } as RequestForkingParams;
  // the spread above is shallow, so take a fresh copy of forking before mutating it
  params.forking = { ...params.forking };

  // type narrowing: HTTP networks carry a url, the "hardhat" network may carry forking.url
  if ("url" in original) { // using another network besides "hardhat"
    params.forking.jsonRpcUrl = original.url;
  } else if (original.forking) {
    params.forking.jsonRpcUrl = original.forking.url;
  }
  delete params.forking.url; // delete the unused key

  await network.provider.request({
    method: "hardhat_reset",
    params: [params],
  });
}
As an oddity, the hardhat_reset RPC method expects a jsonRpcUrl key instead of the plain url key used in the static config. It would be great to standardize the spec here so one could simply pass config.networks[networkName] through unchanged.
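For reference, the shape the call expects looks like this (the endpoint and block number are placeholders):

// inside an async Hardhat script or test, with `network` imported from "hardhat"
await network.provider.request({
  method: "hardhat_reset",
  params: [{
    forking: {
      jsonRpcUrl: "https://eth-mainnet.example/rpc", // placeholder RPC endpoint
      blockNumber: 14390000,                         // illustrative pinned block
    },
  }],
});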
I also need to fork different networks at the same time. Resetting via a hardhat_reset call only works if I serialize my workflow, which means losing the ability to run different chains autonomously, without codependencies on each other.
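To illustrate that serialization, a single in-process network forces a workflow like the sketch below; the env var names and the two job functions are hypothetical placeholders:

import { network } from "hardhat";

// Re-point the single in-process Hardhat network at another chain's RPC,
// then run that chain's work before anything else may proceed.
async function forkAndRun(jsonRpcUrl: string, job: () => Promise<void>) {
  await network.provider.request({
    method: "hardhat_reset",
    params: [{ forking: { jsonRpcUrl } }],
  });
  await job();
}

async function runMainnetChecks() { /* chain-specific work for the mainnet fork */ }
async function runPolygonChecks() { /* chain-specific work for the polygon fork */ }

async function main() {
  // the chains cannot run concurrently: each job must finish before the next reset
  await forkAndRun(process.env.MAINNET_RPC!, runMainnetChecks);
  await forkAndRun(process.env.POLYGON_RPC!, runPolygonChecks);
}

main().catch(console.error);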
Right now, only one "hardhat" network can be defined in the configuration. It would be nice if multiple could be defined, each with its own configuration (for example, one for local testing and one for mainnet forking).