Thought I tested the above, but it seems to create a chunked-up command. This one works: `cmd.extend(("--no-deploy", ))`
I think you can do `cmd.append("--no-deploy")` as well.
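For reference, the chunking comes from plain Python list behavior rather than anything ape-specific; a quick illustration:

```python
cmd = ["npx", "hardhat", "node"]
cmd.extend("--no-deploy")      # a bare string is iterated character by character
# ['npx', 'hardhat', 'node', '-', '-', 'n', 'o', '-', 'd', 'e', 'p', 'l', 'o', 'y']

cmd = ["npx", "hardhat", "node"]
cmd.extend(("--no-deploy",))   # one-element tuple: the flag is added as a single argument
# ['npx', 'hardhat', 'node', '--no-deploy']

cmd = ["npx", "hardhat", "node"]
cmd.append("--no-deploy")      # append likewise adds a single argument
# ['npx', 'hardhat', 'node', '--no-deploy']
```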
Let's do this, but maybe add an entry to the `hardhat` section of `ape-config.yaml`, like:

```yaml
hardhat:
  no_deploy: false  # Defaults to true
```

so users can opt back in if they actually wanted that.
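On the plugin side, something roughly like this on the provider's config class could back that entry. This is only a sketch: the class name `HardhatNetworkConfig` and where ape-hardhat actually defines its settings are assumptions, though `PluginConfig` is ape's base class for plugin settings.

```python
from ape.api.config import PluginConfig


class HardhatNetworkConfig(PluginConfig):
    # When True (the default), pass --no-deploy to `hardhat node` so
    # hardhat-deploy scripts are skipped when starting a fork.
    no_deploy: bool = True
```

Users would then set `no_deploy: false` in `ape-config.yaml` to opt back into running the deployment scripts.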
@unparalleled-js are there any examples of a plugin having tests that check behavior according to `PluginConfig` or `ape-config.yaml` differences? I didn't see any in `ape_config` (or missed 'em). I was originally looking to create some unit tests to check the output of `HardhatProvider.build_command()` depending on the `no_deploy` setting, but it's looking like I would have to alter the `HardhatProvider` API to override the config, which probably isn't ideal. Though maybe I'm just missing something in all the abstractions.
> @unparalleled-js are there any examples of a plugin having tests that check behavior according to `PluginConfig` or `ape-config.yaml` differences?
In Core Ape, we have a `temp_config()` fixture for this: https://github.com/ApeWorX/ape/blob/main/tests/functional/test_config.py#L49-L50
If you want to use it in Hardhat, we'd have to copy it in for now.
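Not the actual ape fixture, but a rough stand-in for what a copied-over version could look like in ape-hardhat's conftest.py: it just writes the given settings to a temporary `ape-config.yaml` and changes into that directory (the real fixture goes through ape's config manager; PyYAML is assumed to be available).

```python
import os
from contextlib import contextmanager
from pathlib import Path

import pytest
import yaml


@pytest.fixture
def temp_config(tmp_path):
    @contextmanager
    def _temp_config(data: dict):
        # Write the settings to a throwaway ape-config.yaml and chdir into it
        # so the provider under test picks the config up.
        cwd = Path.cwd()
        config_file = tmp_path / "ape-config.yaml"
        config_file.write_text(yaml.safe_dump(data))
        os.chdir(tmp_path)
        try:
            yield tmp_path
        finally:
            os.chdir(cwd)
            config_file.unlink(missing_ok=True)

    return _temp_config
```

A test could then do `with temp_config({"hardhat": {"no_deploy": False}}):` and assert on the output of `build_command()`.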
I got this put together with a test case, but it doesn't work as-is because the `--no-deploy` option on the `node` task is provided by the hardhat-deploy plugin. That plugin may not be installed in all environments, including the general environment the tests run in (presumably it's global npx installs?). So if you run `hardhat node --no-deploy` and the plugin is not installed, the command will error and the provider will fail in an unhandled way.

Since ape-hardhat uses the `npx` command, it will use a local (`$(cwd)/node_modules/`) package if available, or alternatively install and use a globally installed package for the command. Global installs don't even work with hardhat-deploy, returning Hardhat error HH12.
Since Hardhat has limited global support, I think we could read `package.json` and look for installed plugins. If either `package.json` or hardhat-deploy is not found in dependencies, we ignore the config option. There's no Hardhat task, command, or anything nice that would list out the already-installed plugins, so it's hard to figure this out otherwise. We could add a custom JS script to read tasks from their `hre`, but that seems overly complex.
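A minimal sketch of that check (the function name is hypothetical; it just looks for hardhat-deploy in the project's local `package.json`):

```python
import json
from pathlib import Path


def has_hardhat_deploy(project_path: Path) -> bool:
    """Return True if hardhat-deploy is listed in the local package.json."""
    package_json = project_path / "package.json"
    if not package_json.is_file():
        return False

    try:
        data = json.loads(package_json.read_text())
    except json.JSONDecodeError:
        return False

    # Check both regular and dev dependencies.
    dependencies = {**data.get("dependencies", {}), **data.get("devDependencies", {})}
    return "hardhat-deploy" in dependencies
```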
A simple alternative might be to just default the `no_deploy` setting to `False` and require the user to enable it explicitly. This would be simple to implement, but if someone sets `no_deploy: true` it will fail in a way that is not obvious if the npx error handling is left as-is. I also do kind of agree that deploys on a fork are probably not the preferred behavior for Ape users interacting with a Hardhat project, so it would be nice to default that to true.

Let me know if you (or anyone) has any thoughts on this. Otherwise, I'll probably add limited `package.json` reading and plugin loading to `HardhatProvider`.
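Roughly, the command construction would then only add the flag when it is both requested and usable. A sketch only (function and parameter names are placeholders rather than ape-hardhat's real internals, reusing the `has_hardhat_deploy()` helper sketched above):

```python
from pathlib import Path
from typing import List


def build_node_command(base_cmd: List[str], no_deploy: bool, project_path: Path) -> List[str]:
    """Append --no-deploy only when the setting is on and hardhat-deploy is installed."""
    cmd = list(base_cmd)
    if no_deploy and has_hardhat_deploy(project_path):
        cmd.extend(("--no-deploy",))
    return cmd
```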
I think checking for `package.json` works well enough! I am not much of a Hardhat user, but it seems everything needs to be local anyway.
This was fixed as part of https://github.com/ApeWorX/ape-hardhat/pull/112
Overview
Today I was running into timeout issues when starting a fork in a pre-existing Hardhat project directory. Speaking with @unparalleled-js, I'm aware of the timeout config now, but I'd still prefer to be able to disable these deployments, since they're unnecessary and slow. In my case, the timeouts were mostly caused by the fact that the hardhat node runs all the deployment scripts, deploying all of the contracts to the fork. This can take a minute or two.
I initially wanted to just submit a PR disabling these deployments but figured I should check in to see if there's any good argument against it. As an untested example:
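(A guess at the snippet, since the exact code and its place inside `HardhatProvider.build_command()` aren't certain; per the first comment in the thread, this bare-string form is what ended up chunking the command, and `cmd.extend(("--no-deploy",))` is the fix.)

```python
# Untested: have `hardhat node` skip running hardhat-deploy scripts.
# Note: extend() with a bare string adds each character as its own argument.
cmd.extend("--no-deploy")
```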
I don't think there's much use case for these hardhat deployment scripts, since the deployed contract instances can't (currently) be referenced via Ape anyway. Not without reading in `network.json` or expecting deterministic deployment addresses, which are likely to be an edge case at best. That said, maybe there's a use case I'm not seeing, or someone might find use in having these deployments run. In that case, perhaps there should be a CLI option to enable deployments, or users should run them manually?