ApeWorX / ape

The smart contract development tool for Pythonistas, Data Scientists, and Security Professionals
https://apeworx.io
Apache License 2.0

Unable to read mainnet contract on a forked network using pytest mark [APE-1016] #1467

Closed: 0xkorin closed this issue 1 year ago

0xkorin commented 1 year ago

Environment information

$ ape --version
0.6.10

$ ape plugins list
Installed Plugins:
  foundry      0.6.8
  etherscan    0.6.4
  hardhat      0.6.6
  vyper        0.6.7
  fantom       0.6.0
$ cat ape-config.yaml
plugins:
  - name: vyper
  - name: hardhat
ethereum:
  default_network: local
  local:
    default_provider: hardhat
  mainnet_fork:
    default_provider: hardhat
geth:
  ethereum:
    mainnet:
      uri: https://redacted

What went wrong?

When running a test on a forked mainnet using the use_network pytest mark, ape seems to think that no contract is deployed at the address. As a consequence, both tests below fail. However, if we remove the marks and instead run ape test --network ethereum:mainnet-fork, both tests pass. I have tried multiple addresses of contracts that are deployed on actual mainnet, and none of them pass the test.

@pytest.mark.use_network('ethereum:mainnet-fork')
def test_code(networks):
    assert len(networks.provider.get_code('0xB9fC157394Af804a3578134A6585C0dc9cc990d4')) > 0

@pytest.mark.use_network('ethereum:mainnet-fork')
def test_read(networks):
    factory = Contract('0xB9fC157394Af804a3578134A6585C0dc9cc990d4')
    assert factory.pool_count() > 0

The second test fails with ape.exceptions.ContractError: Unable to make contract call. '0xB9fC157394Af804a3578134A6585C0dc9cc990d4' is not a contract on network 'mainnet-fork'.

This is the output of print(networks.provider):

name='hardhat' network=<ethereum:mainnet-fork chain_id=31337> provider_settings={} data_folder=PosixPath('/Users/user/.ape/ethereum/mainnet-fork') request_header={'User-Agent': 'Ape/0.6.10 (Python/3.9.16 final)'} cached_chain_id=None block_page_size=100 concurrency=4 process=None is_stopping=False stdout_queue=None stderr_queue=None PROCESS_WAIT_TIMEOUT=15 port=8545 attempted_ports=[8545] _test_config=Config(mnemonic='test test test test test test test test test test test junk', number_of_accounts=10, gas=GasConfig(show=False, exclude=[]), disconnect_providers_after=True, hd_path="m/44'/60'/0'/{}") _fork_config=HardhatForkConfig(upstream_provider=None, block_number=None, enable_hardhat_deployments=False) _upstream_provider=<geth chain_id=1>
antazoey commented 1 year ago

So I figured out what is happening here. I apologize, it is a bit confusing! I would like to improve this.

Your default network is a local hardhat node, as configured in your config. When ape test first starts up, it spins up a local hardhat node.

You have NOT specified a different port for your mainnet-fork versus your local network. If you plan on running multiple nodes like this, you need to give them different ports, or else ape will think it is connecting to an already-running node.

And that is exactly what is happening: the mainnet-fork hardhat provider connects to the local hardhat node, thinking it is a mainnet fork when it is not; it never performed the fork at all.
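The mechanism can be illustrated outside of ape: a provider that probes its configured port before launching a node will happily attach to whatever process is already listening there. A minimal sketch of that probe using the standard socket module (this is an illustration of the general pattern, not ape's actual code):

```python
import socket

def port_in_use(host: str, port: int) -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(0.5)
        # connect_ex returns 0 on a successful connection
        return sock.connect_ex((host, port)) == 0

# Simulate the scenario: bind a stand-in "local node" to a port, then probe it.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # OS picks a free port
listener.listen(1)
port = listener.getsockname()[1]

# A second provider probing the same port would see it occupied and attach to
# the existing node instead of spawning its own forked instance.
print(port_in_use("127.0.0.1", port))  # True

listener.close()
```

Because the probe only checks that *something* answers on the port, it cannot tell a plain local node apart from a forked one, which is why the second provider silently reused the wrong node.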

To fix this in your project, you can configure hardhat to use a different port for the mainnet fork. An updated config that works looks like this:

hardhat:
  host: auto

^ now the port used by the local hardhat will differ from the port used by the forked-network hardhat.

I am going to think about if there is anything else we can do before closing this.

antazoey commented 1 year ago

I think the problem is in ape-hardhat and other similar providers. We need to ensure we are actually connecting to a fork. I will fix the issue there with a good error message showing how to use auto, and then can close this.

antazoey commented 1 year ago

I suppose another thing we need to do in ape-hardhat is allow setting a host (port) per network individually, instead of using auto, for the case where you need these to be consistent.
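A config for that feature might look something like the sketch below. This is purely hypothetical: these per-network host keys did not exist in ape-hardhat at the time of this thread, and the exact shape would depend on the eventual implementation.

```yaml
# Hypothetical sketch only -- per-network host settings (not yet supported):
hardhat:
  ethereum:
    local:
      host: http://127.0.0.1:8545
    mainnet_fork:
      host: http://127.0.0.1:8555
```

The point is the same as with host: auto, but with stable, predictable ports: the local node and the forked node can never collide on the same address.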