ApeWorX / ape-alchemy

Alchemy network provider plugin for the Ape Framework
https://www.apeworx.io/

`Contract.Event.query` gets stuck and does not raise #43

Open · gosuto-inzasheru opened this issue 1 year ago

gosuto-inzasheru commented 1 year ago

Environment information

```
$ ape --version
0.5.5

$ ape plugins list
Installed Plugins:
  arbitrum     0.5.1
  alchemy      0.5.2
  etherscan    0.5.4
```

What went wrong?


running the following in console gets stuck for me (no more output, no errors) using alchemy as my default provider:

```python
Contract('0x4d224452801ACEd8B2F0aebE155379bb5D594381').Transfer.query('*', start_block=14204533)
```

with my ape-config.yaml set to:

```yaml
ethereum:
  default_network: mainnet
  mainnet:
    default_provider: alchemy
```

however, if i connect to alchemy directly (ie without using ape-alchemy):

```yaml
geth:
  ethereum:
    mainnet:
      uri: https://eth-mainnet.g.alchemy.com/v2/XXX
```

i get the following error back:

```
ProviderError: Log response size exceeded. You can make eth_getLogs requests with up to a 2K block range and no limit on the response size, or you can request any block range with a cap of 10K logs in the response. Based on your parameters and the response size limit, this block range should work: [0xdbb82d, 0xdbca64]
```

(note that if i pass stop_block=14404196, as proposed by the error, i do get results back for both the ape-alchemy and the geth approach)
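For reference, here is a minimal sketch of the chunking strategy that error message suggests, using plain web3.py against the same Alchemy endpoint. The endpoint URL and block numbers are taken from above; the helper function and its name are only illustrative and are not part of ape or ape-alchemy:

```python
from web3 import Web3

# Plain web3.py connection to the same Alchemy endpoint (key elided, as above).
w3 = Web3(Web3.HTTPProvider("https://eth-mainnet.g.alchemy.com/v2/XXX"))

APE_TOKEN = "0x4d224452801ACEd8B2F0aebE155379bb5D594381"
# keccak256("Transfer(address,address,uint256)"), the standard ERC-20 Transfer topic
TRANSFER_TOPIC = "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef"


def get_logs_chunked(start_block: int, stop_block: int, chunk_size: int = 2_000) -> list:
    """Fetch Transfer logs in windows of `chunk_size` blocks so each
    eth_getLogs call stays within Alchemy's 2K-block window."""
    logs = []
    for from_block in range(start_block, stop_block + 1, chunk_size):
        to_block = min(from_block + chunk_size - 1, stop_block)
        logs.extend(
            w3.eth.get_logs(
                {
                    "address": APE_TOKEN,
                    "topics": [TRANSFER_TOPIC],
                    "fromBlock": from_block,
                    "toBlock": to_block,
                }
            )
        )
    return logs


logs = get_logs_chunked(14204533, 14404196)
```

With the range split this way, any provider error surfaces as an exception on the offending chunk rather than an apparently hung query.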

How can it be fixed?

feels to me the error should be raised all the way up when using ape-alchemy, in order to prevent endless waiting without knowing what went wrong.

SHAKOTN commented 1 year ago

I second this. Experiencing the same issue on my laptop.

gosuto-inzasheru commented 1 year ago

more findings: Contract("0x4d224452801ACEd8B2F0aebE155379bb5D594381").Transfer.query("transaction_hash", start_block=15000000) runs fine when connecting through geth uri, but when running that with default provider set to alchemy, again it freezes with no output/errors.

is there an inherently different way of connecting through ape-alchemy than through the normal, non-plugin geth way? i guess especially for get_contract_logs, does it behave differently using this plugin?

edit: ok wow looks like it came through after all! just suuuuper slow

fubuloubu commented 1 year ago

@gosuto-inzasheru I believe these are symptoms of https://github.com/ApeWorX/ape/issues/1119, just with a different provider causing the issue. when you don't restrict queries by block number, it currently queries all possible blocks for events in each transaction, which is.... extremely slow, yes. will definitely want to refactor it to use filters and paging to speed it up
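Until such a refactor lands, one user-level workaround is to page the query manually. This is only a sketch: it assumes `ContractEvent.query` accepts `start_block`/`stop_block` as used earlier in this thread and returns a pandas DataFrame; the helper name is made up for illustration:

```python
import pandas as pd
from ape import Contract

ape_token = Contract("0x4d224452801ACEd8B2F0aebE155379bb5D594381")


def query_in_pages(event, start_block: int, stop_block: int, page: int = 2_000) -> pd.DataFrame:
    """Run the event query in fixed block windows so each underlying
    eth_getLogs request stays small, then stitch the results together."""
    frames = []
    for frm in range(start_block, stop_block + 1, page):
        to = min(frm + page - 1, stop_block)
        frames.append(event.query("*", start_block=frm, stop_block=to))
    return pd.concat(frames, ignore_index=True)


df = query_in_pages(ape_token.Transfer, 14204533, 14404196)
```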

@Ninjagod1251 let's confirm this is indeed the same behavior, and then we can close this in favor of the ape core issue because I don't think it's alchemy causing the problem here

gosuto-inzasheru commented 1 year ago

https://github.com/ApeWorX/ape/issues/1119 is definitely related, but i believe there is something inherently different about the way a .query gets made when ape-alchemy is installed:

```
$ ape plugins list
Installed Plugins:
  arbitrum     0.5.1
  etherscan    0.5.4
$ ape console -v INFO

In [1]: import timeit;timeit.timeit('from ape import Contract;Contract("0x4d224452801ACEd8B2F0aebE155379bb5D594381").Transfer.query("transaction_hash", start_block=15000000)', number=3)
INFO: Cache database has not been initialized
Out[1]: 375.95413874601945

$ ape plugins list
Installed Plugins:
  alchemy      0.5.3
  arbitrum     0.5.1
  etherscan    0.5.4
$ ape console -v INFO

In [1]: import timeit;timeit.timeit('from ape import Contract;Contract("0x4d224452801ACEd8B2F0aebE155379bb5D594381").Transfer.query("transaction_hash", start_block=15000000)', number=3)
INFO: Cache database has not been initialized
Out[1]: 3181.283465335029
```

fubuloubu commented 1 year ago

> ApeWorX/ape#1119 is definitely related, but i believe there is something inherently different about the way a .query gets made when ape-alchemy is installed: […]

This is very strange indeed! Are you using alchemy as your default network when executing example 2?

gosuto-inzasheru commented 1 year ago

indeed, in both examples my ape config looks like this:

```yaml
ethereum:
  default_network: mainnet
  mainnet:
    default_provider: alchemy
geth:
  ethereum:
    mainnet:
      uri: https://eth-mainnet.g.alchemy.com/v2/XXX
```

which in the first run gives `ERROR: Failed setting default provider: Provider 'alchemy' not found in network 'mainnet'.` right before connecting to the geth-configured url. in the second run it connects to alchemy successfully (using the WEB3_ETHEREUM_MAINNET_ALCHEMY_API_KEY=XXX from my .env).

fubuloubu commented 1 year ago

Gotcha, okay that makes more sense

fubuloubu commented 1 year ago

Ah, reading this now it seems like this provider either needs to override `Web3Provider.get_contract_logs` to use the 2K block chunks:

https://github.com/ApeWorX/ape/blob/79a552f61080aff891bad4bf965bc96fca832c6c/src/ape/api/providers.py#L1020

or change `block_page_size: int = 100` to `block_page_size: int = 2000`
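If the second option were taken, the change in the plugin could be as small as overriding that one field. The snippet below is only a sketch: the actual provider class name and base class in ape-alchemy may differ, and the real class also implements connection handling that is omitted here.

```python
from ape.api.providers import Web3Provider


class Alchemy(Web3Provider):  # illustrative name; the real plugin class may differ
    # Each eth_getLogs call may then span up to 2K blocks, the largest range
    # Alchemy serves without a cap on the response size.
    block_page_size: int = 2_000
```

Overriding `get_contract_logs` instead would additionally allow Alchemy-specific handling, e.g. surfacing the `Log response size exceeded` error instead of silently grinding through tiny pages.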