So it would seem this low content length constraint is not on the mirror node itself, but likely on its CDN.
If this is the case, then I still think the ops team should increase it to at least 1 MB, since the hashio relay has the INPUT_SIZE_LIMIT configured to 1 MB.
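For what it's worth, a rough way to check whether the constraint is on the headers or on the body is to pad each independently and compare the status codes. The endpoint and contract address below are just the ones from this thread, and the 2 KB padding size is arbitrary:
PAD=$(printf '0%.0s' {1..2048})
# oversized request body, tiny headers: a 413/5xx here points at a body/content-length limit
curl -s -o /dev/null -w '%{http_code}\n' \
  'https://testnet.mirrornode.hedera.com/api/v1/contracts/call' \
  -H 'Content-Type: application/json' \
  -d "{\"data\": \"0x$PAD\", \"to\": \"0x688220f76a11702273d8c4a9be798355f1cbf01d\"}"
# oversized custom header (X-Padding is an arbitrary name), tiny body: a 431 here points at a header limit
curl -s -o /dev/null -w '%{http_code}\n' \
  'https://testnet.mirrornode.hedera.com/api/v1/contracts/call' \
  -H 'Content-Type: application/json' \
  -H "X-Padding: $PAD" \
  -d '{"data": "0x", "to": "0x688220f76a11702273d8c4a9be798355f1cbf01d"}'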
Thanks @mshakeg. There's no retry with consensus node calls, as we only wanted to do this for NotSupported Mirror Node scenarios. This is a bug; we'll look into it and circle back.
We'll rectify this before pushing to mainnet as a result.
I am unable to reproduce. With the curl example I actually get a 503 due to an NPE and not a 431, which we will address. The mirror node has a 1 KB limit. In the curl example, the headers (-H 'accept: application/json' -H 'Content-Type: application/json') are tiny and would not exceed the 1 KB header limit. The other example is also tiny, at only about 400 bytes. Why would you need to send large headers when all of the content should be in the request body?
There seems to be some mixing of request header size and request body size. INPUT_SIZE_LIMIT should be for the request body, and 1 MB is reasonable there. But 431 is for request header size exceeded, and none of the examples have large headers. Users should not be sending more than 1 KB in their headers.
@steven-sheehy yeah, I was also a bit confused as to why a 431 was returned. After some investigation it seems a 431 is only returned when calling the endpoint via the Swagger UI; running the curl directly returns a status 500 with the following response body (replace -X 'POST' with -i to see the status code):
{
"_status": {
"messages": [
{
"message": "Internal Server Error",
"detail": "",
"data": ""
}
]
}
}
So there are 2 separate issues with the mirror node.
Hey @mshakeg, so on one side we released a fix in 0.22.1 that should have addressed the failing calls.
Can you confirm the remaining part: is it relay related from your view, or are you saying the mirror node Swagger UI has limits that differ from what the actual API has?
Hey @Nana-EC, so the issue with calls to the public mirror node and via Swagger still persists; however, the hashio relay returns a 200. If the hashio relay points to the public Hedera mirror node, then presumably it now works because it falls back to a consensus node? That is non-ideal, as the mirror node should handle all eth_calls; the consensus node has a gas limit of 15m whereas the mirror node has a gas limit of 120m, so any eth_call which requires >15m gas will likely always fail until the mirror node issue is resolved.
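For reference, the relay-side call I'm describing is just a standard JSON-RPC eth_call against hashio (I'm assuming the usual testnet.hashio.io endpoint; the data field is abbreviated here, the full calldata is in the mirror node request below):
# hashio testnet endpoint assumed; data abbreviated; gas 0x6acfc0 = 7,000,000, same as the mirror node request below
curl -s 'https://testnet.hashio.io/api' \
  -H 'Content-Type: application/json' \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_call",
    "params": [
      { "to": "0x688220f76a11702273d8c4a9be798355f1cbf01d", "data": "0x1749e1e3...", "gas": "0x6acfc0" },
      "latest"
    ]
  }'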
mirror node request:
curl -i \
'https://testnet.mirrornode.hedera.com/api/v1/contracts/call' \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-d '{
"block": "latest",
"data": "0x1749e1e300000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000000000000001c0000000000000000000000000000000000000000000000000000000000000030000000000000000000000000000000000000000000000000000000000000004600000000000000000000000007155817ede50ca3e6a2ab39be73c97bc892f421300000000000000000000000000000000000000000000000000000000000ac1e8000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000a4cdca175300000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000005f5e100000000000000000000000000000000000000000000000000000000000000002b00000000000000000000000000000000003ca60b00012c0000000000000000000000000000000000000d24000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000007155817ede50ca3e6a2ab39be73c97bc892f421300000000000000000000000000000000000000000000000000000000000ac1e8000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000a4cdca175300000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000005f5e100000000000000000000000000000000000000000000000000000000000000002b00000000000000000000000000000000003ca60b0005dc0000000000000000000000000000000000000d24000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000007155817ede50ca3e6a2ab39be73c97bc892f421300000000000000000000000000000000000000000000000000000000000ac1e8000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000c4cdca175300000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000005f5e100000000000000000000000000000000000000000000000000000000000000005900000000000000000000000000000000003ca60b00012c0000000000000000000000000000000000000e120005dc00000000000000000000000000000000003ca60b00012c0000000000000000000000000000000000000d2400000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000007155817ede50ca3e6a2ab39be73c97bc892f421300000000000000000000000000000000000000000000000000000000000ac1e8000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000c4cdca175300000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000005f5e100000000000000000000000000000000000000000000000000000000000000005900000000000000000000000000000000003ca60b00012c0000000000000000000000000000000000000e120005dc00000000000000000000000000000000003ca60b0005dc0000000000000000000000000000000000000d240000000000000000000000000000000000000000000000000000000000000000000000",
"estimate": false,
"gas": 7000000,
"to": "0x688220f76a11702273d8c4a9be798355f1cbf01d"
}'
mirror node response with status 500:
{
"_status": {
"messages": [
{
"message": "Internal Server Error",
"detail": "",
"data": ""
}
]
}
}
Making the same call via the Swagger UI still returns a status 431.
Hey @mshakeg, we tried to reproduce the issue via local-node running the consensus and mirror node tags that are currently on testnet (0.37 & 0.78.1) and by inserting the same runtime bytecode that this contract has on testnet. However, we couldn't reproduce it, and the call even returned a result successfully.
Could you share more details about the method that is invoked? What is the signature and what operations it performs? Are there accounts that are involved or a call to a second contract?
The problem might be caused by the differences between the local environment and testnet, but we are still investigating.
Hey @IvanKavaldzhiev, are you saying the above mirror node curl request succeeds, but only on the local mirror node? Hmm, if that's the case then it would seem the issue is not with the mirror node itself but rather with the public mirror node CDN? Perhaps a length constraint on the request body, as I initially proposed.
Could you share more details about the method that is invoked? What is the signature and what operations it performs? Are there accounts that are involved or a call to a second contract?
Sure, the call is a view call and does NOT depend on a sender (tx.origin or msg.sender). The contract that is called calls a contract which in turn calls another contract, but all calls are necessarily view calls, given the outermost call is a view call.
@mshakeg,
The above mirror-node curl request succeeds on local-node, indeed. However, as Steven pointed out, an NPE is observed in testnet logs, so we believe on testnet the above call causes such an error, and we are trying to find the root cause. If it were a header length constraint, it would be a different type of error.
Having these chained requests and multiple contracts involved adds more runtime bytecodes in play, and some of the nested contracts' runtime bytecode might be null. It's still strange, however, that locally it passes. I believe these contracts are nested child contracts and their code is self-contained in the parent's bytecode. Otherwise we wouldn't be able to have a successful response locally, since we haven't inserted any other runtime bytecodes. Am I correct?
@IvanKavaldzhiev
It's still strange, however, that locally it passes.
It is very strange, as I'd expect the call to revert: if I understand you correctly, those other contracts don't exist in your local runtime, and even if they do, they may not have the latest state.
I believe these contracts are nested child contracts and their code is self-contained in the parent's bytecode.
So in a single call (specifically in the example curl provided):
call depth 0: the outermost contract is a standalone contract(i.e. not a child contract created by a factory contract)
call depth 1: this contract is also a standalone contract
call depth 2: this contract is a child contract created by a factory
Hope this gives sufficient clarity.
@mshakeg Can you try invoking this same call via HAPI contract call so we can see the trace information on hashscan?
Hi @steven-sheehy, has the trace helped? It's quite long given it's a multicall.
In case the following is helpful in debugging: this is also an issue on mainnet, where all contracts called at any call depth were deployed after mirror node release 0.78.
curl -i \
'https://mainnet-public.mirrornode.hedera.com/api/v1/contracts/call' \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-d '{
"block": "latest",
"data": "0x1749e1e30000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000a0000000000000000000000000000000000000000000000000000000000000140000000000000000000000000000000000000000000000000000000000000028000000000000000000000000000000000000000000000000000000000000003c000000000000000000000000000000000000000000000000000000000000005000000000000000000000000000000000000000000000000000000000000000640000000000000000000000000000000000000000000000000000000000000078000000000000000000000000000000000000000000000000000000000000008c00000000000000000000000000000000000000000000000000000000000000a000000000000000000000000000000000000000000000000000000000000000b400000000000000000000000000000000000000000000000000000000000000c80000000000000000000000000b6cc4baafed413873d41e3013c7223defe8b60f500000000000000000000000000000000000000000000000000000000000ac1e8000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000a4cdca175300000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000001312d00000000000000000000000000000000000000000000000000000000000000002b00000000000000000000000000000000002158720005dc000000000000000000000000000000000006f89a00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000b6cc4baafed413873d41e3013c7223defe8b60f500000000000000000000000000000000000000000000000000000000000ac1e8000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000a4cdca175300000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000002625a00000000000000000000000000000000000000000000000000000000000000002b00000000000000000000000000000000002158720005dc000000000000000000000000000000000006f89a00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000b6cc4baafed413873d41e3013c7223defe8b60f500000000000000000000000000000000000000000000000000000000000ac1e8000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000a4cdca175300000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000003938700000000000000000000000000000000000000000000000000000000000000002b00000000000000000000000000000000002158720005dc000000000000000000000000000000000006f89a00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000b6cc4baafed413873d41e3013c7223defe8b60f500000000000000000000000000000000000000000000000000000000000ac1e8000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000a4cdca175300000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000004c4b400000000000000000000000000000000000000000000000000000000000000002b00000000000000000000000000000000002158720005dc000000000000000000000000000000000006f89a00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000b6cc4baafed413873d41e3013c7223defe8b60f500000000000000000000000000000000000000000000000000000000000ac1e800000000000000000000000000000000000000000000000000000000000000600000000000000000
0000000000000000000000000000000000000000000000a4cdca175300000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000005f5e100000000000000000000000000000000000000000000000000000000000000002b00000000000000000000000000000000002158720005dc000000000000000000000000000000000006f89a00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000b6cc4baafed413873d41e3013c7223defe8b60f500000000000000000000000000000000000000000000000000000000000ac1e8000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000a4cdca175300000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000001312d00000000000000000000000000000000000000000000000000000000000000002b000000000000000000000000000000000021587200012c000000000000000000000000000000000006f89a00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000b6cc4baafed413873d41e3013c7223defe8b60f500000000000000000000000000000000000000000000000000000000000ac1e8000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000a4cdca175300000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000002625a00000000000000000000000000000000000000000000000000000000000000002b000000000000000000000000000000000021587200012c000000000000000000000000000000000006f89a00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000b6cc4baafed413873d41e3013c7223defe8b60f500000000000000000000000000000000000000000000000000000000000ac1e8000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000a4cdca175300000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000003938700000000000000000000000000000000000000000000000000000000000000002b000000000000000000000000000000000021587200012c000000000000000000000000000000000006f89a00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000b6cc4baafed413873d41e3013c7223defe8b60f500000000000000000000000000000000000000000000000000000000000ac1e8000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000a4cdca175300000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000004c4b400000000000000000000000000000000000000000000000000000000000000002b000000000000000000000000000000000021587200012c000000000000000000000000000000000006f89a00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000b6cc4baafed413873d41e3013c7223defe8b60f500000000000000000000000000000000000000000000000000000000000ac1e8000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000a4cdca175300000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000005f5e100000000000000000000000000000000000000000000000000000000000000002b000000000000000000000000000000000021587200012c000000000000000000000000000000000006f89a000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000",
"estimate": false,
"gas": 100000000,
"to": "0xff63b3847Ff0Bc39ed9b7AA5552EBeB32E3E40cA"
}'
Hey @Nana-EC, my understanding of the changes in release v0.22.1 is that it expands the cases for the fallback to a consensus node, so applications that heavily rely on large gas-intensive eth_calls are still not feasible for the following reasons:
ContractCallQuery fees
ContractCallQuery requests allow for at most 15m gas, whereas the mirror node contracts/call endpoint allows for at most 120m.
Could you provide a rough time estimate for resolution?
@mshakeg Yes, it was very helpful. Thank you. Ivan has been able to reproduce locally and will work on a fix.
FYI, the mirror node is moving from a 120m to a 15m gas limit per call to more closely match consensus nodes and to improve performance/security.
@steven-sheehy awesome, great to hear, hopefully, the fix doesn't take too long to ship.
Regarding the mirror node gas limit per call being decreased to 15m to match that of consensus nodes, will it at least be configurable such that 3rd party providers (such as Arkhia) can increase it beyond that default?
@mshakeg Yes, the gas limit field would be configurable.
As Steven mentioned, I was able to reproduce the NPE by inserting the metadata and bytecode info for all contracts that I saw in the hashscan stacktrace into my local DB. It seems the issue is due to missing functionality in the 0.36.0-alpha.3 hedera-evm library (which is used inside the 78.1 tag); that functionality is enhanced in 0.37.0, where we also start supporting speculative writes (without precompiles).
I tried running the same call with the 79.0-rc1 tag, which has the 0.37.0 evm library version, and now it passes. However, the return result is not the same as in hashscan. Here is what I get:
{
"result": "0x00000000000000000000000000000000000000000000000000000000000004050000000000000000000000000000000000000000000000000000000000000040000000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000000000800000000000000000000000000000000000000000000000000000000000000140000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000002c000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000007c8a000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000244e487b71000000000000000000000000000000000000000000000000000000000000001200000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000072c4000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000244e487b7100000000000000000000000000000000000000000000000000000000000000120000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000007329000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000244e487b7100000000000000000000000000000000000000000000000000000000000000120000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000007325000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000244e487b71000000000000000000000000000000000000000000000000000000000000001200000000000000000000000000000000000000000000000000000000"
}
Is there logic inside the contract that has changing behaviour and could return different results based on a condition? Another very important question: do you try to make any kind of state modifications that a subsequent call relies on?
@IvanKavaldzhiev thanks for the update. Your progress sounds promising.
So I just re-executed the same transaction here, and the result is slightly different from the result of the previous transaction from 2 days ago; specifically, only the first part of the result is different, as that contains the block number, which has obviously changed. However, the rest of both results is identical, and the lengths for both are 4034 (as I'd expect), whereas the length of your result is completely different at 1986.
Is there logic inside the contract that has changing behaviour and could return different results based on a condition?
Only based on the state/storage of contracts at call depth 2; however, since no changes to those contracts have occurred within the past 2 days, the results from the 2 transactions (excluding the first part containing the block.number) are identical.
Another very important question: do you try to make any kind of state modifications that a subsequent call relies on?
As previously mentioned, no state changes were made that would affect the results (other than the first part containing the block.number).
Hope that sufficiently answers your questions.
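In case it helps with the comparison, this is roughly how I'm diffing the two outputs, assuming the two JSON responses are saved locally as old.json and new.json (hypothetical file names) and that, as described above, only the leading 32-byte word holding block.number differs:
# compare overall result lengths
jq -r '.result' old.json | wc -c
jq -r '.result' new.json | wc -c
# strip the '0x' prefix plus the first 32-byte word (64 hex chars) before diffing the remainder
diff <(jq -r '.result' old.json | cut -c67-) \
     <(jq -r '.result' new.json | cut -c67-)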
Hi @IvanKavaldzhiev, I just wanted to ask if my last reply provided clarity. Hope you've progressed further, as this is a pretty major issue for my use case :(
Hey @mshakeg, yes, your answer provided further clarity and will be helpful for us! We had a national holiday and unfortunately weren't able to progress much. We're on our way to debugging the consensus node's code base to see what might cause the shorter and wrong output from the Mirror Archive Node. Once we have progress, we will get back to you.
Hi @IvanKavaldzhiev, I just wanted to follow up with you on the progress of this issue, as it's been almost a week with no public indication of progress, which doesn't seem promising :(
Hi @mshakeg we are actively investigating the issue.
There is a new release of the mirror-node (v0.78.1); could you try again and see if the issue persists?
Are you able to share the initcodes or the contracts themselves in order for us to debug your specific use-case locally?
Hi @Kalina-Todorova
Good news, I just ran the same curl from here and got back the same result as a contract transaction. So it seems to be fixed with mirror-node v0.79.1.
I would assume the relay would now rely on the /contracts/call endpoint and never fall back to a consensus node (at least for supported calls); however, that'll have to be tested and I'm not sure of an easy way to verify this.
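One rough way to check it, assuming the hashio testnet endpoint and the 15m/120m split discussed earlier, would be to send the same call through the relay with a gas value above the 15m ContractCallQuery ceiling; a successful result would suggest it was served by the mirror node rather than a consensus node fallback:
# gas 0x1c9c380 = 30,000,000: above the 15m consensus node limit, below the mirror node's 120m
# data abbreviated; use the full calldata from the testnet curl earlier in the thread
curl -s 'https://testnet.hashio.io/api' \
  -H 'Content-Type: application/json' \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_call",
    "params": [
      { "to": "0x688220f76a11702273d8c4a9be798355f1cbf01d", "data": "0x1749e1e3...", "gas": "0x1c9c380" },
      "latest"
    ]
  }'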
Correct @mshakeg, the relay defaults to the Mirror Node and only falls back on the consensus node for unsupported methods and bugs while still ironing things out. With the completion of HIP 584 the goal would be to no longer utilize the consensus node.
I'll plan to resolve this issue unless you believe there's something remaining per the original issue?
@Nana-EC I see, in that case, this is resolved.
Description
Now that eth_call requests are directed to the mirror node POST /api/v1/contracts/call endpoint, as opposed to a consensus node with a ContractCallQuery request, it appears that eth_calls with large request data fail. Take the following eth_call request as an example:
This is likely an issue with the mirror node, as running the same call directly against the mirror node returns a 431 status (i.e. Request header fields too large).
Steps to reproduce
See above for the eth_call and corresponding POST request.
Proposed Solution:
Increase the mirror node limit to at least the hashio relay's INPUT_SIZE_LIMIT default.
If the eth_call issue still persists, investigate further.
Also, it seems the ContractCallQuery fallback never occurs, as I've never been able to receive even 1 successful response (which I'd expect if the fallback was working). Investigate if this is in fact the case.
Additional context
No response
Hedera network
testnet
Version
relay/0.22.0-rc2
Operating system
None