Closed paymog closed 1 year ago
It seems this only happens for a few of the subgraphs deployed into my infrastructure: QmWyLVicJp1zPpMAhzBEFEJ5qAES678J3oBFomUbpCrMyq, QmacUjeDurvnnfNnubtKxtVQve5iEnHs5B9bDAZbwC1prP, and QmRGUzBxZzHP4L4K44WMHSWMdU1RTMLqGrWypGrtCuz9R3.
This was fixed in firehose version 1.3.2.
Do you want to request a feature or report a bug?
Bug
What is the current behavior?
The graph node emits the following log line:
If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem.
Not sure how to repro, but here's my setup:

- `ghcr.io/streamingfast/firehose-ethereum:4378d28` (which has some bugfixes on top of `ghcr.io/streamingfast/firehose-ethereum:develop-geth-v1.10.26-fh2.1`)
- `features = [ "filters" ]` turned on for my firehose provider in the graph node TOML

My index files were recently rebuilt, and this issue only happens when `features = [ "filters" ]`
is used by graph node. If I remove that feature from the firehose provider in my graph node TOML, this issue goes away.

What is the expected behavior?
I expect graph node to correctly set the maximum size of the gRPC messages it can accept.
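For context, gRPC clients default to a fairly small receive limit (4 MiB in most implementations), and a client must raise it explicitly to accept larger messages. As an illustrative sketch only (graph node itself is written in Rust, and the 64 MiB figure below is an assumption, not graph node's actual limit), here is how a Python gRPC client would set that cap via channel options:

```python
# Sketch: raising the gRPC message-size cap on the client side.
# NOTE: illustration only -- this is not graph-node's actual code,
# and 64 MiB is an assumed example value, not the real limit.

MAX_MESSAGE_BYTES = 64 * 1024 * 1024  # assumption: 64 MiB instead of the 4 MiB default


def grpc_channel_options(max_bytes: int = MAX_MESSAGE_BYTES) -> list:
    """Channel options accepted by grpc.insecure_channel / grpc.secure_channel."""
    return [
        ("grpc.max_receive_message_length", max_bytes),
        ("grpc.max_send_message_length", max_bytes),
    ]


# With grpcio installed, a client would pass these when dialing the
# firehose endpoint (the URL here is a placeholder):
#   channel = grpc.insecure_channel("firehose:9000", options=grpc_channel_options())
```

If the client omits these options, any response larger than the default limit is rejected with a `RESOURCE_EXHAUSTED` error, which matches the failure mode described above.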