Currently, we skip possibly-duplicate blocks with simple block-number logic. That is enough for our use case, since in a real scenario the events should be emitted at most once per block, but it is not a complete solution. This PR introduces an LRU cache of log hashes (computed by ourselves) that tells us whether a log has already been relayed. It also changes our log filtering to start from the last block rather than the one after it, since we can now detect duplicates there too: if, e.g., two logs were emitted in `lastBlock` and we had only seen one of them before filtering, we no longer lose the second.
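The mechanism can be sketched roughly as below. This is a minimal stand-alone illustration, not the PR's actual code: the hash inputs (block hash, tx hash, log index) and all names here are assumptions, and the LRU is a plain stdlib map + `container/list` set rather than whatever cache library the repo uses.

```go
package main

import (
	"container/list"
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// logKey is our self-defined log hash. The fields fed into it below
// (block hash, tx hash, log index) are an assumed unique combination.
type logKey [32]byte

func hashLog(blockHash, txHash []byte, logIndex uint64) logKey {
	h := sha256.New()
	h.Write(blockHash)
	h.Write(txHash)
	var idx [8]byte
	binary.BigEndian.PutUint64(idx[:], logIndex)
	h.Write(idx[:])
	var k logKey
	copy(k[:], h.Sum(nil))
	return k
}

// lruSet is a fixed-capacity set with least-recently-used eviction.
type lruSet struct {
	cap   int
	order *list.List // front = most recently seen
	items map[logKey]*list.Element
}

func newLRUSet(capacity int) *lruSet {
	return &lruSet{cap: capacity, order: list.New(), items: make(map[logKey]*list.Element)}
}

// Seen records k and reports whether it was already in the cache.
func (s *lruSet) Seen(k logKey) bool {
	if el, ok := s.items[k]; ok {
		s.order.MoveToFront(el)
		return true
	}
	s.items[k] = s.order.PushFront(k)
	if s.order.Len() > s.cap {
		oldest := s.order.Back()
		s.order.Remove(oldest)
		delete(s.items, oldest.Value.(logKey))
	}
	return false
}

func main() {
	cache := newLRUSet(1024)
	k1 := hashLog([]byte("block1"), []byte("tx1"), 0)
	k2 := hashLog([]byte("block1"), []byte("tx1"), 1)

	// Re-filtering from lastBlock re-delivers k1; the cache catches it,
	// while the previously unseen second log (k2) still gets through.
	fmt.Println(cache.Seen(k1)) // first sighting: false
	fmt.Println(cache.Seen(k1)) // duplicate: true
	fmt.Println(cache.Seen(k2)) // different log index: false
}
```

Because membership is decided per log rather than per block, starting the filter window at `lastBlock` (instead of `lastBlock+1`) is safe: already-relayed logs from that block are dropped by the cache, and any log we had not yet seen is still relayed.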