Closed: yoomee1313 closed this pull request 2 months ago
@ian0371 In my opinion, we can still consider using the hashicorp LRU implementation. It seems to be a matter of preference in terms of usability, since I don't see much difference in performance.
Check this comment: https://github.com/ethereum/go-ethereum/pull/26162#issuecomment-1477708923.
About blob_lru: I don't know why the file is named blob_lru, but this PR uses the SizeConstrainedCache struct from it.
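For context, a size-constrained cache evicts by total byte size rather than entry count. Below is a minimal, self-contained sketch of that policy in the spirit of SizeConstrainedCache; all names here are illustrative, not kaia's actual implementation.

```go
package main

import (
	"container/list"
	"fmt"
)

// sizeConstrainedCache is a hypothetical, minimal sketch of a cache that
// evicts by total byte size rather than entry count. This mirrors the
// policy of SizeConstrainedCache, but is not the real implementation.
type sizeConstrainedCache struct {
	maxSize int // byte budget
	size    int // current total bytes held
	order   *list.List               // front = most recently added
	items   map[string]*list.Element // key -> list element holding *entry
}

type entry struct {
	key string
	val []byte
}

func newSizeConstrainedCache(maxSize int) *sizeConstrainedCache {
	return &sizeConstrainedCache{
		maxSize: maxSize,
		order:   list.New(),
		items:   make(map[string]*list.Element),
	}
}

func (c *sizeConstrainedCache) Add(key string, val []byte) {
	if el, ok := c.items[key]; ok {
		c.size += len(val) - len(el.Value.(*entry).val)
		el.Value.(*entry).val = val
		c.order.MoveToFront(el)
	} else {
		c.items[key] = c.order.PushFront(&entry{key, val})
		c.size += len(val)
	}
	// Evict oldest entries until total bytes fit the budget
	// (always keep at least the newest entry).
	for c.size > c.maxSize && c.order.Len() > 1 {
		el := c.order.Back()
		e := el.Value.(*entry)
		c.order.Remove(el)
		delete(c.items, e.key)
		c.size -= len(e.val)
	}
}

func (c *sizeConstrainedCache) Get(key string) ([]byte, bool) {
	if el, ok := c.items[key]; ok {
		return el.Value.(*entry).val, true
	}
	return nil, false
}

func main() {
	c := newSizeConstrainedCache(10) // 10-byte budget
	c.Add("a", []byte("12345"))
	c.Add("b", []byte("12345"))
	c.Add("c", []byte("12345")) // total would be 15 bytes, so "a" is evicted
	_, okA := c.Get("a")
	_, okC := c.Get("c")
	fmt.Println(okA, okC) // false true
}
```

The point of contrast with an entry-count LRU: here, a few large values can evict many small ones, so memory usage stays bounded regardless of value sizes.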
Just to clarify, it seems like #73 is key to resolving the OOM issue. However, it may not be directly related to the current OOM context. Could you confirm?
> Cache (lru.go): where is this to be used?

This is the hashicorp LRU cache, and it is used for the codeSizeCache of the cachingDB database. It can be refactored to use basicLRU later.
```go
return &cachingDB{
	db:            statedb.NewDatabaseWithNewCache(db, cacheConfig),
	codeSizeCache: getCodeSizeCache(),
	codeCache:     lru.NewSizeConstrainedCache[common.Hash, []byte](codeCacheSize),
}
```
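The count-bounded policy behind codeSizeCache is worth seeing next to the size-bounded one. A minimal sketch of an entry-count LRU, the policy that hashicorp/golang-lru implements, is below; it suits codeSizeCache because the values (contract code sizes) are small fixed-size ints, so bounding the entry count also bounds memory. Names are illustrative, not kaia's.

```go
package main

import (
	"container/list"
	"fmt"
)

// countLRU is a hypothetical, minimal sketch of an entry-count-bounded
// LRU, the eviction policy hashicorp/golang-lru provides. It is not the
// hashicorp implementation itself.
type countLRU struct {
	cap   int
	order *list.List               // front = most recently used
	items map[string]*list.Element // key -> list element holding *kv
}

type kv struct {
	key string
	val int
}

func newCountLRU(cap int) *countLRU {
	return &countLRU{cap: cap, order: list.New(), items: make(map[string]*list.Element)}
}

func (c *countLRU) Add(key string, val int) {
	if el, ok := c.items[key]; ok {
		el.Value.(*kv).val = val
		c.order.MoveToFront(el)
		return
	}
	c.items[key] = c.order.PushFront(&kv{key, val})
	// Over capacity: drop the least recently used entry.
	if c.order.Len() > c.cap {
		el := c.order.Back()
		c.order.Remove(el)
		delete(c.items, el.Value.(*kv).key)
	}
}

func (c *countLRU) Get(key string) (int, bool) {
	if el, ok := c.items[key]; ok {
		c.order.MoveToFront(el)
		return el.Value.(*kv).val, true
	}
	return 0, false
}

func main() {
	c := newCountLRU(2)
	c.Add("codeA", 512)
	c.Add("codeB", 1024)
	c.Add("codeC", 2048) // capacity 2: "codeA" is evicted
	_, ok := c.Get("codeA")
	fmt.Println(ok) // false
}
```

Note the trade-off: this policy is simpler, but with variable-size values like contract bytecode it bounds only the number of entries, not the bytes held, which is why codeCache uses the size-constrained variant instead.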
> Just to clarify, it seems like https://github.com/kaiachain/kaia/pull/73 is key to resolving the OOM issue. However, it may not be directly related to the current OOM context. Could you confirm?
Please check this comment: https://github.com/ethereum/go-ethereum/pull/26092#issuecomment-1305755602. fastcache is the key source of the leak, and it seems to be heavily triggered by stateAtBlock. If your understanding differs from mine, please let me know.
FYI, when analyzing the heap profile of the leaking node, fastcache consumes up to 30% of memory usage.
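To make the growth pattern concrete, here is a toy, self-contained illustration (not kaia or fastcache code, and `simulateRegeneration` is a hypothetical name) of why repeated stateAtBlock-style regeneration inflates a cache: each regeneration inserts large values, so retained bytes grow with every insert even though each entry looks harmless on its own.

```go
package main

import "fmt"

// simulateRegeneration is a toy model of repeated state regeneration
// filling a byte-unbounded cache: it inserts `entries` values of
// `blobSize` bytes each and returns the total bytes retained.
// This illustrates the growth pattern, not fastcache's internals.
func simulateRegeneration(entries, blobSize int) (retainedBytes int) {
	cache := make(map[int][]byte) // no size bound at all
	for i := 0; i < entries; i++ {
		cache[i] = make([]byte, blobSize)
		retainedBytes += len(cache[i])
	}
	return retainedBytes
}

func main() {
	// 1000 regenerated entries of 32 KiB each retain ~31 MiB.
	retained := simulateRegeneration(1000, 32*1024)
	fmt.Printf("retained=%d MiB\n", retained/(1024*1024))
}
```

The heap profile mentioned above showed fastcache at up to 30% of memory; the sketch only shows why byte-heavy inserts, not entry counts, are the quantity to watch in that profile.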
> Cache (lru.go): where is this to be used?
The codeCache type is SizeConstrainedCache, defined in blob_lru.go. I was confused about where lru.Cache is used, and found it at https://github.com/kaiachain/kaia/pull/74/files#diff-37b6e7ed4f299b87b357b54376123393b34ee0699fc035f25bc0621627dcc6baR187. No function name or usage changed, so no further change was needed for this.
> Please check this comment: https://github.com/ethereum/go-ethereum/pull/26092#issuecomment-1305755602. fastcache is the key source of the leak, and it seems to be heavily triggered by stateAtBlock. If your understanding differs from mine, please let me know. FYI, when analyzing the heap profile of the leaking node, fastcache consumes up to 30% of memory usage.
NewDatabaseWithExistingCache was dominated by trace* functions. Now I get it. Thanks.
Proposed changes
Types of changes
Please put an x in the boxes related to your change.
Checklist
Put an x in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your code.
I have read the CLA Document and I hereby sign the CLA (first-time contributors)
`$ make test` passes
Related issues
Further comments
If this is a relatively large or complex change, kick off the discussion by explaining why you chose the solution you did and what alternatives you considered, etc...