Closed ZenGround0 closed 3 months ago
I haven't been able to understand how the memory footprint could be so high. All of f05's state is ~10 GB, and AMTs should always cache intermediate nodes, so there shouldn't be extra reads. However, going from 10 GB -> 10 MB of memory needed, with little expected impact on speed, is already a good thing.
So, iteration doesn't cache (in Go, IIRC). Random lookups do, and I'm guessing our library isn't super optimized for that.
Oh, wow. Our deferred cbor type is terrible.
After experiencing OOM kills, @rvagg shared a pprof report showing ~80 GB of memory held by the market migration.
pprof suggests the culprit is AMT Get reading in market proposals, which are the bulk of the market actor's state.