This is a fresh tracking issue that picks up on a memory leak detected in #955. The leak occurs within a few hours of execution time on Ubuntu Linux on the latest nightly Rust build, using the following repo: https://github.com/MOONMOONOSS/pit-tracker
@acdenisSK please feel free to contact me on Discord if you'd like to do some debugging: Dunkel#0001
Just as an FYI, this issue does not occur when using trace-level logs with the tracing_subscriber crate, likely because trace-level logging slows the bot down enough that the leak never manifests.
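For anyone trying to reproduce that observation, here is a minimal sketch of enabling trace-level output, assuming the standard tracing-subscriber builder API; the exact setup in the affected bots may differ:

```rust
use tracing::Level;
use tracing_subscriber::FmtSubscriber;

fn main() {
    // Emit everything up to and including TRACE; this is the configuration
    // under which the leak reportedly did not appear.
    let subscriber = FmtSubscriber::builder()
        .with_max_level(Level::TRACE)
        .finish();

    tracing::subscriber::set_global_default(subscriber)
        .expect("failed to set global tracing subscriber");
}
```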
See also #905 for more details on the memory leak, including heap data and profiling. It is speculated that the leak is caused by a panic in tokio originating from the sharder.
@Flat OOMs are occurring even when the bot is not sharded. My bot at this commit has the same memory leak issue: https://github.com/MOONMOONOSS/pit-tracker/tree/e391bce7c1ed3d954ed225bfb1d2e689d83bb143
All bots use the shard manager and a minimum of one shard, so that code path is common to every affected repo: https://docs.rs/serenity/0.9.0-rc.1/src/serenity/client/mod.rs.html#869
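To illustrate, a minimal sketch of a client that never calls any sharding API explicitly, assuming the serenity 0.9 builder API; even here, `start` goes through the shard manager with a single shard:

```rust
use serenity::async_trait;
use serenity::prelude::*;

struct Handler;

#[async_trait]
impl EventHandler for Handler {}

#[tokio::main]
async fn main() {
    // Reading the token from the environment is an assumption; any token
    // source works the same way.
    let token = std::env::var("DISCORD_TOKEN").expect("DISCORD_TOKEN must be set");

    let mut client = Client::builder(&token)
        .event_handler(Handler)
        .await
        .expect("failed to build client");

    // `start` runs a single shard, but it still passes through the shard
    // manager internally, which is the common code path referenced above.
    if let Err(why) = client.start().await {
        eprintln!("client error: {:?}", why);
    }
}
```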
What does each repo affected by this memory leak have in common? My bot uses ctx.data.write() and did not experience any leakage during a 25-minute profiling run on macOS, though I should note it never locked the mutex for write operations during that test. Perhaps a test repo should be made to investigate whether writing to the context data field is the culprit?
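For anyone building that test repo, a minimal sketch of the write pattern in question, assuming serenity 0.9's `ctx.data` field (an `Arc<RwLock<TypeMap>>`); `CounterKey` and `bump` are hypothetical names used only for illustration:

```rust
use std::collections::HashMap;

use serenity::prelude::*;

// Hypothetical key type; any TypeMapKey stored in the context data
// field is written through the same RwLock.
struct CounterKey;

impl TypeMapKey for CounterKey {
    type Value = HashMap<u64, u64>;
}

// Called from an event handler: takes the write lock on the shared
// context data, then mutates the map behind the key.
async fn bump(ctx: &Context, user_id: u64) {
    let mut data = ctx.data.write().await;
    let counters = data
        .get_mut::<CounterKey>()
        .expect("CounterKey must be inserted at startup");
    *counters.entry(user_id).or_insert(0) += 1;
}
```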
Resolved by #975.