```go
type txnsMap map[CacheKey]*util.JSONResponse

// CacheKey is the type for the key in a transactions cache.
// This is needed because the spec requires transaction IDs to have a per-access token scope.
type CacheKey struct {
	AccessToken string
	TxnID       string
	Endpoint    string
}
```
These fields need to be fixed-length byte arrays, converted to/from UTF-8 as needed, and the extent of the map should be limited to a specific size, so that (total bytes of the cache key structure * map length) translates directly to the amount of memory the cache will use.
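A rough sketch of what a fixed-width key could look like (the 64/32-byte field widths and the `newFixedCacheKey` helper are assumptions for illustration, not anything in the Dendrite source):

```go
// Minimal sketch: fixed-width key fields so every map entry has a known size.
package main

import (
	"fmt"
	"unsafe"
)

// fixedCacheKey replaces the string fields with fixed-length byte arrays so
// every key occupies the same, known amount of memory.
type fixedCacheKey struct {
	AccessToken [64]byte // assumed cap on token length
	TxnID       [32]byte // assumed cap on transaction ID length
	Endpoint    [32]byte // assumed cap on endpoint length
}

// newFixedCacheKey copies the UTF-8 strings into the fixed-width fields,
// truncating anything longer (an assumption for this sketch; a real
// implementation might hash overlong values instead).
func newFixedCacheKey(accessToken, txnID, endpoint string) fixedCacheKey {
	var k fixedCacheKey
	copy(k.AccessToken[:], accessToken)
	copy(k.TxnID[:], txnID)
	copy(k.Endpoint[:], endpoint)
	return k
}

func main() {
	k := newFixedCacheKey("syt_example_token", "m123456", "/sendToDevice")
	// With fixed-width keys, per-entry memory is simply sizeof(key) plus the
	// value pointer, so capping the map length caps the cache's total memory.
	fmt.Println("bytes per key:", unsafe.Sizeof(k))
}
```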
The timed cache cleanup code should just be discarded; most people running this server are on VPSes with maybe 2 GB of memory, if that. The semi-official memory requirement for Dendrite is 1 GB, but I'm not sure what that could possibly be based on, given that this map can grow without bound until the process runs out of memory (and it routinely does).
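Instead of the timed cleanup, a hard cap on the map keeps memory bounded. A sketch under assumed values (`maxEntries`, the field widths, and the evict-an-arbitrary-entry policy are illustrative choices, not Dendrite's actual behaviour):

```go
// Minimal sketch of a hard-capped transaction cache. With a cap, memory is
// bounded by maxEntries * (key size + value size) and no cleanup goroutine
// is needed.
package main

import (
	"fmt"
	"sync"
)

const maxEntries = 1024 // assumed cap

// cacheKey mirrors the fixed-width key idea from the previous sketch.
type cacheKey struct {
	AccessToken [64]byte
	TxnID       [32]byte
	Endpoint    [32]byte
}

type boundedCache struct {
	mu      sync.Mutex
	entries map[cacheKey][]byte // []byte stands in for *util.JSONResponse
}

func newBoundedCache() *boundedCache {
	return &boundedCache{entries: make(map[cacheKey][]byte, maxEntries)}
}

// Add stores a response, evicting one arbitrary entry when the cap is
// reached, so the map can never grow past maxEntries no matter how long
// the server runs.
func (c *boundedCache) Add(k cacheKey, v []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if len(c.entries) >= maxEntries {
		for old := range c.entries { // map iteration order is unspecified; drop one entry
			delete(c.entries, old)
			break
		}
	}
	c.entries[k] = v
}

func main() {
	c := newBoundedCache()
	for i := 0; i < 5000; i++ {
		var k cacheKey
		copy(k.TxnID[:], fmt.Sprintf("txn-%d", i))
		c.Add(k, []byte("cached response"))
	}
	fmt.Println("entries:", len(c.entries)) // never exceeds maxEntries
}
```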
This is just an example; the correct way to find the memory leak would be to run the profiler and see what is using 75% of memory at times (see the pprof sketch further down). It could also have something to do with presence (TBD, but I just turned presence off in both directions and it seems to be using less memory now) -- presence also uses some maps somewhere.
It might also have been search indexing (I turned that off too).
This is just off the top of my head, something that seems like it could be causing a memory leak (there is definitely a memory leak and it needs to get fixed).
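For actually finding where the memory goes, Go's built-in net/http/pprof package is the standard tool. A minimal sketch of exposing it (the listen address is an assumption, and Dendrite may already provide a way to turn pprof on, so check before wiring this into main()):

```go
// Minimal sketch: expose the Go heap profiler on localhost.
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
)

func main() {
	go func() {
		// localhost only: the pprof endpoints are unauthenticated.
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()
	select {} // stand-in for the rest of the server
}
```

With that running, `go tool pprof http://localhost:6060/debug/pprof/heap` followed by `top` in the interactive prompt shows which allocation sites hold the most live memory, which would settle whether it's the transactions map, presence, or search indexing.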
Relevant code:
https://github.com/paigeadelethompson/dendrite/blob/main/internal/transactions/transactions.go