Closed artvel closed 2 years ago
I'm using an iterator, so the complete dataset shouldn't be held in memory by badgerhold; however, badger itself may be caching data in memory, which is a common tactic among databases. Badger in general tends to use a lot more memory than BoltDB.
However, I believe I can make this count more efficient by using a key-only iterator. I'll need to do some research.
Thanks for reporting.
Yeah, a key-only iterator won't work after all, because the values are needed in order to filter by the query criteria.
I could disable pre-fetching values, but that would come at the cost of performance. The root of the issue is that badger has decided (like most databases) to pay for more performance with memory usage.
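For anyone curious, a rough sketch of what disabling value prefetching looks like at the badger level (this is not badgerhold's actual internal code, just an illustration of the trade-off; the path and the counting filter are placeholders):

```go
package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v3"
)

func main() {
	db, err := badger.Open(badger.DefaultOptions("/tmp/badger-example"))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var count int
	err = db.View(func(txn *badger.Txn) error {
		opts := badger.DefaultIteratorOptions
		// Don't prefetch values while scanning; each value is only
		// loaded on demand via item.Value when the filter needs it.
		// This lowers memory usage at the cost of extra reads.
		opts.PrefetchValues = false

		it := txn.NewIterator(opts)
		defer it.Close()
		for it.Rewind(); it.Valid(); it.Next() {
			err := it.Item().Value(func(val []byte) error {
				// Hypothetical filter: decode val, apply the query
				// criteria, and count only matching records.
				count++
				return nil
			})
			if err != nil {
				return err
			}
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("counted %d entries", count)
}
```

With `PrefetchValues` left at its default of `true`, badger reads batches of values ahead of the iterator, which is faster for full scans but holds more data in memory at once.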
If you want more control over your memory usage you can tweak the options passed into badger: https://dgraph.io/docs/badger/get-started/#memory-usage
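A minimal sketch of passing tuned badger options through to badgerhold — the specific knobs and numbers below are illustrative assumptions, not recommendations, and module versions may differ from your setup:

```go
package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v3"
	"github.com/timshannon/badgerhold/v4"
)

func main() {
	// Start from badgerhold's defaults, then swap in a badger
	// configuration aimed at lower memory usage.
	opts := badgerhold.DefaultOptions
	opts.Options = badger.DefaultOptions("/tmp/badgerhold-example").
		WithNumMemtables(1).          // fewer in-memory write buffers
		WithBlockCacheSize(32 << 20). // cap the block cache at 32 MiB
		WithIndexCacheSize(16 << 20)  // cap the index cache at 16 MiB

	store, err := badgerhold.Open(opts)
	if err != nil {
		log.Fatal(err)
	}
	defer store.Close()
}
```

See the linked badger documentation for the full list of memory-related options and their defaults.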
Thanks, that sounds reasonable. I'll look into that.
Hi guys,

When using `Count` in a DB with a huge amount of entries, the memory is being eaten up. And it is being freed right after `Count`.

Cheers, artvel