Closed robanstha closed 9 months ago
Apparently, the high memory consumption is caused by the design document views; it just seems to grow over time. Even if the database is deleted, the views are probably still cached, which keeps taking up memory.
Found a workaround for the issue:
This freed up all the memory that was consumed by views in the case above: in docker stats, you can see that the memory usage dropped from 1.xGB to 1xxMB. If the views had not been deleted and the database had not been compacted, it would have stayed at 1.xGB even after the database was deleted.
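For reference, the cleanup amounts to something like the following. This is a sketch, assuming a node at `localhost:5984`, admin credentials, a database named `mydb`, and a design document named `mydesign` (all placeholders; the `rev` value must be the design doc's current revision):

```shell
# Delete the design document so its view indexes become orphaned.
curl -X DELETE "http://admin:password@localhost:5984/mydb/_design/mydesign?rev=<current-rev>"

# Remove view index files that no longer match any design document.
curl -X POST "http://admin:password@localhost:5984/mydb/_view_cleanup" \
     -H "Content-Type: application/json"

# Compact the database file itself.
curl -X POST "http://admin:password@localhost:5984/mydb/_compact" \
     -H "Content-Type: application/json"
```

Both `_view_cleanup` and `_compact` return `{"ok":true}` immediately and run in the background.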
Using CouchDB 3.2.2 in Kubernetes pods with a three-node CouchDB cluster, posting documents to the database using the _bulk_docs endpoint. Memory usage in the Docker container gradually spikes up by GBs while the total database size is about 10MB. I'm posting ~2700 documents, totalling ~500KB, twice every day until CouchDB crashes because of high memory usage.
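The daily load is issued roughly like this (hypothetical host, credentials, database name, and document shape; the real process generates ~2700 docs per batch):

```shell
# _bulk_docs takes a single JSON body of the form {"docs": [...]}
# and writes all documents in one request.
curl -X POST "http://admin:password@localhost:5984/mydb/_bulk_docs" \
     -H "Content-Type: application/json" \
     -d '{"docs": [{"_id": "doc-0001", "value": 1},
                   {"_id": "doc-0002", "value": 2}]}'
```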
Also, after the pods are back up, the memory consumption goes down.
OTP 25 is reported to have a memory leak issue, but CouchDB 3.3.3 uses OTP 24:
Example listing of all databases and their sizes, created using curl:
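A listing like the one above can be produced with something like the following (placeholder host, credentials, and database name):

```shell
# List all databases on the node.
curl -s "http://admin:password@localhost:5984/_all_dbs"

# Per-database info; sizes.file is the on-disk size in bytes.
curl -s "http://admin:password@localhost:5984/mydb"
```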
Gradual spike in memory consumption after running _bulk_docs with ~500KB (2700 docs) once every day:
Expected Behavior
Memory should not grow gradually into the GBs while the total size of all databases is about 10MB.
Current Behavior
Memory spikes up by about 1GB for ~500KB of documents posted via _bulk_docs.
Steps to Reproduce (for bugs)
Context
We have a process that creates 2700 docs (~500KB total) in a new database every day. Memory consumption increases every day, and the container stops and restarts once it reaches the peak.
Your Environment
Local Docker container and Kubernetes pods (reproducible on both with a 3-node cluster, but NOT on a single node).