Closed — sonalkr132 closed this issue 2 years ago
@sonalkr132 Thanks, I'll share this regression to the Ruby core team.
That would be great, thanks. Let me know if anyone needs help setting up rubygems.org locally or reproducing this.
I looked into this. I can confirm that https://github.com/ruby/ruby/commit/98ac62de5cb03efec0fb32684c61c0d4df692e5a fixes the issue. However, it's not in a 3.0 release yet. I'll ask around for a timeline of when 3.0.3 will be released.
As you might have heard, Ruby 3.0.3 is out! It contains the fix for the memory leak in Hash#transform_keys!. Unfortunately, a backport in 3.0.3 has another bug, which breaks bootsnap. The latest version of bootsnap (1.9.3) has a workaround. Please give 3.0.3 a try and let us know if you run into any issues with it!
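For reference, the leak pattern can be exercised with a loop like the one below. This is a minimal sketch, not the core team's actual reproduction: on affected 3.0.x releases (before the backported fix), resident memory grows steadily while this runs; with the fix it stays flat.

```ruby
# Repeatedly mutate a hash's keys in place with Hash#transform_keys!,
# the method whose leak was fixed in the backported commit.
h = { "a" => 1, "b" => 2 }

10_000.times do
  h.transform_keys!(&:to_sym)  # string keys -> symbol keys
  h.transform_keys!(&:to_s)    # symbol keys -> string keys
end

puts h.inspect
```

Watching the process in `top` (or `docker stats`) while a larger loop runs should make the growth visible on an affected Ruby.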
rubygems.org will use Ruby 3.0.3 as of #2876.
@peterzhu2118 Thanks for your investigation.
We had to revert the Ruby 3 update because memory usage kept increasing over time: https://github.com/rubygems/rubygems.org/commit/1c1584d4d2d22d6a5511069ea9042dd2ceb2ca5b
Steps to Reproduce
1. rake db:create db:migrate
2. sudo docker run -e NEW_RELIC_AGENT_ENABLED=false -e RAILS_ENV=production -e SECRET_KEY_BASE=1234 -e DATABASE_URL=postgresql://localhost --net host quay.io/rubygems/rubygems.org:ruby2 -- unicorn_rails -E production -c /app/config/unicorn.conf
3. sudo docker stats
4. vegeta attack -duration=120s -targets some.txt | tee results.bin | vegeta report
5. sudo docker run -e NEW_RELIC_AGENT_ENABLED=false -e RAILS_ENV=production -e SECRET_KEY_BASE=1234 -e DATABASE_URL=postgresql://localhost --net host quay.io/rubygems/rubygems.org:ruby3 -- unicorn_rails -E production -c /app/config/unicorn.conf
6. sudo docker stats
7. vegeta attack -duration=120s -targets some.txt | tee results.bin | vegeta report
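The contents of `some.txt` were not shared; a hypothetical targets file for vegeta would look like the following, with one `METHOD URL` pair per line (the routes here are assumptions chosen from rubygems.org's public endpoints):

```shell
# Create an example vegeta targets file for the load test above.
cat > some.txt <<'EOF'
GET http://localhost:3000/
GET http://localhost:3000/gems/rails
GET http://localhost:3000/api/v1/versions/rails.json
EOF
```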
Expected Behavior
Memory usage should plateau after a while, as seen with the ruby2 image.
Current Behavior
Memory usage keeps increasing on the ruby3 image. The brief flat segments correspond to periods when the load test was not running.
Possible Solution
¯\_(ツ)_/¯
Additional Context
I had tried taking a heap dump from prod as explained in this blog; however, heapy diff showed that the size of everything had increased. Perhaps objects are not getting GCed?
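For anyone retracing this, a heap dump suitable for `heapy diff` can be taken with Ruby's `objspace` library. A minimal sketch (the output path and dump cadence are assumptions, not the exact steps used in prod):

```ruby
require "objspace"

# Record allocation sites so heapy can attribute growth to file:line.
ObjectSpace.trace_object_allocations_start

# ... exercise the app here, e.g. while the vegeta load test runs ...

GC.start # collect garbage first so the dump mostly contains live objects

dump_path = "/tmp/heap-#{Process.pid}.json"
File.open(dump_path, "w") do |f|
  ObjectSpace.dump_all(output: f) # one JSON object per heap slot, per line
end
```

Two dumps taken a few minutes apart under load can then be compared with `heapy diff first.json second.json` to see which allocation sites are retaining memory.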