flippercloud / flipper

🐬 Beautiful, performant feature flags for Ruby.
https://www.flippercloud.io/docs
MIT License
3.7k stars · 413 forks

Flipper causing measurable slowdowns #774

Closed · jnarowski closed this 11 months ago

jnarowski commented 11 months ago

I am seeing Flipper show up in a lot of our slowest transactions in NewRelic. We're using 0.22.1

  1. Wondering what the best ways to optimize Flipper are. We're currently using the ActiveRecord adapter.
  2. Do you recommend we switch to Redis?

Flipper taking 400ms, or really anything above 10-50ms, seems excessive.

[Screenshot: NewRelic transaction trace, 2023-11-21 1:49 PM]
jnunemaker commented 11 months ago

This sounds like preloading which is on by default.

Are you using any configuration of any kind for flipper (say in an initializer) or all defaults?

Have you read the Optimization page (https://www.flippercloud.io/docs/optimization)?
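For context, the preloading mentioned above is done by Flipper's memoizer middleware. A minimal sketch of tuning it, assuming the `preload:` option described on the optimization page (it accepts `true` for all features or an array of feature names; the feature names here are illustrative, and the exact option names may vary by Flipper version):

```ruby
# config/application.rb (sketch) -- memoize gate reads per request and
# preload only the features checked on most requests, instead of all of them.
config.middleware.use Flipper::Middleware::Memoizer,
  preload: [:calendar_v2, :new_dashboard] # illustrative feature names
```

With `preload: true`, every request loads every feature up front, which is where a large gates table can show up as request latency.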

jnarowski commented 11 months ago

Thanks. Checking it out now. We're mostly using all the defaults.

jnunemaker commented 11 months ago

@jnarowski k! Let me know if you have any questions. Happy to help.

Also if you are using puma with threads I can get you a config that does memory reads and only hits AR on an interval. It’s pointless with a single process but with threads it improves performance.

jnunemaker commented 11 months ago

Closing to keep things tidy but happy to keep responding here.

jnarowski commented 11 months ago

Do you have any best practices for enabling a flag for 100% of users instead of enumerating actors, cleaning out old feature flags, etc.? It seems like if I disable preloading, it creates a ton of new queries for every request, but keeping preloading on and losing 200-400ms isn't a good option either.

jnunemaker commented 11 months ago

Oh and lastly, how many features and how many gates do you have? Maybe count the features and gates in those tables and drop it here. That helps debug.

jnarowski commented 11 months ago

5,000 feature gates, 26 features.

We have a high concentration of gates on a single feature, "calendar_v2", with a count of 4,305.

We were trying to enable it for everyone except a small group of people, so we had to enable it for a huge cohort instead.
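A hypothetical sketch of the two gate shapes this describes, and why the second is cheaper: a per-actor allow list with 4,305 entries versus a global boolean plus a small deny list kept in app code. This is plain Ruby for illustration, not Flipper internals:

```ruby
require 'set'

# One gate row per enabled actor: thousands of values to store and load.
allow_list = Set.new((1..4305).map { |i| "User;#{i}" })

# The inverted shape: feature is on for everyone, with a tiny opt-out cohort.
deny_list = Set.new(%w[User;7 User;42])

def enabled_with_allow_list?(allow_list, actor)
  allow_list.include?(actor) # membership in a huge enabled-actor set
end

def enabled_with_deny_list?(deny_list, actor)
  !deny_list.include?(actor) # one boolean gate plus a small exception set
end
```

Both answer the same question, but the deny-list shape keeps the gates table small, which matters when gates are preloaded on every request.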

jnarowski commented 11 months ago

> @jnarowski k! Let me know if you have any questions. Happy to help.
>
> Also if you are using puma with threads I can get you a config that does memory reads and only hits AR on an interval. It's pointless with a single process but with threads it improves performance.

Sorry, I missed this message. We do use Puma. I think this could be a huge performance improvement.

Our stack is

Thanks for all the help!

jnarowski commented 11 months ago

Would love those Puma instructions whenever you get the chance! Hoping to deploy the optimization today.

jnunemaker commented 11 months ago

Bummer. I misremembered. It was a PR not some specific config. That's a lot of change that I'd have to go back through to feel comfortable merging.

I'd use the ActiveSupport memory cache config that's on the optimization page.

That'll cache all your gates in memory, but that's probably fine for 5k and a worthy perf trade-off.

require 'active_support/cache'
require 'flipper/adapters/active_support_cache_store'

Flipper.configure do |config|
  config.adapter do
    Flipper::Adapters::ActiveSupportCacheStore.new(
      Flipper::Adapters::ActiveRecord.new, # or whatever adapter you currently use
      ActiveSupport::Cache::MemoryStore.new, # or Rails.cache
      # longer means better perf, but longer for feature changes to show up
      expires_in: 10.seconds
    )
  end
end

That will cache in memory and help you now. Long term we are working toward storing everything in memory all the time and just loading from the local adapter on an interval.
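Conceptually, the adapter stack above is a read-through cache with a TTL. A minimal plain-Ruby sketch of the pattern (names are illustrative, not Flipper's implementation):

```ruby
# Read-through TTL cache sketch: serve from memory while the entry is
# fresh, fall through to the slow backend (e.g. ActiveRecord) on expiry.
class TtlCache
  def initialize(backend, ttl:)
    @backend = backend # anything callable, e.g. a lambda hitting the DB
    @ttl = ttl
    @store = {}
  end

  def get(key)
    entry = @store[key]
    return entry[:value] if entry && Time.now - entry[:at] < @ttl

    value = @backend.call(key) # cache miss or expired: hit the backend
    @store[key] = { value: value, at: Time.now }
    value
  end
end
```

The `expires_in` trade-off from the config above shows up directly here: a longer TTL means fewer backend hits but a longer wait for flag changes to be visible.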

jnunemaker commented 11 months ago

You can see a real version of a similar config in GitLab. They do memory -> redis -> active record. I don't think they really need the redis in there, and you for sure don't, but the in-memory layer should help a bunch.
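The memory -> redis -> active record chain is just layered lookup: check each store in order, fastest first, and backfill the faster layers on a hit. A plain-Ruby sketch of that idea (illustrative names, not GitLab's or Flipper's code):

```ruby
# Layered read-through lookup: any objects responding to [] and []= work
# as layers (here plain Hashes stand in for memory, redis, and the DB).
class LayeredLookup
  def initialize(*layers) # fastest first, e.g. memory, redis, database
    @layers = layers
  end

  def get(key)
    missed = []
    @layers.each do |layer|
      if (value = layer[key])
        missed.each { |m| m[key] = value } # backfill the faster layers
        return value
      end
      missed << layer
    end
    nil
  end
end
```

With only one app process, the redis layer buys little over the in-memory one, which matches the comment above.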

jnarowski commented 11 months ago

Thanks! I'll check it out.