Closed by tispratik 12 years ago
Pratik,
Currently there is only one insert per request. I would speculate that most of the time is spent in the Mongo driver performing the insert. Assuming your network and the hardware running Mongo are fast enough, the only way to improve performance would be to batch the Mongo records, at some risk of them not being inserted if the application crashes.
Alex
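The batching idea above can be sketched roughly as follows. This is a minimal illustration, not central_logger's actual API: the `BatchLogBuffer` class and its method names are hypothetical, and an in-memory array stands in for the Mongo collection so the sketch runs without a server. In a real setup the flush callback would call the driver's bulk insert; the trade-off is exactly the one mentioned — records still sitting in the buffer are lost if the process crashes before a flush.

```ruby
# Hypothetical batching buffer: accumulates log records in memory and
# writes them out in one batch once the batch size is reached.
class BatchLogBuffer
  def initialize(batch_size: 50, &flush)
    @batch_size = batch_size
    @flush = flush          # in real use: ->(docs) { collection.insert_many(docs) }
    @buffer = []
  end

  # Queue a record; flush automatically once the batch is full.
  def push(record)
    @buffer << record
    flush! if @buffer.size >= @batch_size
  end

  # Write out whatever is buffered (call this at shutdown too,
  # or unflushed records are lost on a crash).
  def flush!
    return if @buffer.empty?
    @flush.call(@buffer)
    @buffer = []
  end
end

# Usage with a stand-in sink instead of a Mongo collection:
sink = []
buffer = BatchLogBuffer.new(batch_size: 2) { |docs| sink.concat(docs) }
buffer.push({ msg: "a" })   # buffered, not yet written
buffer.push({ msg: "b" })   # batch full -> flushed as one write
buffer.flush!               # no-op, buffer already empty
```

With a batch size of 50 this would turn 50 driver round-trips into one, at the cost of up to 49 unpersisted records at any moment.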
Another solution I thought about is using Resque + Redis for background processing and maintaining background workers to do the job. But that would require hacking around the central_logger gem, right?
I think delayed_job is a good option, and it does not need additional infrastructure like Redis and workers. Would it require a lot of code change in central_logger to hook in DJ?
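Hooking delayed_job in could look roughly like this. delayed_job will run any enqueued object that responds to `#perform`, so the change on central_logger's side would mainly be wrapping the insert in such a job. The class name `MongoLogInsertJob` is hypothetical, and an in-memory `SINK` array stands in for the Mongo collection so the sketch runs without a database:

```ruby
# Stand-in for a Mongo collection so the sketch is self-contained.
SINK = []

# Hypothetical job object: delayed_job serializes it into its jobs
# table and a worker process later calls #perform.
MongoLogInsertJob = Struct.new(:record) do
  def perform
    # In the real job this would be central_logger's Mongo insert,
    # e.g. mongo_collection.insert(record).
    SINK << record
  end
end

# In the Rails app you would enqueue instead of inserting inline:
#   Delayed::Job.enqueue(MongoLogInsertJob.new(log_record))
# Here the worker's call to #perform is simulated directly:
job = MongoLogInsertJob.new({ msg: "request finished", status: 200 })
job.perform
```

The request then only pays the cost of enqueueing, and a crashed or slow Mongo delays the job queue rather than the response.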
Tying into some kind of background or asynchronous process would make sense. From what I am reading here, it appears that your app will go down if Mongo crashes. Moving the inserts to the background would solve both problems, would it not?
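Even without extra infrastructure, an in-process asynchronous writer illustrates the point: requests push log records onto a queue and a background thread drains it, so a slow or hung Mongo insert no longer blocks the request cycle, and insert failures are contained. This is a sketch under stated assumptions — `AsyncLogWriter` and its methods are illustrative names, not central_logger's actual API, and an array stands in for the collection:

```ruby
require "thread"

class AsyncLogWriter
  def initialize(&write)          # write: e.g. ->(doc) { collection.insert(doc) }
    @queue = Queue.new
    @writer = Thread.new do
      while (doc = @queue.pop)    # nil sentinel ends the loop
        begin
          write.call(doc)
        rescue => e
          # A failed insert is reported and dropped rather than
          # raised into the request thread.
          warn "log insert failed: #{e.message}"
        end
      end
    end
  end

  def push(doc)
    @queue << doc                 # returns immediately; request not blocked
  end

  def shutdown
    @queue << nil                 # tell the writer thread to stop
    @writer.join                  # drain remaining records before exiting
  end
end

# Usage with an in-memory sink standing in for a Mongo collection:
sink = []
writer = AsyncLogWriter.new { |doc| sink << doc }
writer.push({ path: "/", ms: 12 })
writer.shutdown
```

The caveat is the same as with batching: records still queued in memory are lost if the process dies before they are written.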
I am NOT using this gem right now, but I am investigating it for a large enterprise system we run. We currently use a homegrown MongoDB logging system, are noticing the same per-request performance hit, and have had downtime due to MongoDB hanging/crashing, which we want to avoid in the future.
I did a JMeter performance test of my application's home page with and without central_logger. The average response time is 800 ms without central_logger and 1100 ms with it, a jump of about 38%.
Can we do something about it?
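For reference, the quoted overhead follows directly from the two JMeter averages above:

```ruby
without_cl = 800.0   # ms, avg response time without central_logger
with_cl    = 1100.0  # ms, avg response time with central_logger
overhead   = (with_cl - without_cl) / without_cl * 100
# overhead is 37.5, i.e. the "about 38%" quoted above
```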
With Central Logger: (JMeter results screenshot)
Without Central Logger: (JMeter results screenshot)