Closed: sipple closed this issue 12 years ago
Additional information. I've figured out how to recreate the problem.
Basically, if I let central_logger create the collection on its own (settings I'm using copied below) and a very large log entry gets sent, like an error with a bunch of backtrace, MongoDB logs an error in its own log and the collection gets dropped and recreated. The error MongoDB logs is:

Wed Jul 13 11:09:08 [conn876] insert: couldn't alloc space for object ns:convoy_modelo_log.production_log capped:1
If I create the collection myself with, for instance, these settings, it works and doesn't drop/recreate:

db.createCollection("production_log", {capped: true, size: 100000})
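In case it's useful, the same workaround can be scripted with the Ruby driver instead of the shell. This is just a sketch: the host and port come from my settings below, the database name from the error message above, and it assumes the 1.x-era mongo gem that matches my Ruby 1.8.7 setup.

```ruby
require 'rubygems'
require 'mongo'

# Connect with the 1.x-era driver API (Mongo::Connection)
db = Mongo::Connection.new("10.100.214.199", 27017).db("convoy_modelo_log")

# Pre-create the capped collection so central_logger finds it
# and never applies its own defaults
db.create_collection("production_log", :capped => true, :size => 100000)
```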
This might be a bug in MongoDB itself, with central_logger's default options just running afoul of it. Here's a MongoDB issue report about a similar problem: https://jira.mongodb.org/browse/SERVER-2639
And here are my database.yml settings:
mongo:
  database: convoy_master_dev_log
  host: 10.100.214.199
  port: 27017
  capsize: <%= 1000.megabytes %>
I haven't seen this error before. Make sure 1000.megabytes is defined (it comes from ActiveSupport) at the point the YAML is parsed; I would guess you'd get an exception when the embedded Ruby is evaluated if it weren't, though.
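A quick way to sanity-check that from a console (a sketch; assumes the activesupport gem is installed standalone):

```ruby
require 'rubygems'
require 'active_support'  # provides Numeric#megabytes in rails 2.3-era ActiveSupport

# capsize should expand to this many bytes when the ERB in database.yml runs
puts 1000.megabytes  # => 1048576000, i.e. roughly 1GB
```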
Also, can you check from the command line that the capped collection is actually 1GB? I suspect it might not be, and that the insert is silently failing because the capped collection is too small.
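From the mongo shell that's db.production_log.stats(). The same check through the Ruby driver would look roughly like this (a sketch, with connection details copied from the settings above):

```ruby
require 'rubygems'
require 'mongo'

db = Mongo::Connection.new("10.100.214.199", 27017).db("convoy_modelo_log")

# collstats reports whether the collection is capped and how much
# space is allocated to it
stats = db.command("collstats" => "production_log")
puts "capped:      #{stats['capped'].inspect}"
puts "storageSize: #{stats['storageSize']}"  # expect ~1048576000 if capsize took effect
```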
The second part is the issue, I believe. I've already re-created all the databases on my MongoDB server, but the behavior didn't seem tied to the overall size of the capped collection: as long as no single large entry went into the log it would keep chugging along, but as soon as a large error entry was written, the problem would occur.
The problem, I think, is the "max" setting for the collection, not the "size". The collection is big enough, but the largest entry it will accept is too small. I wonder if the central_logger gem, when there is no database/collection matching the settings, implicitly creates a collection with the correct size but with a tiny max, like 1 byte, so that any big entry that comes through causes a problem.
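(Looking at the MongoDB docs, max appears to cap the number of documents in a capped collection rather than the size of any one entry, so it may really be the collection's total size that's too small. Either way, creating the collection explicitly with both options should sidestep whatever defaults the gem picks.) A sketch with the Ruby driver, connection details as above and the max value picked arbitrarily:

```ruby
require 'rubygems'
require 'mongo'

db = Mongo::Connection.new("10.100.214.199", 27017).db("convoy_modelo_log")

# :size is the total bytes reserved for the capped collection;
# :max is an optional cap on the *number* of documents
db.create_collection("production_log",
                     :capped => true,
                     :size   => 1000 * 1024 * 1024,  # match the 1GB capsize
                     :max    => 100000)              # document-count cap (arbitrary value)
```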
I noticed this any time I didn't create the collection myself in MongoDB and instead let central_logger do it for me. I'm on Ruby 1.8.7 and Rails 2.3.4, if that makes a difference.
This might have something to do with how I'm initializing things, but I'm finding that in some cases (when the site actually throws an error seems to be one of them) the MongoDB collection is dropped and re-created. At least, I'm losing all previous log entries, so that's my assumption.
I'm currently initializing CentralLogger within the config/environments/*.rb files, not the top-level environment.rb initializer, because some log-related settings are set within the environment-specific initializers. This might be related to the problem, though offhand I can't figure out how.
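Roughly what each of my environment files looks like (a sketch, not my literal config; the receiver I'm calling initialize_deprecated_logger on is from memory and may be off, and the log settings shown are just placeholders):

```ruby
# config/environments/production.rb (sketch)
config.log_level = :info  # environment-specific log settings live here

require 'central_logger'
# initialize_deprecated_logger is the rails2 entry point; the
# CentralLogger::Initializer receiver here is my assumption
CentralLogger::Initializer.initialize_deprecated_logger(config)
```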
I'm assuming some condition is causing reset_collection to be invoked. Any thoughts on what? Is this a problem with how/when I'm calling initialize_deprecated_logger, or is it a known issue? I imagine that if it's specific to the rails2 version you're in no rush to fix it, but if I knew what caused this to happen I could work around it on my end.
Thanks!