Closed BrunoBonacci closed 11 years ago
Hello Bruno,
Not since version 0.7.x. Do you end up with a java.lang.OutOfMemoryError? Can you reproduce it?
Tks.
Hi, before it runs out of memory, the GC starts to pause too much, causing timeouts on requests, so my system automatically removes the Grails instance from the load balancer and restarts it.
I took a heap dump after a fresh startup and warm-up, and another heap dump when the issue happens; the difference between the two is several thousand groovy.lang.ExpandoMetaClass entries with references to DBApiLayer and DBCursor.
Those entries are generated by Groovy's @Delegate AST transformation, but strangely enough there is more than one set of entries. Once the delegate is generated and attached to GMongo, I don't see why it should generate more delegates for the same class.
I had a look at the GMongo code and I didn't spot anything obviously wrong; your approach is very simple and clean. The only thing that is not very clear to me is the "patching" bit.
Bruno
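To illustrate the failure mode being described, here is a minimal, self-contained sketch (all class names hypothetical, not GMongo's actual code) of the kind of pattern that produces this heap signature: a strongly referenced global registry, playing the role of Groovy's metaclass registry, that gains a fresh per-instance metadata entry every time an object is "patched" instead of reusing one entry per class.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the suspected leak pattern. MetaRegistry stands in
// for a global metaclass registry; the List<String> stands in for a set of
// generated delegate methods (a MetaMethodIndex-like structure).
class MetaRegistry {
    // Strong keys and values: entries are never collected while the
    // registry itself is reachable.
    static final Map<Object, List<String>> PER_INSTANCE_META = new HashMap<>();

    static void patch(Object target) {
        // Simulates generating a brand-new set of delegate methods for each
        // instance, rather than caching one set per class.
        List<String> methods = new ArrayList<>();
        methods.add("insert");
        methods.add("find");
        PER_INSTANCE_META.put(target, methods);
    }
}

public class LeakSketch {
    public static void main(String[] args) {
        // Each request-scoped object adds a registry entry that is never
        // removed, so the registry grows without bound, much like the
        // ExpandoMetaClass/MetaMethodIndex growth seen in the heap dumps.
        for (int i = 0; i < 10_000; i++) {
            MetaRegistry.patch(new Object());
        }
        System.out.println(MetaRegistry.PER_INSTANCE_META.size());
    }
}
```

If per-class caching were in place, the map size would stay constant no matter how many objects are patched; here it tracks the number of patched instances instead.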
I see,
You have a lot of ExpandoMetaClass instances with references to DBApiLayer? Weird, you should not have many DBApiLayer instances in your VM. What is your environment? How many collections does your app use?
DBCursor patching can lead to the creation of ExpandoMetaClass instances. This is due to my patching approach, which is not really optimised (but the GC should do the job). Can you give me the output of the JVM heap?
I did a one-day stress test: I ran a heavy-load script for 24 hours and got no memory error. But of course this doesn't rule out a possible leak.
Tks Bruno.
Hi,
just to be clear: as I wrote in the first email, the leaked instances are not DBApiLayer but MetaMethodIndex instances (several hundreds of thousands) for the same classes "DBApiLayer$MyCollection" and "DBCursor".
In your test, did you take a heap dump towards the end and run the Eclipse Memory Analyzer (http://www.eclipse.org/mat/)?
The leaked size is very small, so it might take a long time before an OOM occurs. Just before shutting down the test, take a heap dump and run the analyzer.
My environment is a Groovy/Grails app deployed on Tomcat 7 and running on multiple EC2 Micro instances. Those instances have very little memory, hence the more frequent occurrence of the problem.
Bruno
Bruno,
Micro instances are really not optimal for the JVM; can you try a Small instance instead?
Hi,
The decision to use Micro instances for the websites is obviously cost-driven. However, that doesn't change the problem at all; a larger instance only means a longer time before the memory leak makes the JVM inoperable. Anyway, after starting the server on a Micro instance, warming up the cache, and leaving some space free for the OS, you still have around 100MB of free heap space for your app. 100MB (after cache warm-up) is a lot of space for just 100-150K requests/day per instance. Obviously, with 10GB the problem would probably be irrelevant (or less relevant). The fact that the leakage is minimal doesn't mean it isn't there.
I can confirm that the issue is somewhere in the way GMongo uses the driver. Due to this instability, last week I decided to use the Java MongoDB driver directly, bypassing GMongo across all DB instances. The migration was easy and straightforward, and as a result the system has been running all week with the same footprint.
To make sure the comparison was fair, I kept one instance of the webserver running with the GMongo wrapper and started it at more or less the same time as the one using the Java driver directly. Since they are both behind the same load balancer, they share very similar traffic and load. After two days, the GMongo instance had consumed all 100MB of free heap space, while the instance using the Java driver directly without GMongo was still showing 100MB free. That was the only difference between the two, so there is definitely something there.
Thanks anyway for having spent some time looking at the issue. Kind regards, Bruno
Ok Bruno,
I will try to reproduce this again later.
Tks for your time too.
@poiati Was this problem solved? I'm using GMongo 1.2 and suffering from a memory leak, too. Looking at the heap dump, it appears to be very similar to this issue. Any ideas what could cause such leaks when using GMongo?
These are the leak suspects:
Problem Suspect 1
3,926 instances of "groovy.lang.ExpandoMetaClass",
loaded by "org.codehaus.groovy.grails.cli.support.GrailsRootLoader @ 0x122e88b98"
occupy 95,746,168 (33.69%) bytes.
Keywords
org.codehaus.groovy.grails.cli.support.GrailsRootLoader @ 0x122e88b98
groovy.lang.ExpandoMetaClass
--
Problem Suspect 2
1,010 instances of "com.mongodb.DBApiLayer",
loaded by "org.codehaus.groovy.grails.cli.support.GrailsRootLoader @ 0x122e88b98"
occupy 56,522,416 (19.89%) bytes.
These instances are referenced from one instance of
"org.codehaus.groovy.util.AbstractConcurrentMapBase$Segment[]", loaded by
"org.codehaus.groovy.grails.cli.support.GrailsRootLoader @ 0x122e88b98"
Keywords
org.codehaus.groovy.grails.cli.support.GrailsRootLoader @ 0x122e88b98
org.codehaus.groovy.util.AbstractConcurrentMapBase$Segment[]
com.mongodb.DBApiLayer
@poiati I created a simplistic job in a brand new Grails application (grails create-app) which is executed every second (source code below). Its memory consumption is steadily increasing: while it used 150-250MB in the beginning, it now consumes 400-500MB after only 60 minutes. Something is really wrong here. Do you see any mistake in my use of GMongo in the code? If not, there seems to be a large memory leak in GMongo. Any ideas?
import com.gmongo.GMongo
import com.mongodb.MongoURI

class MemoryJob {
    def concurrent = false

    static triggers = {
        simple startDelay: 5000, repeatInterval: 1000
    }

    def execute() {
        def mongoUrl = "mongodb://localhost:27017"
        def mongo = new GMongo(new MongoURI(mongoUrl))
        def db = mongo.getDB("memory")
        println new Date()
        db.getCollection("test").insert(['date': new Date()])
        mongo.close()
    }
}
The only other additions to the new project were compile "com.gmongo:gmongo:1.2" (dependencies) and compile ":quartz:1.0.1" (plugins) in the BuildConfig.groovy file. I am using Groovy 2.2 and Grails 2.3.2.
This time, the leak suspects are these:
Problem Suspect 1
3,411 instances of "com.mongodb.DBApiLayer", loaded by "groovy.lang.GroovyClassLoader @ 0x119195a50" occupy 192,202,360 (46.02%) bytes. These instances are referenced from one instance of "org.codehaus.groovy.util.AbstractConcurrentMapBase$Segment[]", loaded by "sun.misc.Launcher$AppClassLoader @ 0x1192c58c0"
Keywords
sun.misc.Launcher$AppClassLoader @ 0x1192c58c0
groovy.lang.GroovyClassLoader @ 0x119195a50
org.codehaus.groovy.util.AbstractConcurrentMapBase$Segment[]
com.mongodb.DBApiLayer
--
Problem Suspect 2
7,297 instances of "groovy.lang.ExpandoMetaClass", loaded by "sun.misc.Launcher$AppClassLoader @ 0x1192c58c0" occupy 172,887,776 (41.40%) bytes. These instances are referenced from one instance of "java.lang.Thread", loaded by "<system class loader>"
Keywords
sun.misc.Launcher$AppClassLoader @ 0x1192c58c0
java.lang.Thread
groovy.lang.ExpandoMetaClass
--
Hint 1
The problem suspects 1 and 2 may be related, because the reference chains to them have a common beginning.
Hello troxler,
Other people have reported memory problems with GMongo before, but I couldn't reproduce the problem in a controlled environment.
I think your test may not indicate a problem, because of the laziness of the Java GC. If you let the application run for several hours, does it crash?
That said: yes, maybe there is a problem between Groovy's ExpandoMetaClass and the GMongo implementation, but there is no hint of what it could be. Your help is really appreciated.
Thanks.
Thanks for the update. Yes, my application crashed after running for several hours (java.lang.OutOfMemoryError: Java heap space). Also, I guess the leak is reproducible as described in my comment above. Does it leak when you try it out?
I think your test may not indicate a problem, because of the laziness of the Java GC.
Before generating a heap dump, I always forced garbage collection from within VisualVM, which did not decrease the consumed heap a lot (but more than it did by itself).
As far as I know, you can't force a Java GC; you can only request one.
Anyway, if it crashes, there probably is a problem. I will run your application here to reproduce it.
How long does it usually take to crash?
Tks.
As far as I know, you can't force a Java GC; you can only request one.
Ah yes. When running my real application, it sometimes also crashed because it was spending more time on garbage collection than on anything else as it approached the maximum heap space. So I don't think the laziness of the GC is the problem.
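For reference, the "request, not force" point can be shown in a few lines of plain Java: System.gc() is only a hint to the JVM, and may even be a no-op (e.g. under -XX:+DisableExplicitGC).

```java
public class GcRequest {
    public static void main(String[] args) {
        // Allocate something sizeable, then drop the only reference to it.
        byte[] garbage = new byte[10 * 1024 * 1024];
        garbage = null; // the allocation is now unreachable

        // This merely *suggests* a collection; the JVM is free to ignore it,
        // run it later, or collect nothing at all.
        System.gc();

        System.out.println("gc requested");
    }
}
```

This is why tools like VisualVM can only "request" a GC as well: there is no supported way to guarantee a full collection at a given moment.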
How long does it usually take to crash?
That depends on the amount of heap space you allocate ;) As the consumption is about 450MB after an hour, I guess the application would crash after approximately an hour with a 400MB heap. In this test application, however, I did not wait until it crashed.
Anyway, if it crashes, there probably is a problem. I will run your application here to reproduce it.
Thanks a lot for looking into it.
I took a closer look at the problem. Does your application use multiple GMongo instances? Usually you need only one instance per application.
Here is your code with the above in mind:
import com.gmongo.GMongo
import com.mongodb.MongoURI

class MemoryJob {
    def concurrent = false

    static triggers = {
        simple startDelay: 5000, repeatInterval: 200
    }

    static mongoUrl = "mongodb://localhost:27017"
    static mongo = new GMongo(new MongoURI(mongoUrl))
    static db = mongo.getDB("memory")

    def execute() {
        println new Date()
        db.getCollection("test").insert(['date': new Date()])
    }
}
Does this still leak for you?
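The underlying fix, independent of GMongo, is the share-one-client pattern: create the expensive client once per application and reuse it, rather than constructing one per job execution. A minimal stdlib sketch of that pattern using the initialization-on-demand holder idiom (ExpensiveClient is a hypothetical stand-in for a real driver client):

```java
// One shared, lazily created instance per application, instead of one per
// request or per job execution.
public class ClientHolder {
    static final class ExpensiveClient {
        static int constructed = 0; // count constructions, for demonstration

        ExpensiveClient() {
            constructed++; // a real client would open connections here
        }

        void doWork() {
            // a real client would talk to the database here
        }
    }

    private static class Holder {
        // Initialized exactly once, on first access, by the class loader;
        // this is thread-safe without explicit locking.
        static final ExpensiveClient INSTANCE = new ExpensiveClient();
    }

    static ExpensiveClient get() {
        return Holder.INSTANCE;
    }

    public static void main(String[] args) {
        // Many "job executions" all reuse the single shared client.
        for (int i = 0; i < 1000; i++) {
            ClientHolder.get().doWork();
        }
        System.out.println(ExpensiveClient.constructed);
    }
}
```

The static fields in the corrected Groovy job above achieve the same effect: construction happens once when the class is initialized, not once per execute() call.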
Thanks for the suggestion. And indeed it solved the memory problems I was having in my application. It has now been running for several hours and the memory consumption has been constant. So thank you :)
Maybe this is something that should be looked into in the future anyway, though.
You are welcome.
And, yes, this should be looked into.
Hi,
I've been experiencing memory leaks since I started using GMongo.
I read a few mailing list posts saying that you had already identified and fixed a memory leak in v0.7-0.8.
The Eclipse Memory Analyzer identifies the following leak:
Looking at the heap dump, those ExpandoMetaClass instances are holding MetaMethodIndex instances (several hundreds of thousands) for the same classes "DBApiLayer$MyCollection" and "DBCursor", with one entry for every method in the driver API.
My suspicion is that the Groovy delegates are continuously generated and retained by the class loader, so the same delegates are generated over and over for the same target class.
Have you come across this problem?
regards Bruno