prakashjegan / javaparser

Automatically exported from code.google.com/p/javaparser

error occurred while parsing huge data #16

Closed GoogleCodeExporter closed 8 years ago

GoogleCodeExporter commented 8 years ago
Dear Team,

I am developing a code search tool. In this tool I am using your
javaparser 1.0.5 to parse Java files and Lucene for indexing. Without
javaparser I could index large amounts of data, around 500 MB, but after I
integrate javaparser 1.0.5 the indexing breaks once the index size gets near 25 MB.

Is there any issue in javaparser 1.0.5 when parsing large amounts of data? Are any
streams that are opened to read the data left unclosed?
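For reference, one way to rule out a leaked file handle on the caller's side is to open the stream yourself and close it in a finally block instead of handing the parser a File. A minimal sketch, assuming the japa.parser package layout of the 1.0.x releases (the parse(InputStream) overload and class names here are assumptions):

```java
import java.io.File;
import java.io.FileInputStream;

import japa.parser.JavaParser;            // assumed 1.0.x package layout
import japa.parser.ast.CompilationUnit;

public class ParseWithExplicitStream {
    public static CompilationUnit parse(File javaFile) throws Exception {
        // Open the stream ourselves so we control when the handle is released,
        // instead of relying on the parser's File overload to do it.
        FileInputStream in = new FileInputStream(javaFile);
        try {
            return JavaParser.parse(in);  // assumed overload taking an InputStream
        } finally {
            in.close();                   // always release the file descriptor
        }
    }
}
```

If the breakage disappears with this pattern, the file handle was not being released between parsing and indexing.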

Kindly reply with a proper solution.

Thanks & Regards,
Muralidharan P.

Original issue reported on code.google.com by pmuralid...@gmail.com on 5 Feb 2009 at 5:49

GoogleCodeExporter commented 8 years ago
Hi Muralidharan,

Could you please give me more details about the error? Is there any exception,
such as an OutOfMemoryError?

This is the first report of a memory leak, so I'll need to know how you are using the
parser. This information will help me try to solve the issue.

thanks!

Original comment by jges...@gmail.com on 5 Feb 2009 at 10:16

GoogleCodeExporter commented 8 years ago
Hi Jgesser,

I am attaching the file in which I am using Javaparser to get imports and
comments. My code procedure, step by step, is as follows.

1. First I give one directory which contains all types of files (java, xml, jsp, ...)
for indexing.
2. I parse each file through my own parser (excluding Java files).
3. When a Java file comes up, I use your Javaparser 1.0.5 to parse it and get the
imports & comments (see the sketch after this list).
4. After parsing, I index those files into some format.
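A minimal sketch of step 3, assuming the japa.parser API shipped with 1.0.5 (the parse(File) overload, getImports(), and the ImportDeclaration class are assumptions here; comment extraction is left out because the 1.0.x comment API is less certain):

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

import japa.parser.JavaParser;                  // assumed 1.0.x package layout
import japa.parser.ast.CompilationUnit;
import japa.parser.ast.ImportDeclaration;

public class ImportExtractor {
    /** Returns the import statements of a .java file as plain strings. */
    public static List<String> extractImports(File javaFile) throws Exception {
        CompilationUnit cu = JavaParser.parse(javaFile);  // parse(File) assumed
        List<String> imports = new ArrayList<String>();
        if (cu.getImports() != null) {                    // getImports() assumed
            for (ImportDeclaration imp : cu.getImports()) {
                imports.add(imp.toString().trim());
            }
        }
        // Only the extracted strings are passed on to indexing; no reference
        // to the CompilationUnit itself is kept.
        return imports;
    }
}
```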

That is the procedure used for each file and directory. Before introducing
javaparser 1.0.5 I was using the PMD parser, which worked perfectly.

But when I introduce javaparser 1.0.5, I get an error ("too many open files...")
thrown from lucene-xxxxx.jar (the same error occurs only if a stream is not being
closed properly) once the index size is more than 25 MB. Before that change it did
not break even when I was indexing nearly 450 MB of data.

Kindly check the same and reply with a proper solution.

Thanks & Regards,
Muralidharan P.

Original comment by pmuralid...@gmail.com on 9 Feb 2009 at 11:34

Attachments:

GoogleCodeExporter commented 8 years ago
Dear Jgesser,

Are there any updates regarding the issue I reported above?

Regards,
Muralidharan P.

Original comment by pmuralid...@gmail.com on 18 Feb 2009 at 5:58

GoogleCodeExporter commented 8 years ago
Hi Muralidharan,

No news about this issue yet. I have a test where at least 1k files are parsed and I
didn't get such an error.
I am wondering if the problem is occurring because, after you parse and index the
files, the resulting ASTs stay in memory behind a strong reference (maybe in an
ArrayList), so the GC can't collect them.

But I will keep trying to find the cause. If you have any news, or if you want to try to
find the root of this problem, please share it with us.
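To illustrate that hypothesis, the leak and a possible fix would look roughly like the sketch below; the class, fields, and method names are made up for illustration, and getImports() is assumed from the 1.0.x API:

```java
import java.util.ArrayList;
import java.util.List;

import japa.parser.ast.CompilationUnit;   // assumed 1.0.x package layout

public class IndexingJob {
    // Leaky pattern: every parsed AST is kept alive for the whole indexing run,
    // so neither the trees nor anything they reference can be garbage collected.
    private final List<CompilationUnit> allAsts = new ArrayList<CompilationUnit>();

    void indexLeaky(CompilationUnit cu) {
        allAsts.add(cu);  // strong reference held until the job ends
    }

    // Safer pattern: pull out only the strings needed for the index entry and
    // drop the CompilationUnit as soon as this method returns.
    String importsForIndex(CompilationUnit cu) {
        return String.valueOf(cu.getImports());  // getImports() assumed
    }
}
```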

Original comment by jges...@gmail.com on 18 Feb 2009 at 11:13

GoogleCodeExporter commented 8 years ago

Original comment by jges...@gmail.com on 17 Jan 2010 at 7:12