agl / jbig2enc

JBIG2 Encoder

Memory leak in jbig2enc.cc #44

Closed SebastianGoebel closed 10 years ago

SebastianGoebel commented 10 years ago

Hi all,

First of all, thank you for the nice encoder project. I hope this is the right place for my bug report.

While working with the lib I detected a memory leak in jbig2enc.cc at lines 324/325 (function unite_templates_with_indexes). This function removes equivalent pix values, and at line 324 the refcount of the deleted image is added to the reference counter of the remaining one:

pixChangeRefcount( ctx->classer->pixat->pix[firstTemplateIndex], pixGetRefcount(ctx->classer->pixat->pix[second_template_index]));

This is unnecessary because no references to the deleted image are redirected to the remaining one. As a result, the reference counter of the remaining image ends up too high and the image is never deallocated.
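To illustrate the effect outside of jbig2enc, here is a minimal standalone sketch. It assumes a Leptonica version from around this time, where pixGetRefcount() and pixChangeRefcount() are still public API; the variable names are only placeholders for the kept and dropped templates:

```cpp
// Build (assuming Leptonica is installed):  g++ refleak.cc -o refleak -llept
#include <leptonica/allheaders.h>
#include <cstdio>

int main() {
  PIX *survivor  = pixCreate(8, 8, 1);  // stands in for the template that is kept
  PIX *duplicate = pixCreate(8, 8, 1);  // stands in for the equivalent template being dropped

  // Mirrors the questionable call: the duplicate's refcount (1) is added to the
  // survivor, although no reference to the duplicate is redirected to it.
  pixChangeRefcount(survivor, pixGetRefcount(duplicate));

  pixDestroy(&duplicate);  // the duplicate is freed as usual
  pixDestroy(&survivor);   // refcount only drops from 2 to 1, so this Pix is never freed
  printf("the 'survivor' Pix has leaked\n");
  return 0;
}
```

Drop the pixChangeRefcount call and both Pix are freed, which is what should happen to the remaining template in unite_templates_with_indexes.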

Regards, Sebastian

agl commented 10 years ago

Thanks for the report.

I've not touched this code for a very long time and, in this case, that function was contributed and not written by me (see 7409342280caef38a5fde19ef4f6ea221bb0ea85).

It does seem reasonable that the refcount updating is mistaken.

However, I don't have the test data etc. set up any longer, so I'm not sufficiently confident of not breaking anything. (The unit tests were never open-sourced with the rest of the code, I'm afraid, due to extra dependencies.)

If you've deleted that pixChangeRefcount call and it works for you without valgrind errors then I'll take your word for it and make the change :)

thanks!

SebastianGoebel commented 10 years ago

Thanks for the quick response. I've deleted that pixChangeRefcount call and all tests work fine. Trust me :) The memory loss is not very big because the images are very small, so it isn't noticeable in normal short-term testing. I only detected it because I was searching for memory leaks in my project.

I have some additional questions:

Are there any plans to support the current leptonica version (1.7)?

If I add several pages to one jbig2ctx struct in order to create a collective symbolTable the performance is really poor. Is this a known issue? Is there room / are there plans for improvement?
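For reference, the multi-page flow I mean looks roughly like the sketch below (not my actual code; the signatures are paraphrased from jbig2enc.h, so check the header in your tree; encode_pages and the parameter values are only placeholders, and error handling is omitted):

```cpp
#include <leptonica/allheaders.h>
#include "jbig2enc.h"

// Encode num_pages binarized (1 bpp) images through a single jbig2ctx so that
// all pages share one symbol dictionary ("collective symbol table").
void encode_pages(char **filenames, int num_pages) {
  // Example parameters: classification threshold 0.85, weight 0.5, resolution
  // taken from the images, no full headers (PDF-style output), no refinement.
  struct jbig2ctx *ctx =
      jbig2_init(0.85, 0.5, 0, 0, /*full_headers=*/false, /*refine_level=*/-1);

  for (int i = 0; i < num_pages; ++i) {
    PIX *pix = pixRead(filenames[i]);
    jbig2_add_page(ctx, pix);  // connected components of this page are
                               // classified into the shared symbol table
    pixDestroy(&pix);
  }

  int length;
  // Shared symbol dictionary covering all added pages.
  uint8_t *globals = jbig2_pages_complete(ctx, &length);
  // ... write `globals` somewhere, then:
  free(globals);

  // Each page is then emitted against that shared dictionary.
  for (int i = 0; i < num_pages; ++i) {
    uint8_t *page = jbig2_produce_page(ctx, i, -1, -1, &length);
    // ... write `page`, then:
    free(page);
  }

  jbig2_destroy(ctx);
}
```

This is the pattern where the slowdown appears once many pages go into the same ctx.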

thanks!


agl commented 10 years ago

Are there any plans to support the current leptonica version (1.7)?

I've no plans to update jbig2enc in the future, I'm afraid.

If I add several pages to one jbig2ctx struct in order to create a collective symbolTable the performance is really poor. Is this a known issue? Is there room / are there plans for improvement?

It's likely that both the encoder and decoder performance drop off. (See Figure 5 in https://www.imperialviolet.org/binary/google-books-pdf.pdf.) I'm sure that the encoder performance could be improved with tweaking, but I've no plans to optimise it. Likewise I expect that the decoder performance could be dramatically improved, but Acrobat is harder to patch :)

rhatlapa commented 10 years ago

Hi Sebastian, I have looked at your fix and you are probably right; that was an error on my side. Unfortunately I no longer have the needed test data and, more importantly, an appropriate system setup to properly test it.

Regards.

Radim