robotsinthesun / monkeyprint

A simple tool for pre-processing 3d models and controlling 3d DLP printers
GNU General Public License v2.0

Memory usage issue #8

Closed kakaroto closed 7 years ago

kakaroto commented 7 years ago

As mentioned in issue #4, there is a memory usage issue when loading some STL files. The one I have been using to troubleshoot the current crashes is this one: https://www.thingiverse.com/thing:912478/#files, which seems to always (or very often) cause monkeyprint to crash. The other issue with that file (and probably others) is that as soon as slicing begins, the memory usage of monkeyprint goes through the roof. I just tried it now: as soon as I clicked the slicing button, memory usage went from 200K to 4.4GB. This caused monkeyprint, Xorg, and pretty much my entire PC to freeze due to swapping. After 5 minutes, I had to ssh into the machine and kill monkeyprint so it could unfreeze.

robotsinthesun commented 7 years ago

Strangely, I cannot replicate the memory issue on my setup (native Linux) using the file you've given. Might that be related to VirtualBox limitations?

The crashing might be fixed (at least it didn't crash for the last ten tries in a row); check out my comment in #4: https://github.com/robotsinthesun/monkeyprint/issues/4.

Still, something is happening at around layer 250 or so (that's where the Eiffel tower has its lower platform, I guess); the slicer takes several seconds per slice in this region...

kakaroto commented 7 years ago

That's weird, because as far as I know I'm not the only one with the memory issue. @nickthetait, can you confirm? Anyway, I have the memory issue on the Debian VM with VTK5, but also on my native Fedora system with VTK6, so it's not a VM-related issue: it happens just as badly on native Linux. I'll try the fixSlicer branch and see whether it helps prevent the memory usage problem.

kakaroto commented 7 years ago

I just finished porting your changes in fixSlicer to VTK6 and tested it; the memory issue is half-gone. Instead of jumping to 4GB of RAM instantly, it quickly increases RAM usage to about 900MB, then keeps increasing slowly while slicing, up to 1.5GB; once slicing is done, it drops back to ~1MB of RAM usage. So the changes you made in fixSlicer had a very positive impact on the memory problem. However, 1.5GB is still not great, so I would guess more improvements need to be made there, but I'd consider it lower priority now. Thanks!

nickthetait commented 7 years ago

My test system is Debian Jessie with 8 processor cores and 16GB of RAM. Slicing the Eiffel model with https://code.alephobjects.com/diffusion/MP/browse/master/ uses ~24% RAM and 10% CPU. Sometimes the crash occurs within 1 second; other times it may continue slicing for over 10 minutes before crashing; other times it finishes slicing correctly.

Using https://github.com/robotsinthesun/monkeyprint/tree/fixSlicer, things seem much better: only 5% RAM usage and ~12% CPU. On the first try it sliced in under a minute without crashing :ok_hand:

kakaroto commented 7 years ago

Thanks @nickthetait for the report. Yeah, I haven't seen it crash on my system either since the fixSlicer branch. The RAM usage is lower, but it's still too high for the 2GB-of-RAM virtual machine; I'll just use simpler models for testing with VTK5 in the VM. Overall, it's much better :)

robotsinthesun commented 7 years ago

Strange, I can't reproduce the memory increase. Can you comment out lines 2169–2309 in monkeyprintModelHandling.py in the fixSlicer branch and add the following lines at line 2310?

# Build a solid white dummy slice and append it in place of the real slice data.
imageWhite = numpy.ones((height, width), numpy.uint8)
imageWhite *= 255
sliceStack.append(imageWhite)

That should produce a stack of white images instead of the real slices. You could then uncomment from line 2169 onwards, step by step, to see which line causes the memory to increase...

Another thing I was thinking about: what's your model size in pixels and slices? If you slice the Eiffel tower at 0.01 mm, that's 12000 slices already. If the resolution were 1000x1000 px, that would be 1,200,000,000 bytes = 1.2 GB. If that's the case, I should really only store the slice images in a preview resolution, and maybe also not store all slices if the number of slices is high...
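
A quick back-of-envelope version of that arithmetic (a sketch using the hypothetical numbers from the paragraph above; as the correction further down notes, they actually come out to 12 GB rather than 1.2 GB):

# Stack-size estimate: greyscale slices at one byte per pixel (uint8).
nSlices = 12000                 # e.g. 120 mm tall model at 0.01 mm layer height
widthPx, heightPx = 1000, 1000
print(nSlices * widthPx * heightPx / 1e9)   # 12.0 GB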

Best, Paul

kakaroto commented 7 years ago

Yes, that makes sense; I forgot that the default layer height was changed to 0.01 in our version: https://code.alephobjects.com/rMPf7e6cb0d7604e84cdc356c70a1ba63086c77901b I'm not sure about the resolution itself, but if I look at the file here: https://code.alephobjects.com/rMP8f84ee630ca125a0bd0b1a8874976bfdc9a178d9 I think it would be 800x1280:

projectorSizeY:800
projectorSizeX:1280

Your previous math is wrong: with 12000 slices it should be 12GB, not 1.2GB, and that's assuming 1 byte per pixel. I'm going to guess that the resulting slices are not all stored as-is in RAM; otherwise they would never fit. I think the slices would need to be stored temporarily in the filesystem instead, to avoid the RAM problem. Also, notice how I said the RAM usage goes back down as soon as slicing is done, so I'm guessing you don't keep the slices in memory once slicing is done?
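
A minimal sketch of that idea, assuming the slicer hands back numpy uint8 arrays (the helper names and the Pillow dependency are illustrative assumptions, not monkeyprint's actual API):

import os
import tempfile
import numpy
from PIL import Image

sliceDir = tempfile.mkdtemp(prefix="monkeyprint-slices-")

def storeSlice(index, sliceImage):
    # Write the full-resolution slice to disk instead of keeping it in RAM.
    path = os.path.join(sliceDir, "slice{:05d}.png".format(index))
    Image.fromarray(sliceImage).save(path)
    return path

def loadSlice(path):
    # Read a slice back on demand, e.g. for projecting during the print.
    return numpy.asarray(Image.open(path))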

robotsinthesun commented 7 years ago

Well, 12 GB is even more alarming :) Sorry for the mistake... At the moment, each model holds its own slice stack of a size that just fits the model's bounding box. I've looked at the Autodesk Ember you're using. It has a build space of 64 x 40 mm (XY) and an XY resolution of 0.05 mm. If you just fit the Eiffel tower to the 40 mm (with a scaling of 0.7318, resulting in a height of 88.5 mm), the slice stack of the model will be 8850 slices of 800 x 800 px each. Each pixel is indeed stored as one byte (numpy's uint8 type), plus 112 bytes per slice (numpy's array overhead), so that's 5.665 GB.
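
That figure can be checked in a few lines of Python (a sketch; the exact per-array overhead depends on the numpy build, 112 bytes being typical):

import sys
import numpy

# One slice at the Ember's XY resolution: 800 x 800 px, one byte per pixel.
sliceImage = numpy.ones((800, 800), numpy.uint8)

print(sliceImage.nbytes)           # 640000 bytes of pixel data
print(sys.getsizeof(sliceImage))   # nbytes plus the array overhead (~112 bytes)

# 8850 slices for the 88.5 mm tall tower at 0.01 mm layer height:
print(8850 * (sliceImage.nbytes + 112) / 1e9)   # ~5.665 GB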

I've just run the Eiffel tower with the settings mentioned above. The 0.01 mm layer height really was the key: now my memory fills up almost instantaneously. Now that I've recreated the behavior you described, I see the problem: the data is indeed stored in RAM. I simply didn't think about somebody slicing at 0.01 mm, which will of course create a large amount of data.

But I have a solution. Once you're done with the VTK porting stuff, we can merge that and I can work on a better method to generate the slices. The whole stack is only needed for the preview, so there's no point in storing it at full resolution; 300 px width should be enough. Then I'll check the number of slices before slicing and reduce the number to several hundred by skipping slices in large models.

Also, I'll make the slicer thread run on single slices on request, not on the whole stack at once. This way, the full-resolution slices can be generated one by one during printing.
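
A rough sketch of both ideas (all names are hypothetical, and plain nearest-neighbour striding stands in for whatever resampling the slicer ends up using):

def makePreviewSlice(sliceImage, previewWidth=300):
    # Keep only a downscaled copy of the slice for the preview stack.
    step = max(1, sliceImage.shape[1] // previewWidth)
    return sliceImage[::step, ::step].copy()

def previewStack(sliceAt, numberOfSlices, maxPreviewSlices=500):
    # Skip slices in large models so the preview stack stays small; each
    # full-resolution slice is generated on request and discarded afterwards.
    skip = max(1, numberOfSlices // maxPreviewSlices)
    for i in range(0, numberOfSlices, skip):
        fullSlice = sliceAt(i)                 # full-res slice, on demand
        yield i, makePreviewSlice(fullSlice)   # only the small copy survives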

I think the way is clear now. I'll wait for your pull request regarding the VTK-version change and start from there...

kakaroto commented 7 years ago

Hehe, yeah, 12GB is pretty alarming :) It makes sense that it would use a lot of RAM if we generate a lot of slices and keep the full images in memory. I initially thought it generated the images, stored them in PNG format on the HDD, and only cached some of the slices for the preview. I think generating the preview with a coarser layer height and lower resolution is a good idea, but I'd be worried about someone looking at the result and seeing things they're not expecting. For example, for an object with a 0.1 mm total height, a user would expect 10 slices of very fine detail, but the preview would drop all of that detail because it collapses the object into one or a couple of slices; the same would apply to small details in X/Y versus the resolution. I'm just worried about confusing users. I wouldn't want users to keep fighting the settings and checking the preview, not realizing that the preview is not the same as what they can expect from an actual print.

It might be a good temporary solution, though. Keep the X/Y resolution, but check the number of slices; if it's above 500, change the layer height to total_height/500 and use that for the slices. Assuming a 1920x1080 full-HD resolution at 500 slices, we'd get 1920x1080x500 = ~988MB of RAM, which is acceptable for a temporary solution. The best solution would be to generate each slice one at a time, save it to a temp directory, resize it to a small resolution for the preview image, use the full-res data to generate the 3D 'preview model', and then erase the full-res image from RAM.
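
As a sketch of that cap (the helper name is made up, not monkeyprint's API):

def cappedLayerHeight(totalHeight, layerHeight, maxSlices=500):
    # Coarsen the layer height so slicing never produces more than maxSlices.
    if totalHeight / layerHeight > maxSlices:
        return totalHeight / maxSlices
    return layerHeight

# Worst-case preview stack at full-HD resolution, one byte per pixel:
print(1920 * 1080 * 500 / 2.0**20)   # ~988 MiB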

kakaroto commented 7 years ago

I just realized that the 3D model in the slicing tab is not actually generated from the slicing result, so this won't affect us. So I think slicing, resizing one image at a time, and then destroying the original full-res image is going to be good enough, especially if we limit the max number of slices (the user can't move the slice slider to a specific slice anyway, and each small move of the slider actually jumps a few slices). I think storing the full slices in a temp directory is still going to be useful, especially considering how some slices take several seconds to finish (around slice 365+ of the Eiffel tower), and I'd like to eventually have a feature where you can navigate full-size slices (click on the preview image, it pops up a new window with the full-res image, you can set the exact slice number you want to view, zoom in on it, etc.).