Ultimaker / Cura

3D printer / slicing GUI built on top of the Uranium framework
GNU Lesser General Public License v3.0

Mesh / file size limit for Cura? #3105

Open chhu opened 6 years ago

chhu commented 6 years ago

Hi, I am experiencing problems slicing large (but clean) meshes. The Win64 version from the UM site reports an invalid file; the Linux version loads everything just fine and I can scale, move, etc. But if I hit Prepare, nothing happens and it just says "Slicing..." with zero load on the CPU. I can cancel and try again, with the same result. I also notice that I cannot save anything after loading; it all ends up as a zero-sized file on disk. My workstation is well equipped, so no shortage of RAM or graphics can cause this.

The link to the STL I am trying to slice: http://www.respawned.com/earth_10_30.stl

Please do not recommend reducing the mesh. Other slicers like MakerBot and Simplify3D have no problem...

Thanks, Chris

Application Version: Cura 3.1 64-bit

Platform: Linux64 and Win7 64

Steps to Reproduce: Load, (scale,) slice.

Actual Results: Idling

Expected Results: CuraEngine should start crunching

ChrisTerBeke commented 6 years ago

I'm getting an "invalid file" message when trying to load this into Cura.

Anyway, this file is almost 1 GB in size, which is pretty big for a 3D file, so I'm not surprised Cura can't handle it. Whether there are enough resources (CPU and memory) to handle files like this depends on your system, so setting a limit within Cura is not really possible, as other people's machines might handle it fine.

chhu commented 6 years ago

My point. Software should only be limited by hardware, not by itself. (I am actually surprised that Cura can't handle it :)

fieldOfView commented 6 years ago

When I run Cura from source on Win64, I get the model to load. Perhaps the packaged build is not using numpy-stl?

This is the interesting part of the log I get when Cura tries to hand over the mesh to CuraEngine:

2018-01-08 13:51:43,045 - DEBUG - UM.Backend.Backend._onSocketStateChanged [168]: Backend connected on port 49674
2018-01-08 13:51:53,288 - DEBUG - CuraEngineBackend.CuraEngineBackend._onStartSliceCompleted [377]: Sending slice message took 10.258777379989624 seconds
2018-01-08 13:52:05,057 - DEBUG - UM.Backend.Backend._backendLog [93]: [Backend] [libprotobuf ERROR X:\env\master\build\Protobuf-MinGW-prefix\src\Protobuf-MinGW\src\google\protobuf\io\coded_stream.cc:207] A protocol message was rejected because it was too big (more than 524288000 bytes).  To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
2018-01-08 13:52:05,063 - DEBUG - UM.Backend.Backend._backendLog [93]: [Backend] [ERROR] Arcus Error (8): Failed to parse message:
Exception in thread Thread-16:
Traceback (most recent call last):
  File "C:\Users\Aldo\Python35\lib\threading.py", line 914, in _bootstrap_inner
    self.run()
  File "C:\Users\Aldo\Python35\lib\threading.py", line 862, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\Aldo\Documents\Code Projects\UM\Uranium\UM\Backend\Backend.py", line 159, in _storeStderrToLogThread
    self._backendLog(line)
  File "C:\Users\Aldo\Documents\Code Projects\UM\Uranium\UM\Backend\Backend.py", line 93, in _backendLog
    Logger.log('d', "[Backend] " + str(line, encoding="utf-8").strip())
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x93 in position 0: invalid start byte

libArcus seems to have a problem sending messages > ~500 MB.
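The 524288000-byte figure in the log is protobuf's default total-bytes limit (500 MiB). A minimal, hypothetical pre-flight check on the sending side (not Cura's actual code) could refuse an oversized payload up front instead of letting the engine reject it silently:

```python
# Hypothetical guard (not Cura's actual code): check a serialized mesh
# message against protobuf's default total-bytes limit before handing it
# to libArcus. The log above reports the limit as 524288000 bytes.
PROTOBUF_DEFAULT_LIMIT = 500 * 1024 * 1024  # 524288000 bytes (500 MiB)

def fits_in_one_message(num_bytes: int) -> bool:
    """True if a payload of num_bytes stays within protobuf's default limit."""
    return num_bytes <= PROTOBUF_DEFAULT_LIMIT
```

With such a check, Cura could surface a clear "model too large" error instead of hanging at "Slicing...".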

fieldOfView commented 6 years ago

I am actually surprised that Cura (the frontend) can handle the mesh (if it loads it) much better than Meshmixer does.

nallath commented 6 years ago

STL loading is limited to 10 million vertices.

At 1 gig, the extra resolution really, really doesn't matter. The engine will actually merge points that are that close together, as at that point they are orders of magnitude below the tolerances of the machine.

I'm pretty sure that MakerBot & Simplify3D do the exact same thing, but simply don't tell you about it.

libArcus can indeed not send messages above 512 MB (because protobuf can't; it's actually designed for messages up to 1 MB). We couldn't be bothered to split the messages up, as in all that time it simply never came up as an issue.

chhu commented 6 years ago

I disagree. You underestimate your own machines. Do the math: a simple cube surface on a build volume of 180 mm³ with a conservative resolution limit of 0.1 mm would have 1800 × 1800 × 6 = 19.44M vertices. As the surface area of your model can quickly grow beyond the cube surface, you quickly hit machine precision with your limit of 10 million verts... I don't mind if verts get merged where machine precision cannot be reached, but please only there. Meshes will grow eventually, but I do see this would be a big issue for you. Do you have an alternative way to export all settings, scalings, etc. from Cura to pass them to CuraEngine without the GUI?
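The arithmetic above can be reproduced directly; nothing beyond the numbers in the comment is assumed:

```python
# A bare cube surface spanning a 180 mm build volume, sampled at a
# conservative 0.1 mm resolution limit, as in the estimate above.
edge_mm = 180
resolution_mm = 0.1
verts_per_edge = round(edge_mm / resolution_mm)  # 1800 samples per edge
total_verts = verts_per_edge ** 2 * 6            # 1800 * 1800 per face, 6 faces
print(total_verts)  # 19440000, i.e. ~19.44M vertices
```

Almost double the 10-million-vertex STL limit, before the model gains any surface detail beyond a plain cube.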

nallath commented 6 years ago

That assumes a model where every single vertex is actually needed, which is hardly ever the case. In most such cases, the ground plane needs far fewer vertices regardless.

This is an issue that can be fixed, but I fear it won't get a high priority. In the two years that this limit (max message size of 512 MB) has been there, you're the first to actually run into it. As you might have noticed from the rest of the GitHub issues, others will probably get a higher priority as more people suffer from them.

Ghostkeeper commented 6 years ago

libArcus seems to have a problem sending messages > ~500 MB.

Yeah, because Protobuf messages are limited to 512MB.

Ghostkeeper commented 6 years ago

I'm getting this error when running from source:

2018-01-25 13:55:36,929 - ERROR - [(140072565335808)-Thread-8] UM.Logger.logException [82]: Exception: Exception occurred while loading file /home/ruben/Downloads/earth_10_30.stl
2018-01-25 13:55:36,933 - ERROR - [(140072565335808)-Thread-8] UM.Logger.logException [86]: Traceback (most recent call last):
2018-01-25 13:55:36,933 - ERROR - [(140072565335808)-Thread-8] UM.Logger.logException [86]:   File "/home/ruben/Projects/Uranium/UM/FileHandler/ReadFileJob.py", line 66, in run
2018-01-25 13:55:36,933 - ERROR - [(140072565335808)-Thread-8] UM.Logger.logException [86]:     self.setResult(self._handler.readerRead(reader, self._filename))
2018-01-25 13:55:36,933 - ERROR - [(140072565335808)-Thread-8] UM.Logger.logException [86]:   File "/home/ruben/Projects/Uranium/UM/Mesh/MeshFileHandler.py", line 28, in readerRead
2018-01-25 13:55:36,934 - ERROR - [(140072565335808)-Thread-8] UM.Logger.logException [86]:     results = reader.read(file_name)
2018-01-25 13:55:36,934 - ERROR - [(140072565335808)-Thread-8] UM.Logger.logException [86]:   File "/home/ruben/Projects/Uranium/UM/../plugins/FileHandlers/STLReader/STLReader.py", line 56, in read
2018-01-25 13:55:36,934 - ERROR - [(140072565335808)-Thread-8] UM.Logger.logException [86]:     self.load_file(file_name, mesh_builder, _use_numpystl = use_numpystl)
2018-01-25 13:55:36,934 - ERROR - [(140072565335808)-Thread-8] UM.Logger.logException [86]:   File "/home/ruben/Projects/Uranium/UM/../plugins/FileHandlers/STLReader/STLReader.py", line 35, in load_file
2018-01-25 13:55:36,934 - ERROR - [(140072565335808)-Thread-8] UM.Logger.logException [86]:     self._loadWithNumpySTL(file_name, mesh_builder)
2018-01-25 13:55:36,935 - ERROR - [(140072565335808)-Thread-8] UM.Logger.logException [86]:   File "/home/ruben/Projects/Uranium/UM/../plugins/FileHandlers/STLReader/STLReader.py", line 87, in _loadWithNumpySTL
2018-01-25 13:55:36,935 - ERROR - [(140072565335808)-Thread-8] UM.Logger.logException [86]:     for loaded_data in stl.mesh.Mesh.from_multi_file(file_name):
2018-01-25 13:55:36,935 - ERROR - [(140072565335808)-Thread-8] UM.Logger.logException [86]:   File "/home/ruben/.local/lib/python3.6/site-packages/stl/stl.py", line 355, in from_multi_file
2018-01-25 13:55:36,935 - ERROR - [(140072565335808)-Thread-8] UM.Logger.logException [86]:     raw_data = cls.load(fh, mode=mode, speedups=speedups)
2018-01-25 13:55:36,936 - ERROR - [(140072565335808)-Thread-8] UM.Logger.logException [86]:   File "/home/ruben/.local/lib/python3.6/site-packages/stl/stl.py", line 93, in load
2018-01-25 13:55:36,936 - ERROR - [(140072565335808)-Thread-8] UM.Logger.logException [86]:     name, data = cls._load_binary(fh, header)
2018-01-25 13:55:36,936 - ERROR - [(140072565335808)-Thread-8] UM.Logger.logException [86]:   File "/home/ruben/.local/lib/python3.6/site-packages/stl/stl.py", line 108, in _load_binary
2018-01-25 13:55:36,936 - ERROR - [(140072565335808)-Thread-8] UM.Logger.logException [86]:     count, MAX_COUNT)
2018-01-25 13:55:36,936 - ERROR - [(140072565335808)-Thread-8] UM.Logger.logException [86]: AssertionError: File too large, got 20971520 triangles which exceeds the maximum of 10000000

So it looks like numpy-stl is unable to handle this as well.

chhu commented 6 years ago

Interestingly, the Linux version of numpy-stl has no such limit; I can load a 4 GB STL just fine into current Cura 3.1. I am trying to increase the limit in libArcus by recompiling Cura to the maximum of 2 GB. There seems to be no deeper explanation from protobuf as to why this limit exists; I guess it is security related. I'll get back with the results. We have many models that come from point-cloud scans that I simply don't want to reduce first. In my opinion it is bad design if software limits itself just because "there will never be..." In that case it is Google's fault or laziness for not using 64-bit (or generically size_t) types...

chhu commented 6 years ago

No luck. Protobuf clips the upper limit to 512 MB. I guess message splitting is the only way... or using Cap'n Proto: https://capnproto.org/

ianpaschal commented 6 years ago

@Ghostkeeper I'm marking this deferred for now, but I'm wondering if we can actually close it. To me, this seems like a non-issue that is not going to improve the software for 99.9% of users and doesn't warrant the amount of effort required to change it.

Ghostkeeper commented 6 years ago

We could fix it by breaking it up into multiple messages. We've done that for the communication back to Cura's front-end because the g-code is more often larger than 512MB. There we now break up the g-code into layers.

It's not very hard to do but we'd have to be a bit careful with performance on the front-end's side.
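A sketch of that splitting idea (hypothetical; not the actual Cura/CuraEngine protocol, which would split along mesh or layer boundaries): chunk the serialized payload so each piece stays under the protobuf cap, and let the receiver join them back in order.

```python
# Hypothetical chunking sketch: split one oversized payload into pieces
# that each fit under a size cap (512 MB in Cura's case; a tiny limit
# would be used the same way), then reassemble on the receiving side.
def split_payload(payload: bytes, limit: int):
    """Yield consecutive slices of payload, each at most limit bytes."""
    for start in range(0, len(payload), limit):
        yield payload[start:start + limit]

def reassemble(chunks) -> bytes:
    """Inverse of split_payload: join the chunks in arrival order."""
    return b"".join(chunks)
```

The performance concern mentioned above would come from the extra copy on each side when joining the chunks back together.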

chhu commented 6 years ago

Please! :) We still avoid Cura because of this issue. We have a lot of models that are scanned and therefore hit the limit. We have also had bad experiences with mesh reduction, at least with free software. You'll encounter this problem more and more in the future anyway. All the best, Chris

Ghostkeeper commented 6 years ago

Blender's Decimate function works really well for reducing poly count, in my experience.

jlg89 commented 6 years ago

I'm running into this issue too, and I can't understand why it would be downplayed as a "non-issue." I have a relatively simple small part, and I want to print as many copies of it as I can fit on the printer bed (in this case, 63). Cura not only doesn't slice it, but it also fails to give any error message at all. It just says "Slicing..." forever. So this issue really needs to be addressed, preferably by increasing the size limit, but at least by throwing a useful error message in the GUI.

2018-06-01 08:38:35,448 - DEBUG - UM.Backend.Backend._backendLog [90]: [Backend] [libprotobuf ERROR /Users/buildbot/buildbot_workspace/mac-slave1/CuraLE-mac/build/build/Protobuf-prefix/src/Protobuf/src/google/protobuf/io/coded_stream.cc:193] A protocol message was rejected because it was too big (more than 524288000 bytes). To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.

petterreinholdtsen commented 6 years ago

I ran into this issue too, trying to slice the model available from https://www.thingiverse.com/thing:898198 . When trying to open 35vertical.stl while starting cura for the first time, the application would simply hang. Probably a different bug, though.

Ghostkeeper commented 6 years ago

Yeah, that's different. That file is only 150 kB.

aahlborg commented 5 years ago

I have the same problem (Cura 3.4.1 PPA on Ubuntu 18.04). I have already printed this model with Cura 2.4.

2018-10-08 10:06:07,409 - ERROR - [Thread-9] UM.Logger.logException [85]: AssertionError: File too large, got 2019914866 triangles which exceeds the maximum of 10000000

jlg89 commented 5 years ago

As I mentioned, at the very least--even if it's not worth the trouble of breaking up the protobuf data into multiple messages to get around the 512MB limitation--PLEASE at least add a meaningful error message to the GUI. As it stands, if a model is too large and throws this backend error, Cura simply indicates that it is "Slicing..." forever. The user never knows what happened.

wolph commented 4 years ago

numpy-stl author here. The 1e8 triangle limit is mostly there for protection, so you don't use too much memory, but it could easily be increased a bit :)

The current limit is 1e8 triangles which works out to 5GB of ram usage. If you have the resources you can easily increase the limit: https://github.com/WoLpH/numpy-stl/blob/412d54bdd972921b3c1fb509cf5be48aac7bc28f/stl/stl.py#L45

It can be hotpatched easily enough to 50 GB (in this case) or more:

>>> import stl
>>> stl.stl.MAX_COUNT = 1e9
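The 5 GB figure follows from the binary STL record layout, which numpy-stl mirrors in memory: a float32 normal, three float32 vertices, and a two-byte attribute per triangle. A back-of-envelope check:

```python
# Binary STL stores 50 bytes per triangle: a 12-byte normal (3 float32),
# 36 bytes of vertices (9 float32), and a 2-byte attribute word.
bytes_per_triangle = 3 * 4 + 9 * 4 + 2   # 50 bytes
max_count = int(1e8)                     # the 1e8 limit quoted above
total_gb = max_count * bytes_per_triangle / 1e9
print(total_gb)  # 5.0, matching the ~5 GB of RAM quoted above
```

By the same arithmetic, the hotpatched limit of 1e9 triangles corresponds to the ~50 GB mentioned.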

Ghostkeeper commented 2 years ago

It should now at least show a bit more of an error message, saying that slicing failed rather than just "Slicing...". But it's not fixed yet.