Closed: rickyzhang82 closed this issue 6 years ago
I understand that path planning needs to solve a TSP. For a small number of closed polygons (actually not that small, if you check the wiki), recent improvements in TSP research and computing power mean you can, and should, get an exact answer in a reasonable time frame.
So why don't we set a limit on the number of polygons and the distance between polygons to decide whether we should spend more time running an exact algorithm?
Someone improved this, I believe very recently. I get confused in my old age, but I think maybe it was @smartavionics.
Maybe this is fixed in a version coming out soon?
Set the z-seam alignment to shortest (or to user specified, and put the z-seam hint location near one of the cube's corners). Set fill gaps between walls to nowhere, increase the line width to 0.5, and this is what you get...
@smartavionics Thanks for taking the time to find the tweak. The tweak you suggested is ad hoc; there is infill in 15.04 if you look into the issue carefully. I haven't posted Slic3r's output, but its path planning didn't cross the diagonal either.
In any case, I don't really care how to fix one specific model in Cura 3.x. I believe the real cause may be a path planning problem in 3.x (I'll generate intermediate output to confirm this). I'd like to dig deeper and look into what really went wrong myself. At the same time, I hope you can write some documentation on the 3.x path planning.
One of my colleagues in an operations research group mentioned a conference talk. I haven't seen the original paper or a baseline comparison, but this should be a trendy topic in OR.
Also, there is a paper that compares 15.04's nearest-neighbor approach with the Christofides algorithm.
But first things first: I will output a path that has the shell only, to compare apples to apples among all the optimizers. From there, we can do a systematic test. I think this change will attract more hobbyists and academics to look into the problem and make improvements.
Cura doesn't use any TSP implementation; it does a naive greedy solution. Combing makes all routes differ from their straight-line distances, and the combing algorithms are far too slow to calculate the path lengths of all possible paths.
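To make the contrast concrete, here is roughly what a nearest-neighbor (greedy) ordering looks like. This is a toy sketch in Python, not CuraEngine's actual C++ code: it reduces each polygon to a single point and ignores combing costs entirely.

```python
import math

def nearest_neighbor_order(start, points):
    """Greedily visit each point, always jumping to the nearest
    unvisited one. Fast (O(n^2)) but can be far from optimal."""
    order = []
    current = start
    remaining = list(points)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        order.append(nxt)
        current = nxt
    return order

# Example: order four island locations starting from the origin.
print(nearest_neighbor_order((0, 0), [(10, 0), (10, 10), (0, 10), (0, 1)]))
```

A full TSP solver would instead consider all orderings (or bound them cleverly); the greedy pass only ever looks one hop ahead, which is where the suboptimal routes come from.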
As @smartavionics said: this is just a settings configuration problem. Cura 3 uses different defaults.
The extra travel is due to the inter-wall empty spaces it is trying to fill up. If you don't want this, prevent it from happening by modeling a slightly thinner piece or by disabling the Fill Gaps Between Walls setting.
The reason you got the paths you did was that the z-seam location was quite different for the inner and outer walls; it had nothing to do with the path planning.
All I did when I tweaked the values was ensure that the z-seams were near each other.
I don't want to be an ass, but are you suggesting that, absent the influence of the z-seam, the current path planning optimizer can find a reasonable path?
But what's the difference between the 'sharpest corner' I chose and the 'shortest' you suggested for the z-seam in the test case above? There are four corners, and they are all equally sharp. Sure, you can argue that it's a front-end problem, which failed to deliver a proper setting to the engine.
But my suggestion is to provide an easy way to let outsiders look into the path planning. Once we have that path data, someone out there in the world will improve the path planning.
Due to rounding errors and algorithm limitations, an arbitrary corner will be marked as the sharpest one.
Yeah we need more documentation in these classes.
I don't want to be an ass, but are you suggesting that, absent the influence of the z-seam, the current path planning optimizer can find a reasonable path?
It often does find a reasonable path, but not always. The path planning has to be quick, otherwise slicing will take forever. I grumble about the quality of the path planning all the time.
But what's the difference between the 'sharpest corner' I chose and the 'shortest' you suggested for the z-seam in the test case above? There are four corners, and they are all equally sharp. Sure, you can argue that it's a front-end problem, which failed to deliver a proper setting to the engine.
Shortest means use the vertex that's nearest to the current location; sharpest corner means use the vertex with the largest change of direction.
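For illustration, the two seam policies could be sketched like this. This is my own simplification, not the engine's actual implementation; note that ties (like the four equal corners of a cube) are broken arbitrarily by min/max, which is exactly the ambiguity being discussed.

```python
import math

def seam_shortest(polygon, current_pos):
    """'Shortest': pick the vertex nearest the nozzle's current position."""
    return min(range(len(polygon)),
               key=lambda i: math.dist(polygon[i], current_pos))

def seam_sharpest(polygon):
    """'Sharpest corner': pick the vertex with the largest change of
    direction between its incoming and outgoing edges."""
    def turn_angle(i):
        p_prev = polygon[i - 1]            # wraps via negative indexing
        p = polygon[i]
        p_next = polygon[(i + 1) % len(polygon)]
        a_in = math.atan2(p[1] - p_prev[1], p[0] - p_prev[0])
        a_out = math.atan2(p_next[1] - p[1], p_next[0] - p[0])
        # Wrap the difference into [-pi, pi] and take the turn magnitude.
        return abs((a_out - a_in + math.pi) % (2 * math.pi) - math.pi)
    return max(range(len(polygon)), key=turn_angle)
```

On a perfect square every turn_angle is identical, so seam_sharpest returns whichever index the tie-break happens to favor; on a polygon with one genuinely sharp corner it is well defined.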
But my suggestion is to provide an easy way to let outsiders look into the path planning. Once we have that path data, someone out there in the world will improve the path planning.
It's open source: go study! Yes, it could do with better documentation, but that could be a waste of effort if it is to be replaced some time with a better implementation.
- That's why we need more people to chime in and throw around some ideas to improve path planning. We could allow a setting that runs slower on a beefy machine. Some prints take longer than solving the TSP does; I learnt this from my colleagues. Solving a 1000-city TSP is not a big deal nowadays, given the improvements in TSP approximation algorithms and computing power.
- Obviously, the z-seam settings failed in this case, where multiple equally sharp corners exist. It didn't
- Well, the absence of documentation is also a way to create job protection. I have only read and researched the 15.04 algorithm; it is quite straightforward. But there are several new changes in 3.x that make me believe it leads to subpar results. In any case, I will output the shell as my weekend project and compare the TSP optimizers.
--
Kind regards,
Tim Kuipers
Ultimaker BV
www.ultimaker.com
Some prints take longer than solving the TSP does.
Interesting point. Almost always, TSP will be faster than printing. But typically I slice my part maybe 10 times before I'm happy with the settings and save the gcode file, so I would want to run the slower slicing only the final time.
Think of the millions of euros of wasted time one could save if Cura had TSP. It shouldn't be a checkbox; it should be a button that reslices with TSP algorithms.
Also, I think you can do TSP independently for each slice, so you can multithread that.
@gr5
Regarding multithreading, it may not be exactly an embarrassingly parallel problem across all layers.
In the X-Y plane, the end point of one layer should be the starting point of the next layer. However, if the layer doesn't change much, two consecutive layers may have a similar optimal route, except reversed.
The optimization problem becomes very complicated once we want an overall optimal solution across all layers combined. But if we solve for an optimal solution within one layer and ignore the fact that starting the next layer requires a move, then it is an embarrassingly parallel problem.
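The "ignore where the previous layer ended" simplification is what makes per-layer planning parallelizable. A toy sketch of that idea (Python stand-in for the C++ engine; order_layer is a hypothetical per-layer optimizer, here just a nearest-neighbor pass from a fixed origin):

```python
import math
from concurrent.futures import ProcessPoolExecutor

def order_layer(island_centroids):
    """Order one layer's islands with a nearest-neighbor pass from a
    fixed origin. Ignoring where the previous layer ended is exactly
    the simplification that makes every layer independent."""
    current, order = (0.0, 0.0), []
    remaining = list(island_centroids)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        order.append(nxt)
        current = nxt
    return order

def order_all_layers(layers):
    # Layers are independent under the simplification above, so they
    # can be planned concurrently across processes.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(order_layer, layers))
```

Solving all layers jointly (so each layer starts where the previous one ended) would couple the subproblems and destroy this parallelism, which is the trade-off described above.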
This weekend, I'm going to output the path of the shell (excluding infill) in the 15.04 engine and verify the differences in the new Cura.
On your comment about slicing time vs. print time: IMHO, any rational person should slice multiple times, trying not just different settings but different software as well. For a simple case like the one above, human eyes can identify the problem easily, but some complicated cases are probably hard to judge that way.
Layer planning is currently already done in parallel. We actively made the decision that a layer cannot begin where the last one ended; we had to make a choice there.
Moreover, traversing consecutive layers along a similar path, rather than a reversed one, is better for consistency, because then the amount of time the material has had to cool before the next layer is deposited is about the same for any location in the layer. Travel time is not the only thing to consider when performing order optimization.
Implementing a full TSP on the network of all possible combing paths is going to increase the slicing time 20x. I don't think I would want to wait that long for the print to take a little less time. The print time saved is probably less than the extra slicing time incurred!
I might be wrong, though. I'm interested to see what you will find.
@BagelOrb I don't get your point in the first paragraph. Can you elaborate?
Your point is valid regarding cooling between layers.
For hobbyist FDM printing, I have never experienced a slicing time exceeding 60 seconds in Cura. On average, it takes me only 2 seconds, but I spend an average of 3+ hours per print. I'd be happy to wait 100x longer for slicing if you can find a better way to print.
Besides saving print time, an optimal tool path may be the simplest improvement the machine can make on its own: it doesn't require any user input to improve print quality. Sometimes stringing and blobs are caused by unreasonable, subpar moves. No matter how good retraction is, every FDM nozzle will drip if you travel long enough. So combing may not be a good idea if you can find the shortest distance and travel through the air quickly. The downside of combing is that it ignores the ugly fact that dripping in consumer-grade FDM printers is unavoidable on long moves.
Personally, I'm not a big fan of combing. Premature optimization is the root of all evil.
For sure, an optimal travel path is not a panacea either. So let's see if I can find something in the code.
The engine plans the layers in parallel. That's why it's impossible to start a layer where the previous layer ended.
You should look into the function addPolygonsByOptimizer(). A TSP algorithm would be implemented inside that function.
@BagelOrb Thanks for your hints.
I tested the SAS TSP optimizer with 100 and 1000 cities, both on complete, symmetric graphs. The optimizer runs an exact algorithm with the branch-and-bound technique. It solved 100 cities within 0.4 seconds and 1000 cities within 25 seconds. My machine is an Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz.
Note that the optimizer is not multithreaded, so I think the performance is very good. It is worth investigating the use of an exact TSP.
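For reference, an exact TSP can be written down compactly with the Held-Karp dynamic program. This is only a teaching sketch (O(n^2 * 2^n), so practical for roughly n <= 18); serious exact solvers, like the branch-and-bound one benchmarked above, scale much further:

```python
import math
from itertools import combinations

def held_karp(points):
    """Exact minimum-length closed tour over 2D points (Held-Karp DP).
    Exponential in n, so only for small instances; shown purely to
    illustrate what 'exact' means, not as a production solver."""
    n = len(points)
    d = [[math.dist(a, b) for b in points] for a in points]
    # best[(S, j)]: cost of starting at city 0, visiting exactly the
    # cities in set S, and ending at city j (j in S, 0 not in S).
    best = {(frozenset([j]), j): d[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            fs = frozenset(S)
            for j in S:
                best[(fs, j)] = min(
                    best[(fs - {j}, k)] + d[k][j] for k in S if k != j
                )
    full = frozenset(range(1, n))
    # Close the tour by returning to city 0.
    return min(best[(full, j)] + d[j][0] for j in range(1, n))
```

For the part-ordering problem the tour need not be closed, but the same dynamic program adapts by dropping the final return leg.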
The problem is that we don't know the costs of the paths in the network. Computing them with the current combing algorithm would make it very slow; not computing them means the output of the TSP is not the fastest path, which defeats the point of using TSP.
@BagelOrb Again, I'm not a big fan of combing. Let's set that aside for now and solve one thing at a time.
Okay!
@BagelOrb @smartavionics I reviewed the 15.04 and 3.x path optimizers. Here are my findings; please correct me if I'm wrong.
In both 15.04 and 3.x:
This inaccurate information almost undermines the optimization work on part path planning, even with an exact/approximate TSP. But why not separate part path planning from G-code generation? In other words, do part path-order optimization after the sub-type path-order optimization, once the entry and exit of each part are known.
In 3.x:
In any case, there seems to be great room for path planning improvement in Cura. IMHO, I'd like to figure out how to fix the common issue first (the part order). I think part-order optimization can be converted into an asymmetric TSP, because each part in our problem has an entry and an exit. The test I did was a symmetric TSP. The performance is quite impressive: solving 100 cities within 0.5 seconds. But that's the performance of commercial software; I don't know how good a hobbyist-level integer programming solver I can write.
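The asymmetry comes from each part having a distinct entry and exit: the cost of going from part A to part B is the travel from A's exit to B's entry, which generally differs from the reverse direction. A minimal sketch of building such a cost matrix (the Part structure here is hypothetical, not a Cura type; a real implementation would hand this matrix to an ATSP solver):

```python
import math
from dataclasses import dataclass

@dataclass
class Part:
    entry: tuple  # (x, y) where printing of this part starts
    exit: tuple   # (x, y) where printing of this part ends

def atsp_cost_matrix(parts):
    """cost[i][j] = travel distance from part i's exit to part j's entry.
    In general cost[i][j] != cost[j][i], hence an *asymmetric* TSP."""
    return [
        [0.0 if a is b else math.dist(a.exit, b.entry) for b in parts]
        for a in parts
    ]
```

A time-based cost (accounting for acceleration on short moves, as discussed elsewhere in this thread) would slot in by replacing math.dist with a travel-time estimate.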
But I'd like to hear your opinions before getting my hands dirty.
The way Cura is structured, processing each part (aka island) in a layer separately and then, within each part, processing the walls/skin/infill separately, is deeply ingrained in the program. I think it would take a mammoth restructuring to merge the walls/skin/infill within a part, and the gain would not be worth the effort. The travel between parts and the order in which the parts are visited, however, could well be made better with, perhaps, relatively little impact on slicing time.
Personally, if I were to alter any of this, I would look at improving the combing algorithm. At the moment it combs fairly quickly, but the resulting path is often not very optimal. I added some simple hacks a while back to help reduce the combed distance, but it really needs looking at again. Of course, any new solution has to be efficient; it can't afford to be much slower than it is now, as combing is used so much.
@smartavionics Thanks for your advice.
First of all, I'm not an expert on G-code. I have to admit I don't know what I don't know. If the following assumption is true, please proceed to read my proposal for improving part (a.k.a. island) order optimization.
Assumption: the G-code that moves the nozzle between parts uses absolute positioning (not relative).
Proposal: move the part-order optimization to after the processing of the walls/infill within each part, because only then does a part have exact entry and exit point information. Moving it there should NOT lead to incorrect G-code, given the assumption above.
Please confirm my guess above.
Back to the combing algorithm: the subpar path planning in 3.x is the thing that frustrates me the most. Although I love several of the new features introduced in Cura 3.x, the time I spend tweaking in 3.x undermines the pleasant experience I gained. I usually slice in 15.04 and view the result in 3.x. This may be nobody's fault: the firmware of the printer I own may be optimized for 15.x slicing. It is closed source, so nobody knows what is under the hood. In any case, the printer is just not happy with extended long moves, which make it ooze even with PLA.
In statistics, people propose a statistical model to fit the data. There is a common fallacy that the more parameters you define in the model, the better it fits the data in your sample. But it will overfit, which means the model fits poorly on data other than your sample.
That fallacy also applies to 3D printing. If you consider the STL file as the input data and your physical print as the prediction result, then the slicing software (G-code generator) plus the printer firmware (motion planning) is a model. Your goal is to fit the physical printed object to the conceptual object in your mind.
Physical Print Object = Slicing Algorithm and Firmware (STL object model)
The more settings (or ad-hoc hacks) you provide, the better one specific sample will fit. But this implies that your model has a fundamental flaw: it is model misspecification. In statistics, we look for a better model instead of adding more parameters to patch the old one.
My point is that if the model is good (in other words, if the slicing algorithm and firmware are good), you don't need to patch the parameters (a.k.a. keep adding more settings and hacks). This idea is quite universal; it is Occam's razor.
I totally agree with the direction of separating the engine from the settings UI. The settings UI should hide complexity from end users as much as possible and make intelligent tweaks based on a limited number of user inputs.
But right now we are moving in the opposite direction, and usability is starting to go down. Thus, I propose reviewing the model (the slicing algorithm) to see what went wrong. It is worth the time to lead the design and development in the right direction: the more hacks you add, the more engineering problems you have in the end.
Cura 15.x is on GitHub too. It uses the same slicing engine, so this issue is attached to a repository called CuraEngine, which goes back many years and includes the engine that shipped with Cura 15.06. You can click on "branches" here in this repository and choose 15.06. Better, though, to install git on your home computer and clone the whole repository; I find having git locally much faster and easier for looking at an old version of the code.
@rickyzhang82 are you an experienced programmer? Every project is harder than one thinks, and if you haven't done lots of programming I think this project is beyond you (you could hire a programmer if you can afford one, though this could be a $10k to $100k project).
For example, I'm not sure you quite understood something mentioned earlier: the normal TSP minimizes DISTANCE, but a proper TSP for Cura would minimize TIME. Because printer motion is limited by a maximum acceleration, short movements (under 3mm) take much longer than you would think. In other words, ten 3mm movements take longer than one 30mm movement. Maybe this isn't important for making improvements, but they will be approximate, not exact.
When you say "I'm not an expert on gcode", that's kind of depressing to hear. Gcode is incredibly simple, and if you didn't figure that out in a minute, it makes me think you will never be able to understand the existing code, let alone change it. I hope I am wrong. Cura needs more programmers with your motivation. Or, alternatively, motivated businessmen who can hire very good programmers.
The curators of Cura are not going to accept a code change that was hacked or thrown together. It needs to be well-written code that is easy to read and makes good logical sense, because they know they will have to edit it some day. You can't just take some other TSP code, throw it into Cura as a function, and call it. Changes to Cura have to be well written and well thought out.
@gr5
I'm the only breadwinner at home. I don't think $100k is enough for me and my family; you underestimate the cost of brainpower in the US job market today. So please don't start talking about money or payment. It is purely my personal motivation to improve this nice little piece of free software in my spare time for my hobbyist 3D printing projects. For the record, nobody has ever paid me anything to do anything for Cura or the Ultimaker company. This is a legal point I want to stress.
My proposal is a reasonably small structural change, which includes:
The problem with the current optimizer is that even for a small number of cities, it still uses the nearest-neighbor algorithm. Given today's computing power and algorithmic improvements, we can do better.
I'm not an expert on G-code. I have to admit that I don't know what I don't know; that's why I asked this stupid question. But it is related to my proposal: if G-code generation were based on relative positions, then moving part path-planning optimization to a later stage would impact G-code generation.
But you should not judge anyone because he or she doesn't know X, Y, or Z. I happen to be computer literate, but people from other fields may see the problem from a different angle. We should not exclude them just because they don't know something; that attitude would keep others from chiming in on the subject matter.
BTW, thanks for the info on Git. Last week I already forked the engine's 15.04 branch into my dev branch and patched it to build on Fedora 28. I did my homework and read the code before throwing questions and suggestions around.
What you described about nozzle movement relates to the firmware that does motion planning given the G-code. I think it is irrelevant to my proposal; let's focus on solving one layer of the problem at a time.
Look, if you write the code that implements the improvements you wish to see and submit the changes as one or more PRs, they will be considered by the Cura guardians. If the PRs are well written and demonstrably beneficial, there is a good chance they will be accepted. If the PRs are not well written, or the benefit is marginal, or there are bad side effects, the PRs will probably not be accepted. It's as simple as that.
@smartavionics
Oh boy. Please stop treating me like a noob. Being humble is a good virtue, but maybe this is not the place for it.
I know how PRs work. I own a tethering app repo and have accepted several PRs from others, and I have made several PRs to a 68K Motorola Macintosh emulator as well. Please, please stop explaining how great Git is and how PRs work. I have a dual background in computer science and financial engineering. I promise I won't break CuraEngine with work I haven't even started yet.
What I'm really looking for is an answer: how does the G-code get impacted if I move part-order optimization to a later stage? I can figure it out by myself, but that would take me another week or two, and I may or may not lose my interest in improving this. I just borrowed a book on TSP to learn more advanced techniques such as cutting planes.
I don't know why you guys all think others are dumb as a rock. Perhaps it is a common problem among geeks/nerds.
Well, gcodes are very, very simple. They are just a list of positions the head should move to next. Within a given layer there is usually no Z movement (unless someone selects hop on retract), so typically the moves have only X, Y, and E components. They look like this:
G1 X102.42 Y35.77 E7243.11
That tells the printer to move to those coordinates next. There is also a feedrate, "F". So:
G1 F3000
would change the printing speed (which is a goal; the planner might decide that's too fast) and is in mm/minute (not mm/second). The F can be on the same line as a movement:
G1 X103.42 F6000
The above command moves in X only, but at a faster speed. All extruding moves have an E (for the extruder axis). The firmware limits the acceleration and max speed of all movements and has all kinds of interesting and complicated features, but the only thing you need to know is that it moves everything linearly in 4-dimensional space (axes X, Y, Z, E). So if, for example, you move 100mm in X and 1mm in E, it will accelerate all the axes in unison so that you get perfect extrusion along that 100mm.
Really, all you need to know for TSP is the X and Y positions.
Also, in the gcode Cura adds lots of helpful comments explaining whether it is doing the inner shell, outer shell, or infill. So in theory you could write a "post processor" that did TSP on various chunks of code. Every non-extruding move could probably be considered a separator of "islands", and the islands could be done in a different order, as long as you do all the shells last (shells are done last by default in Cura, but people can change that too, so that the shell passes are done first).
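The post-processor idea above could start from something as simple as this sketch, which splits gcode into extrusion blocks separated by travel moves. It is heavily simplified: it assumes absolute positioning and treats every non-extruding G0/G1 as a block boundary, ignoring retractions, Z hops, and Cura's type comments.

```python
def split_into_blocks(gcode_lines):
    """Split gcode lines into chunks ('islands') separated by travel
    moves, i.e. G0/G1 commands that carry no E word."""
    blocks, current = [], []
    for line in gcode_lines:
        code = line.split(';', 1)[0].strip()   # drop trailing comments
        words = code.split()
        is_move = bool(words) and words[0] in ('G0', 'G1')
        extruding = is_move and any(w.startswith('E') for w in words[1:])
        if is_move and not extruding:
            # Travel move: close the current extrusion block and start
            # a new one that begins with this travel.
            if current:
                blocks.append(current)
                current = []
        current.append(line)
    if current:
        blocks.append(current)
    return blocks
```

Each resulting block begins with the travel that reaches it, so reordering blocks reorders the islands; a TSP pass over the blocks' start/end coordinates would then pick the new order.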
@gr5
Thanks for your instruction.
Q1: What's E?
Q2: Can the X, Y, and Z coordinates be relative positions based on the end point of the last G-code instruction?
A comment on your last paragraph: that's a no. I don't want to write a G-code parser. It is fairly simple to inject a TSP into the part ordering; the idea is simple, but it depends on your answer to Q2.
-- Updated
I figured it out. Positioning can be absolute or relative, but Cura emits a hard-coded begin-code template that sets absolute positioning with G90, and it stays that way until the end code begins. I think my proposal should be safe.
Cura only outputs absolute positioning. Switching to relative positioning mode would be a bad idea: you don't want to lose any steps on those steppers, or it will be visible in the final print. Even losing a single step will be visible.
E is for extruder. It's the position in mm of the extruder, and it starts at zero. So:
G1 X104.22 Y99.33 E1012.432
The above implies we are a little over 1 meter of filament into the print. There is a bug in Marlin where it can't handle more than about 16 meters, I think? Or 32? Anyway, if there is more than a certain amount of filament in a given print, E is reset back to zero with a G92 command before it becomes a problem. Every 5 meters? Or 16 meters? I forget. But for the most part E just keeps increasing. If the very next command after the one above were:
G1 E0
it would retract 1.012432 meters.
You don't have to mess with E much if you want to post-process. But if you post-process, you won't know how much time you saved, since Cura calculates the print time before your post-processing runs.
Oh boy. Please stop treating me like a noob. Being humble is a good virtue, but maybe this is not the place for it.
I don't know why you guys all think others are dumb as a rock. Perhaps it is a common problem among geeks/nerds.
I make no assumptions about your capabilities; I'm simply letting you know how the system works around here. Goodbye, and good luck!
@gr5 You are correct; E is none of my business. I'm just curious how retraction is expressed in G-code under the hood. I remember you can tweak retraction speed and distance.
As you confirmed, Cura uses absolute positioning. Thus, moving the part optimization to a later stage (after all the infill, walls, etc. in a part are processed) won't impact the G-code semantics at all.
@smartavionics I apologize for my rudeness. I'm always mocking college kids as the snowflake generation in the US; in fact, I'm also one who gets easily offended. I just felt annoyed that my question didn't get a straight answer.
To all: I'm currently working on the 15.04 branch. I love the old one because it is simple. Here is my plan:
My work is here. I don't know who else will be interested in my proposal. Maybe I should move the discussion to the Ultimaker community.
In any case, you guys have been superbly helpful. Thanks, everyone!
I don't know much about that part of the Cura engine. But it occurred to me that you might need to worry a little about E after all. If a block of gcode starts with the E position at, say, 120mm and ends at 150.00mm, and the next block starts at 150mm and ends at 160mm, and you swap the blocks, then it will mess up. But all you have to do is add some G92 commands, which can tell the firmware where the extruder is without actually moving it. (G92 is the set-position command; M92 sets steps-per-mm.) For example, if you had swapped the blocks:
G92 E150 ; do block2
G92 E120 ; do block3
G92 E160 ; continue
Basically, for each block of gcode that has been moved, save the last extruder position before that block originally started and issue a G92 Exxx.xx for that position before the block.
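That bookkeeping could be sketched like this (a hypothetical helper, not Cura code; e_start[i] is assumed to hold the absolute E value in effect just before block i originally started):

```python
def reorder_with_g92(blocks, new_order, e_start):
    """Emit gcode blocks in a new order, inserting a G92 before each so
    the absolute E words inside every block remain valid.

    blocks    -- list of blocks, each a list of gcode line strings
    new_order -- permutation of block indices to emit
    e_start   -- e_start[i]: absolute E just before block i in the
                 original file (taken from the original gcode)"""
    out = []
    for idx in new_order:
        # Tell the firmware the extruder is logically at the position
        # this block expects, without physically moving it.
        out.append(f'G92 E{e_start[idx]:.5f}')
        out.extend(blocks[idx])
    return out
```

Because each block's extrusion is a delta between its own absolute E values, resetting the logical position before each block keeps the physical amount of filament pushed per block unchanged.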
@gr5 Your insight is absolutely valid! When reordering the parts/islands, I thought I should reset the extruder to 0:
G92 E0 ;(zero the extruder)
But one more thing was not clear to me: in the Exxx coordinate, the numeric xxx looked like a delta of extruder distance, instead of an accumulated sum of distance. Perhaps I misunderstood what Exxx means. Does it mean how far the extruder/feeder moves the filament, as a delta distance?
-- Updated
After reading the G-code, I realized that I was wrong about the Exxx coordinate. It is an accumulated value, but you can reset it to 0 at any time. I think I should find a way to handle the extrusion distance in my proposal.
In any case, I made some progress; see my update here.
Zeroing the extruder would be bad; the very next gcode would move it forward, potentially by many meters.
The extruder positions from Cura are normally all absolute. Actually, there may be a way to do relative extrusion moves with absolute X,Y moves? Probably not. I'm not certain.
If you look at any gcode file, do you see the E value increasing continuously throughout the file? When creating gcode for Ultimaker printers, the E value keeps increasing (except on retractions), because E is sent as an absolute position.
I believe you can choose either relative or absolute for all of X, Y, Z and E, but you can't do it partially for one specific coordinate.
Zeroing E may not be that bad if and only if the nozzle moves in the air while not printing. It might or might not perform a retraction; that can be easily identified at the end of the G-code for that part.
If you do think this is a problem, fixing E in absolute terms should not be too hard, because when reordering the parts, all E coordinates within a part can be adjusted by a fixed amount. It is just more patching compared to simply zeroing E.
In any case, your insight is absolutely helpful. It saves me a lot of trouble that I had overlooked! Now I have started to learn some G-code. Thanks!
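The fixed-offset patching of absolute E coordinates can be sketched in a few lines. This is a hypothetical helper, not CuraEngine code; the regex assumes simple `G1 ... E<value>` lines with absolute E values.

```python
import re

# Hypothetical sketch: rebase the absolute E coordinates of a moved block by
# a fixed offset so the overall E sequence stays monotonically increasing.

E_RE = re.compile(r'E(-?\d+(?:\.\d+)?)')

def shift_e(lines, offset):
    """Add `offset` to every absolute E coordinate in the given G-code lines."""
    return [E_RE.sub(lambda m: f"E{float(m.group(1)) + offset:.5f}", ln)
            for ln in lines]

print(shift_e(["G1 X10 Y10 E150.00000"], -30.0))
```

Every block that gets moved earlier or later in the file would be shifted by the difference between its old and new starting E value.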
@rickyzhang82 Just wanted to let you know that optimising printing paths and minimising travels is always very welcome and if you would come with a (good) PR the Cura team will definitely test and merge it. If you want early feedback on a PR, just make one with [WIP] in the title so our devs can have an early look and provide feedback.
@Appesteijn
A complex problem like this cannot be solved by one single PR or one person alone; it requires a more thorough, lengthy and intelligent exchange of ideas.
Everybody knows just one facet of the problem. I may know the crux of the sub-par path optimization, but I may not know the impact on G-code of reordering the parts, or be able to write a mixed integer linear programming solver as efficient as a commercial one.
Problems like this require a virtual team assembled from people in different domains. I really don't know where to find such a group of people to talk to.
So far I have made some small progress: isolating the per-part path planning into an external file and building a visualization UI. It should open the door for more OR people to look into tool path optimization, given that we can present the data format openly and easily.
That part can also be easily ported to the master branch from the legacy branch I'm working on. Perhaps it is a good initial start, as you suggest.
In any case, I'll continue my research on TSP using the empirical data I extracted from the Cura engine.
Move the part-order optimization to after the processing of the walls/infill in each part.
We have one other constraint though: Keep flow changes to a minimum. You don't want the printer to print one infill line, then a piece of wall, then another infill line, then another piece of wall, etc. Preferably print all walls in one go, all infill in one go and all skin in one go.
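The grouping constraint (print all walls in one go, then all infill, then all skin) amounts to a stable sort by feature type before optimizing travel within each group. A minimal sketch; the `FEATURE_ORDER` priorities and tuple layout are invented for illustration:

```python
# Hypothetical sketch: order paths feature-by-feature so flow/speed changes
# stay rare. Within each feature group the planner's original order is kept,
# because Python's sorted() is stable.

FEATURE_ORDER = {'wall': 0, 'skin': 1, 'infill': 2}

def group_by_feature(paths):
    """paths: list of (feature, path_id) tuples."""
    return sorted(paths, key=lambda p: FEATURE_ORDER[p[0]])

paths = [('infill', 1), ('wall', 2), ('infill', 3), ('wall', 4), ('skin', 5)]
print(group_by_feature(paths))
```

A travel-minimizing optimizer would then run separately inside each group rather than across the whole layer.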
good point! In path planning maybe it needs to keep track of speed - maybe group those together. I usually change all the speeds in cura to be the same so it's often not an issue for my prints.
@rickyzhang82 - speed is the "F" parameter in mm/minute (not mm/sec). F stands for "feedrate" and has to do with the days when gcodes were used in milling machines and computer controlled lathes.
@Ghostkeeper
Variable extrusion speed doesn't yield a good print result. But this is a side topic; we can always find a different way to group things.
@gr5 My project is still work-in-progress. But you can follow my progress here: https://hackaday.io/project/158802-improve-tool-path-planning-in-cura
I have done several interesting things so far.
Any luck, so far?
I know it's kind of difficult to improve significantly upon the current path planning routine. I've tried this as well as one of the first things I implemented for CuraEngine: A path planner based on the random insertion heuristic for TSP. The implementation was fine and in theory better. It even had a better time complexity. In practice as well, but the improvement that was achievable was very slight: Between 0% and 5% reduction in time spent travelling. Since the travel time is typically 5-10% of the total print time, the actual reduction of 0% to 0.5% just wasn't worth the additional complexity that this added. And in practice, it took just about the same time to execute since the heuristics in CuraEngine make it practically linear anyway.
You can't very easily generalise the path planning beyond the fragmented planning that we do now: Just one print feature of one part at a time. This is due to the constraints we're under: We need to minimise flow changes as well as minimise crossing walls and retractions to get to other parts.
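For reference, the random-insertion heuristic mentioned above can be sketched in a few lines: take points in random order and splice each one into the tour at the position that lengthens it least. This is a generic textbook sketch, not the CuraEngine implementation:

```python
import math
import random

# Generic sketch of the random-insertion TSP heuristic: visit candidates in
# random order, insert each at the cheapest position in the current tour.

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def random_insertion(points, seed=0):
    rng = random.Random(seed)
    remaining = list(points)
    rng.shuffle(remaining)
    tour = [remaining.pop(), remaining.pop()]
    while remaining:
        p = remaining.pop()  # random, because the list was shuffled
        best_i, best_cost = 0, float('inf')
        for i in range(len(tour)):
            a, b = tour[i], tour[(i + 1) % len(tour)]
            # Extra length incurred by inserting p between a and b.
            cost = dist(a, p) + dist(p, b) - dist(a, b)
            if cost < best_cost:
                best_i, best_cost = i + 1, cost
        tour.insert(best_i, p)
    return tour
```

On a unit square this recovers the optimal perimeter tour of length 4; on real layers the win over greedy nearest-neighbour is typically small, which matches the 0-5% figure quoted above.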
The optimization still sticks to the basic assumption of generating the planned path part by part, as you state. The interesting thing I haven't explored yet is how to pick a starting point in an intelligent way.
Regarding progress, I spent my time reading the exact TSP approach from the book and a paper on one of the heuristic approaches, LKH. Both implementations are very sophisticated, and neither author is willing to license their implementation under the GPL. So I will lower my bar and implement 3-opt.
I did integrate Concorde into Cura: I output the part coordinates of each layer and convert the problem statement into Concorde's TSP format. But I can't release it due to the Concorde license.
The estimated print time from Cura is far from accurate; on my printer I see a 10% to 30% underestimate all the time. I assume your improvement numbers are based on real prints rather than Cura's estimates.
In any case, thanks for asking. I will keep you posted when I have something concrete to release.
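As a rough illustration of the local-search family that 3-opt belongs to, here is a sketch of 2-opt, its simpler cousin: repeatedly reverse a tour segment whenever the reversal shortens the closed tour. This is generic illustration code, not anything from Cura or LKH:

```python
import math

# Generic 2-opt sketch: undo edge crossings by reversing tour segments until
# no reversal shortens the closed tour any further.

def tour_length(tour):
    """Length of the closed tour (wraps from last point back to first)."""
    return sum(math.hypot(tour[i][0] - tour[i - 1][0],
                          tour[i][1] - tour[i - 1][1])
               for i in range(len(tour)))

def two_opt(tour):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                # Reverse the segment tour[i:j] and keep it if shorter.
                candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(candidate) < tour_length(tour) - 1e-12:
                    tour = candidate
                    improved = True
    return tour

# A self-crossing tour over a unit square untangles to the perimeter.
crossed = [(0, 0), (1, 1), (0, 1), (1, 0)]
print(tour_length(two_opt(crossed)))
```

3-opt follows the same pattern but removes three edges at a time, which enlarges the neighbourhood it can escape from at a higher per-iteration cost.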
Cura assumes Marlin firmware on the printer with jerk=20mm/sec and acceleration=5000mm/sec^2, I think. There are also acceleration settings for the extruder, which matter quite a bit on retractions. On UM printers Cura is usually within 5% accuracy. The display on the UM printer that estimates completion time, however, can be very far off. You can adjust the jerk and acceleration settings in Cura such that they only affect the estimated time (they don't actually change any gcodes).
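A time estimator built on those settings typically models each move as a trapezoidal (or triangular) velocity profile. The sketch below is a hedged illustration of that model for a single move; the function and its parameters are invented for the example and ignore per-axis limits and junction (jerk) speeds:

```python
import math

# Hypothetical sketch: estimate the time for one move using a trapezoidal
# velocity profile with symmetric acceleration.

def move_time(length, v_max, accel, v_entry=0.0, v_exit=0.0):
    """Time (s) to travel `length` mm, accelerating from v_entry toward v_max
    and decelerating to v_exit, with acceleration `accel` in mm/s^2."""
    # Distance consumed reaching v_max from each end of the move.
    d_acc = (v_max**2 - v_entry**2) / (2 * accel)
    d_dec = (v_max**2 - v_exit**2) / (2 * accel)
    if d_acc + d_dec <= length:
        # Trapezoid: there is a cruise phase at v_max.
        cruise = length - d_acc - d_dec
        return ((v_max - v_entry) / accel
                + (v_max - v_exit) / accel
                + cruise / v_max)
    # Triangle: the move is too short to reach v_max.
    v_peak = math.sqrt((2 * accel * length + v_entry**2 + v_exit**2) / 2)
    return (v_peak - v_entry) / accel + (v_peak - v_exit) / accel
```

With the wrong acceleration or jerk assumptions every short move is mis-timed, which is exactly how a systematic 10-30% underestimate accumulates.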
@gr5
Thanks for pointing it out. That can explain why it is way off in my brand.
In the legacy branch, the jerk and acceleration values are hardcoded in src/timeEstimate.cpp
. There are jerk/acceleration values for (x, y), z and e respectively.
To be more precise, the units for acceleration and jerk should be mm/sec^2 and mm/sec^3.
So the word "jerk" was a horrible choice of words when the first person wrote Sprinter, which became Marlin. The physics term "jerk" is indeed in mm/sec^3, but the definition of jerk in the Marlin firmware (and most firmwares out there) is the maximum allowed instantaneous velocity change, expressed in mm/sec. For the XY axes it is the magnitude of the velocity change.
The theory goes like this: very old versions of firmware brought the head to a complete stop at EVERY vertex. But that was silly when printing circles. So instead the firmware looks at the velocity vectors going into and out of each vertex, subtracts them, and keeps the magnitude of that difference below the "jerk" setting. For example, on a 90 degree corner with jerk set to 20mm/sec, the max speed at the corner is 14mm/sec (square root of 2 over 2 times the jerk speed). The jerk setting is used ONLY to set the maximum velocity at each vertex. The planner can then ignore the angles: it just slows down to the max vertex speed at each vertex and accelerates out of it in the next movement (the next gcode). The planner always looks 16 moves ahead (well, Marlin and Repetier do 16, but other firmwares such as Redeem do 256-move look-ahead) and is always careful that it can stop the head within those 16 moves if necessary without violating the max allowed deceleration.
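That corner-speed rule can be written down directly: cap the speed so that the magnitude of the instantaneous velocity change at the vertex stays below the jerk setting (in Marlin's mm/sec sense). A minimal sketch with an invented helper name:

```python
import math

# Sketch of the Marlin-style "jerk" limit: the largest corner speed s such
# that |s*v_out - s*v_in| <= jerk, where v_in/v_out are unit directions.

def max_corner_speed(v_in, v_out, jerk):
    dv = math.hypot(v_out[0] - v_in[0], v_out[1] - v_in[1])
    if dv == 0:
        return float('inf')  # straight line: jerk imposes no limit
    return jerk / dv

# 90-degree corner, jerk = 20 mm/s: direction flips from +X to +Y.
print(max_corner_speed((1, 0), (0, 1), 20.0))
```

For the 90 degree case the direction change has magnitude sqrt(2), so the cap is 20/sqrt(2), roughly the 14mm/sec quoted above.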
Application Version
3.31 and also 3.0
Platform
Mac/Linux
Qt
N/A
PyQt
N/A
Display Driver
N/A
Steps to Reproduce
Actual Results
Regardless, the path planning is sub-par in 3.31 compared to 15.04. It should limit the travel distance of the hot end as much as possible: the starting point of the next closed polygon should be right next to the end of the previous closed polygon. However, the planned paths are sub-par in both cases above.
Expected results
The better path planning in 15.04 yields a better print result. There is no stringing and no bulging at the square corners.
See the print result in here
Additional Information