tomvandeneede / p2pp

Palette2/3 Post Processing tool for Prusa Slicer

Buffer underrun prevention #51

Closed GarthSnyder closed 4 years ago

GarthSnyder commented 4 years ago

My Palette 2 is having a hard time keeping up with my printer, leading to frequent Buffer Error 120's. This error means that the printer has emptied the filament buffer when that was not the system's intention (that is, not during a pong).

Since there's no P2PP configuration option that sets the length of the filament path, I'm guessing that there's currently no provision for avoiding these errors other than the obvious approaches: print slower, or use the Palette OctoPrint plugin to slow printing automatically whenever a splice is going on. But those options are both needlessly conservative.

Perhaps I'm being unreasonably optimistic, but it seems like this should be a solvable problem. G-code move operations have a clear upper bound on execution time, and splices take a fixed amount of time (per splice type). So there's a rough estimate of flow rate on both ends for any given locality in time. The filament path length can be roughly translated into a (varying) time offset, and the momentary consumption rate balanced against production. Bonus points for incorporating the effect of the buffer and for using sliding windows rather than instantaneous rates.

If I were to work on this in P2PP, would you be interested in accepting it as a PR? Do you think the current architecture (which I have not reviewed yet) would lend itself to this kind of analysis?

tomvandeneede commented 4 years ago

Hi Garth,

It is an interesting thought for sure… I considered this some time ago when looking at possible expansions…

Technically most of the info should indeed be available, but as you may have noticed from the print time predictions, they are often way off… Still, you could look at the shortest possible time for a command to finish, which is length/speed, not taking acceleration into account. Adding speed control commands from there should be feasible…

Your assumption that the splice time is known is a hard one: it also depends on the P2 / P2 Pro and the splice settings used, but this information could be empirically determined to get a safe, workable value.

I will try to code this and see if it can lead to something workable…

what I have in mind

user input (see the parsing sketch after this list):
;P2PP FEEDPATHLENGTH=n  we need to know the distance between the splicer and the consumer
;P2PP AUTOSPEED         parameter to engage the function

extra loop over the code before the conversion to absolute extrusion (if requested):

1. a function that models the time required for the Palette to come up with a certain amount of filament
2. compare that to the estimated print time and, if it is larger, reduce the speed of the print
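As a first concrete piece, here is a minimal sketch of how the two proposed ;P2PP parameters could be picked up during preprocessing. The variable names and regexes are illustrative assumptions, not P2PP's actual option handling:

```python
import re

# Illustrative holders for the two proposed options; P2PP keeps similar
# values in a shared variables module, but these names are assumptions.
feed_path_length = None   # mm between the splicer and the consumer
autospeed = False         # whether the auto-slowdown pass is enabled

def parse_p2pp_parameter(line):
    """Pick the proposed options out of a ';P2PP ...' comment line."""
    global feed_path_length, autospeed
    m = re.match(r";\s*P2PP\s+FEEDPATHLENGTH\s*=\s*(\d+(?:\.\d+)?)", line)
    if m:
        feed_path_length = float(m.group(1))
    elif re.match(r";\s*P2PP\s+AUTOSPEED\b", line):
        autospeed = True
```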

known challenges at this point:

1. support different types of printers with different types of codes for speed settings
2. this will not cover all buffer underruns
3. need to model the time better than listed above, or the printer will print consistently slower over the entire print rather than just over splices

Please let me know your thoughts on this. I think most of what is needed is already in the data structures kept throughout processing, so it should not be too difficult.

About PRs… I tried that and it caused me a lot of issues… P2PP has grown very complex, and with the different processing modes, some small changes result in unwanted behaviour in the long run… I am hoping to rewrite P2PP once more next year to get a better handle on the complexity, so maybe at that time PRs of more than trivial changes would be an option. Just doing this for self-protection, as the last PR caused me two weeks of debugging for a fairly small change.

Regards Tom


tomvandeneede commented 4 years ago

There is one challenge though…

You have no idea what is in the buffer of the Palette during the print.

If the buffer is full, you have about 120mm of filament to work with; if the Palette is doing a pong, the buffer will be empty and you have no reserve.

This needs some careful thinking and is likely going to end up as pretty much the same solution as we have now, but implemented differently.


GarthSnyder commented 4 years ago

Dangit! I just had another buffer underrun - this time with 140mm splices and the OctoPrint plugin set at 50% speed during splices. I wasn't watching, unfortunately, so I'm not sure of the exact circumstances. It doesn't seem to have hurt the print, though.

I think you're right that the buffer is a potential problem. Still, there are a couple of things we can say about it. First, the Palette does not seem to do a pong unless it's safe; that is, it can always mostly-fill the buffer immediately once the limit switch has been tripped. It does sometimes start a splice right after a pong, but only if it can prefeed about half a bufferload. The fast feed immediately following the cut tops up the buffer to nearly full. So really, it seems like pongs can be ignored because they behave just like full buffers.

The only other time the Palette lets the buffer get emptier than a certain level is when it has no choice because it's busy splicing. I'd have to measure the size of the conserved buffer zone in mm, but visually it seems to be about the level at which the filament crosses midway through the lowest box in the Mosaic logo (on the clear cover of the buffer). I'd guess it's about 50mm. Essentially, you can always count on a 50mm cushion unless you've gotten yourself into jeopardy already. If we can keep the system out of jeopardy, it should be possible to treat that 50mm as a constant lower bound on the amount of filament available.

I've given a bit more thought to exactly how one might calculate the necessary speed adjustments, and I think it should be possible to do it without really entering the time domain, which I'm hoping might simplify things a bit. See what you think of the following algorithm.

The scheme is supply-centric. The supply model is that the Palette can deliver arbitrarily much filament arbitrarily fast, except that at each splice, it must halt for T seconds (say, 50). And for the reasons outlined above, we assume that whenever a splice starts, there are at least 50mm of filament available to be consumed. (Let's call that length B = 50.)

So, the golden rule is that you may not consume more than B mm of filament during a splice. We only need to check the usage patterns in the neighborhood of splices.

Here's how to enforce that. We start by traversing the splicing timeline in forward order. At each splice point S:

1) Subtract L, the feed path length, to get the corresponding nozzle position D.

2) However, D is uncertain because of a couple of different factors, so expand it in both directions to get interval D1-D2. I don't have much idea of what the appropriate safety margin would be; we'd have to determine it empirically. But it's easiest if D1 and D2 both fall on move-command boundaries. It doesn't matter if this gives a longer interval than desired; that will just make this region a bit more conservatively managed.

3) For each move command D' from D1 to D2, set a time counter and a filament length counter to zero. Then work forward one move command at a time from D', adding up the filament and seconds consumed. Stop when either the length sum exceeds B or the time sum exceeds T. (You may go beyond D2; the D1-D2 interval just determines the starting locations of these sequences.)

4a) If the time sum exceeds T, this T-second period will not use more than B mm of filament and is safe. Go on to the next D'.

4b) If the length sum exceeds B, the move sequence may drain the buffer. Apply a time-scaling factor to all move commands in the current sequence to bring it into compliance. Then go on to the next D'.

That's it. Note that a given move command may receive multiple slowdown scalings because it is part of more than one sequence starting in the region D1-D2. It doesn't matter; there won't be excessive slowdown overall. And the earliest moves will tend to get slowed the most, which seems prudent. Also, you are guaranteed not to violate the 50mm rule no matter where reality actually falls within the window of variability.
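For concreteness, here is a minimal sketch of steps 1-4 under the stated assumptions (B = 50, a fixed splice time T), with moves pre-reduced to (filament mm, seconds) pairs and each splice's D1-D2 window given as a pair of move indices. None of these structures exist in P2PP yet; this is just the algorithm above in executable form:

```python
B = 50.0   # mm of filament assumed available when a splice starts
T = 50.0   # seconds the Palette halts per splice (per the text above)

def enforce_buffer_rule(moves, splice_windows):
    """moves: list of (filament_mm, seconds); splice_windows: list of
    (d1, d2) move-index intervals. Returns a slowdown factor per move."""
    scale = [1.0] * len(moves)
    for d1, d2 in splice_windows:
        for start in range(d1, d2 + 1):          # each candidate D'
            length_sum, time_sum, run = 0.0, 0.0, []
            for i in range(start, len(moves)):
                mm, secs = moves[i]
                length_sum += mm
                time_sum += secs * scale[i]      # honor earlier slowdowns
                run.append(i)
                if time_sum >= T:
                    break                        # 4a: safe, go to next D'
                if length_sum > B:
                    # 4b: stretch this run so it lasts at least T seconds
                    factor = T / max(time_sum, 1e-6)
                    for j in run:
                        scale[j] *= factor
                    break
    return scale
```

A real implementation would turn each scale[i] back into a feedrate adjustment on the corresponding G1 command, and, as noted below, keep the running sums in a sliding window so the whole pass stays linear.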

I've left out details and edge cases, but one scenario that deserves mention is hypothetical back-to-back splices - cases where one splice depletes the 50mm cushion and is quickly followed by another splice without there being time to restore the 50mm buffer.

I question whether this situation really needs to be addressed. The minimum segment length is > 50mm, so each splice should be guaranteed to disgorge at least a 50mm buffer on completion.

If it IS necessary to think about this, I would imagine it's pretty easy to accumulate filament shortfalls and reduce the local value of B accordingly. But it probably doesn't need to be in an initial implementation.

I've phrased all of this in terms of an N^2 algorithm, but in reality it's linear because you can just use a sliding window in the neighborhood of D.

Sorry you've had bad experiences with contributors in the past! Let me know if there's anything I can do to help.

tomvandeneede commented 4 years ago

Hi Garth,

Trying to figure out what you wrote

Let’s for simplicity first look at the splicing process:

1/ The first loading sequence is easy… everything is the length specified, fed into the extruder, and off you go.
2/ X mm later you issue a PING… this is a game changer… if it is off, Palette will correct the splice lengths to meet the shorter or longer length needed to stay in tune with the print.
3/ When a splice hits, filament generation is off for almost 20-30 seconds… during this time the filament consumption probably needs to be reduced by a factor to avoid collapse.

The issue is that the changes introduced by pings are cumulative, so they can mount up to some distance in both directions.
Getting a handle on this is of key importance: if D gets too big, you gradually drift outside of the sweet-spot zone and the corrections will not align with the actual splices.

What could be done..

Add a new O-command (or another command) that informs the printer of the filament consumption rate every few cm in the print. The plugin can use this information to assess the need to slow down when it gets a splice request. Still, this is a long shot, as the tool path buffer on the printer side will keep running at the old speed for some time.

Technically you should start working from the location of the last splice and watch the counter of generated filament increase vs. the actually extruded filament… correlating this info could give you a heads-up about an upcoming splice in the print, so you can slow down before the buffer starts collapsing.

Your plan below looks like it could work, but it should be implemented at the run side instead of the splice side.

What do you think?


tomvandeneede commented 4 years ago

I have done some coding but still need to rethink some of the features. Will keep you posted on where I get ... integration as an extra step is seamless, so we might work on this together.

GarthSnyder commented 4 years ago

The ping calibrations are a complication, but I don't think they're as much of a problem as you seem to believe. But it's quite possible there's something I don't understand about the ping system, so let me describe my reasoning and perhaps you can pick out any mistaken assumptions.

As I understand it, pings are a one-directional communication from the printer to the Palette. The printer just announces from time to time, "Well, here I am, 3200mm into the filament". The Palette knows how much filament the printer has actually pulled, so it uses this information to develop a calibration ratio that is characteristic of the printer, e.g., "When this printer thinks it has fed 100mm of filament, it has actually fed only 97mm." The Palette compresses or expands all splice segments by this ratio to stay in sync with the printer. There's probably a short-term correction factor as well to handle random variations, but the characteristic ratio is more important over the long term.

You mention concerns about filament skew errors being both unknowable and compounded over time. However, if the speed correction algorithm only looks at local windows, this really shouldn't be too much of a concern. The Palette is constantly synchronizing itself to the printer. If the feed path is 500mm and the printer thinks it's at 3200mm, the Palette is always going to be following along at 3700mm, at least to a first approximation. The whole system depends on this correction being accurate, so it doesn't seem unreasonable to rely on it in P2PP.

What does seem like a source of error is the skew encoded in the current filament in the feed path. If the printer is consistently 3% short, as outlined above, the 500mm feed path is actually a 515mm virtual feed path. When the printer thinks it's printing millimeter 3200, the Palette will be feeding or splicing whatever filament is called for at 3715mm in the original G-code, because that is the filament that will be hitting the nozzle when the printer thinks it has reached millimeter 3715, even though in reality it is only 500mm of filament away from the nozzle. If the MCF calls for a splice at 3715mm, the printer will be constrained by this operation while it prints what it thinks is millimeter 3200.
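In numbers (an illustrative calculation using the figures above, not measured values):

```python
feed_path = 500.0    # physical feed path length, mm
ratio = 0.97         # printer reports 100mm for every 97mm actually fed

# The path holds 500mm of real filament, which corresponds to more
# G-code-nominal filament once the skew is applied:
virtual_path = feed_path / ratio          # ~515mm

printer_pos = 3200.0                      # what the printer thinks it has fed
palette_pos = printer_pos + virtual_path  # ~3715mm into the original G-code
```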

This local skew is a source of error, but it's not that large, a few percentage points over the length of the feed path. I think it's small enough to be encompassed by the "window of error" system I described above.

I wonder also if you can just fish the long-term skew ratio value out of the Palette and enter it in the G-code header for P2PP. I'm printing something right now, but I'll check once that job is done.

tomvandeneede commented 4 years ago

Garth,

can you send me your email address, I’ll send you what I have for further discussion


tomvandeneede commented 4 years ago

Ton of different thoughts about this...

Your observation about pings is about the same as mine. I don't necessarily agree with your remark about long term vs. short term, though. Long term is something that can be managed easily, as you work within local window parameters of, let's say, +/- 1 or 2%. A tube length of 50cm is really short, and short tubes are probably only used by people running Bowden systems. When using the medium (80cm) tube on direct drive systems, you have to count on about 90-100cm, so you are looking at a +/- 3cm window…

Short term, I have no idea how Palette works… Sometimes people have weird color swaps at very strange places in their prints… so at some point the corrections must be quite significant to manifest like that and resolve over the course of one layer… This is quite important though: if you have a +/- 3cm window plus +/- 1cm corrections, the period to monitor is quite extended. The Palette plugin could technically keep track of the extrusions and better match real vs. expected, as it sees the start and end of the splices… I also believe the P2 regularly sends out the amount of produced filament… this might be a possibility for tracking the buffer size, but only in the Palette plugin. I feel uncomfortable making anything other than worst-case assumptions for something we really don't control…

Assuming a worst case of -3cm (the printer is using less, and the filament was generated earlier)… you now need to account for 6cm while we only have a 50mm buffer, so our margin for error is already larger than the buffer we have to work with, and you end up pretty much slowing down at every splice.

During approximately 20 to 25 seconds there will be no filament generation, so you could optimize by looking at the 6cm zone plus 30 seconds extra (you have to account for the error margin plus the actual splice) and make sure you never use more than 40mm or so (to keep a 10mm buffer) during any 20 second period…

The Palette 2 plugin can in any case speed up again as soon as it receives the splice… unless there are short splices one after the other. That means, however, there can be no more corrections AFTER the end of the splice, or they would undo the corrections made by the plugin.

I am really wondering why you want to tackle this purely from P2PP…

there are 3 locations where we can do corrections

1/ P2 hardware
2/ Palette Plugin running on the OctoPi (also sort of open source, see https://gitlab.com/mosaic-mfg/palette-2-plugin)
3/ Code generation - P2PP

1/ is out of scope, since there is no access to the Mosaic firmware (I guess for good reason), even though you might want to add features there to help improve quality
2/ is open source, so you can work on any solution there and ask Mosaic to merge it into their base code (as they did with the original slowdown)
3/ is what we are looking at, but it misses all direct information

what we could do:

1/ introduce new info in the file that lists how much filament is going to be used in the next 30 seconds
2/ ask Mosaic to include a pre-warning that fires e.g. 10mm BEFORE a splice will happen
3/ in the Palette 2 plugin, assess the value in (1) and issue the correct code to make sure no more than 40 or 50 mm will be used during the splice…

This would be very similar to the current slowdown, but 1/ it only happens when needed, 2/ the plugin has live information, so there is no guessing about when to start the slowdown, and 3/ it improves on the current method because the heads-up period allows the buffer to deplete and the speed change to kick in BEFORE hitting the splice (see the sketch below)… There will still be buffered code, but 10mm is quite long, so it should be enough to deplete the buffer or most of it.
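A rough sketch of what the plugin side could look like, assuming (hypothetically) that P2PP writes a ';P2PP CONSUMPTION30S=<mm>' comment every few centimeters and that Mosaic adds the pre-warning from point 2; the callback names are made up, but M220 is the standard feedrate-override G-code:

```python
SAFE_MM_PER_SPLICE = 40.0   # target from the text: ~40mm max during a splice

def on_gcode_line(line, state):
    # Hypothetical P2PP annotation: filament used in the next 30 seconds.
    if line.startswith(";P2PP CONSUMPTION30S="):
        state["lookahead_mm"] = float(line.split("=", 1)[1])
    return line

def on_splice_prewarning(state, send_gcode):
    """Hypothetical callback, fired ~10mm before a splice starts."""
    lookahead = state.get("lookahead_mm", 0.0)
    if lookahead > SAFE_MM_PER_SPLICE:
        pct = max(int(100 * SAFE_MM_PER_SPLICE / lookahead), 20)
        send_gcode("M220 S%d" % pct)    # slow down only as much as needed

def on_splice_done(state, send_gcode):
    send_gcode("M220 S100")             # restore full speed after the splice
```

The restore-to-100% step matches the "reset to 100% after each splice" tweak mentioned in the implementation notes below.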

I am interested in getting this in, in some form; the major concern is that when you add a feature, loads of unrelated issues get linked to it… and with all the different setup possibilities, it is already a nearly full-time job to keep things running…

So, for the implementation:

At this moment the best and safest guess is to perform the corrections AFTER the full processing and right BEFORE the conversion to absolute filament. To that end:

1/ a module autoslow.py has been added that will contain all the processing for this
2/ the v.splicelength[] array holds the lengths of all splices
3/ v.filament_path_length should be a new variable containing the exact length of the filament path
4/ we still need to tweak the plugin to reset to 100% after each splice, so we don't slow down after a splice ends

I did a short routine to calculate the points at risk (these are the earliest start points of the splices that happen during the print).
The auto-slowdown process tracks movements in X, Y, Z, E to calculate a minimal time (this assumes unaccelerated moves, but since not all prints include acceleration info, there is not always enough information to work with… anyway, movement/speed = duration; see the sketch below).
All other commands are expected to take 0 time...
The next step is to build a list of slowdown parameters per command, taking into account the reduction required if a splice were to start there.
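A sketch of that minimal-time model (unaccelerated, so it is a lower bound on the real duration; the tuple layout is an assumption, not P2PP's actual representation):

```python
import math

def minimal_move_time(prev, cur, feedrate_mm_per_min):
    """Lower-bound duration of a G1 move between (x, y, z, e) tuples:
    movement/speed = duration, ignoring acceleration entirely."""
    dx, dy, dz, de = (c - p for c, p in zip(cur, prev))
    travel = math.sqrt(dx * dx + dy * dy + dz * dz)
    distance = travel if travel > 0 else abs(de)  # E-only moves use E length
    if feedrate_mm_per_min <= 0:
        return 0.0                                # no feedrate known yet
    return distance / (feedrate_mm_per_min / 60.0)
```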

I would rely on the P2 plugin to restore full speed


GarthSnyder commented 4 years ago

Sorry for the delay - I've got family staying for the week, so my availability is going to be spotty. It doesn't mean I've lost interest. 🙂

I will write more later, but to pick out one issue: I was thinking only in terms of a P2PP-based solution because I wasn't aware that the Palette OctoPrint plugin source code was available and that Mosaic was open to absorbing updates. (I suppose, now that you mention it, that since OctoPrint plugins are written in Python, you can look at the source code whether the provider wants you to or not...)

I completely agree with you: if a dynamic solution based on OctoPrint is possible, that is much preferable to static analysis and is likely to give better results.

As far as general pessimism, it would be interesting to know what proportion of the time real-world G-code files are in danger of exhausting the buffer. I have been thinking as if this were a relatively uncommon situation, so it wouldn't hurt too much to be excessively pessimistic when an adjustment was made. Even with harsh crackdowns over extended regions, I was thinking it would be better than the current "slow down during every splice" approach used in the Palette plugin (which doesn't even work). But perhaps that assumption isn't at all accurate.

It is possible to inspect the printer profile from the Palette touch screen. You get the feed path length and the "historical modifier", which I assume is just a length ratio. I was surprised to see that my feed path is actually 865mm and my ratio is ~95%, so that's about 45mm of variation right there.

I wonder if you can ask the Palette what its buffer state is. How much do you know about the communication protocol used over USB?

Again, it will seem like I'm not responding to what you actually said, which is true. I'm just trying to get some main points down while I have a few minutes. More later.

GarthSnyder commented 4 years ago

I took a brief look at the Palette plugin repo; it seems pretty straightforward to adapt and feed back as a PR, and it does indeed seem like Mosaic might be receptive to this.

I notice that the mosaic-mfg account also includes a PlatformIO configuration, which I assume is for device firmware. However, the actual firmware doesn't seem to be published.

If I were working on this on my own, I guess the next steps would be to investigate the Palette plugin source code to try to characterize how the plugin gets access to the G-code, and also to try to reverse-engineer the communication protocol between the Palette device and the plugin. Have you worked on either of these things? Is it something I could look at without stepping on any toes?

tomvandeneede commented 4 years ago

Hi Garth,

The plugin does seem to get access one line at a time. It filters the stream for O-commands and inserts stuff where needed, both in terms of commands to the P2 and towards the printer (see the sketch below). I guess a step would be needed to carry the "pre-knowledge" along so you would not have to parse code in the plugin… though I guess that would be too late: at that point there is not just the G-code we know, but also post-processed G-code versions that you cannot just interpret like that (some printers have this).
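For reference, OctoPrint hands a plugin the stream through its gcode queuing hook, one command at a time. A stripped-down sketch under that assumption (the O-command handling here is illustrative, not the actual Palette plugin code):

```python
def on_gcode_queuing(comm_instance, phase, cmd, cmd_type, gcode,
                     *args, **kwargs):
    if cmd.startswith("O"):     # Palette O-commands embedded in the G-code
        handle_o_command(cmd)   # hypothetical: relay to the P2 over serial
        return None,            # suppress the line; don't send to printer
    return cmd                  # everything else passes through unchanged

def handle_o_command(cmd):
    pass  # placeholder for the plugin's Palette-side handling

__plugin_hooks__ = {
    "octoprint.comm.protocol.gcode.queuing": on_gcode_queuing,
}
```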
