LoopKit / Loop

An automated insulin delivery app for iOS, built on LoopKit
https://loopdocs.org

Potential enhancements to carb absorption algorithm - brainstorm #629

Closed jeremybarnum closed 10 months ago

jeremybarnum commented 6 years ago

This is a thread to trigger brainstorming on potential next generation carb absorption prediction algorithms.

The current dynamic carbs algorithm accepts a carb input from the user with a specified absorption time. From that user-entered data, it calculates an absorption rate. (For simplicity, this discussion ignores the issues discussed in #577.) The initial prediction is based on this absorption rate. Then, as the actual BG develops, Loop computes the actual absorption of carbs by comparing insulin-only effects to observed BG changes. The carbs actually absorbed are deducted from the carb entry, and the remaining carbs are projected according to the original computed absorption rate. This makes the prediction much more responsive to real-world observations, rather than to user input and mathematical decay assumptions. It has significantly improved results for Loop users.
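To make that mechanism concrete, here is a minimal sketch (hypothetical names, not Loop's actual implementation) of how remaining carbs get projected at the originally entered rate after deducting whatever absorption has already been observed:

```swift
import Foundation

/// Hypothetical, simplified illustration of the dynamic-carbs idea above,
/// not Loop's actual implementation.
struct CarbEntrySketch {
    let enteredGrams: Double        // user-entered carbs
    let absorptionHours: Double     // user-entered absorption time

    /// Absorption rate implied by the user's entry (g/hr).
    var enteredRate: Double { enteredGrams / absorptionHours }

    /// Carbs remaining after deducting the absorption observed so far
    /// (observed absorption is inferred from BG changes vs. insulin-only effects).
    func remainingGrams(observedAbsorbedGrams: Double) -> Double {
        max(enteredGrams - observedAbsorbedGrams, 0)
    }

    /// Hours until absorption is projected to end: remaining carbs are decayed
    /// at the originally entered rate, so fast observed absorption pulls the
    /// end-point in and slow absorption pushes it out.
    func hoursRemaining(observedAbsorbedGrams: Double) -> Double {
        remainingGrams(observedAbsorbedGrams: observedAbsorbedGrams) / enteredRate
    }
}

// 30 g over 3 h (10 g/hr); 20 g observed absorbed in the first hour leaves 10 g,
// now projected to finish in 1 more hour instead of the original 2.
let entry = CarbEntrySketch(enteredGrams: 30, absorptionHours: 3)
print(entry.hoursRemaining(observedAbsorbedGrams: 20))  // 1.0
```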

This approach means that if carbs are absorbed faster than expected, the end-point of carb absorption is sooner; and if slower than expected, the end-point is later. Implicitly, this assumes that the original carb input is exact, because any differences between expected absorption and observed absorption are attributed to the absorption time, rather than to the number of carbs consumed. This is the natural choice where the meal in question allows for exact carb counting. (Of course, momentum and RC do a lot to capture carb counting errors as a sort of "second line of defense" - but here I'm focusing exclusively on the algorithm for the carb absorption itself)

However, in situations where exact carb counting is not possible (call it "party mode" - I'm picturing a social restaurant dinner where an accurate carb count could be challenging), it's easy to see that you might want to allocate some of the observed deviation to carb counting error, rather than 100% to absorption time estimate error. Just for the sake of argument you could pick 50% to absorption time, and 50% to carb count error.

So assume a 30-gram carb input with a 3-hour absorption time. The expected absorption rate is 10 grams/hour, so 10 grams over the first 60 minutes. Assume that observed absorption is 20 grams in the first hour. The excess is 10 grams over an hour. Allocating 50% to absorption time would mean that 5 grams of the 10-gram "excess" is just faster-than-expected absorption. But the other 5 grams of "excess" could be associated with undercounting carbs.

Then you have some choices, from least to most sensitive, about how to project the 5 grams associated with carb count error (a numeric sketch of each option follows this list):

-no projection. Only the observed deviation affects the original input assumption. The original input is now assumed to have been 35 grams, with 20 grams absorbed over the first hour and 15 remaining, which are decayed at the original absorption rate of 10 g/hr, so the end of absorption is in 1.5 hours. But as a result of the increase in the assumed carbs entered, there will be high-temping to cover the extra carbs. Momentum and RC will do some of this as well, but this would make it more aggressive.

-project with decay - for example, over 1 hour. The count-error-adjusted absorption rate is 15 g/hr (the observed 20 g/hr minus the 5 g/hr attributed to faster absorption); project this forward, decaying back to 10 g/hr over an hour (averaging 12.5 g/hr for the next hour), and then 10 g/hr thereafter. This would imply the original entry was actually 37.5 grams.

-project without decay. Scale up the entire assumed original carb entry by the observed deviation - so in this case the entry is assumed to have actually been 45 grams instead of 30, because after allocating 50% of the deviation to carb counting error, the count-error-adjusted absorption rate (15 g/hr) is 50% bigger than the inputted one (10 g/hr).

-The most extreme case would allocate 100% of the deviation to carb counting error and project the sample without decay. In this case, observed absorption is double the expected absorption, and so the carb input is assumed to be off by a factor of 2 - so it increases to 60 grams. Or, put differently, the remaining carb absorption is assumed to be 20 g/hr, as observed, rather than 10 g/hr, as inputted. That would increase the prediction by the excess of 10 g/hr over the 2 remaining hours, or 20 g total - leading to pretty aggressive high-temping. Conversely, if the observed absorption rate is much lower than the input implies, this would lead to more aggressive low-temping than the current factor of 1.5 produces when calculating the minimum absorption rate.
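Purely for illustration (all names are hypothetical, not proposed code), here is how the implied "true" entry works out under each of the options above, using the 30 g / 3 h example with 20 g observed in the first hour:

```swift
import Foundation

// Purely illustrative arithmetic for the options above; all names are hypothetical.
let enteredGrams = 30.0
let absorptionHours = 3.0
let enteredRate = enteredGrams / absorptionHours            // 10 g/hr
let observedRate = 20.0                                      // g/hr over the first hour
let excessRate = observedRate - enteredRate                  // 10 g/hr of deviation

// Allocate 50% of the deviation to carb-count error, 50% to absorption speed.
let countErrorShare = 0.5
let countErrorRate = excessRate * countErrorShare            // 5 g/hr

// No projection: add only the count-error deviation already observed (1 hour's worth).
let noProjection = enteredGrams + countErrorRate * 1.0                            // 35 g

// Project with decay: the count-error-adjusted 15 g/hr decays back to 10 g/hr over the
// next hour (averaging 12.5 g/hr), then 10 g/hr for the final hour.
let withDecay = (enteredRate + countErrorRate) +             // hour 1: 15
                (enteredRate + countErrorRate / 2) +         // hour 2: 12.5
                enteredRate                                  // hour 3: 10   -> 37.5 g

// Project without decay: scale the whole entry by the adjusted-rate ratio (15/10).
let withoutDecay = enteredGrams * (enteredRate + countErrorRate) / enteredRate    // 45 g

// 100% to count error, no decay: scale by the full observed/entered ratio (20/10).
let fullAllocation = enteredGrams * observedRate / enteredRate                    // 60 g

print(noProjection, withDecay, withoutDecay, fullAllocation)   // 35.0 37.5 45.0 60.0
```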

In addition, the observation time window can vary: trailing 15 minutes, trailing 30 minutes, the time since the carbs were consumed, and so on.

This approach requires us to have rules for what counts as a valid sample. It's probably best not to make big decisions until there is some reasonable data (15 minutes of absorption after the initial 10-minute delay?), but not to wait too long either. Some of this is already in LoopKit for calculating momentum.

There also need to be some safety parameters, and these should probably be asymmetric, but also configurable, probably in the code not the app, to make it a bit harder for people to get carried away with aggressiveness.

One example is max carb counting error. So, if the user selects uncertain count, and observed absorption is (for example) 3 times the inputted carb absorption, the assumed carbs consumed that would be high-temped against wouldn't exceed (for example) 2 times the inputted amount.
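A rough sketch of such a cap, using the example numbers above (the function name and the 2x factor are just placeholders):

```swift
import Foundation

// Hypothetical safety clamp: even if observed absorption implies a much larger "true"
// carb entry, cap the amount that would actually be high-temped against.
func cappedAssumedCarbs(enteredGrams: Double,
                        gramsImpliedByObservation: Double,
                        maxCountErrorFactor: Double = 2.0) -> Double {
    min(gramsImpliedByObservation, enteredGrams * maxCountErrorFactor)
}

// Observed absorption implies 3x the entered carbs, but the assumed total is capped at 2x.
print(cappedAssumedCarbs(enteredGrams: 30, gramsImpliedByObservation: 90))  // 60.0
```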

When absorption is much slower than expected, it is arguably conservative to let this flow through and lead to a carb expiry rate that is much faster than 1.5*input implies. In effect, this will start zero-temping earlier where, for example, a meal is not consumed after a bolus is taken. But for a variety of reasons (gastroparesis, etc.) people might want to configure this behavior differently.

It's worth noting that I believe the OpenAPS approach behaves somewhat this way - i.e. it dynamically changes the prediction of total carbs expected to be absorbed (as opposed to just the speed of absorption) in a fairly sensitive way as a function of the recently-observed carb absorption trend.

scottleibrand commented 6 years ago

(For purposes of this response, I'm going to focus on oref0 0.6.0-dev, although much of this applies (with modifications) to 0.5.x and earlier as well.)

OpenAPS behaves the same as Loop in terms of only correcting the assumed timing of future carb absorption based on past carb absorption, not changing the total amount. OpenAPS starts with a weaker assumption about projected carb absorption, because it does not ask the user to input a more precise estimate of carb absorption time, and instead initially assumes a bilinear /\ shaped carb absorption curve with a total duration of 3 hours. As soon as we see carb absorption, we assume that it will continue into the future, linearly decreasing to zero over however long it takes to absorb the remaining COB. (That sounds similar to your "project with decay" scenario, except that it only applies to how fast the remaining COB absorbs, not what the remaining COB actually is.) If the projected time to absorb all the remaining COB is more than 3 hours + 1.5x the time since the last carb entry (which usually happens before absorption ramps up, or if it slows early for some reason), then the remaining carbs not projected to absorb in time are overlaid using the bilinear /\ shaped carb absorption curve.
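To make the /\ shape concrete, here is a sketch in Swift (this thread's language of choice); it is a paraphrase of the shape described above, not oref0's actual code: the rate ramps linearly to a peak at the midpoint, then back down to zero, with the area under the curve equal to the entered carbs.

```swift
import Foundation

/// Sketch of a bilinear /\ carb absorption curve: the rate ramps linearly up to a peak
/// at the midpoint of the absorption window, then back down to zero, with the area under
/// the curve equal to the total carbs. (A paraphrase of the shape above, not oref0 code.)
func bilinearAbsorptionRate(totalCarbs: Double,
                            durationHours: Double,
                            hoursSinceStart t: Double) -> Double {
    guard t > 0, t < durationHours else { return 0 }
    let peakRate = 2 * totalCarbs / durationHours   // triangle area equals totalCarbs
    let half = durationHours / 2
    if t <= half {
        return peakRate * (t / half)                      // rising edge
    } else {
        return peakRate * ((durationHours - t) / half)    // falling edge
    }
}

// 30 g over 3 h peaks at 20 g/hr at t = 1.5 h.
print(bilinearAbsorptionRate(totalCarbs: 30, durationHours: 3, hoursSinceStart: 1.5))  // 20.0
```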

In addition, OpenAPS calculates multiple predicted BG "scenarios" and incorporates predictions from more than one of them in a blended manner. For some of the predictions this is somewhat analogous to the way Loop uses RC and momentum, but for the one most relevant to carb absorption, called UAM (for unannounced meal) it is somewhat different. UAM is now largely used to provide a "second opinion" on predicted carb effects for meals that are actually announced, but whose carb counts may be imprecise or inaccurate. UAM uses solely the last ~45m of deviations (BG deltas compared to insulin-only effects), and extrapolates how such deviations will continue into the future based on the recent deviation trend. If the current deviation is lower than the highest recent deviation, it assumes that deviations will continue to decrease at the rate they have been decreasing since that time. When deciding how to dose for predicted BGs, OpenAPS blends the COB BG predictions with the UAM BG predictions according to what percentage of entered carbs remain as COB. So when 90% of carbs have been absorbed, the original carb count, and resulting COB-based BG predictions, are only used for 10% of the decisionmaking, and UAM is used for the other 90%.
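As a toy illustration of that blending rule (hypothetical names; the real oref0 logic handles more cases than this):

```swift
import Foundation

/// Toy version of the blending described above (hypothetical names): weight the COB-based
/// BG prediction by the fraction of entered carbs still on board, and the UAM-based
/// prediction by the remainder.
func blendedPrediction(cobBasedBG: Double,
                       uamBasedBG: Double,
                       carbsOnBoard: Double,
                       enteredCarbs: Double) -> Double {
    let cobFraction = max(0, min(carbsOnBoard / enteredCarbs, 1))
    return cobBasedBG * cobFraction + uamBasedBG * (1 - cobFraction)
}

// With 90% of carbs absorbed (3 of 30 g still on board), the COB prediction gets 10% weight.
print(blendedPrediction(cobBasedBG: 120, uamBasedBG: 160, carbsOnBoard: 3, enteredCarbs: 30))
// 0.1 * 120 + 0.9 * 160 = 156.0
```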

Hopefully that helps - let me know if you have any other questions about how OpenAPS handles any particular scenario and I'll be happy to answer them, but hopefully without hijacking the entire thread.

jeremybarnum commented 6 years ago

Thanks @scottleibrand - that helps a lot. I'll be following up, I'm sure.

dm61 commented 6 years ago

@jeremybarnum first, thanks for opening this issue. I think that your opening paragraph, concluding with the first sentence of the second paragraph, could go as is to the algorithm section of the Loop docs.

Regarding the dynamic CA algorithm, after some more thinking, I am not really sure I'd be in favor of re-adjusting the user entered carb values. You provided nice examples of how this could potentially be done, but I still do not see how Loop would decide to add or subtract carbs (as opposed to simply adjusting the absorption time). Any such decisions would seem to be arbitrary. IMO, handling any unmodeled effects should be conceptually delegated to RC/momentum, as Loop is already doing (pretty well I should say).

But, I do think there is some room for improvement in the dynamic carb algorithm. In particular, quoting your summary ("the remaining carbs are projected according to the original computed absorption rate"), it seems to me that we should be able to do a bit better than that. As an example, let's take a look at these two very different CA curves: [image img_5113: two observed carb absorption curves]. Neither of them looks like a rectangle or a triangle (although a triangle could perhaps apply to the first, fast CA case). The snapshot is taken in the middle of the second CA (a large mixed lunch). It's pretty clear that Loop's CA prediction is somewhat below the actual future CA. This could be addressed, for example, by applying the RC/momentum approach to the CA curve itself (while still keeping the total area under the curve equal to the carbs entered), thus providing a longer-term corrective impact on BG predictions and offloading Loop's main RC/momentum mechanism, which would still handle short-term unmodeled effects.
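As a sketch of what I mean (purely illustrative, with assumed names): project the recently observed absorption rate forward with decay toward the entered rate, and stop when the remaining entered carbs run out, so the total area under the CA curve still equals the carbs entered.

```swift
import Foundation

/// Purely illustrative: project the recently observed carb absorption rate forward,
/// decaying toward the originally entered rate, and stop once the remaining entered
/// carbs are used up, so the total area under the CA curve still equals the entry.
func projectedAbsorptionRates(recentObservedRate: Double,   // g/hr, from recent deviations
                              enteredRate: Double,          // g/hr implied by the entry
                              remainingGrams: Double,       // entered carbs not yet absorbed
                              stepHours: Double = 0.25,
                              decayHours: Double = 1.0) -> [Double] {
    var rates: [Double] = []
    var remaining = remainingGrams
    var t = 0.0
    while remaining > 0 {
        // Linearly decay from the observed rate back to the entered rate over decayHours.
        let blend = min(t / decayHours, 1)
        let rate = recentObservedRate * (1 - blend) + enteredRate * blend
        let absorbed = min(rate * stepHours, remaining)   // never absorb more than remains
        rates.append(absorbed / stepHours)
        remaining -= absorbed
        t += stepHours
    }
    return rates
}

// 20 g remaining; recently observed 15 g/hr vs. the entered 10 g/hr.
print(projectedAbsorptionRates(recentObservedRate: 15, enteredRate: 10, remainingGrams: 20))
```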

I think there is also some room for improvement in Loop's RC/momentum calculations, specifically related to carb entry errors or any unannounced meals. @scottleibrand, many thanks for providing some more insight into the oref0 algorithms. I'll try to look into UAM some more, and will likely have more questions.

jeremybarnum commented 6 years ago

@dm61 thanks so much for your thoughts on this (and when I get a second maybe I'll do some bite-sized PRs to the Loopdocs algo section - although I don't want to distract @ps2 from his exciting current priorities!) A couple of things are clear to me from reading your post and @scottleibrand's a second time through. One is that what I have in mind is, in fact, a Loop version of what UAM sounds like. Specifically, quoting Scott, a version of the algorithm that is designed to improve the

predicted carb effects for meals that are actually announced, but whose carb counts may be imprecise or inaccurate

When I get a second I will catch up on OpenAPS to try to understand how the "multiple opinions" are handled - i.e., are they a function of user-specified information at meal time? Personally, I think it would make sense in Loop to have this be specifiable by the user as part of carb entry - i.e., the user would specify that the entry is either a high-certainty or low-certainty entry, and the prediction would react to observed absorption accordingly.

@dm61 I take your point that actually modifying the carb entry seems weird. That's not exactly what I meant - it was more a heuristic for how I'm thinking about the problem. Take a simplified thought experiment whereby you consume a packaged, simple meal of fast-burning, exactly measured carbs, with a known absorption time. But for whatever reason you misread the label on the package and enter only half the number of carbs you are actually consuming. I find it useful to think about a "true" number of carbs as distinct from the "inputted" number of carbs in this scenario. And, if you assume away any cross terms whereby absorption time is sensitive to meal size, then the observed absorption rate will be exactly double what is expected - and the question is then what to do with this information. (Of course, this is a bit of a bad example because in this case the user would have said that the carb count was certain, since it's packaged food, in which case the current algorithm would be the right one in my view - but it's a useful thought experiment nonetheless.)

I also agree that conceptually there is a lot of overlap between what we're talking about here and both RC and Momentum. And maybe it will turn out to be true that a very simple way of implementing this is simply to make RC more aggressive if the user has specified high uncertainty - maybe instead of projecting RC out for an hour with decay, project out to the specified carb absorption time, maybe with less decay. I wouldn't be surprised if, mathematically, that would end up being a very similar solution in practice to both UAM and some of what I was proposing initially. But rightly or wrongly, I like to think of momentum as an all-in statement across all effects that what has happened most recently is the best prediction of what is about to happen; and of RC as a catch-all term for all unmodeled effects contributing to prediction error. If we could do a better job of predicting carb absorption by using the historical information we have, then RC could be left to capture truly unknown/unmodeled effects. This is a similar objective to the one we discussed in #577 - decreasing the workload on RC from knowable sources of error.
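As a sketch of what that "more aggressive RC" might look like (hypothetical names and numbers, not Loop's actual retrospective correction code; this version only changes the projection horizon, not the decay shape):

```swift
import Foundation

/// Hypothetical sketch, not Loop's actual retrospective correction: project the current
/// BG deviation forward, decaying linearly to zero over a horizon that depends on how
/// certain the user said the carb entry was.
func projectedRetrospectiveEffect(deviationPerHour: Double,          // mg/dL per hour
                                  hoursRemainingOnCarbEntry: Double,
                                  lowCertaintyEntry: Bool) -> Double {
    // Roughly an hour of decay normally; for a low-certainty entry, project out to the
    // remaining carb absorption time instead (only the horizon changes in this sketch).
    let horizonHours = lowCertaintyEntry ? hoursRemainingOnCarbEntry : 1.0
    // A linear decay to zero integrates to half of (rate x horizon).
    return deviationPerHour * horizonHours / 2
}

// A +20 mg/dL/hr deviation with 2 h of carb absorption left on a low-certainty entry:
print(projectedRetrospectiveEffect(deviationPerHour: 20,
                                   hoursRemainingOnCarbEntry: 2,
                                   lowCertaintyEntry: true))   // 20.0 mg/dL total effect
```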

Finally, you are also raising a question about the curve shape for future carb absorption. Triangles, rectangles, others? I personally think that's a slightly less important question. I start by asking: how can we use the observed absorption information to get the best prediction of remaining COB and expected absorption time? If we have good central-case expectations of those, the exact shape of the curve will not affect the eventual BG prediction, even if it does affect the shape of the prediction curve. In any event, if the empirical absorption data from the community gives us new insights into what the shape actually is, then by all means we should use it - which would further decrease the workload on RC and clearly improve performance.

I started building a crude spreadsheet simulator for myself a couple of days ago which was quite helpful in developing intuition. After I finish it and play with it a little, I'll be back with some more precise thoughts, I hope.

jeremybarnum commented 6 years ago

After spending some time thinking about this, some updated thoughts:

I still think that in theory, it could be interesting to modify the carb effect prediction approach itself. It does feel slightly more "pure". But being pragmatic, leveraging RC seems more than adequate.

I do think having a simulator to test these ideas is important (key question: would this approach create more problematic lows or fewer, relative to the current approach, and how predictable would they be, even if low-temping can't fix them?). The simulator needs to be stochastic, and I suspect this would be pretty easy to add to @dm61's Matlab simulator, but to be useful we would also have to enhance that simulator to include RC and momentum. In any event, I've decided to suck it up and try to build a basic simulator from scratch in a Swift playground as my "learn by doing" project. That will take longer, but I've been putting off learning Swift for too long. I'll report back with any progress.

dm61 commented 6 years ago

@jeremybarnum thanks for the updates. I've been planning to update the simulator with momentum, RC, and the dynamic carb algorithm, as well as to allow for differences between user-entered and "real" parameters, both for meals and for ISF, CR, and basal rates. I'd like to have a tool that would allow us to evaluate (at least in a very basic form) the potential effectiveness of various algorithm improvements in the presence of randomized inputs and randomized user-versus-real parameters, along the lines you've suggested. I also agree we could examine some updates to how RC works. I am slow, and time available is very limited, so I am unable to give any timeline for simulator improvements; hopefully over the coming break.

jeremybarnum commented 6 years ago

@dm61 we are thinking about this identically - including in terms of speed - although I'm sure I'm much slower than you are. I wish I had much more time! Let's see how it goes - maybe we'll wind up with two competing simulators (mine will be very crude compared to yours) and we will get some benefit from the crosschecking that will result.

Kdisimone commented 4 years ago

Just adding a link to related conversations: https://loop.zulipchat.com/#narrow/stream/144111-general/topic/Possible.20Carb.20Model.20Changes and https://loop.zulipchat.com/#narrow/stream/144111-general/topic/InitialDelay.20in.202019-09-02.20Dev

github-actions[bot] commented 10 months ago

This issue is stale because it has been open for 30 days with no activity.

github-actions[bot] commented 10 months ago

This issue was closed because it has been inactive for 14 days since being marked as stale.