Closed holbrooa closed 3 years ago
@TomRust @holbrooa and @saveth
For when we return to this later:
We propose defining high-symptom patients as those who are above 50% on the symptom distribution for this team.
- Is that defined from their measures? If so, I’m VERY excited about this possibility.
- New floors and exempt PC/PCMHI - this makes a lot of sense to me. Let’s plan to meet about this after @holbrooa’s work on #615 is handed off to @saveth
- Yes, @TomRust and I talked about this at our meeting last week, and as a health services researcher, working with a better estimate of the distribution of engagement durations makes a ton of sense to me. But, again, if this takes a fair bit of time, we do need to support a few other key priorities first; let's do our best to come up with a cost estimate for this and then return to it.
Thanks so much, all! 😎
@holbrooa
@TomRust and I met today and want to return to this once your hand-off to @saveth is complete and the documentation on OSF is done.
- Do we need a meeting?
Decisions:
1. CHANGE 1 is APPROVED.
2. Let's review the case examples together, using the updated PC/PCMHI data guidance we received from Jodie. Cross-ref #596
Next Steps: @holbrooa
3. The recommendation is: "Pulling the episode counts along with the engagement durations, which Tom can use in the model to weight the flows."
This continues to make a lot of sense - it seems to primarily be a burden on @holbrooa
Example: Number of patients who step down from SMH to GMH, currently based on engagement duration before step down, but not accounting for how many patients that is.
In this example, @TomRust might normalize all 3 outflows (i.e., the episode count) from the Low Symptom SMH Stock, to say 90% follow this path, 10% follow this path, 0% follow this path, etc.
In other words, we would validate the SP parameters and derive values, assuming that what we observe is driven by the team's patient characteristics (i.e., not under the team's control).
Therefore, in the model diagram, it would be red variables without sliders.
But, it likely would really help the team, if they had a way to see what % followed each flow.
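The normalization described above can be sketched in a few lines. This is a hedged illustration only: the path names and the 90/10/0 counts follow the hypothetical example in the comment, not the real SP model structure.

```python
# Hypothetical episode counts for the three outflows from the
# Low Symptom SMH stock (names and numbers are illustrative).
outflow_counts = {"to_gmh": 90, "to_pc": 10, "to_ended": 0}

total = sum(outflow_counts.values())
# Normalize each outflow to the fraction of episodes following that path.
outflow_shares = {path: count / total for path, count in outflow_counts.items()}

print(outflow_shares)  # {'to_gmh': 0.9, 'to_pc': 0.1, 'to_ended': 0.0}
```

Displaying `outflow_shares` to the team would give them the "% followed each flow" view mentioned above.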
Next Steps:
@staceypark FYI: And, we may need a meeting 😄
@holbrooa and @TomRust met and talked at length about these issues. We have a multi-pronged plan involving model changes and data changes. But first, some data exploration. We came up with a reasonable set of data to review, which @holbrooa will work on then @lzim can see it and make some decisions next week. I will try to have it done early in the week, to give myself some time to implement any changes resulting from that meeting.
Great! Thanks @TomRust and @holbrooa
I’m presenting at a conference next week, but ping me on GitHub and I will take a look.
Lindsey
@holbrooa @TomRust I believe you guys were going to provide an update on potential options we could pursue to resolve this. Could you both join the Monday Workgroups meeting to run your options by @lzim?
Let's update this issue with as much detail as possible beforehand so we can be efficient.
New idea from Lindsey: Include the 1-n-dones as an outflow from "Early in Care"!!!
There was a bug in the gap thresholds. It's fixed, and the fix dramatically increases the engagement parameters, but that fix doesn't quite solve the problem (i.e. the parameters are still not quite up to the level of the floors).
0-length engagements are included in the engagement durations for both engagement time before ending and engagement time before step up/down. Removing them from the median calculation dramatically increases the parameters. By itself, this doesn't quite solve the problem, but together with the above change, most data pulls seem to be above the floors.
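A toy sketch of the zero-length fix described above, with hypothetical durations; the real calculation lives in the data-pull code.

```python
from statistics import median

def engagement_median(durations_weeks):
    """Median engagement duration, excluding 0-length ("one-and-done")
    engagements, which otherwise zero-inflate the median."""
    nonzero = [d for d in durations_weeks if d > 0]
    return median(nonzero) if nonzero else 0

# Zero-inflation drags the median down:
raw = [0, 0, 0, 4, 6, 10, 12]          # illustrative durations in weeks
assert median(raw) == 4                # with one-and-dones included
assert engagement_median(raw) == 8     # excluded: median of [4, 6, 10, 12]
```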
Does time to detect happen before the actual start of the new episode?
We have various options for how to deal with and how to explain the potential mismatch between the floors and the data. I'll make a separate post here.
Option A: Fixing the bug mentioned above and removing the 0-length engagements (and potentially moving them to a separate outflow as @lzim suggests) actually boosts the parameters above the level of the floors for the teams that @holbrooa has tested recently. So we could just make those two small data fixes and not change anything else. The removal of 0-length engagement is very easy to explain if they form a separate outflow in the case of ending. It's less clear for the step ups/downs.
So option A1 is removing the 1-n-dones and leaving it at that. A2 is implementing them as their own outflow. But what do we do with the 1-n-steps?
Option B: Do option A but also change the time to detect to allow a value of 0.1, giving us a lot more probability of our data parameters coming in above the floor. This could be explained as the mix of situation 1: "new episodes of care for existing patients," in which the time to detect often happens before our algorithm actually detects the new episode, and situation 2: new episodes for patients being referred in from elsewhere, where the detection really does have to happen in/after that first visit. If you have as much or more of situation 1 as you do of situation 2 (the data suggest this is usually the case, at least for GMH) then a median time to detect of 0 (ish) is totally reasonable.
@TomRust and @holbrooa also discussed getting rid of floors altogether and an option where teams pick their own time to detect. I think that latter case could just be option B above. We can also pull episode counts alongside the various parameters. I'm not totally sure how they'd be used.
Engagement duration will now only include time in an episode of treatment, and no longer includes the time/work it takes for the care team to make a decision. Time to Detect will no longer influence other outflows of Engagement Duration. The two variables are completely separate from each other now. (This accounts for patients who may have an engagement duration of 0-1 weeks, or patients who start a new episode as existing patients.)
Inflow to High to Low Symptom Proportions to be 50%
0 length engagements - Yes, remove them to fix, but still need to resolve 1-n-step and 1-n-done situations.
Time to Detect - basecase will be 4 weeks as the default across all three settings
@TomRust and @holbrooa will update the model and parameters, respectively, based on the changes discussed above. @TomRust & @holbrooa will craft updated "i" information text for @lzim & @staceypark to review.
Quoth the server: Four oh four.
😂
@TomRust @holbrooa decided to leave Time to Improve as is, but we do need @TomRust to review the definitions for Time to Improve & Time to Detect and make sure they reflect the decisions made during the meeting.
In @holbrooa's data UI, everything to the right of the numbers is not dependent on the code and can be changed in final_datafiles before propagation. Anything to the left of the numbers needs to be updated in the code.
@holbrooa Any update on the code to the Data UI?
Yeah the SP stuff that we talked about is all implemented in nevermore. The descriptions aren't current anymore, though. Those can and should be changed directly in nevermore. For example, the symptom proportions are now spitting out 0.5 instead of a calculated value, but it still has the old description describing how it used to be calculated.
@holbrooa where would we find the current descriptions so we can update? would @TomRust have them or do we need to draft them?
I'm talking about the descriptions in the Data UI, so you'll find them in the relevant columns of SPParams in nevermore. Those description columns are static (i.e. not coming from the code) so you can just change it right there.
I don't think there's a new description anywhere for you to just use. Like, the data UI is out of date in this known way, and I don't think new language has been drafted for this parameter in any crosswalk or sim UI or anything. So yes, you'll need to draft something new. I can probably call in to a meeting or something to help with that, but I'm in SLC all week next week, so it might be easier to just email me whatever you come up with if you want my double-check.
Revisiting decisions made on 10/23 Plan is to follow-up week of Jan. 6, 2020
FYI: @TomRust @staceypark @jamesmrollins @holbrooa
Tom, Stacey, and Lindsey met, tested, and reviewed SP in Nevermore.
Symptom Proportions New/edited definition edited in "Nevermore:" "An estimate of the proportion of patients engaged in episodes of care in each setting who have high symptoms with 50% above and 50% below the median. (pct)"
Question: Can we edit the "i" information that pops off to the side of the Team Data Table, or only individual pop-ups?
Re-cap: 0-length engagement time - we will exclude zero-length engagements (i.e., "one and done") as these are not episodes of care. Therefore, these zeros should not be included in calculations of medians (zero-inflation). This includes engagement time before step up/down & engagement time before ending.
Question: how did we (or didn't we) resolve the 1-n-step and 1-n-done situations?
Based on the record, the "Time to Detect" basecase is 4 weeks.
Could not identify "Time to Improve" related decisions.
Still need to make a decision using the Episode Counts about whether and how to display.
Try to schedule a meeting in January with Tom, Lindsey & James (try to touch base with Andrew before then).
Thanks!
@jamesmrollins Could you update the 'i' pop-up for the Team Data Table in SP with the updated definition for Symptom Proportions:
- Symptom Proportions definition: New "Team Data Table" and "i" information for Master Cross Walk and Sim UI
Symptom Proportions New/edited definition edited in "Nevermore:" "An estimate of the proportion of patients engaged in episodes of care in each setting who have high symptoms with 50% above and 50% below the median. (pct)"
@staceypark will you please confirm the change below?
@hirenp-waferwire - please make the change indicated below in the SP Team Data Table.
Looks correct
Hi @TomRust - An action item in Lucid says, "Also need to comb through and look for any mentions that state MTL units are 'Patients' or 'Appointments,' because we also have the unit 'Episodes of Care' for SP." @branscombj and I went through the SP module and found 2 occurrences of Episodes of Care on the model diagram linking dials or rectangles measured in patients; 2 mentions of Episodes of Care in the team data table measured in patients/week or weeks; and 1 experimental variable measured in patients/week. Screen shots below. Not sure how to best edit the see/say files, given the Episodes of Care are measured mainly in patients/week or weeks. Talked with @jamesmrollins @lzim about this today, and the suggestion was to put the information here. Thanks.
Discussed this issue with @TomRust last Friday. We reviewed the episodes count issue and determined that if less than 1% of the total population of episodes of care is returned from the team data query, then the sample size should be considered insufficient. If the sample is insufficient, then the model should read the number as a zero, thereby shutting down the flow in question and preventing the user from making a less-than-plausible inference.
We discussed two ways to manage the logic for this:
In either case, insufficient data will return zero, thus not putting the user in a position to judge whether or not the model extrapolation is valid. No graphical warning or value-weighting measure (similar to Yelp- or Amazon-style ratings) would be necessary, since the sim would not present insufficient measurements in the form of team data in the first place.
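The insufficient-sample rule above can be sketched as a tiny pass-through function. This is an illustration under stated assumptions: the function name is hypothetical, and the 1% threshold follows this comment (a later comment in the thread uses 2%).

```python
def scrub_parameter(value, episode_count, total_episodes, threshold=0.01):
    """Return 0 when the parameter is supported by fewer than
    threshold * total episodes of care; otherwise pass it through."""
    if total_episodes <= 0 or episode_count < threshold * total_episodes:
        return 0
    return value

# 1 episode out of 500 (0.2%) is insufficient -> model reads zero
assert scrub_parameter(14.0, episode_count=1, total_episodes=500) == 0
# 50 out of 500 (10%) is sufficient -> value passes through
assert scrub_parameter(30.0, episode_count=50, total_episodes=500) == 30.0
```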
@TomRust indicated it would take 10 hours to complete the task at the model level.
It would take about 5 hours of effort on our end, due to some research that @anazariz and I would have to do prior to making the change. @jamesmrollins
@jamesmrollins and @TomRust
How would this zero be explained to the user?
@anthonycpichardo @anazariz
@lzim @TomRust @anthonycpichardo @anazariz couldn't we put an explanation in the Team Data table? This assumes they are looking at the Team Data when they download it from the portal.
Otherwise, we are back to trying to figure out how to graphically warn them in the Team Data Table in the very busy Experiments Section, which could be accomplished by:
The yellow cells indicate numbers that do not have a sample size adequate for representation in the model.
@jamesmrollins @lzim @anthonycpichardo I moved this to the current Epic, as I believe this will affect documentation for go_live.
Updates from James 2020/02/24 workgroup leads meeting:
ISSUE: We need to make a decision on where this will be implemented (quant, model/sim) and clarify which documents it will affect.
DECISION: We decided to resolve this issue at this Thursday's Support Workgroups Meeting.
From Support Groups Meeting 2/27
Agree that the exclamation point indicator on Issue #648 is good and sufficient for Sim UI Team Data Table.
Need language for it, and detail in the SEE and SAY guides (session 3 Team Data and session 5 - Experiments section of the Sim UI).
Would be great to add the parallel exclamation point to the Model Diagram for SP.
Lindsey proposes that we need to understand the fuller implications of setting flows in the diagram to zero, since doing so can affect other flows in the diagram.
James, Tom & Lindsey need to find a time to meet. Friday 2/28 or next week after hours may work. James to reach out to Tom for possible times.
Guide language workshopped during the meeting
Our goal is to make learners aware when low counts are used for estimates. A key systems insight for facilitators is that teams will always find the highest-leverage change by focusing on where the patients are (i.e., stocks or flows).
Sometimes we may have estimates derived from very few observations. We decided to set infrequently observed values to zero to avoid inflation of rarer episodes of care.
Poor inferences are possible when estimates are based on a low count and a low duration. If values that are <1% of the total counts of episodes of care in the team are retained in the model, they will inflate rare episodes.
Here is an example of two paths competing for the same outflow of patients from PC/PCMHI:
- PC/PCMHI to GMH
- PC/PCMHI to SMH
Because the flow is based on the engagement duration (weeks), below is an example with a 30:1 difference in episode counts, while the difference in engagement duration is only about 2:1 (30 weeks to 14 weeks).
Retaining this parameter to estimate these model outflows (pts/wk) over the course of a two-year experiment timeline would exaggerate the number of patients who step up from PC/PCMHI to SMH from 1 to 20 and reduce the number who step up from PC/PCMHI to GMH from 30 to 10.
If we set it to zero, the basecase won't include this rare flow. Because the infrequently observed outflow is set to zero, the other stock outflows will be slightly inflated, but this is better than extreme exaggeration of uncommon episodes of care.
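The arithmetic in this example can be checked with a short sketch. Numbers come from the comment above; the duration-only split is a simplification of how the model apportions an outflow when it is driven only by engagement duration.

```python
# Observed data for the two competing step-up paths out of PC/PCMHI.
dur_gmh, count_gmh = 30.0, 30   # weeks, episodes (PC/PCMHI -> GMH)
dur_smh, count_smh = 14.0, 1    # weeks, episodes (PC/PCMHI -> SMH)

# Duration-only split: each path's rate is proportional to 1/duration,
# so the rarely observed (but shorter) SMH path dominates.
rate_gmh, rate_smh = 1 / dur_gmh, 1 / dur_smh
share_smh = rate_smh / (rate_gmh + rate_smh)
print(f"duration-only share to SMH: {share_smh:.0%}")   # ~68% of step-ups

# Count-weighted split recovers the observed share (1 of 31 episodes, ~3%).
share_smh_observed = count_smh / (count_gmh + count_smh)
print(f"observed share to SMH: {share_smh_observed:.1%}")
```

The gap between ~68% and ~3% is exactly the distortion described above, which weighting by episode counts (or zeroing the rare flow) avoids.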
However, the Measurement Based Stepped Care for Suicide Prevention module includes low base rate patterns for high risk flag patients.
The outflows from the HRF Patients stock are to residential
Sim UI - Setting View [CURRENT]:
- PC/PCMHI - Residential or Unflag (2 outflows)
- GMH - Residential, GMH or Unflag (3 outflows)
- SMH - Residential, GMH or Unflag (3 outflows) - no change

Setting View [CHANGE NEEDED]:
- PC/PCMHI - Residential, GMH or Unflag (3 outflows)
- GMH - Residential or Unflag (2 outflows) - likely to keep their HRF flag patients
- SMH - Residential, GMH or Unflag (3 outflows) - no change
Inpatient was not included as an outflow from any setting view because patients never fully transfer to the inpatient setting; they are still typically managed by the coordinating team before the inpatient stay and after discharge from it.
The episode counts for HRF patients are always likely to represent a low percentage of the total episodes of care in the team.
We use the same logic for parameters that control the outflows from the HRF patients stock. These parameter estimates are set to zero when they are < 1% of the HRF episodes of care, and not when they are < 1% of the total episodes of care in the team.
Hi @TomRust, I know you are busy, so I will just put my inputs here. In order for the simulation to turn on an exclamation point, it will need a signal from the model. To that end I propose that we create a data variable structure in the example below. I count about 50 data variables (not including “adjusted”), although I don’t think they are all related to this problem.
Variable Name | What it does in the model | What it does in the Sim UI |
---|---|---|
Data Sum of Episodes | Carries the sum of all episode counts | Not used by Sim UI |
Data GMH to PC/PCMHI Engagement Time before Step down Flag | If the episodes of care associated with this Engagement Time are less than 1% of total episode counts, then = 1. The 1 can be used to turn the value of the flow to “0.” | The sim logic will read the “1” and turn on the related “!” icons next to the team data table entry and the related parts of the flow diagram. |
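The flag logic proposed in the table above might look like this as a sketch; the function names are hypothetical, and the variable naming follows the illustrative rows in the table.

```python
def flag_value(episode_count, data_sum_of_episodes, threshold=0.01):
    """Per the table: flag = 1 when the episodes behind this parameter are
    less than 1% of total episode counts. The model uses the 1 to force the
    flow to 0; the Sim UI uses it to turn on the "!" icon."""
    return 1 if episode_count < threshold * data_sum_of_episodes else 0

def model_flow(raw_flow, flag):
    """The flow value the model actually uses."""
    return 0 if flag == 1 else raw_flow

flag = flag_value(episode_count=2, data_sum_of_episodes=1000)  # 0.2% of total
assert flag == 1 and model_flow(raw_flow=0.8, flag=flag) == 0
```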
What do you think? Let me know if I can help. James
@jamesmrollins @lzim Just touched base quickly with @TomRust, who is hoping to turn to and resolve this on Monday. Tom has been made aware of the documentation dependencies that are waiting on his work.
@TomRust has blocked out time next Monday 3/23 to check the models and the <1% threshold.
@TomRust @staceypark @jamesmrollins
The "SP floors" decision rules only refer to the basecase:
For experiments you can turn these sections back "on:"
"!" pop-up language: "This parameter value is < 2% of total episodes of care, which is an unreliable estimate. In the basecase, the parameter reads "0" to disengage this model flow. However, a non-zero value can be used in an experiment."
@staceypark Can you please confirm with @anthonycpichardo & @jamesmrollins about using the correct Team 4 data. Thanks!
@staceypark @lijenn @jamesmrollins @dlkibbe and I are not sure if we have a documentation-updates task tied to this and, if so, whether it has to wait for this to be implemented, or whether we (you :)) know enough now for those updates to be made? Thanks!
@TomRust @jamesmrollins
@branscombj @dlkibbe I think wait on documentation until we get an update from Tom
Also, I edited the title, milestone, and epic to reflect may_epic.
@TomRust I just wanted to quickly ping to see if you are able to post the model, and hypotheses for the users you want to review it. I know your attention will be focused elsewhere (or maybe already is?) - let us know. Thanks!
Hi all! @lzim @jamesmrollins @branscombj @dlkibbe @staceypark
Just trying to close this issue before I’m in newborn-land. I’d like feedback on a prototype from both James and some facilitators that addresses the three issues raised at the top of this issue thread. Unfortunately, the prototype is just a Vensim model, so I guess the only thing for facilitators to test is the description of the use case? I hope that works! I’d love for Debbie and Jane to see how this change to SP fits into the facilitation of Session 5, and for James to check out all the related Sim UI changes.
I've split the hypotheses to be tested into three comments, as there were three "issues" raised that started this thread in the first place. I assume it will be easier to track responses when they are separated.
--TOM
Address part 1 (from top of issue thread): The symptom % is now hard-coded to be a 50/50 split, which means that we're defining "high symptom" as any patient with greater-than-average symptoms, and "low symptom" as less-than-average symptoms. Up to now, we've been defining high and low symptom by where a patient transfers to (e.g., a patient who steps up is labeled "high symptom"), and then the model takes these "where they went" data and calculates their initial symptom state based on all the other parameters...which is very hard to explain and added seconds to the sim run time.
Use-case to test for SP sessions for @branscombj and @dlkibbe: End user – In session 3, we produce the team data table and are told that the model includes a few important dynamics that we don't have good data for (yet) in VHA: 1) patients get better when they're in treatment, 2) not all patients have the same severity of symptoms... and others that I don't remember now :). To capture patients coming to our setting with a distribution of symptoms, the model simplifies this so that half of patients start out "high" symptom (i.e., with more severe than average symptoms) and the other half start out "low" symptom.
Sim Dev @jamesmrollins – Now that we've changed the underlying definition, the symptoms % is no longer based on team data.
Address issue #3: I included a “parameter scrub” logic that uses a "threshold for inclusion" based on whether or not a parameter is based on >2% of either total HRF or non-HRF care episodes (across all the clinics the team selected).
Use case to test by facilitators @dlkibbe @branscombj -- End-user: I’m in session 5, reviewing how all my patients’ data are used to customize the sim. My team data has some “wacky” values that I don’t trust / that don’t mesh with my experience. But, looking at the “episode count” column in the model parameters file shows me that these numbers are based on some “one-off” incidents, and thus should probably be ignored as statistically insignificant. My facilitator explains that model parameters based on a tiny fraction of the care we deliver (less than 2% of the total episodes for either patients with or without a High Risk Flag) are excluded from the simulation. This allows me to focus my learning on what happens most of the time, to find change ideas that will improve care for the most patients. They reassure me that we can always add any of these values back into the simulation as an experiment, if we want.
Hypotheses to test:
a. End-users will no longer be distracted by “invalid” model parameters based on a small number of care episodes
b. Learning will go faster, as the simulation behavior will no longer be skewed by these “untrustworthy” parameter values
c. Easier for users and facilitators to focus on the most important flows in the model, as the “less significant” flows will be turned off by leaving out these parameter values, and the whole SP model will become simpler
Questions for @jamesmrollins: The Vensim model now has all these additional variables and equations that can be used to modify the graphics in the Sim UI. Last time we talked, we came up with the idea that if a parameter fell below our “threshold for inclusion,” then that parameter would be literally flagged (with a red exclamation point, right?) in the team data table, these exclamation points would have a pop-up with explanation text, and the related model structure (the flow that parameter governed) would be “grayed out” or made noticeably more transparent. (see note from March20 in this issue)
Three questions for everyone on the thread:
I dropped the new version of SP with these changes here: teampsd/model_workgroup/models/mtl_1.8_models/
Finally, issue part 2, about modifying team data "floors" and Time to Detect:
I haven't made any model changes about this part, but could very easily. I think we should get rid of the floors, but wanted to run it by you all first. Here is a discussion of the details:
1) Do we keep “floors” for Time to Improve and Engagement Duration before Step Up/Down?
Facilitator changes: We remove references to data "floors" in guides for sessions 3 and 5. Sim UI changes: We remove references to data "floors" in the i text in the Team Data Table.
2) Do we keep base case Time to Detect set at 24 weeks for PC, 12 for GMH, and 4 for SMH?
@TomRust @dlkibbe @branscombj please see the mock ups and responses to Tom's questions below:
Sim Dev @jamesmrollins – Now that we've changed the underlying definition, the symptoms % is no longer based on team data.
- Can we remove symptom proportions from the team data table in the Sim UI?
ANSWER: Yes. The illustration below shows that line removed.
Questions for @jamesmrollins: The Vensim model now has all these additional variables and equations that can be used to modify the graphics in the Sim UI. Last time we talked, we came up with the idea that if a parameter fell below our “threshold for inclusion,” then that parameter would be literally flagged (with a red exclamation point, right?) in the team data table, these exclamation points would have a pop-up with explanation text, and the related model structure (the flow that parameter governed) would be “grayed out” or made noticeably more transparent. (see note from March20 in this issue)
- Can you upload a mock-up?
I've given the prefix "flag..." to all the variables in Vensim that would trigger these exclamation points to appear; a zero means the parameter falls below our threshold for inclusion and thus the exclamation point should show up in the Data Table. Do those variables work?
- Graying out the rate in the model diagram related to any of these excluded parameters is straightforward, but graying out some stocks will need a bit of logic, as some stocks have more than one inflow. Any stock should only be “grayed out” if all its inflows are zero. Also, if a stock is grayed out, then all its outflows should also be “grayed out.”
ANSWERS: Regarding #1 - Mock-ups are shown below. Regarding #2 - An exclamation point inside a red circle icon is shown by all flagged values. When the user clicks the red icon, a pop up with explanation text is presented. Regarding #3 - I don't think we can support graying out sections of the diagrams. However, we can put red icons near the affected variables. We may need to expand the explanation in the Pop Up that explains that since the threshold is not met, the variable is returning a "0" value. Also note how many variables are affected by SMH values that fall below threshold. Lights up much of the Team Data Table.
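The graying-out rule discussed above (a stock grays out only when all of its inflows are gray, and then all of its outflows gray out too) amounts to a small fixed-point propagation. Below is a toy sketch with made-up structure, not the real SP diagram.

```python
def gray_flows(stock_inflows, flow_sources, zeroed_flows):
    """stock_inflows: {stock: [names of its inflows]};
    flow_sources: {flow: source stock};
    zeroed_flows: flows already scrubbed to zero.
    Returns the full set of flows to gray out."""
    gray = set(zeroed_flows)
    changed = True
    while changed:                       # propagate until stable
        changed = False
        for stock, inflows in stock_inflows.items():
            if inflows and all(f in gray for f in inflows):
                # All inflows gray -> the stock is gray -> gray its outflows.
                for flow, src in flow_sources.items():
                    if src == stock and flow not in gray:
                        gray.add(flow)
                        changed = True
    return gray

# Toy chain PC -> GMH -> SMH: scrubbing the first flow grays everything downstream.
stocks = {"GMH": ["pc_to_gmh"], "SMH": ["gmh_to_smh"]}
flows = {"pc_to_gmh": "PC", "gmh_to_smh": "GMH", "smh_to_done": "SMH"}
assert gray_flows(stocks, flows, {"pc_to_gmh"}) == {"pc_to_gmh", "gmh_to_smh", "smh_to_done"}
```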
Team Data Table Mock-up
Diagram Mock-up
"!" Icon Pop up
We may need to expand the explanation in the Pop Up that explains that since the threshold is not met, the variable is returning a "0" value.
Am I understanding this correctly: The values with exclamation points would always be zeroes?
Hi @branscombj , yes, correct. If the Team Data value does not meet the 2% threshold, a zero gets passed from the variable to all the other variables with which it is connected. Therefore, if the variable is multiplied in the downstream relationship, the product will be zero. However, if associated with a user-defined variable (slider), then the slider value would be reflected in the calculation.
In the illustration above, the red slider "SMH Recommended New Episodes of Care Rate," would be initially zero because the corresponding value in the Team Data Table (1.02) is below the 2% threshold. This should indicate to the user that the value in the Team Data Table is statistically unreliable; therefore, not used (set to zero). However, the value can be experimentally manipulated using the slider to a desired value.
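A minimal sketch of the behavior described above, with an illustrative function name: a below-threshold team data value enters the model as zero (so any downstream product is zero), unless the user supplies a slider value in an experiment.

```python
def effective_value(team_data_value, below_threshold, slider=None):
    """Slider overrides win (experiments); otherwise scrubbed values read 0."""
    if slider is not None:
        return slider
    return 0 if below_threshold else team_data_value

# Basecase: the 1.02 in the Team Data Table is below the 2% threshold.
rate = effective_value(1.02, below_threshold=True)
assert rate == 0 and rate * 40 == 0     # zero propagates through products

# Experiment: the user moves the slider, and that value is used instead.
assert effective_value(1.02, below_threshold=True, slider=2.5) == 2.5
```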
I am available - please let me know if I can be of further assistance.
Hi @lijenn @dlkibbe @branscombj .
Vensim Model Variables for SP Floors
Updated Team Data Table with Added "!" Icons in the "Time to Improve" row.
Hi @lijenn @branscombj @dlkibbe. The updated Sim UI with floors and episode count flags will be available Tuesday in test. I want to take a look today, so don't look until tomorrow; I need to update the "i" information and such. We will probably need to look through the See and Say guides. Also take a look at the explanatory language for the "i" information. I will double-check your TEST logins in Epicenter (it's been a while) and make sure you have current logins.
Hi @hirenp-waferwire. Please find the following changes/corrections:
1. Please add the "!" icons to the main diagram.
2. Please correct the variable mapping.
@branscombj @lijenn @dlkibbe
Hi @lijenn, @branscombj and @dlkibbe.
Bottom line up front (BLUF): I have completed the design of the find-and-replace, spell checker, link checker, and markdown style linter. The instructions for their use are in the Team PSD SOP. I think we wanted to use this issue as a test case for the actions. If you would like, I can be available to provide support during your next workgroup meeting, or whenever works.
Below are a few notes for your consideration.
@jamesmrollins and @lijenn
What is the latest on this? Thanks!
@TomRust and @holbrooa are meeting to discuss emergent SP parameter issues.
**High-symptom proportions are limited to the user-selected SMH team**
Currently, high-symptom proportions are very low, because patients are categorized as high-symptom based on being stepped up to the SMH team that the user selected. In real life, many patients are stepped up to other SMH teams, so categorizing those patients as low-symptom seems wrong. We propose defining high-symptom patients as those who are above 50% on the symptom distribution for this team. There would not necessarily be the same number of high- and low-symptom patients, but a new patient to the team would be equally likely to flow into either stock. This seems easier to explain than the current state, where we sometimes have to explain percentages as low as 2%. This is very little work on both the data and the model side. This change would necessitate changes to the sim UI - both "i" text and removing the symptom proportion entries in the team data table.
**Floors should be re-examined**
In light of almost all engagement durations being below the floor for recent data pulls, it seems like we should re-think the clinical floors and how they map onto the transfer logic. We went back and looked at meeting notes, and found that the floors may have been implemented incorrectly to begin with - the 24-week time to stabilize was applied everywhere. Also, we think PC/PCMHI should probably be exempt from most or all of these floors, because they don't necessarily perform the same clinic activities, i.e., they're not a place where patients stabilize. Andrew will try to pull some example cases for us to think through the interaction between the clinical floors and the transfer algorithms. This is very little work for the data and model side. This would necessitate some sim UI "i" text changes.
**Use episode count parameters**
Currently, we are in danger of using medians that have been calculated from very few data points and then applying them to all patients in a setting, potentially distorting the patient outflows. We propose pulling the episode counts along with the engagement durations, which Tom can use in the model to weight the flows to better fit the team. There are some good reasons to proceed with this change, but it would take a significant amount of extra work on both the data and the model side. The sim UI shouldn't be affected.