ethanopp / fitly

Self hosted web analytics for endurance athletes
MIT License

Oura import won't work when API returns multiple readiness entries for the same day #3

Closed pierretamisier closed 3 years ago

pierretamisier commented 3 years ago

I suspect Oura records incomplete datasets for sleep when the ring runs out of battery. In my case, the pull_sleep_data() import blows up when the Oura API returns a record that is missing columns such as 'rmssd_5min' or 'hr_5min'.

Incomplete dataset:

{'midpoint_time': 19020, 'score_total': 96, 'score_alignment': 100, 'total': 30420, 'awake': 5100, 'score_disturbances': 69, 'is_longest': 1, 'light': 9420, 'score_latency': 52, 'bedtime_end': '2016-11-02T07:24:58-05:00', 'hypnogram_5min': '44444444433334433333333333333224322223322333333333332333332333222211222222233223333333322223333333333344233333323333344', 'breath_average': 16, 'efficiency': 86, 'hr_average': 68.125, 'score_efficiency': 86, 'rem': 20550, 'period_id': 0, 'duration': 35520, 'bedtime_start': '2016-11-01T21:32:58-05:00', 'score': 79, 'score_rem': 100, 'deep': 450, 'score_deep': 8, 'timezone': -300, 'onset_latency': 2610, 'summary_date': '2016-11-01', 'restless': 40}

The above leads to a KeyError on 'hr_5min'.

The one below is a "valid" record:

{'awake': 3120, 'bedtime_end': '2020-08-06T06:32:30-07:00', 'bedtime_end_delta': 23550, 'bedtime_start': '2020-08-05T22:32:30-07:00', 'bedtime_start_delta': -5250, 'breath_average': 15.75, 'deep': 2790, 'duration': 28800, 'efficiency': 89, 'hr_5min': [0, 0, 62, 62, 59, 59, 59, 59, 60, 61, 61, 61, 62, 62, 63, 62, 62, 62, 62, 63, 63, 61, 61, 60, 60, 61, 62, 66, 65, 68, 69, 61, 64, 65, 63, 60, 60, 58, 62, 61, 62, 61, 62, 60, 58, 57, 58, 58, 57, 58, 58, 58, 56, 54, 54, 54, 55, 54, 54, 54, 52, 53, 53, 53, 54, 57, 56, 54, 51, 50, 50, 50, 50, 51, 51, 51, 52, 52, 52, 52, 52, 52, 53, 53, 56, 56, 54, 53, 51, 51, 50, 51, 51, 52, 52, 50, 52], 'hr_average': 57.12, 'hr_lowest': 50, 'hypnogram_5min': '442222211111133222211222233333443322223322222222222222222222222233322222222222223333322222222224', 'is_longest': 1, 'light': 18420, 'midpoint_at_delta': 9390, 'midpoint_time': 14640, 'onset_latency': 690, 'period_id': 1, 'rem': 4470, 'restless': 35, 'rmssd': 48, 'rmssd_5min': [0, 0, 41, 34, 46, 45, 31, 28, 25, 21, 21, 27, 18, 15, 18, 41, 31, 45, 36, 31, 31, 50, 36, 39, 34, 48, 35, 29, 33, 25, 24, 44, 29, 42, 53, 54, 48, 55, 45, 54, 42, 42, 33, 32, 45, 45, 31, 31, 42, 50, 34, 40, 74, 69, 46, 60, 47, 51, 45, 61, 69, 87, 62, 74, 53, 61, 68, 75, 74, 57, 49, 59, 67, 50, 46, 51, 40, 61, 44, 42, 72, 58, 72, 83, 76, 55, 72, 60, 77, 81, 73, 61, 60, 54, 63, 74, 66], 'score': 76, 'score_alignment': 100, 'score_deep': 49, 'score_disturbances': 63, 'score_efficiency': 93, 'score_latency': 91, 'score_rem': 64, 'score_total': 76, 'summary_date': '2020-08-05', 'temperature_delta': -0.24, 'temperature_deviation': -0.24, 'temperature_trend_deviation': 0.02, 'timezone': -420, 'total': 25680}

I'm reviewing the import methodology in ouraAPI.pull_sleep_data(oura, days_back=7)

pierretamisier commented 3 years ago

How about adding this line (sorry, I can't create a branch on the repo, so I'm copying/pasting my suggestions here):

# When Oura doesn't record all the fields we want, we '0' them (instead of dealing with NaN)
df_sleep_summary = df_sleep_summary.fillna(0)

right before the drop columns:

df_sleep_summary = df_sleep_summary.drop(columns=['rmssd_5min', 'hr_5min', 'bedtime_end', 'bedtime_start'])

Then, taking the incomplete and "valid" datasets from my first comment, I see that this line blows up because the index can't be multiplied:

df['timestamp_local'] = pd.to_datetime(x['bedtime_start']) + pd.to_timedelta(df.index * 5, unit='m')

Here's how I debugged: I created one dict for each dataset, converted them to DataFrames (df_missing_fields and df_valid), and ran the relevant lines.

>>> missing_fields = {'midpoint_time': 19020, 'score_total': 96, 'score_alignment': 100, 'total': 30420, 'awake': 5100, 'score_disturbances': 69, 'is_longest': 1, 'light': 9420, 'score_latency': 52, 'bedtime_end': '2016-11-02T07:24:58-05:00', 'hypnogram_5min': '44444444433334433333333333333224322223322333333333332333332333222211222222233223333333322223333333333344233333323333344', 'breath_average': 16, 'efficiency': 86, 'hr_average': 68.125, 'score_efficiency': 86, 'rem': 20550, 'period_id': 0, 'duration': 35520, 'bedtime_start': '2016-11-01T21:32:58-05:00', 'score': 79, 'score_rem': 100, 'deep': 450, 'score_deep': 8, 'timezone': -300, 'onset_latency': 2610, 'summary_date': '2016-11-01', 'restless': 40}
>>> df_missing_fields = pd.concat([pd.Series(missing_fields.get('hr_5min'), name='hr_5min'), pd.Series(missing_fields.get('rmssd_5min'), name='rmssd_5min'), pd.Series([int(y) for y in missing_fields.get('hypnogram_5min')], name='hypnogram_5min')], axis=1)
<stdin>:1: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
>>> valid = {'awake': 3120, 'bedtime_end': '2020-08-06T06:32:30-07:00', 'bedtime_end_delta': 23550, 'bedtime_start': '2020-08-05T22:32:30-07:00', 'bedtime_start_delta': -5250, 'breath_average': 15.75, 'deep': 2790, 'duration': 28800, 'efficiency': 89, 'hr_5min': [0, 0, 62, 62, 59, 59, 59, 59, 60, 61, 61, 61, 62, 62, 63, 62, 62, 62, 62, 63, 63, 61, 61, 60, 60, 61, 62, 66, 65, 68, 69, 61, 64, 65, 63, 60, 60, 58, 62, 61, 62, 61, 62, 60, 58, 57, 58, 58, 57, 58, 58, 58, 56, 54, 54, 54, 55, 54, 54, 54, 52, 53, 53, 53, 54, 57, 56, 54, 51, 50, 50, 50, 50, 51, 51, 51, 52, 52, 52, 52, 52, 52, 53, 53, 56, 56, 54, 53, 51, 51, 50, 51, 51, 52, 52, 50, 52], 'hr_average': 57.12, 'hr_lowest': 50, 'hypnogram_5min': '442222211111133222211222233333443322223322222222222222222222222233322222222222223333322222222224', 'is_longest': 1, 'light': 18420, 'midpoint_at_delta': 9390, 'midpoint_time': 14640, 'onset_latency': 690, 'period_id': 1, 'rem': 4470, 'restless': 35, 'rmssd': 48, 'rmssd_5min': [0, 0, 41, 34, 46, 45, 31, 28, 25, 21, 21, 27, 18, 15, 18, 41, 31, 45, 36, 31, 31, 50, 36, 39, 34, 48, 35, 29, 33, 25, 24, 44, 29, 42, 53, 54, 48, 55, 45, 54, 42, 42, 33, 32, 45, 45, 31, 31, 42, 50, 34, 40, 74, 69, 46, 60, 47, 51, 45, 61, 69, 87, 62, 74, 53, 61, 68, 75, 74, 57, 49, 59, 67, 50, 46, 51, 40, 61, 44, 42, 72, 58, 72, 83, 76, 55, 72, 60, 77, 81, 73, 61, 60, 54, 63, 74, 66], 'score': 76, 'score_alignment': 100, 'score_deep': 49, 'score_disturbances': 63, 'score_efficiency': 93, 'score_latency': 91, 'score_rem': 64, 'score_total': 76, 'summary_date': '2020-08-05', 'temperature_delta': -0.24, 'temperature_deviation': -0.24, 'temperature_trend_deviation': 0.02, 'timezone': -420, 'total': 25680}
>>> df_valid = pd.concat([pd.Series(valid.get('hr_5min'), name='hr_5min'), pd.Series(valid.get('rmssd_5min'), name='rmssd_5min'), pd.Series([int(y) for y in valid.get('hypnogram_5min')], name='hypnogram_5min')], axis=1)
>>> pd.to_timedelta(df_missing_fields.index * 5, unit='m')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.8/site-packages/pandas/core/ops/invalid.py", line 53, in invalid_op
    raise TypeError(f"cannot perform {name} with this index type: {typ}")
TypeError: cannot perform __mul__ with this index type: Index
>>> pd.to_timedelta(df_valid.index * 5, unit='m')
TimedeltaIndex(['0 days 00:00:00', '0 days 00:05:00', '0 days 00:10:00',
                '0 days 00:15:00', '0 days 00:20:00', '0 days 00:25:00',
                '0 days 00:30:00', '0 days 00:35:00', '0 days 00:40:00',
                '0 days 00:45:00', '0 days 00:50:00', '0 days 00:55:00',
                '0 days 01:00:00', '0 days 01:05:00', '0 days 01:10:00',
                '0 days 01:15:00', '0 days 01:20:00', '0 days 01:25:00',
                '0 days 01:30:00', '0 days 01:35:00', '0 days 01:40:00',
                '0 days 01:45:00', '0 days 01:50:00', '0 days 01:55:00',
                '0 days 02:00:00', '0 days 02:05:00', '0 days 02:10:00',
                '0 days 02:15:00', '0 days 02:20:00', '0 days 02:25:00',
                '0 days 02:30:00', '0 days 02:35:00', '0 days 02:40:00',
                '0 days 02:45:00', '0 days 02:50:00', '0 days 02:55:00',
                '0 days 03:00:00', '0 days 03:05:00', '0 days 03:10:00',
                '0 days 03:15:00', '0 days 03:20:00', '0 days 03:25:00',
                '0 days 03:30:00', '0 days 03:35:00', '0 days 03:40:00',
                '0 days 03:45:00', '0 days 03:50:00', '0 days 03:55:00',
                '0 days 04:00:00', '0 days 04:05:00', '0 days 04:10:00',
                '0 days 04:15:00', '0 days 04:20:00', '0 days 04:25:00',
                '0 days 04:30:00', '0 days 04:35:00', '0 days 04:40:00',
                '0 days 04:45:00', '0 days 04:50:00', '0 days 04:55:00',
                '0 days 05:00:00', '0 days 05:05:00', '0 days 05:10:00',
                '0 days 05:15:00', '0 days 05:20:00', '0 days 05:25:00',
                '0 days 05:30:00', '0 days 05:35:00', '0 days 05:40:00',
                '0 days 05:45:00', '0 days 05:50:00', '0 days 05:55:00',
                '0 days 06:00:00', '0 days 06:05:00', '0 days 06:10:00',
                '0 days 06:15:00', '0 days 06:20:00', '0 days 06:25:00',
                '0 days 06:30:00', '0 days 06:35:00', '0 days 06:40:00',
                '0 days 06:45:00', '0 days 06:50:00', '0 days 06:55:00',
                '0 days 07:00:00', '0 days 07:05:00', '0 days 07:10:00',
                '0 days 07:15:00', '0 days 07:20:00', '0 days 07:25:00',
                '0 days 07:30:00', '0 days 07:35:00', '0 days 07:40:00',
                '0 days 07:45:00', '0 days 07:50:00', '0 days 07:55:00',
                '0 days 08:00:00'],
               dtype='timedelta64[ns]', freq=None)
>>> df_missing_fields.index
Index([  0,   1,   2,   3,   4,   5,   6,   7,   8,   9,
       ...
       109, 110, 111, 112, 113, 114, 115, 116, 117, 118],
      dtype='object', length=119)
>>> df_valid.index
RangeIndex(start=0, stop=97, step=1)

The only difference I can see causing a problem is the index type: df_missing_fields ends up with a plain object-dtype Index while df_valid keeps a RangeIndex, presumably because of the empty Series created for the missing keys. Food for thought. I'll keep digging.
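
For what it's worth, pandas 1.x disables arithmetic on a plain object-dtype Index, which matches the traceback above. A minimal illustration with hypothetical values:

import pandas as pd

idx_rng = pd.RangeIndex(3)                      # what df_valid ends up with
idx_obj = pd.Index([0, 1, 2], dtype='object')   # what df_missing_fields ends up with

idx_rng * 5   # fine: an integer index of [0, 5, 10]
idx_obj * 5   # pandas 1.x: TypeError: cannot perform __mul__ with this index type: Index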

ethanopp commented 3 years ago

I'm not sure how exactly you're getting that response from the Oura API. If the ring died, I don't think you would have gotten a full value for hypnogram_5min.

...And I'm not 100% sure on this, but I would think that even if there was no value for hr_5min, it would still return an empty list.

Can you try hitting the API again directly for that given date and make sure that is the actual response, and it's not something else in the code screwing it up?

I don't think filling with 0s before dropping will do anything, because df_samples builds off of the API response, not df_summary:

        # Sleep Samples
        df_samples_list = []
        for x in oura_data:

If you're still not getting the hr_5min columns at all in the response, we could probably just skip the creation of df_samples altogether, because that df is just the four '5min' columns in one df, and we could just store the summary info.

These are all the columns you should be getting: https://cloud.ouraring.com/docs/sleep

pierretamisier commented 3 years ago

I reached out to Oura support regarding this issue. I had the first version of the ring, before everything was published to the cloud, so maybe data was recorded differently at the time (2016)?

Regarding filling with 0s, you are right: it is useless. However, since I'm dealing with some sleep datasets with missing fields such as 'hr_5min' (waiting on Oura to get back to me on that), I think I need to retrieve columns through x.get('hr_5min') rather than x['hr_5min']. get() returns None instead of raising a KeyError when the key is missing.
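
For reference, the difference between the two lookups on a record that's missing the key:

record = {'hypnogram_5min': '4433221'}   # illustrative record with no 'hr_5min'

record['hr_5min']       # raises KeyError: 'hr_5min'
record.get('hr_5min')   # returns None, so pd.Series(None, ...) yields an empty Series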

I replaced this block:

        # Sleep Samples
        df_samples_list = []
        for x in oura_data:
            df = pd.concat([pd.Series(x['hr_5min'], name='hr_5min'),
                            pd.Series(x['rmssd_5min'], name='rmssd_5min'),
                            pd.Series([int(y) for y in x['hypnogram_5min']], name='hypnogram_5min')],
                           axis=1)

with this block instead:

        # Sleep Samples
        df_samples_list = []
        for x in oura_data:
            df = pd.concat([pd.Series(x.get('hr_5min'), name='hr_5min'),
                            pd.Series(x.get('rmssd_5min'), name='rmssd_5min'),
                            pd.Series([int(y) for y in x.get('hypnogram_5min')], name='hypnogram_5min')],
                           axis=1)
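
Note that x.get('hypnogram_5min') still returns None when that key is missing too, and the list comprehension would then fail with a TypeError because None isn't iterable. A defensive variant might fall back to an empty string (my sketch, not part of the patch):

            # fall back to an empty string so the comprehension never iterates None
            hypnogram = x.get('hypnogram_5min') or ''
            df = pd.concat([pd.Series(x.get('hr_5min'), name='hr_5min'),
                            pd.Series(x.get('rmssd_5min'), name='rmssd_5min'),
                            pd.Series([int(y) for y in hypnogram], name='hypnogram_5min')],
                           axis=1)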

That gets me to the following line, which blows up because of the index issue:

            df['timestamp_local'] = pd.to_datetime(x['bedtime_start']) + pd.to_timedelta(df.index * 5, unit='m')
Error pulling oura data: cannot perform __mul__ with this index type: Index

Either way, I removed the loop populating df_sleep_samples in the meantime, but there seems to be an issue with the primary key oura_readiness_summary.report_date (even after doing a 'truncate all'):

Error pulling oura data: (sqlite3.IntegrityError) UNIQUE constraint failed: oura_readiness_summary.report_date
[SQL: INSERT INTO oura_readiness_summary (report_date, score_sleep_balance, score_temperature, score_activity_balance, score_previous_day, score_resting_hr, score, period_id, score_recovery_index, summary_date, score_previous_night, score_hrv_balance) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)]
[parameters: (('2016-06-28', 71, 0, 85, 0, 0, 70, 0, 0, '2016-06-27', 60, None), ('2016-06-29', 71, 99, 58, 78, 99, 74, 0, 46, '2016-06-28', 69, None), ('2016-06-30', 77, 100, 82, 88, 93, 79, 0, 44, '2016-06-29', 78, None), ('2016-07-01', 66, 100, 74, 83, 1, 47, 0, 62, '2016-06-30', 42, None), ('2016-07-02', 85, 0, 81, 92, 0, 85, 0, 0, '2016-07-01', 85, None), ('2016-07-03', 85, 82, 83, 90, 22, 67, 0, 23, '2016-07-02', 58, None), ('2016-07-04', 72, 84, 89, 82, 40, 61, 0, 36, '2016-07-03', 8, None), ('2016-07-05', 78, 68, 85, 89, 87, 77, 0, 39, '2016-07-04', 64, None)  ... displaying 10 of 757 total bound parameter sets ...  ('2020-08-06', 100, 97, 59, 1, 76, 65, 1, 51, '2020-08-05', 61, 56.0), ('2020-08-07', 100, 98, 66, 94, 84, 82, 0, 80, '2020-08-06', 77, 59.0))]
(Background on this error at: http://sqlalche.me/e/13/gkpj)
Oura cloud not yet updated. Waiting to pull Strava data
Inserting records into db_refresh...
Refresh Complete

I'll have a look and report back with my findings.

pierretamisier commented 3 years ago

I think the SQLite pk issue is actually an issue with the Oura API. For some reason, I sometimes get two different entries for the same day (example attached; no idea where the red line is coming from). All the issues I've had so far seem to happen when Oura hasn't recorded complete data, which I think is when I run out of battery. I noticed this pattern 11 times over the last 3+ years: a rubbish entry alongside another one that actually matches what I see on the Oura dashboard.

[screenshots: spreadsheet of readiness entries with the duplicate row highlighted in red]

I'm looking at adding an extra step after the df is generated to delete the "first of the two" entries when I have dupes.

ethanopp commented 3 years ago

I can update the x.gets()...

Regarding the line that blows up when 'multiplying by 0': I think it's because the index resets when doing the pd.concat, so it's trying to take the first index of 0 and multiply it by 5... Not sure why you would get that issue and I wouldn't... but try this:

            df.index += 1
            df['timestamp_local'] = (pd.to_datetime(x['bedtime_start']) + pd.to_timedelta(df.index * 5, unit='m')) - pd.to_timedelta(5, unit='m')
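
An alternative sketch (not what was committed) is to rebuild a clean RangeIndex first, which sidesteps both the shifted index and the object-dtype Index case:

            df = df.reset_index(drop=True)   # guarantees a RangeIndex starting at 0
            df['timestamp_local'] = pd.to_datetime(x['bedtime_start']) + pd.to_timedelta(df.index * 5, unit='m')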

I recall having some weird issues in the past where data in my app didn't align with what was on the cloud on days when I updated the firmware on my ring... so maybe those dates with weird data are days when you updated firmware?

...Do you have any of those duplicate records in recent years? This honestly looks like an issue on the Oura cloud side though, so if the old data is bad, maybe we only pull from a more recent year onward. I'm not sure we can guarantee it takes the 'first' of the two dates, because there is no key, and if sorting by date the correct row could sometimes show up first.

ethanopp commented 3 years ago

Try the latest pull... it won't yet resolve the pk issue, but I think it should resolve the first two issues you're having.

ethanopp commented 3 years ago

@pierretamisier did the index fix end up working? I found another spot in the code, in pull_activity_data, where I could apply it as well... but if it didn't resolve the issue I can revert it and just leave the x.get()s.

pierretamisier commented 3 years ago

Your changes with .get() work, thanks!

I added a few lines of code to manipulate df_readiness_summary and exclude the 'dupes', i.e. the rows with the lower period_id when the API returns two rows for the same report_date (the red line in my Google Sheet above). It's a bit funky, and I've reached out to Oura support to let them know. In the meantime, the below works for me, and my Oura dashboard is now UP AND RUNNING 🚀🚀🚀

def pull_readiness_data(oura, days_back=7):
[...]
        # add a 'primary key' column (hash of the row) which we'll use to clean up bad data
        df_readiness_summary['id'] = df_readiness_summary.apply(lambda x: hash(tuple(x)), axis=1)

        # when we have 2 entries for the same report_date, exclude the first one (lower period_id)
        dupes = df_readiness_summary.groupby('report_date').filter(lambda x: len(x) > 1)
        dupes = (dupes.assign(rn=dupes.groupby(['report_date'])['period_id']
                                      .rank(method='first', ascending=True))
                      .query('rn == 1'))

        # keep everything from df_readiness_summary NOT IN dupes
        df_readiness_summary = df_readiness_summary[~df_readiness_summary['id'].isin(dupes['id'])]
        df_readiness_summary = df_readiness_summary.drop(columns=['id'])

        return df_readiness_summary
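
A more compact equivalent, assuming the row to keep is always the one with the highest period_id (and not relying on the order the API returns rows in):

        # keep only the highest-period_id row per report_date, regardless of API order
        df_readiness_summary = (df_readiness_summary
                                .sort_values(['report_date', 'period_id'])
                                .drop_duplicates('report_date', keep='last')
                                .reset_index(drop=True))
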
ethanopp commented 3 years ago

Great, glad you got it sorted! I updated the index*5 scenario in the activity pull section as well just for good measure...

If you have any other ideas to add to the repo, feel free to fork and make some pull requests! (There are still a bunch of TODOs in the code, which I'm sure you've already come across, that I plan to eventually get to... but many of them rely on the third parties making changes.)

You'll also see some of the 'animations' on the charts are a little buggy... I'm hoping Dash comes out with a fix for that, but if they don't in the near future I may just revert to spinners...

Interested to hear what the Oura support team's response is. The above code looks like it should work, but it assumes the API always provides the bad row first chronologically...

pierretamisier commented 3 years ago

I'm now trying to get the PMC to work. Right now it doesn't, and I'll raise another issue if I can't fix it right away.

The reason I'm very interested in fitly is that I'd like to set up a PMC on steroids with CTL/ATL/TSB/TSS plus Oura readiness/sleep/calories burned and more. Basically, I want to see patterns between how much I train and what Oura says. I'll share my findings and contribute to the project when I see what I can do! Thanks for your help, Ethan, I really appreciate it!

ethanopp commented 3 years ago

Yep, that's exactly what my intent was with the PMC... you'll need both Strava and Oura connected to get it up and running, but once you do, the PMC will generate based on any HR or power data you have coming in from Strava, and there is also an HRV trend that will show at the top in light blue.

I've done a little research and was originally going to use sleep/calories etc., but going down the rabbit hole I found a lot of studies focusing on HRV. So at the top of the PMC you'll see some KPIs with "recommendations" for how hard you should work out each day. The first recommendation is strictly based on your HRV trends and built around the diagram in this study: https://www.trainingpeaks.com/coach-blog/new-study-widens-hrv-evidence-for-more-athletes/

The second recommendation, the "oura recommendation", is more holistic and built around Oura's readiness score (which encapsulates sleep & activity).

HRV is currently the hot metric people are using, so I usually rely on that recommendation, and I actually often find that the two recommendations agree.

The PMC also has some toggles at the top; by default they are all selected, so it "combines" your ATL/CTL/TSB, but you can dynamically switch to a "run only", "cycle only", or all-others view. ATL carries across sports, since anything you do adds fatigue, but CTL is sport-specific. This decision was based on some articles and even Facebook conversations with Coggan himself.
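
For readers unfamiliar with those terms: CTL ("fitness") and ATL ("fatigue") are exponentially weighted averages of daily TSS, conventionally over 42 and 7 days in the Coggan model, and TSB ("form") is their difference. A minimal sketch of that math, not fitly's actual implementation:

import pandas as pd

def pmc(daily_tss: pd.Series, ctl_days: int = 42, atl_days: int = 7) -> pd.DataFrame:
    """Coggan-style PMC metrics from a date-indexed series of daily TSS."""
    # CTL_t = CTL_{t-1} + (TSS_t - CTL_{t-1}) / N, i.e. an EWM with alpha = 1/N
    ctl = daily_tss.ewm(alpha=1 / ctl_days, adjust=False).mean()
    atl = daily_tss.ewm(alpha=1 / atl_days, adjust=False).mean()
    tsb = (ctl - atl).shift(1)  # form is conventionally based on the prior day's loads
    return pd.DataFrame({'ctl': ctl, 'atl': atl, 'tsb': tsb})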

I've tried to put hints over everything, so usually if you hover there will be a tooltip explaining what it is...

Let me know if you can't get it up and running!

ethanopp commented 3 years ago

Oh, by the way: on the Oura page you can click the daily scores and you'll get a popup similar to Oura's "weekly summary"... the main difference is that fitly builds it from a rolling 7 days, so you can pull it up on demand, whereas in the Oura app you have to wait until the beginning of every week.