Kypaz opened this issue 3 years ago
Hello. Can anyone confirm this issue?
Hi @Kypaz
I am looking at analysing "appended" hr data in .fit files. Did you ever get any further with this?
In my fit file there are a few times where the "normal" hr data was received (because the heart rate strap and watch were out of the water for a short while). Parsed with fitparse, the appended hr records look like this:
mesg_type.name == 'hr', get_values(): {'timestamp': datetime.datetime(2021, 9, 12, 7, 56, 4), 'event_timestamp': 2242909.0, 'fractional_timestamp': 0.0, 'filtered_bpm': 71, 'unknown_251': (0,)}
mesg_type.name == 'hr', get_values(): {'filtered_bpm': (71, 71, 71, 71, 71, 71, 71, 71), 'event_timestamp': 4.841796875, 'event_timestamp_12': (0, 80, 33, 185, 116, 102, 205, 217, 213, 206, 224, 53)}
mesg_type.name == 'hr', get_values(): {'filtered_bpm': (71, 71, 71, 71, 71, 71, 71, 71), 'event_timestamp': 18.416015625, 'event_timestamp_12': (0, 246, 122, 113, 25, 46, 121, 209, 175, 16, 161, 154)}
[... a few more of these with increasing event_timestamp ...]
mesg_type.name == 'hr', get_values(): {'filtered_bpm': (88, 88, 88, 88, 88, 88, 88, 88), 'event_timestamp': 165.423828125, 'event_timestamp_12': (23, 44, 228, 13, 113, 61, 116, 90, 222, 85, 33, 91)}
mesg_type.name == 'hr', get_values(): {'filtered_bpm': (88, 88, 88, 89, 89, 89, 89, 89), 'event_timestamp': 173.4541015625, 'event_timestamp_12': (188, 55, 162, 25, 103, 197, 44, 29, 93, 209, 21, 93)}
mesg_type.name == 'hr', get_values(): {'timestamp': datetime.datetime(2021, 9, 12, 7, 59, 3), 'event_timestamp': 2243088.0, 'fractional_timestamp': 0.0, 'filtered_bpm': 89, 'unknown_251': (0,)}
mesg_type.name == 'hr', get_values(): {'filtered_bpm': (89, 89, 88, 88, 88, 88, 88, 88), 'event_timestamp': 11.14453125, 'event_timestamp_12': (0, 48, 83, 183, 109, 100, 120, 235, 38, 242, 74, 201)}
mesg_type.name == 'hr', get_values(): {'filtered_bpm': (89, 90, 90, 90, 89, 89, 89, 89), 'event_timestamp': 19.1826171875, 'event_timestamp_12': (36, 79, 27, 230, 198, 186, 108, 176, 81, 229, 183, 203)}
mesg_type.name == 'hr', get_values(): {'filtered_bpm': (89, 89, 90, 90, 90, 91, 91, 91), 'event_timestamp': 27.2001953125, 'event_timestamp_12': (114, 111, 74, 186, 170, 226, 124, 194, 94, 92, 217, 204)}
[...]
So the pattern to me seems to be I get one "hr" record with a timestamp, then a whole bunch of "hr" records with filtered_bpm set and an increasing event_timestamp. Then the pattern repeats with a fresh record with timestamp, and the following "hr" records with filtered_bpm restart the event_timestamp at zero again.
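A minimal sketch of that grouping logic, assuming the hr messages have already been reduced to plain dicts via get_values() (group_hr_records is a hypothetical helper, not part of fitparse):

```python
from datetime import datetime

def group_hr_records(records):
    """Group 'hr' message dicts into (sync_record, continuation_records) pairs.

    A sync record carries an absolute 'timestamp'; the records that follow it
    carry only a relative 'event_timestamp' and a filtered_bpm tuple.
    """
    groups = []
    for rec in records:
        if "timestamp" in rec:          # fresh synchronization record
            groups.append((rec, []))
        elif groups:                    # continuation belongs to the last sync
            groups[-1][1].append(rec)
    return groups

# Sample values copied from the dumps above
records = [
    {"timestamp": datetime(2021, 9, 12, 7, 56, 4),
     "event_timestamp": 2242909.0, "filtered_bpm": 71},
    {"filtered_bpm": (71,) * 8, "event_timestamp": 4.841796875},
    {"filtered_bpm": (71,) * 8, "event_timestamp": 18.416015625},
    {"timestamp": datetime(2021, 9, 12, 7, 59, 3),
     "event_timestamp": 2243088.0, "filtered_bpm": 89},
    {"filtered_bpm": (89,) * 8, "event_timestamp": 11.14453125},
]
groups = group_hr_records(records)
print(len(groups), [len(cont) for _, cont in groups])  # → 2 [2, 1]
```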
Comparing this to the output of the fit2csv tool from the FIT SDK, the first few lines look like this:
Data,12,hr,timestamp,"1000367764",,event_timestamp,"2242909.0",s,fractional_timestamp,"0.0",s,filtered_bpm,"71",bpm,unknown,"0"
Data,11,hr,filtered_bpm,"71|71|71|71|71|71|71|71",bpm,event_timestamp_12,"0|80|33|185|116|102|205|217|213|206|224|53",,event_timestamp,"2242912.0|2242912.5205078125|2242913.1806640625|2242913.6005859375|2242914.4501953125|2242915.3408203125|2242916.201171875|2242916.841796875",s
Data,11,hr,filtered_bpm,"71|71|71|71|71|71|71|71",bpm,event_timestamp_12,"0|246|122|113|25|46|121|209|175|16|161|154",,event_timestamp,"2242917.5|2242917.9208984375|2242918.3603515625|2242920.7197265625|2242924.3681640625|2242926.7470703125|2242928.265625|2242930.416015625",s
Data,11,hr,filtered_bpm,"71|71|71|72|72|72|72|74",bpm,event_timestamp_12,"98|75|210|211|110|23|6|68|105|4|122|229",,event_timestamp,"2242930.845703125|2242931.28515625|2242931.7060546875|2242932.365234375|2242933.005859375|2242933.64453125|2242934.50390625|2242935.5849609375",s
Note that through fitparse I do not see multiple "event_timestamp" values within one "filtered_bpm" record, unlike what you seem to have. In the fit2csv output, "event_timestamp" does indeed carry multiple values.
After looking into this some more, these are my observations so far: get_values() returns only a single event_timestamp per record, because a dict can hold only one "event_timestamp" key. Looking at record.fields instead shows all of the event_timestamp fields:
[('timestamp', datetime.datetime(2021, 9, 12, 7, 56, 4)), ('event_timestamp', 2242909.0), ('fractional_timestamp', 0.0), ('filtered_bpm', 71), ('unknown_251', (0,))] # initial synchronization record
[<FieldData: filtered_bpm: (71, 71, 71, 71, 71, 71, 71, 71) [bpm], def num: 6, type: uint8 (uint8), raw value: (71, 71, 71, 71, 71, 71, 71, 71)>,
<FieldData: event_timestamp: 0.0 [s], def num: 9, type: uint32 (uint32), raw value: 0.0>,
<FieldData: event_timestamp: 0.5205078125 [s], def num: 9, type: uint32 (uint32), raw value: 0.5205078125>,
<FieldData: event_timestamp: 1.1806640625 [s], def num: 9, type: uint32 (uint32), raw value: 1.1806640625>,
<FieldData: event_timestamp: 1.6005859375 [s], def num: 9, type: uint32 (uint32), raw value: 1.6005859375>,
<FieldData: event_timestamp: 2.4501953125 [s], def num: 9, type: uint32 (uint32), raw value: 2.4501953125>,
<FieldData: event_timestamp: 3.3408203125 [s], def num: 9, type: uint32 (uint32), raw value: 3.3408203125>,
<FieldData: event_timestamp: 4.201171875 [s], def num: 9, type: uint32 (uint32), raw value: 4.201171875>,
<FieldData: event_timestamp: 4.841796875 [s], def num: 9, type: uint32 (uint32), raw value: 4.841796875>,
<FieldData: event_timestamp_12: (0, 80, 33, 185, 116, 102, 205, 217, 213, 206, 224, 53), def num: 10, type: byte (byte), raw value: (0, 80, 33, 185, 116, 102, 205, 217, 213, 206, 224, 53)>]
The corresponding fit2csv tool output for this record is
Data,12,hr,timestamp,"1000367764",,event_timestamp,"2242909.0",s,fractional_timestamp,"0.0",s,filtered_bpm,"71",bpm,unknown,"0"
Data,11,hr,filtered_bpm,"71|71|71|71|71|71|71|71",bpm,event_timestamp_12,"0|80|33|185|116|102|205|217|213|206|224|53",,event_timestamp,"2242912.0|2242912.5205078125|2242913.1806640625|2242913.6005859375|2242914.4501953125|2242915.3408203125|2242916.201171875|2242916.841796875",s
Note that fitparse restarts the event_timestamps at zero after each resync, while fit2csv continues with absolute event_timestamp values. That is not a problem by itself, but fitparse also uses the wrong zero point for the event_timestamp, making it impossible to use the event_timestamps of the tupled filtered_bpm values. In the above example the resync is at an event_timestamp of 2242909.0 (correctly shown by fitparse), but the filtered_bpm tuple is listed at timestamps [0.0, 0.52, 1.18, 1.60, ...], when in fact they should be shifted by 3 seconds to [3.0, 3.52, 4.18, 4.60, ...], since the absolute timestamps shown by fit2csv are "2242912.0|2242912.5205078125|2242913.1806640625|2242913.6005859375".
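To make the off-by-a-constant claim concrete, here is a small arithmetic check using the sample values quoted from both tools (the numbers are copied from the output above; nothing here uses the fitparse API):

```python
sync_event_timestamp = 2242909.0  # from the sync record (fitparse and fit2csv agree)

# Absolute per-sample timestamps reported by fit2csv for the next record
fit2csv_abs = [2242912.0, 2242912.5205078125,
               2242913.1806640625, 2242913.6005859375]
# Relative values reported by fitparse for the same four samples
fitparse_rel = [0.0, 0.5205078125, 1.1806640625, 1.6005859375]

# Offset of each sample relative to the sync record, per fit2csv
true_offsets = [t - sync_event_timestamp for t in fit2csv_abs]
# Difference between the true offsets and what fitparse reports
errors = [t - r for t, r in zip(true_offsets, fitparse_rel)]
print(errors)  # → [3.0, 3.0, 3.0, 3.0]: every sample is off by the same 3 s
```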
My questions currently are:
@polyvertex any thoughts?
After some more analysis, I realized that the event_timestamp value coming in records containing filtered_bpm tuples is not handled correctly in fitparse, and it seems to be some undocumented extension (?), as it is also not mentioned in the "definition" record:
Definition,11,hr,filtered_bpm,8,,event_timestamp_12,12
As you can see, only the event_timestamp_12 field is mentioned here. So, instead of trying to use an undefined/undocumented field, I chose to do what GoldenCheetah and the code in #69 do: use the event_timestamp_12 bytes and some bit-shifting calculations to compute the timestamp for each "filtered_bpm" value.
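For reference, a sketch of that decoding in Python, under the interpretation that GoldenCheetah's decodeHr and #69 use: the 12 bytes pack eight little-endian 12-bit values, each being the low 12 bits of the running event timestamp counted in 1/1024 s ticks, with rollover handled modulo 4096. Fed the sync record's event_timestamp and the first appended record's event_timestamp_12 bytes from the dump above, it reproduces the fit2csv timestamps exactly:

```python
def unpack_event_timestamp_12(data12):
    """Unpack 12 bytes into eight 12-bit values (low nibbles first)."""
    vals = []
    for i in range(8):
        j = (i * 12) // 8  # byte offset where this 12-bit value starts
        if i % 2 == 0:
            vals.append(data12[j] | ((data12[j + 1] & 0x0F) << 8))
        else:
            vals.append((data12[j] >> 4) | (data12[j + 1] << 4))
    return vals

def expand_timestamps(prev_event_timestamp, data12):
    """Expand packed 12-bit samples into absolute event_timestamps [s].

    Each 12-bit value holds the low 12 bits of a running tick counter
    (1 tick = 1/1024 s); '& 0xFFF' handles the 4096-tick rollover.
    """
    ticks = round(prev_event_timestamp * 1024)
    out = []
    for v in unpack_event_timestamp_12(data12):
        ticks += (v - ticks) & 0xFFF
        out.append(ticks / 1024.0)
    return out

# Sync record's event_timestamp, then the first appended record's bytes:
ts = expand_timestamps(
    2242909.0, (0, 80, 33, 185, 116, 102, 205, 217, 213, 206, 224, 53))
print(ts)
# → [2242912.0, 2242912.5205078125, 2242913.1806640625, 2242913.6005859375,
#    2242914.4501953125, 2242915.3408203125, 2242916.201171875, 2242916.841796875]
```

Note how this also explains the 3-second shift discussed above: the first 12-bit value resumes the counter at the next rollover boundary after the sync record, not at the sync record itself.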
Possibly this bug can be closed, noting that the event_timestamp field that @Kypaz is trying to use is not previously defined, and hence does not work properly.
If possible, it would be nice if fitparse could properly decode and accumulate these values anyway (even if they are not previously defined), as the fit2csv utility does, or otherwise discard them when they cannot be decoded for lack of a definition.
Hi, I have recently tried to integrate HR data from the 'hr' messages at the end of the .fit file, because that's what the HRM-Swim sensor produces.
No problem there, and I've come across #69, but I believe we can retrieve a timestamp directly, with no need for any special treatment.
So basically, inside each 'hr' message we have 8 'event_timestamp' values with associated timestamps, and at the end an array of 8 bpm values, so it's quite easy to match them up.
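That matching can be sketched as below. The FieldData namedtuple is a simplified stand-in for fitparse's real FieldData objects, and pair_hr_samples is a hypothetical helper, so treat this as an illustration of the pairing rather than working fitparse code:

```python
from collections import namedtuple

# Minimal stand-in for fitparse's FieldData (only name/value are used here)
FieldData = namedtuple("FieldData", ["name", "value"])

def pair_hr_samples(fields):
    """Pair the eight event_timestamp entries in an 'hr' message's .fields
    list with the eight values of its filtered_bpm tuple."""
    bpm = next(f.value for f in fields if f.name == "filtered_bpm")
    stamps = [f.value for f in fields if f.name == "event_timestamp"]
    return list(zip(stamps, bpm))

# Stand-in for record.fields, using values from the dump quoted earlier
fields = [FieldData("filtered_bpm", (71, 71, 71, 71, 71, 71, 71, 71))] + [
    FieldData("event_timestamp", t)
    for t in (0.0, 0.5205078125, 1.1806640625, 1.6005859375,
              2.4501953125, 3.3408203125, 4.201171875, 4.841796875)
]
print(pair_hr_samples(fields)[:2])  # → [(0.0, 71), (0.5205078125, 71)]
```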
Here's the problem: the timestamps seem "OK", but they do not quite match the duration of the session.
I've attached a .fit file below; the maximum timestamp retrieved is 1567.8544921875 seconds ≈ 26.1 minutes, but the overall duration of the session is 1h10.
I believe all the bpm data is there; it's just that the associated timestamps are wrong.
I think this issue was first addressed in #26, but that one was closed due to the lack of an example file.
hrmswim.fit.zip
@pR0Ps
Please ask if you need more information on my end
Thanks for the help !
Alexandre
[Edit : You can just import the .fit file into Garmin Connect or GoldenCheetah if you want to see the "real data", and the Heart Stream]
[Edit 2 : For reference, Golden Cheetah seems to do it here at 'decodeHr' method]
[Edit 3 : For reference as well, see 'Plugin Example (HR)' here]