rtic-scope / cargo-rtic-scope

Non-intrusive ITM tracing/replay toolset for RTIC programs with nanosecond timestamp accuracy.

Tracing with different TPIU baudrates yields different timestamps #137

Open tmplt opened 2 years ago

tmplt commented 2 years ago

With the current implementation, reported timestamps differ when tpiu_baud changes.

With tpiu_freq = 120000000 and tpiu_baud = 1000000:

    Frontend /home/tmplt/.cargo/bin/rtic-scope-frontend-dummy: @2084 ns (+2084) [good]: [Task { name: "app::hardware", action: Entered }]
    Frontend /home/tmplt/.cargo/bin/rtic-scope-frontend-dummy: @3584 ns (+1500) [good]: [Task { name: "app::software", action: Entered }]
    Frontend /home/tmplt/.cargo/bin/rtic-scope-frontend-dummy: @4001 ns (+417) [good]: [Task { name: "app::software", action: Exited }]
    Frontend /home/tmplt/.cargo/bin/rtic-scope-frontend-dummy: app::software,3584,4001
    Frontend /home/tmplt/.cargo/bin/rtic-scope-frontend-dummy: 
    Frontend /home/tmplt/.cargo/bin/rtic-scope-frontend-dummy: @4251 ns (+250) [good]: [Task { name: "app::hardware", action: Returned }, Task { name: "app::hardware", action: Exited }]
    Frontend /home/tmplt/.cargo/bin/rtic-scope-frontend-dummy: app::hardware,2084,4251
    Frontend /home/tmplt/.cargo/bin/rtic-scope-frontend-dummy: 
    Frontend /home/tmplt/.cargo/bin/rtic-scope-frontend-dummy: @8251 ns (+4000) [bad!]: []

With same tpiu_freq and tpiu_baud = 4000000:

    Frontend /home/tmplt/.cargo/bin/rtic-scope-frontend-dummy: @84 ns (+84) [good]: [Task { name: "app::hardware", action: Entered }, Task { name: "app::software", action: Entered }, Task { name: "app::software", action: Exited }]
    Frontend /home/tmplt/.cargo/bin/rtic-scope-frontend-dummy: app::software,84,84
    Frontend /home/tmplt/.cargo/bin/rtic-scope-frontend-dummy: 
    Frontend /home/tmplt/.cargo/bin/rtic-scope-frontend-dummy: @1751 ns (+1667) [good]: []
    Frontend /home/tmplt/.cargo/bin/rtic-scope-frontend-dummy: @2501 ns (+750) [good]: [Task { name: "app::hardware", action: Returned }, Task { name: "app::hardware", action: Exited }]
    Frontend /home/tmplt/.cargo/bin/rtic-scope-frontend-dummy: app::hardware,84,2501
    Frontend /home/tmplt/.cargo/bin/rtic-scope-frontend-dummy:

One would expect timestamps to be the same. On the target, the only difference between the above is the prescaler register: 119 and 29, respectively.
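
Those prescaler values follow directly from the SWO clocking relation. A minimal sketch (`swo_prescaler` is a hypothetical helper here, mirroring the TPIU ACPR/SWOSCALER definition, where the effective divisor is prescaler + 1):

```rust
/// SWO baud rate is derived from the TPIU reference clock:
/// baud = tpiu_freq / (prescaler + 1), hence prescaler = tpiu_freq / baud - 1.
fn swo_prescaler(tpiu_freq: u32, baud: u32) -> u32 {
    tpiu_freq / baud - 1
}

fn main() {
    // The two configurations from this issue:
    assert_eq!(swo_prescaler(120_000_000, 1_000_000), 119);
    assert_eq!(swo_prescaler(120_000_000, 4_000_000), 29);
}
```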

tmplt commented 2 years ago

Relevant snippets from documentation:

[embedded documentation snippets not reproduced]

tmplt commented 2 years ago

Also related: the outputs group events differently (singular task events in the first output, back-to-back events in the second).

tmplt commented 2 years ago

Do we see this because state carries over between tracing instances? That is, are we supposed to power cycle the device each time?

tmplt commented 2 years ago

Do we see this because state carries over between tracing instances?

This is definitely the case when using the system clock as the local timestamp clock. Timestamps of an initial exception trace packet differ wildly between resets. Perhaps this should be expected?

Iteratively tracing (without power cycling) when using the asynchronous TPIU clock is much more deterministic: I get the same itm-decode output after each reset.

tmplt commented 2 years ago

itm-decode also yields different timestamps when using different baud rates on the SWO pin, set via SWOSCALER. 1 Mbps:

ExceptionTrace { exception: Interrupt { irqn: 12 }, action: Entered }
ExceptionTrace { exception: Interrupt { irqn: 13 }, action: Entered }
LocalTimestamp1 { ts: 25, data_relation: Sync }
DataTraceValue { comparator: 1, access_type: Write, value: [0] }
LocalTimestamp1 { ts: 18, data_relation: Sync }
DataTraceValue { comparator: 2, access_type: Write, value: [0] }
ExceptionTrace { exception: Interrupt { irqn: 13 }, action: Exited }
LocalTimestamp2 { ts: 5 }
ExceptionTrace { exception: Interrupt { irqn: 12 }, action: Returned }
ExceptionTrace { exception: Interrupt { irqn: 12 }, action: Exited }
LocalTimestamp2 { ts: 3 }
ExceptionTrace { exception: ThreadMode, action: Returned }
LocalTimestamp1 { ts: 48, data_relation: UnknownAssocEventDelay }

4 Mbps:

ExceptionTrace { exception: Interrupt { irqn: 12 }, action: Entered }
ExceptionTrace { exception: Interrupt { irqn: 13 }, action: Entered }
DataTraceValue { comparator: 1, access_type: Write, value: [0] }
DataTraceValue { comparator: 2, access_type: Write, value: [0] }
LocalTimestamp2 { ts: 1 }
ExceptionTrace { exception: Interrupt { irqn: 13 }, action: Exited }
LocalTimestamp1 { ts: 20, data_relation: Sync }
ExceptionTrace { exception: Interrupt { irqn: 12 }, action: Returned }
ExceptionTrace { exception: Interrupt { irqn: 12 }, action: Exited }
LocalTimestamp1 { ts: 9, data_relation: Sync }
ExceptionTrace { exception: ThreadMode, action: Returned }
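
Whatever the clock source turns out to be, a decoder recovers absolute time by accumulating these local-timestamp deltas. A minimal sketch (the tick-to-nanosecond conversion assumes the counter runs at the given frequency, which is precisely the open question in this issue; `TimestampAccumulator` is a hypothetical helper, not the rtic-scope implementation):

```rust
/// ITM local timestamp packets carry a delta in timestamp-clock ticks;
/// absolute time is recovered by accumulating them.
struct TimestampAccumulator {
    ticks: u64,
}

impl TimestampAccumulator {
    fn new() -> Self {
        Self { ticks: 0 }
    }

    /// Apply one LocalTimestamp packet's `ts` field; returns the running total.
    fn push(&mut self, ts: u64) -> u64 {
        self.ticks += ts;
        self.ticks
    }

    /// Convert the running tick count to nanoseconds, rounding up,
    /// assuming the counter runs at `freq` Hz (an assumption in this sketch).
    fn as_nanos(&self, freq: u64) -> u64 {
        (self.ticks * 1_000_000_000 + freq - 1) / freq
    }
}

fn main() {
    let mut acc = TimestampAccumulator::new();
    // The ts fields from the 1 Mbps listing above; their unit depends on
    // the (unresolved) timestamp clock source.
    for ts in [25u64, 18, 5, 3, 48] {
        acc.push(ts);
    }
    assert_eq!(acc.ticks, 99);
}
```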

tmplt commented 2 years ago

With a smaller prescaler, more events are considered back-to-back (no idle time between them) and insight into the exact timestamps of singular events is lost, but the higher throughput yields better timestamp quality.
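
The grouping difference is consistent with how long a trace byte occupies the SWO line. Assuming UART/NRZ-style framing of roughly 10 bits per byte (an assumption of this sketch, not something confirmed in the thread), a byte drains four times faster at 4 Mbps, so more queued packets leave the FIFO before the next event arrives and are stamped back-to-back:

```rust
/// Approximate time to shift one trace byte out over SWO, assuming
/// UART/NRZ framing: 1 start + 8 data + 1 stop = 10 bits per byte.
fn byte_time_ns(baud: u64) -> u64 {
    10 * 1_000_000_000 / baud
}

fn main() {
    // At 1 Mbps a byte occupies the line for 10 us...
    assert_eq!(byte_time_ns(1_000_000), 10_000);
    // ...but only 2.5 us at 4 Mbps, leaving less idle time between packets.
    assert_eq!(byte_time_ns(4_000_000), 2_500);
}
```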

tmplt commented 2 years ago

But why would timestamp resolution change because SWOSCALER is changed?