For the syscall and context-switch measurements to be correct (or sufficiently so):
must we disable CPU frequency scaling (since it can affect rdtsc-based timing)?
must we use core affinity to pin the measurement thread to a single core? (see the sketch after this list)
must we isolate that core, so the OS does not schedule anything else on the core we use for the measurement?
must we disable the timer interrupt altogether, so that a timer interrupt cannot occur during a context switch? (According to the chapter, the OS might disable interrupts while servicing an interrupt. Can we assume that holds during the measurement, and is it even applicable to this case?)
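For concreteness, here is roughly what I have in mind for the pinning-plus-rdtsc part. This is only a minimal sketch, assuming Linux on x86-64 with gcc/clang; the core number (0), the iteration count, and the choice of SYS_getpid as the "cheap" syscall are my own arbitrary picks, not anything from the chapter:

```c
/* Minimal sketch: pin the thread to one core, then time a trivial syscall
 * with rdtsc. Assumes Linux on x86-64; core 0 and N are arbitrary choices. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <x86intrin.h>   /* __rdtsc, _mm_lfence */

static inline uint64_t tsc_now(void)
{
    _mm_lfence();              /* keep earlier instructions from drifting past the read */
    uint64_t t = __rdtsc();
    _mm_lfence();
    return t;
}

int main(void)
{
    /* Pin to core 0 so the measurement is not migrated mid-run. */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    enum { N = 100000 };
    uint64_t total = 0;
    for (int i = 0; i < N; i++) {
        uint64_t start = tsc_now();
        syscall(SYS_getpid);   /* deliberately trivial syscall; bypasses any libc caching */
        uint64_t end = tsc_now();
        total += end - start;
    }
    printf("avg cycles per syscall: %.1f\n", (double)total / N);
    return 0;
}
```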
Also, what is an acceptable variance for the measurement?
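To report the spread, I am currently summarizing the samples roughly as below. This is just a sketch, assuming the per-iteration rdtsc deltas were stored in an array; judging the standard deviation relative to the mean (coefficient of variation) is only my own rule of thumb, not something the chapter prescribes:

```c
/* Sketch: summarize repeated timing samples (e.g., one rdtsc delta per
 * iteration of the loop above). Compile with -lm for sqrt(). */
#include <math.h>
#include <stdio.h>
#include <stdint.h>

void report(const uint64_t *samples, int n)
{
    double mean = 0.0, m2 = 0.0;
    for (int i = 0; i < n; i++)
        mean += (double)samples[i];
    mean /= n;
    for (int i = 0; i < n; i++) {
        double d = (double)samples[i] - mean;
        m2 += d * d;
    }
    double stddev = sqrt(m2 / n);
    /* cv = stddev as a percentage of the mean */
    printf("mean = %.1f cycles, stddev = %.1f cycles, cv = %.2f%%\n",
           mean, stddev, 100.0 * stddev / mean);
}
```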