@anupsdf @jayz22 @graydon -- we should probably get this done as part of the testnet push
In discussion with @jayz22 today, he noted that the instructions-to-real-time ratio we observe on the dashboard drops when there's a lot of data moving through the system. This is likely because the XDR serialization and deserialization that happens on the Rust side of the Rust bridge (in contract.rs) isn't accounted for in the block of code that has a budget active, but it is currently accounted for by the real-time clock (which is driven by a medida TimeScope object on the C++ side).
This is relatively easy to fix; either:
- use the `std::time::Instant::now()` function to track the narrower time-scope on the Rust side, or
- use the `Medida::Timer` directly, rather than using a `TimeScope`.
This should improve the accuracy of the time-to-instructions ratio.
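A minimal sketch of the first option, with placeholder closures standing in for the XDR conversion and host-invocation steps (the real contract.rs code differs); the point is just that the `Instant` only brackets the budgeted work, so the measured time matches the scope the budget covers:

```rust
use std::time::{Duration, Instant};

// Hypothetical helper: the decode/run/encode closures are stand-ins, not the
// actual contract.rs API.
fn invoke_with_narrow_timer<A, R>(
    decode_args: impl FnOnce() -> A,
    run_host: impl FnOnce(A) -> R,
    encode_result: impl FnOnce(R) -> Vec<u8>,
) -> (Vec<u8>, Duration) {
    // XDR deserialization stays outside both the budget and this timer.
    let args = decode_args();

    // Time only the budgeted host invocation, i.e. the narrower scope.
    let start = Instant::now();
    let result = run_host(args);
    let budgeted_time = start.elapsed();

    // XDR serialization is likewise excluded from the measured scope.
    (encode_result(result), budgeted_time)
}

fn main() {
    let (bytes, t) = invoke_with_narrow_timer(
        || 21u64,                     // stand-in for XDR-decoded arguments
        |x| x * 2,                    // stand-in for the budgeted host invocation
        |r| r.to_le_bytes().to_vec(), // stand-in for XDR-encoding the result
    );
    println!("result bytes = {:?}, budgeted time = {:?}", bytes, t);
}
```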
We have metrics and a dashboard tracking `cpuInstructionCount/executionTime`. https://github.com/stellar/stellar-core/pull/3847 also adds an "invoke time" metric (which is more directly related to `cpuInstructionCount` than the "operation time"), which should fix the divergence mentioned above. The dashboard needs to be updated to reflect it once the change goes live.
Closing this for now as there are no actionable items. Feel free to reopen it in the future if more "advanced" measurements become necessary.
This complements #3759, which tracks production readiness for dapp developers and helps validators pick "market" prices for different resources.
This issue tracks making sure that we have the right data streams/processes in place so that we can monitor calibration accuracy: we already have ways to perform calibration based on synthetic data (i.e. "fancy tests") in the host crate.
We need something to help us determine how close we are to our target models when we compute `cpuInstruction`. We need to be able to detect when we're "very wrong", both where we overestimate (in which case we're missing out on capacity) and where we underestimate (a possible DoS attack vector).
There are a couple of ways we can try to measure this:
- `cpuInstructionCount/executionTime` (as `cpuInstruction` is a proxy for execution time)
- `cpuInstructionCount/actualCpuInstructionCount`
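For concreteness, a small sketch of the two ratios as plain functions; the field names and the idea of getting `actualCpuInstructionCount` from a hardware counter in a special build are assumptions for illustration:

```rust
// Hypothetical per-invocation sample; field names are placeholders.
struct Sample {
    cpu_instruction_count: u64,        // instructions charged to the metering budget
    execution_time_nanos: u64,         // narrow wall-clock time for the same scope
    actual_cpu_instruction_count: u64, // hardware-counted instructions (special build flavor)
}

/// Metered instructions per nanosecond of execution time.
fn insns_per_nano(s: &Sample) -> f64 {
    s.cpu_instruction_count as f64 / s.execution_time_nanos as f64
}

/// Metered instructions relative to what the hardware actually executed.
fn metered_vs_actual(s: &Sample) -> f64 {
    s.cpu_instruction_count as f64 / s.actual_cpu_instruction_count as f64
}

fn main() {
    let s = Sample {
        cpu_instruction_count: 4_000_000,
        execution_time_nanos: 1_000_000,
        actual_cpu_instruction_count: 5_000_000,
    };
    println!("insns/ns       = {:.2}", insns_per_nano(&s));
    println!("metered/actual = {:.2}", metered_vs_actual(&s));
}
```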
I think that we could do a bit of both (in order of impact, descending):
- `cpuInstructionCount/executionTime` at the transaction and ledger level, exported as a medida metric (this would allow us to catch larger trends)
- `cpuInstructionCount/executionTime` as close as possible to components (this would allow us to quickly identify model issues)
- `cpuInstructionCount/actualCpuInstructionCount` (maybe to be used with a special build flavor?), which we can use when running under controlled environments (to match the calibration environment)

We can then use this instrumentation both as part of tracking node health and when replaying historical data (catchup); the latter can also be used as part of acceptance criteria when validating builds.
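A rough sketch of what the detection side could look like, per transaction or per ledger: compare the observed `cpuInstructionCount/executionTime` against a baseline from the calibration environment and flag large deviations in either direction. The baseline value, the tolerance, and the verdict enum are illustrative assumptions; the real metric export would go through medida on the C++ side.

```rust
#[derive(Debug)]
enum ModelVerdict {
    WithinBand,
    Overestimating,  // charging more instructions than the real time warrants: lost capacity
    Underestimating, // charging fewer instructions than the real time warrants: DoS risk
}

fn check_ratio(
    cpu_instruction_count: u64,
    execution_time_nanos: u64,
    baseline_insns_per_nano: f64, // measured in the calibration environment
    tolerance: f64,               // e.g. 0.10 for +/-10%
) -> ModelVerdict {
    let observed = cpu_instruction_count as f64 / execution_time_nanos as f64;
    let vs_baseline = observed / baseline_insns_per_nano;
    if vs_baseline > 1.0 + tolerance {
        ModelVerdict::Overestimating
    } else if vs_baseline < 1.0 - tolerance {
        ModelVerdict::Underestimating
    } else {
        ModelVerdict::WithinBand
    }
}

fn main() {
    // Made-up example: a ledger charged 4M instructions over 1ms of budgeted
    // time, against a calibration baseline of 5 instructions per nanosecond.
    let verdict = check_ratio(4_000_000, 1_000_000, 5.0, 0.10);
    println!("{:?}", verdict); // Underestimating in this made-up example
}
```

The same check could run while replaying historical data, which is where it would feed into build-acceptance criteria.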