We just implemented profiling of each preprocessor's wall time for requests made to the server. However, when preprocessors run in parallel, wall time alone can be misleading: if one preprocessor saturates the available CPU, it starves the others, forcing them to compete for CPU time. Their wall times inflate even though they are not the actual bottleneck, making it difficult to pinpoint what to optimize.
To accurately identify optimization opportunities, we need finer-grained insight: by capturing CPU utilization at the start and end of each call, we can estimate how much CPU each preprocessor actually used during processing and separate true compute cost from time spent waiting.
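The idea of sampling CPU time at the start and end of each call can be sketched with a small decorator. This is a minimal illustration, not the actual implementation: the names `profile_cpu`, `busy_preprocessor`, and `idle_preprocessor` are hypothetical. It uses `time.process_time()`, which measures CPU time for the whole process; a call whose CPU/wall ratio is near 1.0 was compute-bound, while a ratio near 0.0 means the call spent its wall time waiting (for I/O, or for CPU under contention). Note that if preprocessors run in parallel threads within one process, `process_time()` sums CPU across all threads, so per-thread attribution would need `time.thread_time()` instead.

```python
import time
from functools import wraps


def profile_cpu(func):
    """Record wall time and process CPU time around each call.

    Stores the last measured CPU/wall ratio on the wrapper as
    `last_ratio` so callers can inspect it after a call.
    """
    @wraps(func)
    def wrapper(*args, **kwargs):
        wall_start = time.perf_counter()
        cpu_start = time.process_time()
        try:
            return func(*args, **kwargs)
        finally:
            wall = time.perf_counter() - wall_start
            cpu = time.process_time() - cpu_start
            wrapper.last_ratio = cpu / wall if wall > 0 else 0.0
            print(f"{func.__name__}: wall={wall:.4f}s "
                  f"cpu={cpu:.4f}s ratio={wrapper.last_ratio:.2f}")
    wrapper.last_ratio = None
    return wrapper


@profile_cpu
def busy_preprocessor(n):
    # CPU-bound work: ratio should be close to 1.0
    return sum(i * i for i in range(n))


@profile_cpu
def idle_preprocessor(seconds):
    # Sleeping consumes no CPU: ratio should be near 0.0
    time.sleep(seconds)


busy_preprocessor(1_000_000)
idle_preprocessor(0.1)
```

With measurements like these, a preprocessor whose wall time is high but whose CPU ratio is low is likely a victim of contention rather than the component worth optimizing.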