@cmasmas Probably the message is not clear enough. The prediction models were implemented to calculate future values from a number of data points. In this implementation, we gather the data points from all the backend nodes during approximately the last 30 minutes and store them in a circular queue of size 5. This gives us a large number of data points. For instance, 5 backends with 50 data points each for the CPU values, across the 5 queue entries, provide 5 * 50 * 5 = 1250 points.
Unfortunately, I was not able to extend the implementation of the predictor models to process a larger number of data points in a reasonable amount of time (a possible extension). Another alternative would be to collect fewer data points per backend, but over a longer period of time.
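For reference, here is a minimal sketch of the data-point arithmetic described above. It uses collections.deque as a stand-in for the project's Queue class, and the constant names are hypothetical, taken from the example figures in this thread rather than from the actual code:

from collections import deque

QUEUE_SIZE = 5           # circular queue length mentioned in the thread
NUM_BACKENDS = 5         # example figure from the thread
POINTS_PER_BACKEND = 50  # example figure from the thread

# Each queue entry holds one monitoring snapshot: one series per backend.
cpu_usage_queue = deque(maxlen=QUEUE_SIZE)
for _ in range(QUEUE_SIZE):
    snapshot = [[0.0] * POINTS_PER_BACKEND for _ in range(NUM_BACKENDS)]
    cpu_usage_queue.append(snapshot)

# Flatten everything the predictor would see into one list of points.
all_points = [point
              for snapshot in cpu_usage_queue
              for backend_series in snapshot
              for point in backend_series]
print(len(all_points))  # 5 * 50 * 5 = 1250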
Hello, it looks like we can now close this issue. Do not hesitate to reopen it if necessary! Guillaume
Please, could you explain this FIXME note:
FIXME: Size is 5 due to an excessive number of items to be predicted, please repair this part, as we want to store the monitoring data during 60 min, considering 5 min between iterations.
self.predictorScaler_cpu_usage_1h = Queue([], 5)
self.predictorScaler_req_rate_1h = Queue([], 5)
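A minimal sketch of the arithmetic behind this question, assuming one snapshot is enqueued per iteration (deque is used here as a stand-in for the project's Queue class): covering a 60-minute window at one sample every 5 minutes would seem to need 12 slots rather than the hard-coded 5.

from collections import deque

WINDOW_MINUTES = 60      # desired monitoring window from the comment
ITERATION_MINUTES = 5    # time between iterations from the comment
required_size = WINDOW_MINUTES // ITERATION_MINUTES  # 60 / 5 = 12

predictor_queue = deque(maxlen=required_size)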
Then, how many values does list_data_cpu contain (lines 503-511)?
self.optimal_scaling.calculate_error_prediction_cpu(list_data_cpu)