rohinkalra21 closed this issue 8 months ago
The predictions_delay_real_traces.npy file contains the model's predictions, representing the delay of each flow in the network. The array is organized in a specific order, with entries corresponding to source-to-destination flows, e.g., [Flow_0_to_1, Flow_0_to_2, Flow_0_to_3, ..., Flow_N_to_0, ..., Flow_N_to_N-1].
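Based on the ordering in the example above, the flat array appears to enumerate flows source-major, skipping self-flows. A small helper like the following (the name `flow_pairs` is hypothetical, not part of the repository) can map each array index back to its (source, destination) pair, assuming that ordering holds for your dataset:

```python
def flow_pairs(n_nodes):
    """Enumerate (src, dst) pairs in the assumed order of the
    predictions array: source-major, self-flows skipped."""
    return [(s, d) for s in range(n_nodes)
            for d in range(n_nodes) if s != d]

# With 3 nodes the array would cover 6 flows:
# [(0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1)]
pairs = flow_pairs(3)
```

With this, `predictions[i]` would be the predicted delay of the flow `pairs[i]`, and an N-node network yields N*(N-1) entries.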
To evaluate the accuracy of these predictions, you can compare them with the true labels. Here's an example of how to do this:
import numpy as np

# Collect the true labels from the test dataset
true_labels = []
for _, y in ds_test:
    true_labels.extend(y)

# Load the predictions saved by predict.py
predictions = np.load("predictions_delay_real_traces.npy")

# Compare predictions with the true labels
compute_error(true_labels, predictions)
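The `compute_error` function is not shown in this thread; as a minimal sketch, assuming a mean absolute percentage error (MAPE), a common choice for delay prediction, it could look like this:

```python
import numpy as np

def compute_error(true_labels, predictions):
    """Hypothetical error metric: mean absolute percentage
    error (MAPE), in percent, between labels and predictions."""
    y_true = np.asarray(true_labels, dtype=float)
    y_pred = np.asarray(predictions, dtype=float).flatten()
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)
```

Whatever metric the repository actually uses, the key point is that both arrays must be flattened to the same flow ordering before comparing.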
However, if you only want to compute the error of the predictions, you can replace the model.predict call with model.evaluate:
model.predict(ds_test, verbose=1)
model.evaluate(ds_test, verbose=1)
Best, Miquel
Thank you!
I am trying to run predict.py on some of the datasets. Once we obtain the predictions_delay_real_traces.npy array, what do these values represent, and what is the expected range of values? I am trying to evaluate the accuracy of the predictions.