Hi, when using the code to learn shapelets on time series, I got stuck inside the while loop shown below, which is inside the class "CrossEntropyLearningShapelets". I found that loss_iteration becomes a fixed number after several rounds of the loop. I provide a quick, though not ideal, workaround to break out of the infinite loop, shown below: it checks whether the loss from the previous round is equal to loss_iteration (i.e. the loss at the current round), and if so it leaves the loop and continues with the next learning step.
I think a better solution would be to use an adaptive gradient approach for learning, but that might not match what the authors did in their paper.
Steps/Code to Reproduce
# If loss is increasing, decrease the learning rate
last_loss = None
if losses[-1] < loss_iteration:
    while losses[-1] < loss_iteration:
        # Note: the source code can get stuck in this while loop,
        # so stop once the loss no longer changes between rounds.
        if last_loss == loss_iteration:
            break
        # Go back to previous state
        weights += learning_rate * gradient_weights
        shapelets_array = _reshape_array_shapelets(shapelets, lengths)
        shapelets_array += learning_rate * gradient_shapelets
        shapelets = tuple(_reshape_list_shapelets(shapelets_array, lengths))
        # Update learning rate
        learning_rate /= 5
        # Recompute shapelet gradient
        weights -= learning_rate * gradient_weights
        gradient_shapelets = _grad_shapelets(
            X, y_ind, n_classes, weights, shapelets, lengths,
            self.alpha, self.penalty, self.C, self.fit_intercept,
            self.intercept_scaling, sample_weight
        )
        shapelets_array = _reshape_array_shapelets(shapelets, lengths)
        shapelets_array -= learning_rate * gradient_shapelets
        shapelets = tuple(_reshape_list_shapelets(shapelets_array, lengths))
        last_loss = loss_iteration
        loss_iteration = _loss(
            X, y_ind, n_classes, weights, shapelets, lengths,
            self.alpha, self.penalty, self.C, self.fit_intercept,
            self.intercept_scaling, sample_weight
        )
Versions