Closed — anan-dad closed this issue 2 years ago
I think I ran into the same problem. Once memory usage exceeded my machine's maximum, an error occurred somewhere around a call like np.array(c, w, h).
As I understand it, PatchCore doesn't need more than 1 epoch, because the data from the 2nd epoch onward is just a duplicate of the first epoch's data. It is called training, but it is actually sampling...
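To illustrate the point above, here is a minimal sketch of what PatchCore-style "training" amounts to: a single pass that collects patch features, followed by subsampling into a memory bank. The function name and random subsampling are illustrative assumptions (the real repo uses greedy coreset selection), not the actual API.

```python
import numpy as np

def build_memory_bank(feature_batches, subsample_ratio=0.1, seed=0):
    """Collect patch features in ONE pass, then subsample a memory bank.

    Extra epochs would only append duplicates of these same features,
    which is why a single epoch suffices. Random subsampling stands in
    for the greedy coreset selection used in the actual method.
    """
    # Single pass over the data: gather all patch features.
    features = np.concatenate(list(feature_batches), axis=0)
    rng = np.random.default_rng(seed)
    n_keep = max(1, int(len(features) * subsample_ratio))
    idx = rng.choice(len(features), size=n_keep, replace=False)
    return features[idx]

# Usage: five batches of 100 patch features, 64-d each.
batches = [np.random.rand(100, 64) for _ in range(5)]
bank = build_memory_bank(batches, subsample_ratio=0.1)
```

Here `bank` ends up with 50 of the 500 collected features; running more "epochs" would only re-collect the same 500.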
I'm also facing the same issue. What is the potential solution?
Hi, I ran train.py under the following environment:
Problem: after epoch 11, machine memory (not GPU memory) usage reaches 40 GB, and by epoch 15 it exceeds 50 GB, at which point the system kills the training process.
Any ideas why this happens, and how to fix it?
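One plausible cause, consistent with the roughly linear growth described above: an embedding buffer that is appended to every epoch without being cleared. This is a hypothetical illustration of that bug pattern (the actual repo's code may differ), showing why memory grows with epoch count and how clearing the buffer each epoch, or simply training for a single epoch, bounds it.

```python
import numpy as np

def simulate(epochs, clear_each_epoch):
    """Simulate host-memory growth of an embedding buffer over epochs.

    Each epoch produces the same 4 stand-in feature vectors. Without
    clearing, the buffer grows linearly with the epoch count; with
    clearing, it stays at one epoch's worth, matching the fact that
    PatchCore only needs the first epoch's features.
    """
    buffer = []
    per_epoch = [np.zeros(8) for _ in range(4)]  # stand-in features
    for _ in range(epochs):
        if clear_each_epoch:
            buffer.clear()  # keep only the current epoch's features
        buffer.extend(per_epoch)
    return len(buffer)
```

With 15 epochs, the buggy pattern holds 15x as many vectors as the fixed one, which would explain RAM climbing from 40 GB to over 50 GB between epochs 11 and 15.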