When I run retrain_yolo.py on my own dataset, it prints this error, and the script is killed shortly afterwards. I have added print statements to my copy of retrain_yolo.py, and the code seems to keep running for a while even after the error appears.
This is the error: 'tcmalloc: large alloc 1410973696 bytes == 0x5408000 @ 0x7f73dd301107....'
The number of bytes (1410973696) matches the value returned by sys.getsizeof on the numpy array 'images' loaded from the .npz dataset file.
I am using Google Colaboratory, if that helps.
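For reference, this is roughly how I compared the two sizes (the array shape below is a small placeholder, not my actual dataset; only the getsizeof / nbytes comparison matters):

```python
import sys
import numpy as np

# Placeholder array standing in for data['images'] from the .npz file.
images = np.zeros((10, 416, 416, 3), dtype=np.float32)

# For an ndarray that owns its data, sys.getsizeof reports the small
# object header plus the entire data buffer, so it closely tracks .nbytes.
print(sys.getsizeof(images))
print(images.nbytes)
```

In my case both numbers come out near 1410973696, which is why I think the tcmalloc message is about this array's buffer being allocated.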