Hi, I'm trying to run this model with the CamVid dataset on a GTX 1060 with 6 GB of memory, and it gives me this error (this is the full output):
[INFO]Defined all the hyperparameters successfully!
[INFO]Starting to define the class weights...
[INFO]Fetched all class weights successfully!
[INFO]Model Instantiated!
[INFO]Defined the loss function and the optimizer
[INFO]Staring Training...
--------------- Epoch 1 ---------------
here
0%| | 0/36 [00:03<?, ?it/s]
Traceback (most recent call last):
File "init.py", line 151, in <module>
train(FLAGS)
File "C:\Users\User\Desktop\ENet-Real-Time-Semantic-Segmentation\train.py", line 81, in train
out = enet(X_batch.float())
File "C:\Users\User\Anaconda2\envs\tfg_temp\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\User\Desktop\ENet-Real-Time-Semantic-Segmentation\models\ENet.py", line 231, in forward
x = self.fullconv(x)
File "C:\Users\User\Anaconda2\envs\tfg_temp\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\User\Anaconda2\envs\tfg_temp\lib\site-packages\torch\nn\modules\conv.py", line 776, in forward
return F.conv_transpose2d(
RuntimeError: CUDA out of memory. Tried to allocate 1020.00 MiB (GPU 0; 6.00 GiB total capacity; 3.68 GiB already allocated; 932.14 MiB free; 3.69 GiB reserved in total by PyTorch)
I would have thought 6 GB is enough, since CamVid is not an excessively large dataset. Is there anything I might be doing wrong?
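For what it's worth, here is the back-of-the-envelope arithmetic I used to sanity-check the allocation size. The shapes below are illustrative assumptions (CamVid images are commonly around 360x480; the batch size and channel count are hypothetical, not ENet's actual layer sizes) — the point is only that float32 activation memory scales linearly with batch size, so halving the batch halves the tensor that conv_transpose2d tries to allocate:

```python
def tensor_mib(batch, channels, height, width, bytes_per_elem=4):
    """Size in MiB of a single float32 activation tensor of the given shape."""
    return batch * channels * height * width * bytes_per_elem / 2**20

# Hypothetical full-resolution feature map on CamVid-sized inputs:
print(tensor_mib(16, 64, 360, 480))  # 675.0 MiB for one intermediate tensor
print(tensor_mib(8, 64, 360, 480))   # 337.5 MiB -- halving the batch halves it
```

So even a single intermediate tensor can approach the ~1 GiB the error reports, before counting weights, gradients, and optimizer state. Would reducing the batch size be the expected fix here, or is something else likely leaking memory?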