The MemoryError happens simply because your machine does not have enough memory. Try reducing the batch size further.
Training a deep convolutional neural network takes a long time on a CPU; using an NVIDIA GPU is recommended.
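As a rough illustration of why this helps: every per-iteration buffer scales linearly with the batch size. Below is a minimal sketch of a standard minibatch loop, using dummy arrays shaped like the dataset printed in the log further down; the loop itself is an assumption for illustration, not the repository's actual train.py code.

```python
import numpy as np

# Dummy stand-ins shaped like the dataset in the log below.
train_x = np.zeros((5000, 3, 32, 32), dtype=np.float32)
train_y = np.zeros((5000, 3, 64, 64), dtype=np.float32)

batchsize = 2  # the failing run used 5; smaller batches mean smaller buffers

for i in range(0, len(train_x), batchsize):
    x = train_x[i:i + batchsize]  # shape (batchsize, 3, 32, 32)
    t = train_y[i:i + batchsize]  # shape (batchsize, 3, 64, 64)
    # optimizer.update(model, x, t)  # every intermediate array scales with batchsize
```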
Reducing the batch size worked... Thank you
Can we use CUDA with an Intel GPU?
No, CUDA is NVIDIA's GPU library and requires an NVIDIA GPU. If you can use one, and also install the cuDNN library (another GPU library provided by NVIDIA), memory is used much more efficiently and you can train with a much larger batch size at the same time.
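For reference, this is roughly how a Chainer model is moved onto the GPU. It is a minimal sketch with a stand-in model, since the exact setup in src/train.py may differ (the log below shows GPU ID : -1, i.e. CPU mode); Chainer picks up cuDNN automatically when it is installed.

```python
import numpy as np
import chainer.links as L
from chainer import cuda

model = L.Linear(10, 10)  # stand-in for the actual seranet_v1 model

gpu_id = 0  # -1 keeps everything on the CPU, as in the log below
if gpu_id >= 0:
    cuda.get_device(gpu_id).use()  # select the NVIDIA device
    model.to_gpu()                 # move the parameters into GPU memory

xp = cuda.cupy if gpu_id >= 0 else np  # array module: CuPy on GPU, NumPy on CPU
```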
```
prepare model
-------- training parameter --------
GPU ID       : -1
architecture : seranet_v1
batch size   : 5
epoch        : 1000
color scheme : rgb
size         : 64
loading data
file size 5000
total skip file size = 0
after resize: data_x.shape (5000, 3, 32, 32) sum 1890920924.0
after resize: data_y.shape (5000, 3, 64, 64) sum 7555895403.0
setup model
training
epoch: 1
Traceback (most recent call last):
  File "src/train.py", line 203, in <module>
    optimizer.update(model, x, t)
  File "C:\Python\lib\site-packages\chainer\optimizer.py", line 390, in update
    loss = lossfun(*args, **kwds)
  File "C:\Users\Amith Moorkoth\Desktop\im\src\arch\seranet_v1.py", line 87, in __call__
    h = F.leaky_relu(self.conv11(h), slope=0.1)
  File "C:\Python\lib\site-packages\chainer\links\connection\convolution_2d.py", line 108, in __call__
    deterministic=self.deterministic)
  File "C:\Python\lib\site-packages\chainer\functions\connection\convolution_2d.py", line 326, in convolution_2d
    return func(x, W, b)
  File "C:\Python\lib\site-packages\chainer\function.py", line 199, in __call__
    outputs = self.forward(in_data)
  File "C:\Python\lib\site-packages\chainer\function.py", line 312, in forward
    return self.forward_cpu(inputs)
  File "C:\Python\lib\site-packages\chainer\functions\connection\convolution_2d.py", line 69, in forward_cpu
    cover_all=self.cover_all)
  File "C:\Python\lib\site-packages\chainer\utils\conv.py", line 33, in im2col_cpu
    col = numpy.ndarray((n, c, kh, kw, out_h, out_w), dtype=img.dtype)
MemoryError
```
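The failing line makes the arithmetic concrete: the im2col workspace in chainer/utils/conv.py grows linearly with the batch size n, and one such buffer is allocated per convolution. A back-of-the-envelope estimate follows; the channel count and kernel size are assumptions, since seranet_v1's actual layer widths are not visible in the traceback.

```python
import numpy as np

# col = numpy.ndarray((n, c, kh, kw, out_h, out_w), dtype=img.dtype)
n = 5                # batch size from the log
c = 128              # assumed channel count of the failing layer
kh = kw = 3          # assumed 3x3 kernel
out_h = out_w = 64   # output resolution from the log

size = n * c * kh * kw * out_h * out_w * np.dtype(np.float32).itemsize
print(size / 1024 ** 2, "MiB for a single im2col buffer")  # ~90 MiB here
```

A handful of these buffers, plus the activations kept for backpropagation across all layers, add up quickly, and halving the batch size halves every one of them.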