line 835, in main
    io_channels=dataloader.dataset.io_channels()
AttributeError: 'NoisedImageDataset' object has no attribute 'io_channels'
I tried to fix that by deriving NoisedImageDataset from TransformImageDataset:
class NoisedImageDataset(TransformImageDataset):  # was: (Dataset)
    def __init__(self, dirname, max=1000000000, noise=0.33):
        super().__init__(dirname, max)
However, that led to the following error (I forced the run onto the CPU, because the previous GPU attempt crashed my PC, probably because the model was too big, among other issues):
model config: channels=[0, 16, 32, 64, 128, 512] ks=5x5 use=5/5 skipcon=False dropout=0.25
layer[0] is cuda? False
layer[1] is cuda? False
layer[2] is cuda? False
layer[3] is cuda? False
layer[4] is cuda? False
Start training LR: 0.1 saving to: current_model/
training epoch: 0
Traceback (most recent call last):
File "Z:\convnet_stuff\pytorch_cuda\autoencoder.py", line 881, in <module>
main(sys.argv[1:])
File "Z:\convnet_stuff\pytorch_cuda\autoencoder.py", line 871, in main
train_epoch(device, ae,optimizer[0], dataloader,progress)
File "Z:\convnet_stuff\pytorch_cuda\autoencoder.py", line 679, in train_epoch
output=model(data)
File "C:\ProgramData\Miniconda3\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "Z:\convnet_stuff\pytorch_cuda\autoencoder.py", line 123, in forward
return self.eval_unet(x)
File "Z:\convnet_stuff\pytorch_cuda\autoencoder.py", line 230, in eval_unet
x=self.conv[i](x)
File "C:\ProgramData\Miniconda3\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "C:\ProgramData\Miniconda3\lib\site-packages\torch\nn\modules\conv.py", line 457, in forward
return self._conv_forward(input, self.weight, self.bias)
File "C:\ProgramData\Miniconda3\lib\site-packages\torch\nn\modules\conv.py", line 453, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [16, 0, 5, 5], expected input[4, 3, 255, 255] to have 0 channels, but got 3 channels instead
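The shape in the error matches the config line above: channels=[0, 16, 32, 64, 128, 512] means the first Conv2d was constructed with in_channels=0, so its weight has shape [16, 0, 5, 5] and cannot consume a 3-channel image. A minimal sketch of the mismatch and the fix (the channel counts, kernel size, and batch shape are taken from the log; everything else is illustrative):

```python
import torch
import torch.nn as nn

# channels[0] == 0 in the config means the first conv layer is built
# with in_channels=0, producing a zero-sized weight tensor.
bad = nn.Conv2d(in_channels=0, out_channels=16, kernel_size=5, padding=2)
print(tuple(bad.weight.shape))  # (16, 0, 5, 5) -- matches the error message

x = torch.randn(4, 3, 255, 255)  # same shape as the failing batch
try:
    bad(x)
except RuntimeError as e:
    print(e)  # "... expected input[4, 3, 255, 255] to have 0 channels ..."

# Fix: the first layer must accept as many channels as the images have.
good = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=5, padding=2)
print(tuple(good(x).shape))  # (4, 16, 255, 255)
```

So the model-config code presumably needs to put the image channel count (3 for RGB) at the head of the channels list instead of 0.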
I then realized the loader expects a special input filename format (_INPUT0, _OUTPUT0, ...). I fixed that, but ran into further problems loading the images, possibly something with Linux/Windows directory separators. I edited the name-preparation code and fixed some of the errors, but others remained and I gave up (for now):
(base) Z:\convnet_stuff\pytorch_cuda>python autoencoder.py
initializing device..
using device: cpu
grabbing dataset....\training_images
dataset: ..\training_images len= 9
input/_OUTPUT pairs detected - setting up dataloader for image transformation
init dataset from dir: ..\training_images
['capture_006_07052022_225920_440_0_INPUT0.jpg', 'capture_006_07052022_225920_440_0_INPUT2.jpg', 'capture_006_07052022_225920_440_0_OUTPUT0 .jpg', 'capture_007_07052022_225923_406_0_OUTPUT2.jpg', 'capture_008_07052022_225942_593_0_INPUT1.jpg', 'capture_008_07052022_225942_593_0_INPUT3.jpg', 'capture_009_07052022_225944_647_0_OUTPUT1.jpg', 'capture_009_07052022_225944_647_0_OUTPUT3.jpg', 'NE']
finding output images..
used channels: ['_INPUT0', '_INPUT1', '_INPUT2', '_INPUT3'] ['_OUTPUT0', '_OUTPUT1', '_OUTPUT2', '_OUTPUT3']
{'capture_006_07052022_225920_440_0': 3, 'capture_007_07052022_225923_406_0': 1, 'capture_008_07052022_225942_593_0': 2, 'capture_009_07052022_225944_647_0': 2}
error basename: 1 has missing channels
all images must have the same input/output channels supplied
input/_OUTPUT pairs detected - setting up dataloader for image transformation
init dataset from dir: ..\training_images
['aa_INPUT0.jpg', 'aa_INPUT2.jpg', 'aa_INPUT3.jpg', 'aa_OUTPUT0.jpg', 'aa_OUTPUT1.jpg', 'aa_OUTPUT2.jpg', 'aa_OUTPUT3.jpg', 'NE', 'аа_INPUT1.jpg']
finding output images..
used channels: ['_INPUT0', '_INPUT1', '_INPUT2', '_INPUT3'] ['_OUTPUT0', '_OUTPUT1', '_OUTPUT2', '_OUTPUT3']
{'aa': 7, 'аа': 1}
error basename: 1 has missing channels
all images must have the same input/output channels supplied
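Both "missing channels" failures above have mundane causes that are visible in the directory listings: in the first run one filename contains a space before ".jpg" ("..._OUTPUT0 .jpg"), and in the second run one file starts with Cyrillic "аа" instead of Latin "aa", so it gets grouped under its own basename. A quick sanity check over the filenames can surface both problems (the regex and helper are illustrative, not from the repo):

```python
import re
import unicodedata

def check_pairs(filenames):
    """Count files per basename (the part before _INPUTn/_OUTPUTn) and
    flag suspicious names: stray whitespace and non-ASCII look-alikes."""
    counts = {}
    for name in filenames:
        if ' ' in name:
            print(f'whitespace in filename: {name!r}')
        m = re.match(r'(.+?)_(INPUT|OUTPUT)(\d+)\s*\.jpg$', name)
        if not m:
            print(f'unmatched file: {name!r}')
            continue
        base = m.group(1)
        if not base.isascii():
            # e.g. Cyrillic 'а' (U+0430) looks identical to Latin 'a'
            print('non-ASCII basename:', [unicodedata.name(c) for c in base])
        counts[base] = counts.get(base, 0) + 1
    return counts

# One name has a space before '.jpg', one starts with Cyrillic 'аа':
files = ['aa_INPUT0.jpg', 'aa_OUTPUT0 .jpg', '\u0430\u0430_INPUT1.jpg']
print(check_pairs(files))  # {'aa': 2, 'аа': 1} -- two distinct basenames
```

Renaming the offending files (removing the space, retyping the Cyrillic letters) should make the channel counts line up.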
Then I ended up providing just two images, aa_INPUT0.jpg and aa_OUTPUT0.jpg, and the run finally succeeded. However, the input image is clean, not noised, and something else seems wrong: the loss goes straight down. I tried both using the same image for the INPUT-OUTPUT pair and, as here, two different images.
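One likely cause of the clean input, given the subclass shown earlier, is that the noise argument is accepted but never applied: __init__ forwards only dirname and max to the parent, so the parent's __getitem__ returns the unmodified image. A sketch of storing the parameter and injecting the noise in __getitem__ (the parent class here is a minimal stand-in; the real TransformImageDataset API is an assumption):

```python
import torch
from torch.utils.data import Dataset

# Minimal stand-in for the parent class in the repo; its real
# __getitem__ presumably returns an (input, target) image pair.
class TransformImageDataset(Dataset):
    def __init__(self, dirname, max=1000000000):
        self.items = [(torch.rand(3, 64, 64), torch.rand(3, 64, 64))]

    def __len__(self):
        return len(self.items)

    def __getitem__(self, i):
        return self.items[i]

class NoisedImageDataset(TransformImageDataset):
    def __init__(self, dirname, max=1000000000, noise=0.33):
        super().__init__(dirname, max)
        self.noise = noise  # must be stored, or it is silently dropped

    def __getitem__(self, i):
        x, y = super().__getitem__(i)
        # Inject the noise here; otherwise the network trains on the
        # clean input and the denoising task degenerates.
        x = (x + self.noise * torch.randn_like(x)).clamp(0.0, 1.0)
        return x, y

ds = NoisedImageDataset('..\\training_images')
x, y = ds[0]
print(x.shape)  # the input now differs from the stored clean image
```

This could also explain the loss going straight down: with a clean input equal (or nearly equal) to the target, the network only has to learn something close to the identity mapping, which it does very quickly.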
You can see my edits here: https://github.com/Twenkid/convnet_stuff/blob/main/pytorch_cuda/autoencoder_twenkid.py
I preferred to put them in a separate file so as not to mess things up when merging, etc.