I was wondering whether it'd be possible to add an alpha channel in training (my training data are RGBA PNGs with transparency, and I'm using pose2body).
I've changed all the image loaders to `convert('RGBA')`, I'm using `--output_nc 4`, and I'm converting the images to RGB for flownet. But torch is throwing an error when calculating the losses (train.py#L66): `RuntimeError: Given groups=1, weight of size 64 3 3 3, expected input[1, 4, 832, 768] to have 3 channels, but got 4 channels instead`.
I imagine there's more to it (changing the tensor shapes somewhere); if anyone has any ideas I'd be very grateful.
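For what it's worth, the weight size in the trace (`64 3 3 3`) means the failing conv layer itself expects 3 input channels, so it's probably not the generator (which `--output_nc 4` reconfigures) but a fixed 3-channel network used inside the loss, such as an ImageNet-pretrained VGG perceptual loss. A minimal sketch of one possible workaround, assuming the loss call receives an `(N, 4, H, W)` RGBA tensor (the helper name `rgb_for_loss` is hypothetical, not from the repo):

```python
import torch

def rgb_for_loss(x):
    # Pretrained loss networks (e.g. VGG) were trained on 3-channel
    # input, so drop the alpha channel before feeding them.
    # x: (N, 4, H, W) RGBA tensor -> (N, 3, H, W) RGB tensor.
    return x[:, :3] if x.size(1) == 4 else x

# Shapes taken from the error message in the post.
fake_image = torch.randn(1, 4, 832, 768)
print(rgb_for_loss(fake_image).shape)  # torch.Size([1, 3, 832, 768])
```

The alpha channel would then only flow through the GAN/feature-matching parts of the objective that use the 4-channel discriminator, while the perceptual term sees RGB only.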