sujyQ opened 2 years ago
Hi, may I ask a question, please? I wonder if there's a small mistake in the code comments. It says that `sv_mode=0` is spatially-variant (the wrong comment is in the class `SRMDPreprocessing`), but in fact `sv_mode=1` is the spatially-variant one, right?
I think so. `1 <= sv_mode <= 5` is spatially-variant and `sv_mode = 0` is spatially-invariant.
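For reference, the corrected comment would read roughly like this (paraphrasing from memory, not the exact docstring in the repo):

```python
class SRMDPreprocessing:
    # sv_mode = 0:    spatially-invariant degradation (a single blur kernel
    #                 for the whole image)
    # sv_mode = 1..5: spatially-variant degradation (the kernel changes across
    #                 the image; the modes presumably differ in how the
    #                 per-position kernels are generated)
    ...
```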
Thanks so much for the response. I've successfully run the spatially-variant version; the GPU usage is 17380 MiB.
Sorry, have you encountered a dimension error? The shape of `self.fake_K` is B x 441 x h x w, but the shape of `self.real_K` is B x 21 x 21 in the spatially-invariant case (B x HW x 21 x 21 in the spatially-variant case). How can the kernel loss be computed between these tensors?
Should I reshape the real kernel, e.g. using `self.real_K.view(B, -1, 1, 1).expand(-1, -1, self.fake_K.size(2), self.fake_K.size(3))`, to change its shape to B x 441 x h x w so that it is consistent with `self.fake_K` in the spatially-invariant case?
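Here is a minimal sketch of what I mean for the spatially-invariant case (toy shapes; the L1 loss is just for illustration):

```python
import torch
import torch.nn.functional as F

B, k, h, w = 4, 21, 32, 32                       # toy sizes; k * k = 441
fake_K = torch.randn(B, k * k, h, w)             # predicted kernels: B x 441 x h x w
real_K = torch.randn(B, k, k)                    # GT kernel (invariant case): B x 21 x 21

# Flatten the GT kernel, then broadcast it to every spatial position.
# expand() only expands size-1 dims, so pass -1 for the first two dims;
# the result is a view, so no extra memory is allocated.
real_K_map = real_K.view(B, -1, 1, 1).expand(-1, -1, h, w)   # B x 441 x h x w

loss = F.l1_loss(fake_K, real_K_map)             # shapes now match
```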
Could you please explain what exactly the ground truth kernel is? For super-resolution we consider the ground truth image to be the actual HR image and the LR image to be its downsampled version. So my doubt is: what do we actually consider to be the ground truth kernel?
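Not the author, but my understanding: the ground truth kernel is the blur kernel used to synthesize the LR image from the HR image, i.e. LR = (HR ⊗ k)↓s (+ noise). Since we pick the kernel ourselves when generating the training pair, it is known exactly at training time; at test time it is unknown, and that is what the network estimates. A toy sketch of that degradation model (my own illustration, not MANet's actual data pipeline):

```python
import torch
import torch.nn.functional as F

def synthesize_lr(hr, kernel, scale=4):
    """Toy degradation model: LR = (HR * kernel) downsampled by `scale`.
    The `kernel` passed in here is, by construction, the ground-truth kernel.
    hr:     1 x C x H x W image tensor
    kernel: k x k blur kernel (e.g. an anisotropic Gaussian), summing to 1
    """
    c, k = hr.size(1), kernel.size(-1)
    weight = kernel.expand(c, 1, k, k)                 # same kernel for each channel
    blurred = F.conv2d(hr, weight, padding=k // 2, groups=c)
    return blurred[..., ::scale, ::scale]              # simple s-fold subsampling
```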
Hi.
I'm trying to train MANet in spatially-variant mode. I changed your code here https://github.com/JingyunLiang/MANet/blob/eaf8265ad5dd946247128a023541133b36f94f3c/codes/train.py#L172 to this:
But it returns this error:
```
Traceback (most recent call last):
  File "train.py", line 347, in <module>
    main()
  File "train.py", line 210, in main
    model.optimize_parameters(current_step, scaler)
  File "/home/hsj/d_drive/hsj/hsj/MANet/codes/models/B_model.py", line 165, in optimize_parameters
    -1) * 10000) / self.fake_K.size(1)
RuntimeError: expand(torch.cuda.FloatTensor{[16, 1, 36864, 21, 21]}, size=[-1, 36864, -1, -1]): the number of sizes provided (4) must be greater or equal to the number of dimensions in the tensor (5)
```
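To make the shape issue concrete: it looks like that line was written for the spatially-invariant case, where `self.real_K` is 3-D and the `unsqueeze(1)` is needed. A minimal reproduction (toy `HW` to keep the tensors small; the traceback has B=16, HW=36864):

```python
import torch

B, HW, k = 16, 4096, 21                          # traceback values: B=16, HW=36864

# Spatially-invariant case: real_K is B x 21 x 21, so the original line is fine.
inv = torch.randn(B, k, k)
inv.unsqueeze(1).expand(-1, HW, -1, -1)          # -> B x HW x 21 x 21 (a view, no copy)

# Spatially-variant case: real_K is already B x HW x 21 x 21.
sv = torch.randn(B, HW, k, k)
# sv.unsqueeze(1).expand(-1, HW, -1, -1)         # 5-D tensor, 4 sizes -> the RuntimeError
sv.expand(-1, HW, -1, -1)                        # already the right shape; no unsqueeze needed
```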
So I removed the unsqueeze here (https://github.com/JingyunLiang/MANet/blob/34f90ba8888f4a1dd2a1127b97c2ec3706f06598/codes/models/B_model.py#L162), changing it to this:
However, an OOM error occurs :(

```
RuntimeError: CUDA out of memory. Tried to allocate 2.91 GiB (GPU 0; 11.93 GiB total capacity; 8.78 GiB already allocated; 1.57 GiB free; 9.73 GiB reserved in total by PyTorch)
```
Is 12 GB of GPU memory not enough to train MANet in sv mode?
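Not the author, but a back-of-the-envelope estimate suggests why 12 GB is tight: at the shapes in your traceback, the spatially-variant GT kernel tensor alone is about 1 GiB per materialized copy, and the loss computation creates several such intermediates (the failed 2.91 GiB allocation is roughly three of them):

```python
B, HW, k, bytes_per_float32 = 16, 36864, 21, 4   # HW = 36864 = 192 * 192
gib = B * HW * k * k * bytes_per_float32 / 2**30
print(f"{gib:.2f} GiB per copy of the B x HW x 21 x 21 kernel tensor")  # ~0.97 GiB
```

Reducing the batch size or the training patch size (HW grows quadratically with the patch side) should let it fit in 12 GB, possibly at some cost in final quality.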