silvandeleemput / memcnn

PyTorch Framework for Developing Memory Efficient Deep Invertible Networks
MIT License
251 stars · 26 forks

Error: The tensor has a non-zero number of elements, but its data is not allocated yet. #73

Closed llstela closed 1 year ago

llstela commented 1 year ago

Description

I used this package to create two invertible modules and chained them with nn.Sequential.

What I Did

I wrote the following code:

import torch
import torch.nn as nn
from memcnn import create_coupling, InvertibleModuleWrapper

def make_invertible_module(module_class, module_config: dict):
    # Build two submodules, join them in an additive coupling, and wrap
    # the coupling so memcnn can free its inputs (keep_input=False).
    gm = module_class(**module_config)
    fm = module_class(**module_config)
    coupling = create_coupling(Fm=fm, Gm=gm, coupling='additive')
    inv_module = InvertibleModuleWrapper(fn=coupling, keep_input=False)
    return inv_module

class InvertibleCADensenet(nn.Module):
    def __init__(self, conv, n_feats, n_CADenseBlocks=5):
        super().__init__()
        self.n_blocks = n_CADenseBlocks
        # DenseBlock and CALayer are defined elsewhere in my codebase
        DenseBlock_config = {"conv": conv, "depth": 8, "rate": 4, "input_dim": n_feats // 2, "out_dims": n_feats // 2}
        CALayer_config = {"channel": n_feats // 2, "reduction": 16 // 2}
        denseblock = [make_invertible_module(DenseBlock, DenseBlock_config) for _ in range(n_CADenseBlocks)]
        calayer = [make_invertible_module(CALayer, CALayer_config) for _ in range(n_CADenseBlocks)]
        self.CADenseblock = nn.ModuleList()
        for idx in range(n_CADenseBlocks):
            self.CADenseblock.append(nn.Sequential(denseblock[idx], calayer[idx]))
        # 1x1 conv to mix channels; "+1" accounts for the input itself
        self.CADenseblock.append(nn.Conv2d((n_CADenseBlocks + 1) * n_feats, n_feats, kernel_size=1))

    def forward(self, x):
        feat = [x]
        for idx in range(self.n_blocks):
            x = self.CADenseblock[idx](feat[-1])
            feat.append(x)
        x = torch.cat(feat[:], 1)  # Error: The tensor has a non-zero number of elements
        x = self.CADenseblock[-1](x)
        return x

I initialized "InvertibleCADensenet". When I pass an input through it, it fails with the following error:

    The tensor has a non-zero number of elements, but its data is not allocated yet. Caffe2 uses a lazy allocation, so you will need to call mutable_data() or raw_mutable_data() to actually allocate memory.
      File "/gdata/cold1/shengxuhan/codes/BasicSR/basicsr/archs/myrcan_utils/common.py", line 141, in forward
        x = torch.cat(feat[:], 1)
      File "/gdata/cold1/shengxuhan/codes/BasicSR/basicsr/archs/muse_gift_arch.py", line 65, in forward
        res = self.pre_spatial_deep[i](res)
      File "/gdata/cold1/shengxuhan/codes/BasicSR/basicsr/models/launet_model.py", line 94, in optimize_parameters
        self.output = self.net_g(self.lq)
      File "/home/ubuntu/shengxuhan/shengxuhan_cloud/codes/BasicSR/basicsr/train.py", line 174, in train_pipeline
        model.optimize_parameters(current_iter)
      File "/home/ubuntu/shengxuhan/shengxuhan_cloud/codes/BasicSR/basicsr/train.py", line 220, in <module>
        train_pipeline(root_path)
    RuntimeError: The tensor has a non-zero number of elements, but its data is not allocated yet. Caffe2 uses a lazy allocation, so you will need to call mutable_data() or raw_mutable_data() to actually allocate memory.
silvandeleemput commented 1 year ago

Could you provide the code snippet you used to create and run this?

silvandeleemput commented 1 year ago

I cannot solve this without more information, such as the code snippet used to create and run this model. I'll close this issue for now since I haven't received feedback for quite a while. If you can still provide the additional information, I can reopen the issue and have a look.