szagoruyko / pytorchviz

A small package to create visualizations of PyTorch execution graphs
MIT License

AttributeError: 'list' object has no attribute 'grad_fn' #31

Open · caffelearn opened this issue 5 years ago

caffelearn commented 5 years ago

      File "C:\Users\admin\AppData\Local\conda\conda\envs\pytorch\lib\site-packages\torchviz\dot.py", line 38, in make_dot
        output_nodes = (var.grad_fn,) if not isinstance(var, tuple) else tuple(v.grad_fn for v in var)
    AttributeError: 'list' object has no attribute 'grad_fn'

Code used:

    x = torch.randn(1, 3, 800, 800)
    y = self.model.cpu()(x)
    vis_graph = make_dot(y, params=dict(list(self.model.named_parameters())))
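For context on why a list fails here: make_dot special-cases tuples only, so any other container falls through to the single-output branch and .grad_fn is looked up on the container itself. A minimal stand-alone sketch of that branch (pure Python, no torchviz needed; the SimpleNamespace objects are stand-ins for tensors that carry a grad_fn):

```python
from types import SimpleNamespace

# Stand-ins for model outputs: real tensors carry a .grad_fn attribute
out0 = SimpleNamespace(grad_fn="AddBackward0")
out1 = SimpleNamespace(grad_fn="MulBackward0")

# Same branching as torchviz/dot.py line 38: only tuples are special-cased,
# so a list hits var.grad_fn directly and raises AttributeError
def output_nodes(var):
    return (var.grad_fn,) if not isinstance(var, tuple) else tuple(v.grad_fn for v in var)

print(output_nodes((out0, out1)))   # tuple of outputs -> works

try:
    output_nodes([out0, out1])      # list of outputs -> fails
except AttributeError as exc:
    print("AttributeError:", exc)   # 'list' object has no attribute 'grad_fn'
```

This reproduces the exact error in the traceback above without any model involved.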

szagoruyko commented 5 years ago

does your model output a list?

H-YunHui commented 4 years ago

@caffelearn @szagoruyko I also encountered this problem. Did you solve it?

Light-- commented 4 years ago

> does your model output a list?

I also hit this problem, and yes, my model outputs a list.

Could you help me? @szagoruyko

class Backbone(Module):
    def __init__(self, num_layers, drop_ratio, mode='ir'):
        super(Backbone, self).__init__()
        assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152'
        assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se'
        blocks = get_blocks(num_layers)
        if mode == 'ir':
            unit_module = bottleneck_IR
        elif mode == 'ir_se':
            unit_module = bottleneck_IR_SE
        self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False), 
                                      BatchNorm2d(64), 
                                      PReLU(64))
        self.output_layer = Sequential(BatchNorm2d(512), 
                                       Dropout(drop_ratio),
                                       Flatten(),
                                       Linear(512 * 7 * 7, 512),
                                       BatchNorm1d(512))
        modules = []
        for block in blocks:
            for bottleneck in block:
                modules.append(
                    unit_module(bottleneck.in_channel,
                                bottleneck.depth,
                                bottleneck.stride))
        self.body = Sequential(*modules)

        # for MTL: build 40 independent towers. Note: a comprehension over a
        # single pre-built module ([self.tower for _ in range(40)]) would put
        # the same module object in the list 40 times, so all towers would
        # share weights; constructing a new Sequential per tower avoids that.
        self.towers = nn.ModuleList([
            nn.Sequential(
                nn.Dropout(),
                nn.Linear(512, 32),
                nn.ReLU(),
                nn.Linear(32, 2),
            )
            for _ in range(40)
        ])

        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)

    def forward(self, x):
        x = self.input_layer(x)
        x = self.body(x)
        h_shared = self.output_layer(x)
        # for MTL
        out = [tower(h_shared) for tower in self.towers]
        return out

How did you solve this? @caffelearn @H-YunHui

ghost commented 4 years ago

@Light-- I was able to resolve this by passing a tuple containing the output list's elements.

For example, if your model returns 3 outputs as elements of a list called 'y', the make_dot call would look like this:

    vis_graph = make_dot((y[0], y[1], y[2]), params=dict(list(self.model.named_parameters())))

Light-- commented 4 years ago

@chesharma

    vis_graph = make_dot((y[0], y[1], y[2]), params=dict(list(self.model.named_parameters())))

Genius, bro! 👍 👍 👍 How did you notice this problem and figure it out?

    # the output of my model is a list of length 40; I used this and it worked:
    vis_graph = make_dot(tuple(y[i] for i in range(40)))

thanks @chesharma
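A closing note (editor's sketch, not part of the original thread): for a list output of any length, tuple(y) is the generic form of the indexed conversion used above, so you don't need to hard-code the length:

```python
# tuple(y) converts the whole list in one step, equivalent to
# tuple(y[i] for i in range(40)) for a 40-element list, and it
# satisfies make_dot's isinstance(var, tuple) check.
y = ["a", "b", "c"]   # stand-in for a model's list of output tensors

assert tuple(y) == tuple(y[i] for i in range(len(y)))
assert isinstance(tuple(y), tuple) and not isinstance(y, tuple)
```

So `vis_graph = make_dot(tuple(y), params=dict(model.named_parameters()))` would be the length-independent form of the workaround.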