AbnerHqC / GaitSet

A flexible, effective and fast cross-view gait recognition network

Can GaitSet be deployed to mobile? I ran into many problems trying to convert it to ONNX / TorchScript. I would appreciate any suggestions. #130

Open Z-demo opened 4 years ago

Z-demo commented 4 years ago

Alternatively, does anyone know of a gait recognition network that is relatively practical to deploy on mobile?
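
For anyone hitting the same wall, one route that sidesteps most script-mode restrictions is to wrap the network so it returns a single tensor, then trace it and export to ONNX. The sketch below is untested against this repo; the import path, checkpoint file name, hidden_dim value, and input shape are all assumptions.

    import torch
    import torch.nn as nn

    from model.network.gaitset import SetNet  # assumed module path


    class ExportWrapper(nn.Module):
        """Return only the feature tensor so tracing/ONNX export sees a single output."""

        def __init__(self, net: nn.Module):
            super().__init__()
            self.net = net

        def forward(self, silho: torch.Tensor) -> torch.Tensor:
            feature, _ = self.net(silho)  # drop the None second element of (feature, None)
            return feature


    net = SetNet(hidden_dim=256)
    state = torch.load("gaitset_ckpt.ptm", map_location="cpu")  # checkpoint name assumed
    net.load_state_dict(state, strict=False)  # strip any 'module.' prefixes first if needed
    net.eval()
    wrapper = ExportWrapper(net).eval()

    dummy = torch.randn(1, 30, 64, 44)  # (batch, frames, H, W) silhouettes, shape assumed

    # Trace-based TorchScript avoids many script-mode restrictions (but freezes control flow).
    traced = torch.jit.trace(wrapper, dummy)
    traced.save("gaitset_traced.pt")

    # ONNX export from the same dummy input; dynamic_axes lets batch/frame counts vary.
    torch.onnx.export(
        wrapper, dummy, "gaitset.onnx",
        input_names=["silhouettes"], output_names=["feature"],
        dynamic_axes={"silhouettes": {0: "batch", 1: "frames"}},
        opset_version=11,
    )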

Z-demo commented 3 years ago

D:\ProgramData\Anaconda3\envs\pt_new\lib\site-packages\torch\nn\modules\container.py:434: UserWarning: Setting attributes on ParameterList is not supported.
  warnings.warn("Setting attributes on ParameterList is not supported.")
Traceback (most recent call last):
  File "E:/Projects/kivy_projects/kive_test1/main.py", line 36, in <module>
    sm = torch.jit.script(my_module)
  File "D:\ProgramData\Anaconda3\envs\pt_new\lib\site-packages\torch\jit\_script.py", line 898, in script
    obj, torch.jit._recursive.infer_methods_to_compile
  File "D:\ProgramData\Anaconda3\envs\pt_new\lib\site-packages\torch\jit\_recursive.py", line 352, in create_script_module
    return create_script_module_impl(nn_module, concrete_type, stubs_fn)
  File "D:\ProgramData\Anaconda3\envs\pt_new\lib\site-packages\torch\jit\_recursive.py", line 410, in create_script_module_impl
    create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs)
  File "D:\ProgramData\Anaconda3\envs\pt_new\lib\site-packages\torch\jit\_recursive.py", line 304, in create_methods_and_properties_from_stubs
    concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults)
RuntimeError: Only ModuleList, Sequential, and ModuleDict modules are subscriptable:
  File "E:\Projects\kivy_projects\kive_test1\network\gaitset.py", line 145

    C = torch.matmul(A, B)
    # print(C.shape)
    feature = torch.matmul(feature, self.fc_bin[0])
                                    ~~~~~~~~~~~~~~ <--- HERE
    feature = feature.permute(1, 0, 2).contiguous()
    # feature.shape (batch_size, scale, d)
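
The RuntimeError comes from indexing self.fc_bin, which upstream SetNet defines as a single-element nn.ParameterList; TorchScript in this PyTorch version cannot subscript a ParameterList. One workaround, assuming fc_bin is built roughly like upstream, is to register the matrix as a plain nn.Parameter and drop the [0]. A minimal, self-contained sketch of the pattern (the shapes are only illustrative):

    import torch
    import torch.nn as nn


    class FcBinDemo(nn.Module):
        """Toy module showing the ParameterList -> plain Parameter swap (shapes illustrative)."""

        def __init__(self, hidden_dim: int = 256):
            super().__init__()
            # Upstream-style definition that torch.jit.script rejects once it is indexed:
            #   self.fc_bin = nn.ParameterList([nn.Parameter(torch.zeros(62, 128, hidden_dim))])
            # Script-friendly replacement: register the single matrix directly.
            self.fc_bin = nn.Parameter(
                nn.init.xavier_uniform_(torch.zeros(62, 128, hidden_dim)))

        def forward(self, feature: torch.Tensor) -> torch.Tensor:
            # feature: (bins, batch, 128); no ParameterList subscripting needed.
            feature = torch.matmul(feature, self.fc_bin)
            return feature.permute(1, 0, 2).contiguous()


    scripted = torch.jit.script(FcBinDemo())          # compiles without the subscript error
    print(scripted(torch.randn(62, 4, 128)).shape)    # torch.Size([4, 62, 256])

Note that replacing the ParameterList changes the state_dict key from fc_bin.0 to fc_bin, so an existing checkpoint has to be loaded with the key renamed (or with strict=False plus a manual copy).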

Source code:

@torch.jit.script_method
    def forward(self, silho, batch_frame: int = 50):
        # n: batch_size, s: frame_num, k: keypoints_num, c: channel
        if batch_frame is not None:

            # Unsolved problem 1: RuntimeError:
            # Tried to access nonexistent attribute or method 'numpy' of type 'Tensor'.:
            # batch_frame = batch_frame[0].data.cpu().numpy().tolist()
            # batch_frame = batch_frame.numpy().tolist()
            # batch_frame = list()
            # _ = len(batch_frame)
            # for i in range(len(batch_frame)):
            #     if batch_frame[-(i + 1)] != 0:
            #         break
            #     else:
            #         _ -= 1
            # batch_frame = batch_frame[:_]
            # frame_sum = np.sum(batch_frame)
            frame_sum = batch_frame
            # if frame_sum < silho.size(1):
            #     silho = silho[:, :frame_sum, :, :]
            # self.batch_frame = [0] + np.cumsum(batch_frame).tolist()
            self.batch_frame.append(frame_sum)
        n = silho.size(0)
        x = silho.unsqueeze(2)
        del silho

        x = self.set_layer1(x)
        x = self.set_layer2(x)
        gl = self.gl_layer1(self.frame_max(x)[0])
        gl = self.gl_layer2(gl)
        gl = self.gl_pooling(gl)

        x = self.set_layer3(x)
        x = self.set_layer4(x)
        gl = self.gl_layer3(gl + self.frame_max(x)[0])
        gl = self.gl_layer4(gl)

        x = self.set_layer5(x)
        x = self.set_layer6(x)
        x = self.frame_max(x)[0]
        gl = gl + x
        # Problem 3: torch.jit.script rejects `feature = list()`, so seed the list with a dummy tensor
        feature = [torch.tensor(1)]
        # del (feature[0])
        n, c, h, w = gl.size()
        for num_bin in self.bin_num:
            z = x.view(n, c, num_bin, -1)
            z = z.mean(3) + z.max(3)[0]
            feature.append(z)
            z = gl.view(n, c, num_bin, -1)
            z = z.mean(3) + z.max(3)[0]
            feature.append(z)
        del(feature[0])
        feature = torch.cat(feature, 2).permute(2, 0, 1).contiguous()

        # feature = feature.matmul(self.fc_bin[0])
        # error line: gaitset.py line 145 in the traceback above (ParameterList subscripting)
        feature = torch.matmul(feature, self.fc_bin[0])
        feature = feature.permute(1, 0, 2).contiguous()
        # feature.shape (batch_size, scale, d)
        # h0 = torch.randn(self.lstm_num_layers, feature.shape[0], self.lstm_hidden_size)
        # c0 = torch.randn(self.lstm_num_layers, feature.shape[0], self.lstm_hidden_size)
        # feature, (hn, cn) = self.lstm_layer(feature, (h0, c0))
        return feature, None
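
For the two remaining commented-out problems above, a script-friendly rewrite might look like the sketch below: annotate the empty list instead of seeding it with a dummy tensor, and compute the frame count with tensor ops instead of .numpy(). The function names bin_pool and trim_silho are made up, this assumes batch_frame arrives as a tensor as in the upstream repo, and it has not been tested against this code.

    from typing import List, Optional

    import torch


    @torch.jit.script
    def bin_pool(x: torch.Tensor, gl: torch.Tensor, bin_num: List[int]) -> torch.Tensor:
        # Problem 3: an annotated empty list compiles under script mode.
        n, c, h, w = gl.size()
        feature: List[torch.Tensor] = []
        for num_bin in bin_num:
            z = x.view(n, c, num_bin, -1)
            feature.append(z.mean(3) + z.max(3)[0])
            z = gl.view(n, c, num_bin, -1)
            feature.append(z.mean(3) + z.max(3)[0])
        return torch.cat(feature, 2).permute(2, 0, 1).contiguous()


    @torch.jit.script
    def trim_silho(silho: torch.Tensor, batch_frame: Optional[torch.Tensor]) -> torch.Tensor:
        # Problem 1: drop the .numpy()/.tolist() round trip; int(tensor) is script-friendly.
        if batch_frame is not None:
            frame_sum = int(batch_frame.sum())
            if frame_sum < silho.size(1):
                silho = silho[:, :frame_sum, :, :]
        return silho


    x = torch.randn(2, 128, 16, 11)
    gl = torch.randn(2, 128, 16, 11)
    print(bin_pool(x, gl, [1, 2, 4, 8, 16]).shape)    # torch.Size([62, 2, 128])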