weiyutao886 opened this issue 2 years ago
And I followed this: Yes, this is correct; the corresponding subject splits are [ ([1, 2, 3, 4, 5], [6, 7, 8, 9, 10]), ([1, 3, 5, 7, 9], [2, 4, 6, 8, 10]), ([1, 4, 7, 10, 3], [2, 5, 6, 8, 9]), ([1, 5, 9, 3, 7], [2, 4, 6, 8, 10]), ([1, 6, 2, 7, 3], [4, 5, 8, 9, 10]) ]. We just found a bug: the second group is the same as the fourth group (because we use (subject_idx + offset) % 10 to generate the splits). The corresponding results of PointLSTM-late are [93.13, 87.59, 97.07, 89.08, 92.78, 94.36, 97.07, 89.08, 91.11, 91.64], and the corrected accuracy is 92.10 ± 2.78.
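The duplicate split can be detected mechanically, and the corrected 92.10 ± 2.78 recovered by averaging over the unique splits only. A small sketch; the split list and per-split accuracies are copied from the reply above, and the interpretation that the two accuracies of the duplicated split were dropped is my assumption:

```python
# Subject splits and per-split PointLSTM-late accuracies quoted above
# (each split contributes two numbers: one per train/test direction).
splits = [([1, 2, 3, 4, 5], [6, 7, 8, 9, 10]),
          ([1, 3, 5, 7, 9], [2, 4, 6, 8, 10]),
          ([1, 4, 7, 10, 3], [2, 5, 6, 8, 9]),
          ([1, 5, 9, 3, 7], [2, 4, 6, 8, 10]),
          ([1, 6, 2, 7, 3], [4, 5, 8, 9, 10])]
accs = [93.13, 87.59, 97.07, 89.08, 92.78, 94.36, 97.07, 89.08, 91.11, 91.64]

# Detect duplicated splits by comparing the train halves as unordered sets.
keys = [frozenset(train) for train, test in splits]
dup = [i for i, k in enumerate(keys) if k in keys[:i]]
print("duplicated split indices:", dup)  # split 3 repeats split 1 (0-based)

# Drop the accuracies contributed by the duplicated split, then recompute
# the mean and population standard deviation.
unique_accs = [a for i, a in enumerate(accs) if i // 2 not in dup]
mean = sum(unique_accs) / len(unique_accs)
std = (sum((a - mean) ** 2 for a in unique_accs) / len(unique_accs)) ** 0.5
print(mean, std)  # close to the corrected 92.10 ± 2.78
```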
I use the same strategy as you, but I can't reach the accuracy you reported.
Can you provide more details about the experiments (e.g., the experimental log)? What kind of performance can you achieve?
Thank you for your reply. I am currently using the subject split ([1, 2, 3, 4, 5], [6, 7, 8, 9, 10]), whose accuracies should be the 93.13 and 87.59 you mentioned. I obtained the results below after adjusting the parameters batch_size to 8 and framesize to 24, as follows.
{'work_dir': 'result17', 'config': 'pointlstm.yaml', 'device': '0', 'phase': 'train', 'random_fix': True, 'random_seed': 0, 'save_interval': 5, 'eval_inte

Train subjects [1, 2, 3, 4, 5], test [6, 7, 8, 9, 10], results:

[ Fri Apr 1 23:36:10 2022 ] Epoch 135, Test, Evaluation: prec1 90.5724, prec5 96.6330
[ Fri Apr 1 23:38:48 2022 ] Epoch 140, Test, Evaluation: prec1 89.5623, prec5 96.6330
[ Fri Apr 1 23:41:25 2022 ] Epoch 145, Test, Evaluation: prec1 90.2357, prec5 96.6330
[ Fri Apr 1 23:44:03 2022 ] Epoch 150, Test, Evaluation: prec1 88.2155, prec5 96.6330
[ Fri Apr 1 23:46:39 2022 ] Epoch 155, Test, Evaluation: prec1 88.8889, prec5 96.6330
[ Fri Apr 1 23:49:15 2022 ] Epoch 160, Test, Evaluation: prec1 90.9091, prec5 96.6330
[ Fri Apr 1 23:51:51 2022 ] Epoch 165, Test, Evaluation: prec1 89.8990, prec5 96.6330
[ Fri Apr 1 23:54:29 2022 ] Epoch 170, Test, Evaluation: prec1 89.5623, prec5 96.6330
Best result: [ Fri Apr 1 23:49:15 2022 ] Epoch 160, Test, Evaluation: prec1 90.9091, prec5 96.6330. Your result is 93.13.
Train subjects [6, 7, 8, 9, 10], test [1, 2, 3, 4, 5], results:

[ Fri Apr 1 10:25:15 2022 ] Epoch 145, Test, Evaluation: prec1 85.1852, prec5 98.5185
[ Fri Apr 1 10:29:01 2022 ] Epoch 150, Test, Evaluation: prec1 87.0370, prec5 97.7778
[ Fri Apr 1 10:32:47 2022 ] Epoch 155, Test, Evaluation: prec1 84.4444, prec5 98.8889
[ Fri Apr 1 10:36:33 2022 ] Epoch 160, Test, Evaluation: prec1 87.0370, prec5 97.7778
[ Fri Apr 1 10:40:18 2022 ] Epoch 165, Test, Evaluation: prec1 87.7778, prec5 97.4074
[ Fri Apr 1 10:44:04 2022 ] Epoch 170, Test, Evaluation: prec1 86.6667, prec5 98.1481
[ Fri Apr 1 10:47:50 2022 ] Epoch 175, Test, Evaluation: prec1 87.0370, prec5 98.1481
Best result: [ Fri Apr 1 10:40:18 2022 ] Epoch 165, Test, Evaluation: prec1 87.7778, prec5 97.4074. Your result is 87.59.
My model: PointLSTM-late
```python
in_dims = fea2.shape[1] * 2 - 4
pts_num //= self.downsample[1]
# output = self.lstm(fea2.permute(0, 2, 1, 3))
# fea3 = output[0][0].squeeze(-1).permute(0, 2, 1, 3)
ret_group_array3 = self.group.st_group_points(fea2, 3, [0, 1, 2], self.knn[2], 3)
ret_array3, inputs, ind = self.select_ind(ret_group_array3, inputs,
                                          batchsize, in_dims, timestep, pts_num)
fea3 = self.pool3(self.stage3(ret_array3)).view(batchsize, -1, timestep, pts_num)
# fea3 = fea3.gather(-1, ind.unsqueeze(1).expand(-1, fea3.shape[1], -1, -1))
fea3 = torch.cat((inputs, fea3), dim=1)
print('fea3===', fea3.shape)

# stage 4: inter-frame, late
in_dims = fea3.shape[1] * 2 - 4
pts_num //= self.downsample[2]
output = self.lstm(fea3.permute(0, 2, 1, 3))
fea4 = output[0][0].squeeze(-1).permute(0, 2, 1, 3)
print('lstm333=', fea4.shape)
ret_group_array4 = self.group.st_group_points(fea3, 3, [0, 1, 2], self.knn[3], 3)
ret_array4, inputs, ind = self.select_ind(ret_group_array4, inputs,
                                          batchsize, in_dims, timestep, pts_num)
# fea4 = self.pool4(self.stage4(ret_array4)).view(batchsize, -1, timestep, pts_num)
fea4 = fea4.gather(-1, ind.unsqueeze(1).expand(-1, fea4.shape[1], -1, -1))
print('outfea4=', fea4.shape)
```
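The `fea4.gather(-1, ...)` call at the end selects, for every batch, channel, and frame, the features of the points picked by `select_ind`. NumPy's `take_along_axis` is the analogous operation; a toy sketch with made-up shapes, not the repository's tensors:

```python
import numpy as np

# Toy tensor shaped like fea4: (batch, channels, timestep, points)
batch, channels, timestep, pts = 2, 4, 3, 8
fea = np.arange(batch * channels * timestep * pts, dtype=float).reshape(
    batch, channels, timestep, pts)

# Indices shaped (batch, timestep, k): the k points kept per frame.
k = 2
ind = np.tile(np.array([1, 5]), (batch, timestep, 1))

# Equivalent of: fea.gather(-1, ind.unsqueeze(1).expand(-1, channels, -1, -1))
ind_expanded = np.broadcast_to(ind[:, None, :, :], (batch, channels, timestep, k))
selected = np.take_along_axis(fea, ind_expanded, axis=-1)

print(selected.shape)  # (2, 4, 3, 2): the point dimension shrinks from 8 to k=2
```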
Hi, it has been a long time since this experiment. I remember that we selected this dataset to show generalization and did not run an ablation study on it, so I do not know how the number of frames affects performance here. I found the relevant logs, where you can check the details. Another difference is that we set offsets=True for the MSR Action 3D experiments, since the motion information helps recognition.
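On the offsets=True point: as I understand it, the idea is to feed per-point motion vectors (displacement toward a matched point in the previous frame) as extra input channels. A rough nearest-neighbor sketch of that idea, not the repository's actual implementation:

```python
import numpy as np

def frame_offsets(prev_pts, curr_pts):
    """For each point in curr_pts, the displacement from its nearest
    neighbor in prev_pts: a crude per-point motion vector, shape (N, 3)."""
    # Pairwise squared distances between current and previous points.
    d2 = ((curr_pts[:, None, :] - prev_pts[None, :, :]) ** 2).sum(-1)
    nearest = prev_pts[d2.argmin(axis=1)]
    return curr_pts - nearest

# A small grid of points, then the whole cloud translated by 0.1 in x.
prev_pts = np.array([[i, j, 0.0] for i in range(4) for j in range(4)], dtype=float)
curr_pts = prev_pts + np.array([0.1, 0.0, 0.0])

offsets = frame_offsets(prev_pts, curr_pts)
print(offsets[0])  # [0.1, 0.0, 0.0]: the motion, usable as extra channels
```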
For train subjects [1, 2, 3, 4, 5], test [6, 7, 8, 9, 10]: train.txt log.txt
For train subjects [6, 7, 8, 9, 10], test [1, 2, 3, 4, 5]: train.txt log.txt
Thank you, I will try it again.
Hello, I ran the experiment again with your parameters, but the result is still not ideal. I only ran the split train subjects [1, 2, 3, 4, 5], test [6, 7, 8, 9, 10]:

[ Wed Apr 6 21:59:49 2022 ] Epoch 170, Test, Evaluation: prec1 87.9725, prec5 98.2818

My best result is 87.97; your best result is about 93. My parameters are as follows:

[ Thu Apr 7 22:00:00 2022 ] Parameters: {'work_dir': 'result19', 'config': 'pointlstm.yaml', 'device': '0', 'phase': 'train', 'random_fix': True, 'random_seed': 0, 'save_interval': 5, 'eval_interval': 5, 'print_log': True, 'log_interval': 50, 'dataloader': 'data load1.SHRECLoader', 'num_worker': 0, 'framesize': 32, 'pts_size': 128, 'train_loader_args': {'phase': 'train', 'framerate': 32}, 'test_loader_args': {'phase': 'test', 'framerate': 32}, 'valid_loader_args': {}, 'model': 'models.motion10.Motion', 'model_args': {'pts_size': 128, 'num_classes': 20, 'knn': [16, 24, 48, 12], 'offsets': True, 'topk': 16}, 'weights': None, 'ignore_weights': [], 'batch_size': 8, 'test_batch_size': 8, 'optimizer_args': {'optimizer': 'Adam', 'base_lr': 0.0001, 'step': [100, 160, 180], 'weight_decay': 0.005, 'start_epoch': 0, 'nesterov': False}, 'num_epoch': 300}
I wonder whether the problem is in my data processing or in my model. What do you think?
Sorry for the late reply. You can visualize the point cloud sequence at different stages; it should look similar to Figure 4. If you'd like, I can send you the source data processing code via email, and you can create a PR after you reproduce the results.
This is my dataloader and data processing code; you are welcome to point out my mistakes. I would be grateful if I could use your code for reference. My email is [935628178@qq.com](mailto:935628178@qq.com). Attachments: msraction_process.zip, data load1.zip
From the training log files you sent, I found that the accuracy you reported is not the highest one in the logs. What criterion do you use to choose the accuracy for each subject split?
[ Thu Mar 5 23:16:02 2020 ] Epoch 110, Test Evaluation: prec1 93.4708, prec5 98.6254
[ Thu Mar 5 23:18:14 2020 ] Epoch 115, Test Evaluation: prec1 92.7835, prec5 98.6254
[ Thu Mar 5 23:20:27 2020 ] Epoch 120, Test Evaluation: prec1 92.0962, prec5 98.6254
[ Thu Mar 5 23:22:39 2020 ] Epoch 125, Test Evaluation: prec1 93.1272, prec5 98.6254
[ Thu Mar 5 23:24:52 2020 ] Epoch 130, Test Evaluation: prec1 93.1272, prec5 98.6254
[ Thu Mar 5 23:27:04 2020 ] Epoch 135, Test Evaluation: prec1 93.4708, prec5 98.6254

The highest accuracy here is 93.47, but the accuracy you reported is 93.13.
[ Thu Mar 5 23:29:19 2020 ] Epoch 130, Test Evaluation: prec1 87.5940, prec5 99.6241
[ Thu Mar 5 23:31:41 2020 ] Epoch 135, Test Evaluation: prec1 87.5940, prec5 99.6241
[ Thu Mar 5 23:34:04 2020 ] Epoch 140, Test Evaluation: prec1 88.3459, prec5 99.6241
[ Thu Mar 5 23:36:27 2020 ] Epoch 145, Test Evaluation: prec1 88.3459, prec5 99.2481
[ Thu Mar 5 23:38:50 2020 ] Epoch 150, Test Evaluation: prec1 87.9699, prec5 99.6241
[ Thu Mar 5 23:41:13 2020 ] Epoch 155, Test Evaluation: prec1 89.0977, prec5 99.6241
[ Thu Mar 5 23:43:35 2020 ] Epoch 160, Test Evaluation: prec1 87.9699, prec5 98.8722
[ Thu Mar 5 23:45:58 2020 ] Epoch 165, Test Evaluation: prec1 87.2180, prec5 98.8722
[ Thu Mar 5 23:48:21 2020 ] Epoch 170, Test Evaluation: prec1 87.2180, prec5 99.2481

The highest accuracy here is 89.10, while you reported 87.59. Sorry to trouble you again.
The accuracy of the last epoch.
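In other words, the reported number is the prec1 of the final evaluated epoch, not the maximum over epochs. A small parser illustrating the difference; the log lines below are hypothetical (same format as the logs in this thread), and the regex is my own:

```python
import re

# Hypothetical tail of a training log in the format used above.
log = """
[ ts ] Epoch 190, Test Evaluation: prec1 93.47, prec5 98.63
[ ts ] Epoch 195, Test Evaluation: prec1 92.78, prec5 98.63
[ ts ] Epoch 200, Test Evaluation: prec1 93.13, prec5 98.63
"""

# "Test, Evaluation" (2022 logs) and "Test Evaluation" (2020 logs) both match.
pattern = re.compile(r"Epoch (\d+), Test,? Evaluation: prec1 ([\d.]+)")
entries = [(int(e), float(p)) for e, p in pattern.findall(log)]

best_epoch, best_prec1 = max(entries, key=lambda t: t[1])
last_epoch, last_prec1 = entries[-1]
print("best:", best_epoch, best_prec1)  # best: 190 93.47
print("last:", last_epoch, last_prec1)  # last: 200 93.13 <- the reported value
```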
Hello, I used your code to run experiments on the MSR Action dataset, but the accuracy did not reach the level reported in your paper. Why? My parameters are as follows:

```yaml
num_epoch: 300
work_dir: ./work_dir/baseline/
batch_size: 8
test_batch_size: 8
num_worker: 10
device: 1  # empty for cpu
log_interval: 50
eval_interval: 5
save_interval: 5
weights: ./work_dir/pointlstm/epoch200_model.pt
framesize: &framesize 32
pts_size: &frame_pts_size 128
optimizer_args:
  optimizer: Adam
  base_lr: 0.0001
  step: [100, 160, 180]
  weight_decay: 0.005
  start_epoch: 0
  nesterov: False
```