pjl1995 / CTracker


test error #7

Open jinyl777 opened 4 years ago

jinyl777 commented 4 years ago

When I run the test, I get:

result = self.forward(*input, **kwargs)
TypeError: forward() missing 1 required positional argument: 'inputs'

I debugged the test and found that the error occurs at model.py:335:

if last_feat is None: return torch.zeros(0), torch.zeros(0, 4), features

My compute environment is four 2080Ti GPUs with PyTorch 0.4.1, and I do not know why this happens.

chtx827 commented 3 years ago

I get the same error.

ZhangwenguangHikvision commented 3 years ago

I get the same error.

pjl1995 commented 3 years ago

I tested it and did not encounter these problems. You need to use the latest code and set up the environment according to the README. You could also check whether --dataset_path and --model_dir are set as required.

ZhangwenguangHikvision commented 3 years ago

One simple solution may be to just use one GPU: CUDA_VISIBLE_DEVICES=0 python train.py
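
For anyone who prefers to pin the GPU inside the script rather than on the command line, a minimal sketch of the same idea (the environment variable has to be set before CUDA is initialized, so placing it before the torch import is the safe choice):

```python
import os

# Restrict this process to GPU 0, equivalent to running with
# CUDA_VISIBLE_DEVICES=0 on the command line.
# Must be set before torch initializes CUDA.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch

print(torch.cuda.device_count())  # should now report 1 visible device
```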

chtx827 commented 3 years ago

Thanks, I solved it by using one GPU.

zhengjyu commented 3 years ago

Could you tell me how much memory your GPU has? My 2080Ti only has 11 GB, so it runs out of memory. But when I set batch_size=4 and num_workers=16, issue #13 occurs. Have you ever encountered this problem?

chtx827 commented 3 years ago

My GPU is the same as yours. Maybe you can switch to PyTorch 1.4.0.
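
Since the two questions here are the PyTorch version and how much memory the card actually has, a quick sketch for checking both using standard torch APIs:

```python
import torch

# Print the installed PyTorch version and the total memory of GPU 0,
# useful when comparing against the ~11 GB of a 2080Ti.
print("PyTorch:", torch.__version__)
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(props.name, f"{props.total_memory / 1024**3:.1f} GB")
```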

zhengjyu commented 3 years ago

OK, thank you very much.

jinyl777 commented 3 years ago

@ZhangwenguangHikvision If I train on one GPU and then test, will that solve this problem?

jinyl777 commented 3 years ago

@pjl1995 I trained for 100 epochs and the final loss is 0.4. I tested on the MOT16 training set and got a MOTA of 65.2. I think this is lower than in your paper; is it that the code cannot reproduce the results?

pjl1995 commented 3 years ago

You trained on the MOT17 training set, so you should test this model on the MOT17 test set, instead of the MOT16 training set.

LiangXiaoguo commented 2 years ago

Thank you very much for your excellent work. I would like to know whether only the images within each sequence of the MOT17 dataset are kept for training.

pjl1995 commented 2 years ago

Yes.

JuntongMeng commented 2 years ago

@pjl1995 I trained for 100 epochs and the final loss is 0.4. I tested on the MOT16 training set and got a MOTA of 65.2. I think this is lower than in your paper; is it that the code cannot reproduce the results?

Hi, are you using a pretrained model in training?