ClementPinard / SfmLearner-Pytorch

Pytorch version of SfmLearner from Tinghui Zhou et al.
MIT License

For prepare_train_data.py #80

Closed yxh1993 closed 4 years ago

yxh1993 commented 4 years ago

When I input `python3 data/prepare_train_data.py /workspace/SfmLearner-Pytorch-master/raw_data/ --dataset-format 'kitti' --dump-root /workspace/SfmLearner-Pytorch-master/raw_data/formatted/data/ --width 416 --height 128 --num-threads 4 --static-frames /workspace/SfmLearner-Pytorch-master/data/static_frames.txt --with-depth --with-pose`, it fails to run.

I want to know what went wrong.

ClementPinard commented 4 years ago

What message do you get? Can you copy-paste the console output?

yxh1993 commented 4 years ago

When I input the options [--static-frames /workspace/SfmLearner-Pytorch-master/data/static_frames.txt] [--with-depth] [--with-pose] with the [], it reports unrecognized arguments [--static-frames] [--with-depth] [--with-pose].

How should I enter the command line: with or without the []? Also, do --with-depth and --with-pose take specific values? What do they mean? I have noticed that there is no such operation for the Cityscapes dataset; is it necessary?

And I have another question.

If I accidentally quit during training, do I need to retrain from scratch? Are the trained models saved?

Thanks a lot. I'm looking forward to your reply.

ClementPinard commented 4 years ago

You need to input the command without the []. The square brackets in the usage message just mean the option is optional; --with-depth and --with-pose are flags that take no value.
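For example, the full command from your first message would be entered like this (paths exactly as in your message):

```shell
python3 data/prepare_train_data.py /workspace/SfmLearner-Pytorch-master/raw_data/ \
    --dataset-format 'kitti' \
    --dump-root /workspace/SfmLearner-Pytorch-master/raw_data/formatted/data/ \
    --width 416 --height 128 --num-threads 4 \
    --static-frames /workspace/SfmLearner-Pytorch-master/data/static_frames.txt \
    --with-depth --with-pose
```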

The KITTI dataset has pose and depth ground truth, while Cityscapes only has pose, hence the unavailability of the --with-depth option for Cityscapes.

For the training, the network is saved every epoch. However, I would suggest restarting from scratch, as the learning-rate schedule is part of the recipe: it decreases every n epochs.
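As a rough illustration of why restarting mid-schedule matters, here is a minimal sketch of an every-n-epochs ("step") learning-rate decay. This is not the repo's exact code; the step size and decay factor below are made-up values for illustration:

```python
def step_decay_lr(base_lr, epoch, step_size=100, gamma=0.5):
    """Return the learning rate for a given epoch under a step schedule.

    The rate is multiplied by `gamma` once every `step_size` epochs,
    so resuming from epoch 0 with a stale base rate would replay the
    high-LR phase the model has already moved past.
    """
    return base_lr * (gamma ** (epoch // step_size))

print(step_decay_lr(2e-4, 0))    # full base rate at the start
print(step_decay_lr(2e-4, 150))  # halved after the first step
```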

yxh1993 commented 4 years ago

Thanks a lot for your reply.

In addition, I have another question: can I change the epochs default from 200 to 50 and train four times?

ClementPinard commented 4 years ago

You can train for 200 epochs with the argument --epochs 200 (so that you won't have to change the code).
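In case it helps, a minimal argparse sketch of how such a flag overrides a script's default without editing the code (the --epochs name comes from the reply above; the parser itself is illustrative, not the repo's actual code):

```python
import argparse

# Illustrative parser: a single --epochs flag with a default of 200,
# mirroring the kind of argument the training script exposes.
parser = argparse.ArgumentParser()
parser.add_argument('--epochs', type=int, default=200,
                    help='number of total epochs to run')

# Passing the flag on the command line overrides the default,
# so the default=200 in the source never needs to change.
args = parser.parse_args(['--epochs', '50'])
print(args.epochs)  # 50
```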