[Open] LHM3762 opened this issue 3 years ago
You can use test.py for inference; in config.yaml you can point it at the pre-trained model you obtained after training.
@LHM3762 as @rginjapan already answered, you need to use test.py for testing. Assuming you have trained your model, say the whole deeplio network with all of its components, then for testing you need to provide a config file with the path of the pre-trained model, as below:
```yaml
### DeepLIO Network ##############################
deeplio:
  dropout: 0.25
  pretrained: true
  model-path: "/path/to/my/model.ckpt"
  lidar-feat-net:
    name: "lidar-feat-pointseg"
    pretrained: false
    model-path: ""
    requires-grad: true
  imu-feat-net:
    name: "imu-feat-rnn"
    pretrained: false
    model-path: ""
    requires-grad: true
  odom-feat-net:
    name: "odom-feat-rnn"
    pretrained: false
    model-path: ""
    requires-grad: true
  fusion-net:
    name: "fusion-layer-soft"
    requires-grad: true  # only soft-fusion has trainable params
```
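To illustrate how the `pretrained`/`model-path` pairs in this config are meant to interact, here is a minimal stdlib-only sketch (the traversal helper `nets_to_restore` is hypothetical, not DeepLIO's actual loading code): only entries with `pretrained: true` and a non-empty `model-path` would have weights restored.

```python
# Config fragment as a plain dict (mirrors the YAML above).
config = {
    "deeplio": {
        "pretrained": True,
        "model-path": "/path/to/my/model.ckpt",
        "lidar-feat-net": {"pretrained": False, "model-path": ""},
        "imu-feat-net": {"pretrained": False, "model-path": ""},
        "odom-feat-net": {"pretrained": False, "model-path": ""},
    }
}

def nets_to_restore(cfg):
    """Return (name, path) pairs for every entry that asks for pretrained weights."""
    found = []
    for name, sub in cfg.items():
        if isinstance(sub, dict):
            if sub.get("pretrained") and sub.get("model-path"):
                found.append((name, sub["model-path"]))
            found.extend(nets_to_restore(sub))  # recurse into sub-networks
    return found

print(nets_to_restore(config))  # [('deeplio', '/path/to/my/model.ckpt')]
```

With the config above, only the top-level `deeplio` checkpoint would be loaded; the sub-networks keep `pretrained: false` because their weights are already contained in the full-network checkpoint.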
good luck!
Dear author,
We are really interested in your project and have trained some models. The training phase worked well, but the testing phase does not seem to succeed: we hit a problem where T_local.shape differs from pred_f2f_t_b.shape and pred_f2f_w_b.shape, as illustrated in the following figure. We also noticed that the testing phase does not use the trained models. How can we use the trained models directly, or were the training and testing phases intentionally designed separately?
Thank you in advance for your attention.
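For anyone hitting the same mismatch, a quick way to localize it is to assert the shapes right before the predicted transforms are composed. This is a generic diagnostic sketch; the tensor names come from the question above, not from the repository code, and `SimpleNamespace` stands in for real tensors (anything with a `.shape` works):

```python
from types import SimpleNamespace

def check_same_shape(**tensors):
    """Raise a descriptive error if the named tensors disagree in shape."""
    shapes = {name: tuple(t.shape) for name, t in tensors.items()}
    if len(set(shapes.values())) > 1:
        raise ValueError(f"shape mismatch: {shapes}")

# Stand-in tensors; with real torch tensors the call is identical.
T_local = SimpleNamespace(shape=(10, 3))
pred_f2f_t_b = SimpleNamespace(shape=(10, 3))
check_same_shape(T_local=T_local, pred_f2f_t_b=pred_f2f_t_b)  # passes silently
```

Dropping such a check into the test loop makes the error surface with the offending names and shapes instead of a later, harder-to-trace failure.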