yzcjtr / GeoNet

Code for GeoNet: Unsupervised Learning of Dense Depth, Optical Flow and Camera Pose (CVPR 2018)
MIT License

A problem with training #26

Closed. FatEvilCat closed this issue 5 years ago.

FatEvilCat commented 5 years ago

Sorry, I am a beginner with TensorFlow. I prepared the data and then ran the following in a terminal:

$ python data/prepare_train_data.py --dataset_dir=/home/panc/dataset/ --dataset_name=kitti_odom --dump_root=/home/panc/formatted/data/ --seq_length=5 --img_height=128 --img_width=416 --num_threads=16 --remove_static

The first step was successful, but when I started training, an error occurred. I have no idea how to solve it.

$ python geonet_main.py --mode=train_rigid --dataset_dir=/home/panc/formatted/data/ --checkpoint_dir=/home/panc/save/ckpts/ --learning_rate=0.0002 --seq_length=5 --batch_size=4 --max_steps=350000

{'add_dispnet': True, 'add_flownet': False, 'add_posenet': True, 'alpha_recon_image': 0.85, 'batch_size': 4, 'checkpoint_dir': '/home/panc/save/ckpts/', 'dataset_dir': '/home/panc/formatted/data/', 'depth_test_split': 'eigen', 'disp_smooth_weight': 0.5, 'dispnet_encoder': 'resnet50', 'flow_consistency_alpha': 3.0, 'flow_consistency_beta': 0.05, 'flow_consistency_weight': 0.2, 'flow_smooth_weight': 0.2, 'flow_warp_weight': 1.0, 'flownet_type': 'residual', 'img_height': 128, 'img_width': 416, 'init_ckpt_file': None, 'learning_rate': 0.0002, 'max_steps': 350000, 'max_to_keep': 20, 'mode': 'train_rigid', 'num_scales': 4, 'num_source': 4, 'num_threads': 32, 'output_dir': None, 'pose_test_seq': 9, 'rigid_warp_weight': 1.0, 'save_ckpt_freq': 5000, 'scale_normalize': False, 'seq_length': 5}

Traceback (most recent call last):
  File "geonet_main.py", line 166, in <module>
    tf.app.run()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "geonet_main.py", line 157, in main
    train()
  File "geonet_main.py", line 72, in train
    tgt_image, src_image_stack, intrinsics = loader.load_train_batch()
  File "/home/panc/GeoNet/data_loader.py", line 21, in load_train_batch
    file_list['image_file_list'], shuffle=False)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/input.py", line 217, in string_input_producer
    raise ValueError(not_null_err)
ValueError: string_input_producer requires a non-null input tensor

Thanks. If you have any good suggestions, you can reply to me in Chinese.
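For context: in TensorFlow 1.x, this exact ValueError is raised before the graph even runs when the file list handed to tf.train.string_input_producer is an empty Python list, which means the data loader found zero training files. A minimal sketch (assuming TensorFlow 1.x) that reproduces the message:

import tensorflow as tf  # TensorFlow 1.x

# An empty file list is rejected up front with the same error seen above.
try:
    tf.train.string_input_producer([])
except ValueError as e:
    print(e)  # "string_input_producer requires a non-null input tensor"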

yzcjtr commented 5 years ago

It seems the data loader ran into a problem. Can you check whether your formatted data is correct and the path is set correctly? Also, which TensorFlow version are you using?
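One quick way to check is to inspect the dump directory directly. This is a hypothetical sketch, assuming the SfMLearner-style layout that prepare_train_data.py writes: a train.txt in the dump root whose lines are "subfolder frame_id" pairs, with frames saved as .jpg (adjust the extension if your dump differs):

import os

dump_root = '/home/panc/formatted/data/'  # same path passed to --dataset_dir
list_file = os.path.join(dump_root, 'train.txt')

if not os.path.isfile(list_file):
    # No file list at all: prepare_train_data.py did not finish,
    # or --dataset_dir points at the wrong directory.
    print('train.txt is missing under %s' % dump_root)
else:
    with open(list_file) as f:
        entries = [line.split() for line in f if line.strip()]
    print('%d training samples listed' % len(entries))
    # Assuming .jpg dumps, as in the SfMLearner-style preprocessing.
    missing = [e for e in entries
               if not os.path.isfile(os.path.join(dump_root, e[0], e[1] + '.jpg'))]
    print('%d listed images are missing on disk' % len(missing))

An empty or missing train.txt would produce exactly the null-input-tensor error above.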

xiaomeng9532 commented 5 years ago

> It seems the data loader ran into a problem. Can you check whether your formatted data is correct and the path is set correctly? Also, which TensorFlow version are you using?

{'add_dispnet': True, 'add_flownet': False, 'add_posenet': True, 'alpha_recon_image': 0.85, 'batch_size': 4, 'checkpoint_dir': 'checkpoint/checkpoint_depth/', 'dataset_dir': 'data_preprocessing/dump_data_depth/', 'depth_test_split': 'eigen', 'disp_smooth_weight': 0.5, 'dispnet_encoder': 'resnet50', 'flow_consistency_alpha': 3.0, 'flow_consistency_beta': 0.05, 'flow_consistency_weight': 0.2, 'flow_smooth_weight': 0.2, 'flow_warp_weight': 1.0, 'flownet_type': 'residual', 'img_height': 128, 'img_width': 416, 'init_ckpt_file': None, 'learning_rate': 0.0002, 'max_steps': 350000, 'max_to_keep': 20, 'mode': 'train_rigid', 'num_scales': 4, 'num_source': 2, 'num_threads': 32, 'output_dir': None, 'pose_test_seq': 9, 'rigid_warp_weight': 1.0, 'save_ckpt_freq': 5000, 'scale_normalize': False, 'seq_length': 3}

[<tf.Tensor 'gradients/image_sampling_3/split_grad/concat:0' shape=(8, 16, 52, 2) dtype=float32>, None, None]
[<tf.Tensor 'gradients/image_sampling_7/split_grad/concat:0' shape=(8, 16, 52, 2) dtype=float32>, None, None]
[<tf.Tensor 'gradients/image_sampling_2/split_grad/concat:0' shape=(8, 32, 104, 2) dtype=float32>, None, None]
[<tf.Tensor 'gradients/image_sampling_6/split_grad/concat:0' shape=(8, 32, 104, 2) dtype=float32>, None, None]
[<tf.Tensor 'gradients/image_sampling_1/split_grad/concat:0' shape=(8, 64, 208, 2) dtype=float32>, None, None]
[<tf.Tensor 'gradients/image_sampling_5/split_grad/concat:0' shape=(8, 64, 208, 2) dtype=float32>, None, None]
[<tf.Tensor 'gradients/image_sampling/split_grad/concat:0' shape=(8, 128, 416, 2) dtype=float32>, None, None]
[<tf.Tensor 'gradients/image_sampling_4/split_grad/concat:0' shape=(8, 128, 416, 2) dtype=float32>, None, None]

2019-06-09 03:40:26.142938: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2019-06-09 03:40:26.142960: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2019-06-09 03:40:26.142982: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2019-06-09 03:40:26.142989: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2019-06-09 03:40:26.143012: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
Trainable variables: depth_net/Conv/weights:0 depth_net/Conv/BatchNorm/beta:0 depth_net/Conv_1/weights:0 depth_net/Conv_1/BatchNorm/beta:0 depth_net/Conv_2/weights:0 depth_net/Conv_2/BatchNorm/beta:0 depth_net/Conv_3/weights:0 depth_net/Conv_3/BatchNorm/beta:0 depth_net/Conv_4/weights:0 depth_net/Conv_4/BatchNorm/beta:0 depth_net/Conv_5/weights:0 depth_net/Conv_5/BatchNorm/beta:0 depth_net/Conv_6/weights:0 depth_net/Conv_6/BatchNorm/beta:0 depth_net/Conv_7/weights:0 depth_net/Conv_7/BatchNorm/beta:0 depth_net/Conv_8/weights:0 depth_net/Conv_8/BatchNorm/beta:0 depth_net/Conv_9/weights:0 depth_net/Conv_9/BatchNorm/beta:0 depth_net/Conv_10/weights:0 depth_net/Conv_10/BatchNorm/beta:0 depth_net/Conv_11/weights:0 depth_net/Conv_11/BatchNorm/beta:0 depth_net/Conv_12/weights:0 depth_net/Conv_12/BatchNorm/beta:0 depth_net/Conv_13/weights:0 depth_net/Conv_13/BatchNorm/beta:0 depth_net/Conv_14/weights:0 depth_net/Conv_14/BatchNorm/beta:0 depth_net/Conv_15/weights:0 depth_net/Conv_15/BatchNorm/beta:0 depth_net/Conv_16/weights:0 depth_net/Conv_16/BatchNorm/beta:0 depth_net/Conv_17/weights:0 depth_net/Conv_17/BatchNorm/beta:0 depth_net/Conv_18/weights:0 depth_net/Conv_18/BatchNorm/beta:0 depth_net/Conv_19/weights:0 depth_net/Conv_19/BatchNorm/beta:0 depth_net/Conv_20/weights:0 depth_net/Conv_20/BatchNorm/beta:0 depth_net/Conv_21/weights:0 depth_net/Conv_21/BatchNorm/beta:0 depth_net/Conv_22/weights:0 depth_net/Conv_22/BatchNorm/beta:0 depth_net/Conv_23/weights:0 depth_net/Conv_23/BatchNorm/beta:0 depth_net/Conv_24/weights:0 depth_net/Conv_24/BatchNorm/beta:0 depth_net/Conv_25/weights:0 depth_net/Conv_25/BatchNorm/beta:0 depth_net/Conv_26/weights:0 depth_net/Conv_26/BatchNorm/beta:0 depth_net/Conv_27/weights:0 depth_net/Conv_27/BatchNorm/beta:0 depth_net/Conv_28/weights:0 depth_net/Conv_28/BatchNorm/beta:0 depth_net/Conv_29/weights:0 depth_net/Conv_29/BatchNorm/beta:0 depth_net/Conv_30/weights:0 depth_net/Conv_30/BatchNorm/beta:0 depth_net/Conv_31/weights:0 depth_net/Conv_31/BatchNorm/beta:0 depth_net/Conv_32/weights:0 depth_net/Conv_32/BatchNorm/beta:0 depth_net/Conv_33/weights:0 depth_net/Conv_33/BatchNorm/beta:0 depth_net/Conv_34/weights:0 depth_net/Conv_34/BatchNorm/beta:0 depth_net/Conv_35/weights:0 depth_net/Conv_35/BatchNorm/beta:0 depth_net/Conv_36/weights:0 depth_net/Conv_36/BatchNorm/beta:0 depth_net/Conv_37/weights:0 depth_net/Conv_37/BatchNorm/beta:0 depth_net/Conv_38/weights:0 depth_net/Conv_38/BatchNorm/beta:0 depth_net/Conv_39/weights:0 depth_net/Conv_39/BatchNorm/beta:0 depth_net/Conv_40/weights:0 depth_net/Conv_40/BatchNorm/beta:0 depth_net/Conv_41/weights:0 depth_net/Conv_41/BatchNorm/beta:0 depth_net/Conv_42/weights:0 depth_net/Conv_42/BatchNorm/beta:0 depth_net/Conv_43/weights:0 depth_net/Conv_43/BatchNorm/beta:0 depth_net/Conv_44/weights:0 depth_net/Conv_44/BatchNorm/beta:0 depth_net/Conv_45/weights:0 depth_net/Conv_45/BatchNorm/beta:0 depth_net/Conv_46/weights:0 depth_net/Conv_46/BatchNorm/beta:0 depth_net/Conv_47/weights:0 depth_net/Conv_47/BatchNorm/beta:0 depth_net/Conv_48/weights:0 depth_net/Conv_48/BatchNorm/beta:0 depth_net/Conv_49/weights:0 depth_net/Conv_49/BatchNorm/beta:0 depth_net/Conv_50/weights:0 depth_net/Conv_50/BatchNorm/beta:0 depth_net/Conv_51/weights:0 depth_net/Conv_51/BatchNorm/beta:0 depth_net/Conv_52/weights:0 depth_net/Conv_52/BatchNorm/beta:0 depth_net/Conv_53/weights:0 depth_net/Conv_53/BatchNorm/beta:0 depth_net/Conv_54/weights:0 depth_net/Conv_54/BatchNorm/beta:0 depth_net/Conv_55/weights:0 depth_net/Conv_55/BatchNorm/beta:0 depth_net/Conv_56/weights:0 
depth_net/Conv_56/BatchNorm/beta:0 depth_net/Conv_57/weights:0 depth_net/Conv_57/BatchNorm/beta:0 depth_net/Conv_58/weights:0 depth_net/Conv_58/BatchNorm/beta:0 depth_net/Conv_59/weights:0 depth_net/Conv_59/BatchNorm/beta:0 depth_net/Conv_60/weights:0 depth_net/Conv_60/BatchNorm/beta:0 depth_net/Conv_61/weights:0 depth_net/Conv_61/BatchNorm/beta:0 depth_net/Conv_62/weights:0 depth_net/Conv_62/BatchNorm/beta:0 depth_net/Conv_63/weights:0 depth_net/Conv_63/BatchNorm/beta:0 depth_net/Conv_64/weights:0 depth_net/Conv_64/BatchNorm/beta:0 depth_net/Conv_65/weights:0 depth_net/Conv_65/BatchNorm/beta:0 depth_net/Conv_66/weights:0 depth_net/Conv_66/BatchNorm/beta:0 depth_net/Conv_67/weights:0 depth_net/Conv_67/BatchNorm/beta:0 depth_net/Conv_68/weights:0 depth_net/Conv_68/BatchNorm/beta:0 depth_net/Conv_69/weights:0 depth_net/Conv_69/BatchNorm/beta:0 depth_net/Conv_70/weights:0 depth_net/Conv_70/BatchNorm/beta:0 depth_net/Conv_71/weights:0 depth_net/Conv_71/biases:0 depth_net/Conv_72/weights:0 depth_net/Conv_72/BatchNorm/beta:0 depth_net/Conv_73/weights:0 depth_net/Conv_73/BatchNorm/beta:0 depth_net/Conv_74/weights:0 depth_net/Conv_74/biases:0 depth_net/Conv_75/weights:0 depth_net/Conv_75/BatchNorm/beta:0 depth_net/Conv_76/weights:0 depth_net/Conv_76/BatchNorm/beta:0 depth_net/Conv_77/weights:0 depth_net/Conv_77/biases:0 depth_net/Conv_78/weights:0 depth_net/Conv_78/BatchNorm/beta:0 depth_net/Conv_79/weights:0 depth_net/Conv_79/BatchNorm/beta:0 depth_net/Conv_80/weights:0 depth_net/Conv_80/biases:0 pose_net/Conv/weights:0 pose_net/Conv/BatchNorm/beta:0 pose_net/Conv_1/weights:0 pose_net/Conv_1/BatchNorm/beta:0 pose_net/Conv_2/weights:0 pose_net/Conv_2/BatchNorm/beta:0 pose_net/Conv_3/weights:0 pose_net/Conv_3/BatchNorm/beta:0 pose_net/Conv_4/weights:0 pose_net/Conv_4/BatchNorm/beta:0 pose_net/Conv_5/weights:0 pose_net/Conv_5/BatchNorm/beta:0 pose_net/Conv_6/weights:0 pose_net/Conv_6/BatchNorm/beta:0 pose_net/Conv_7/weights:0 pose_net/Conv_7/biases:0
('parameter_count =', 60039504)

It stops here and cannot continue training. Here is my training command:

Train DepthNet

python geonet_main.py --mode=train_rigid --dataset_dir=data_preprocessing/dump_data_depth/ --checkpoint_dir=checkpoint/checkpoint_depth/ --learning_rate=0.0002 --seq_length=3 --batch_size=4 --max_steps=350000
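The cpu_feature_guard warnings above come from a CPU-only TensorFlow build, in which case a 350,000-step run will look frozen for a long time after parameter_count is printed. A minimal sketch (assuming TensorFlow 1.x) to check whether this build sees a GPU at all:

from tensorflow.python.client import device_lib

# Lists the devices TensorFlow can use; expect a '/device:GPU:0' entry
# for GPU training. Only CPU entries here means a CPU-only build/install.
print([d.name for d in device_lib.list_local_devices()])

If only a CPU shows up, that would explain the apparent stall rather than a bug in the training code.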