facebookresearch / VMZ

VMZ: Model Zoo for Video Modeling
Apache License 2.0

finetune ir-csn with my own dataset #91

Open oLIVIa-Ld opened 5 years ago

oLIVIa-Ld commented 5 years ago

```
Ignoring @/caffe2/caffe2/contrib/gloo:gloo_ops as it is not a valid file.
Ignoring @/caffe2/caffe2/contrib/nccl:nccl_ops as it is not a valid file.
Ignoring @/caffe2/caffe2/contrib/gloo:gloo_ops_gpu as it is not a valid file.
[E init_intrinsics_check.cc:43] CPU feature avx is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
[E init_intrinsics_check.cc:43] CPU feature avx2 is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
[E init_intrinsics_check.cc:43] CPU feature fma is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
INFO:train_net:Namespace(base_learning_rate=0.0001, batch_size=8, bottleneck_multiplier=1.0, channel_multiplier=1.0, clip_length_of=8, clip_length_rgb=32, conv1_temporal_kernel=3, conv1_temporal_stride=1, crop_size=224, cudnn_workspace_limit_mb=64, db_type='pickle', display_iter=10, do_flow_aggregation=0, epoch_size=100000, file_store_path='/tmp', flow_data_type=0, frame_gap_of=2, gamma=0.1, get_video_id=0, gpus='0', input_type=0, is_checkpoint=0, jitter_scales='128,160', load_model_path='/usr/VMZ-master-1/irCSN_152_ft_kinetics_from_ig65m_f126851907.pkl', model_depth=152, model_name='ir-csn', multi_label=0, num_channels=3, num_decode_threads=4, num_epochs=8, num_gpus=1, num_labels=5, pred_layer_name=None, profiling=0, sampling_rate_of=2, sampling_rate_rgb=2, save_model_name='simple_c3d', scale_h=256, scale_w=342, step_epoch=2, test_data='/usr/VMZ-master/datasetv/pig_test', train_data='/usr/VMZ-master/datasetv/pig_train', use_cudnn=1, use_dropout=0, use_local_file=1, use_pool1=1, video_res_type=1, weight_decay=0.005)
INFO:model_builder:Validated: ir-csn with 152 layers
INFO:model_builder:with input 32x224x224
INFO:train_net:Running on GPUs: [0]
INFO:train_net:Using epoch size: 100000
WARNING:root:[====DEPRECATE WARNING====]: you are creating an object from CNNModelHelper class which will be deprecated soon. Please use ModelHelper object with brew module. For more information, please refer to caffe2.ai and python/brew.py, python/brew_test.py for more information.
INFO:train_net:Training set has 723 examples
INFO:data_parallel_model:Parallelizing model for devices: [0]
INFO:data_parallel_model:Create input and model training operators
INFO:data_parallel_model:Model for GPU : 0
INFO:model_helper:outputing rgb data
INFO:model_builder:creating ir-csn, depth=152...
INFO:video_model:in: 64 out: 64
[... repeated per-block "INFO:video_model:in: N out: M" shape logs for the rest of the ir-CSN-152 stages elided ...]
INFO:video_model:in: 512 out: 2048
INFO:data_parallel_model:Adding gradient operators
INFO:data_parallel_model:Add gradient all-reduces for SyncSGD
INFO:data_parallel_model:Post-iteration operators for updating params
INFO:data_parallel_model:Add initial parameter sync
WARNING:data_parallel_model:------- DEPRECATED API, please use data_parallel_model.OptimizeGradientMemory() -----
WARNING:memonger:NOTE: Executing memonger to optimize gradient memory
INFO:memonger:Memonger memory optimization took 0.0483040809631 secs
INFO:train_net:----- Create test net ----
WARNING:root:[====DEPRECATE WARNING====]: you are creating an object from CNNModelHelper class which will be deprecated soon. Please use ModelHelper object with brew module. For more information, please refer to caffe2.ai and python/brew.py, python/brew_test.py for more information.
INFO:train_net:Testing set has 278 examples
INFO:data_parallel_model:Parallelizing model for devices: [0]
INFO:data_parallel_model:Create input and model training operators
WARNING:data_parallel_model:############# WARNING #############
WARNING:data_parallel_model:Model ir-csn_test/<caffe2.python.cnn.CNNModelHelper object at 0x7fec6c3cd350> is used for testing/validation but
WARNING:data_parallel_model:has init_params=True!
WARNING:data_parallel_model:This can conflict with model training.
WARNING:data_parallel_model:Please ensure model = ModelHelper(init_params=False)
WARNING:data_parallel_model:####################################
INFO:data_parallel_model:Model for GPU : 0
INFO:model_helper:outputing rgb data
INFO:model_builder:creating ir-csn, depth=152...
[... the same repeated "INFO:video_model:in: N out: M" shape logs for the test net elided ...]
INFO:data_parallel_model:Parameter update function not defined --> only forward
WARNING:caffe2.python.workspace:Original python traceback for operator 9 in network ir-csn_test in exception above (most recent call last):
WARNING:caffe2.python.workspace:  File "/usr/VMZ-master-1/tools/train_net.py", line 586, in <module>
WARNING:caffe2.python.workspace:  File "/usr/VMZ-master-1/tools/train_net.py", line 581, in main
WARNING:caffe2.python.workspace:  File "/usr/VMZ-master-1/tools/train_net.py", line 401, in Train
WARNING:caffe2.python.workspace:  File "/root/pytorch/build/caffe2/python/data_parallel_model.py", line 39, in Parallelize_GPU
WARNING:caffe2.python.workspace:  File "/root/pytorch/build/caffe2/python/data_parallel_model.py", line 237, in Parallelize
WARNING:caffe2.python.workspace:  File "/usr/VMZ-master-1/tools/train_net.py", line 275, in create_model_ops
WARNING:caffe2.python.workspace:  File "/usr/VMZ-master-1/tools/models/model_builder.py", line 129, in build_model
WARNING:caffe2.python.workspace:  File "/usr/VMZ-master-1/tools/models/r3d_model.py", line 179, in create_model
WARNING:caffe2.python.workspace:  File "/usr/VMZ-master-1/tools/models/r3d_model.py", line 311, in create_r3d
WARNING:caffe2.python.workspace:  File "/usr/VMZ-master-1/tools/models/builder/video_model.py", line 297, in add_bottleneck
WARNING:caffe2.python.workspace:  File "/usr/VMZ-master-1/tools/models/builder/video_model.py", line 145, in add_conv
WARNING:caffe2.python.workspace:  File "/usr/VMZ-master-1/tools/models/builder/video_model.py", line 211, in add_channelwise_conv
WARNING:caffe2.python.workspace:  File "/root/pytorch/build/caffe2/python/brew.py", line 108, in scope_wrapper
WARNING:caffe2.python.workspace:  File "/root/pytorch/build/caffe2/python/helpers/conv.py", line 164, in conv_nd
WARNING:caffe2.python.workspace:  File "/root/pytorch/build/caffe2/python/helpers/conv.py", line 123, in _ConvBase
Traceback (most recent call last):
  File "/usr/VMZ-master-1/tools/train_net.py", line 586, in <module>
    main()
  File "/usr/VMZ-master-1/tools/train_net.py", line 581, in main
    Train(args)
  File "/usr/VMZ-master-1/tools/train_net.py", line 404, in Train
    workspace.CreateNet(test_model.net)
  File "/root/pytorch/build/caffe2/python/workspace.py", line 181, in CreateNet
    StringifyProto(net), overwrite,
  File "/root/pytorch/build/caffe2/python/workspace.py", line 215, in CallWithExceptionIntercept
    return func(*args, **kwargs)
RuntimeError: [enforce fail at context.h:48] option.device_type() == PROTO_CPU. 1 vs 0
frame #0: c10::ThrowEnforceNotMet(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, void const*) + 0x78 (0x7fed19c32178 in /usr/local/lib/libc10.so)
[... intermediate C++ frames in /usr/local/lib/libtorch.so and caffe2_pybind11_state_gpu.so elided; their template symbols were mangled by the issue renderer ...]
frame #33: __libc_start_main + 0xe7 (0x7fed1e897b97 in /lib/x86_64-linux-gnu/libc.so.6)
```
oLIVIa-Ld commented 5 years ago

@dutran Dear dutran, I'm very impressed by your CSN and want to finetune the model on my own dataset. The problem above arose, and I think something is wrong in my setup. Could you help me figure it out? Thank you. PS: I have only one GPU.

dutran commented 5 years ago

It is very likely that you ran into an out-of-memory problem. I also notice that your jitter_scales are too small (128, 160) while crop_size (224) is larger than that. To save memory you could try jitter_scales=128,160, crop_size=112, use_pool1=0.
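The scale/crop mismatch dutran points out can be sanity-checked with a tiny helper (a hypothetical sketch for illustration, not part of VMZ): a random spatial crop must fit inside the smallest jittered side of the frame, so crop_size=224 with jitter_scales=128,160 can never succeed.

```python
def crop_fits(jitter_scales, crop_size):
    """Return True if a crop_size x crop_size crop can be taken from a
    frame whose shorter side is jittered within jitter_scales."""
    return crop_size <= min(jitter_scales)

# Settings from the log above: crop larger than the jittered frame -> invalid.
print(crop_fits((128, 160), 224))  # False
# dutran's suggested memory-saving setting -> valid.
print(crop_fits((128, 160), 112))  # True
```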

oLIVIa-Ld commented 5 years ago

> It is very likely that you ran into an out-of-memory problem. I also notice that your jitter_scales are too small (128, 160) while crop_size (224) is larger than that. To save memory you could try jitter_scales=128,160, crop_size=112, use_pool1=0.

I have tried these smaller parameters but still get the same error. I don't know why the exception says PROTO_CPU.

dutran commented 5 years ago

When you created /usr/VMZ-master/datasetv/pig_test, did you use use_list=1? I ask because you set use_local_file=1 in this script.

oLIVIa-Ld commented 5 years ago

> When you created /usr/VMZ-master/datasetv/pig_test, did you use use_list=1? I ask because you set use_local_file=1 in this script.

No, when I created the test db I did not use use_list=1. And when I deleted use_local_file=1 I got the same exception. I'm so confused.
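The use_list/use_local_file pairing dutran is asking about can be pictured with a toy pickle database (a sketch under assumptions: the paths, labels, and schema below are invented for illustration and are not VMZ's documented format). With use_local_file=1 the reader expects entries that point at local video files, so the db must have been written in that list style; a mismatch between how the db was created and how it is read only surfaces much later, e.g. when the net is built.

```python
import os
import pickle
import tempfile

# Hypothetical "list-style" db: each entry is a (local video path, label)
# pair rather than packed raw video bytes. Check the dataset-creation
# script in your VMZ checkout for the real schema.
entries = [
    ("/usr/VMZ-master/datasetv/videos/pig_0001.mp4", 0),
    ("/usr/VMZ-master/datasetv/videos/pig_0002.mp4", 3),
]

db_path = os.path.join(tempfile.mkdtemp(), "pig_test_list.pkl")
with open(db_path, "wb") as f:
    pickle.dump(entries, f)

# A reader configured for local files should get back exactly these
# (path, label) pairs when it loads the db.
with open(db_path, "rb") as f:
    loaded = pickle.load(f)
print(loaded == entries)
```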