[Closed] leiucky closed this issue 9 years ago
For HDF5, look at http://www.h5py.org/
But what is the format of the raw data?
(b, 1, 1, n), where b = batches and n = the dimension of your vector.
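To make that shape concrete, here is a minimal sketch of writing 1-D feature vectors in the (b, 1, 1, n) layout, assuming h5py and NumPy are installed; the random features and labels are placeholders for your real data, and the file name matches the one mentioned later in this thread:

```python
import h5py
import numpy as np

b, n = 64, 1024  # 64 samples, each a 1024-dim feature vector

# Placeholder features/labels just for the sketch; use your real data here.
X = np.random.rand(b, n).astype(np.float32)
y = np.random.randint(0, 9307, size=b).astype(np.float32)

# Caffe's HDF5Data layer reads 'data' as num x channels x height x width,
# so a flat n-dim vector is stored with shape (b, 1, 1, n).
with h5py.File("train_trial.h5", "w") as f:
    f.create_dataset("data", data=X.reshape(b, 1, 1, n))
    f.create_dataset("label", data=y)
```

With this layout the data layer's top shape comes out as b x 1 x 1 x n, so fully connected layers see each sample as one n-dim vector.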
Please ask on the caffe-users mailing list.
I have a problem when feeding a 1-dim vector into Caffe: it does not produce the right result. The accuracy is 0 and the loss is almost 87. Here is my configuration; has anybody ever gotten stuck on this problem?
the prototxt:

```
name: "LeNet"
layer {
  name: "mnist"
  type: "HDF5Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  hdf5_data_param {
    source: "examples/mnist/train_list.txt"
    batch_size: 64
  }
}
layer {
  name: "mnist"
  type: "HDF5Data"
  top: "data"
  top: "label"
  include { phase: TEST }
  hdf5_data_param {
    source: "examples/mnist/test_list.txt"
    batch_size: 100
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  convolution_param {
    num_output: 200
    kernel_size: 3
    stride: 1
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param { pool: MAX kernel_size: 2 stride: 2 }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  convolution_param {
    num_output: 800
    kernel_size: 3
    stride: 1
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param { pool: MAX kernel_size: 2 stride: 2 }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  inner_product_param {
    num_output: 3200
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param { lr_mult: 0.01 }
  param { lr_mult: 0.02 }
  inner_product_param {
    num_output: 9307
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include { phase: TEST }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
```
The data comes from HDF5 files, and I have 9307 classes of objects to classify. In the HDF5 files I store 1024-dim feature vectors: train_trial.h5 contains 279500 vectors and test_trial.h5 contains 75000 vectors. I used the MATLAB HDF5 functions to convert my feature data into HDF5 format. My feature vectors are very sparse and look like this (mostly zeros):

```
0.000000 0.000000 0.000000 ... 0.000000 0.015504 0.000000 0.000000 0.000000
0.000000 0.000000 0.000000 0.000000 0.077519 0.000000 ... 0.000000 0.000000
0.100775 0.00
```
and the test HDF5 file format:

```
Dataset 'data'
  Size:      32x32x1x75000
  MaxSize:   1024x1x1xInf
  Datatype:  H5T_IEEE_F32LE (single)
  ChunkSize: 1024x1x1x100
  Filters:   none
  FillValue: 0.000000
Dataset 'label'
  Size:      1x75000
  MaxSize:   1xInf
  Datatype:  H5T_IEEE_F32LE (single)
  ChunkSize: 1x100
  Filters:   none
  FillValue: 0.000000
```

and the train HDF5 file format:

```
Dataset 'data'
  Size:      32x32x1x279500
  MaxSize:   1024x1x1xInf
  Datatype:  H5T_IEEE_F32LE (single)
  ChunkSize: 1024x1x1x100
  Filters:   none
  FillValue: 0.000000
Dataset 'label'
  Size:      1x279500
  MaxSize:   1xInf
  Datatype:  H5T_IEEE_F32LE (single)
  ChunkSize: 1x100
  Filters:   none
  FillValue: 0.000000
```

but the result is:

```
I0811 03:22:29.963469  4906 layer_factory.hpp:74] Creating layer data
I0811 03:22:29.963490  4906 net.cpp:90] Creating Layer data
I0811 03:22:29.963502  4906 net.cpp:368] data -> data
I0811 03:22:29.963518  4906 net.cpp:368] data -> label
I0811 03:22:29.963533  4906 net.cpp:120] Setting up data
I0811 03:22:29.963546  4906 hdf5_data_layer.cpp:80] Loading list of HDF5 filenames from: examples/mnist/test_list.txt
I0811 03:22:29.963573  4906 hdf5_data_layer.cpp:94] Number of HDF5 files: 1
I0811 03:22:30.129009  4906 net.cpp:127] Top shape: 100 1 32 32 (102400)
I0811 03:22:30.129048  4906 net.cpp:127] Top shape: 100 1 (100)
I0811 03:22:30.129076  4906 layer_factory.hpp:74] Creating layer label_data_1_split
I0811 03:22:30.129096  4906 net.cpp:90] Creating Layer label_data_1_split
I0811 03:22:30.129192  4906 net.cpp:410] label_data_1_split <- label
I0811 03:22:30.129209  4906 net.cpp:368] label_data_1_split -> label_data_1_split_0
I0811 03:22:30.129231  4906 net.cpp:368] label_data_1_split -> label_data_1_split_1
I0811 03:22:30.129245  4906 net.cpp:120] Setting up label_data_1_split
I0811 03:22:30.129258  4906 net.cpp:127] Top shape: 100 1 (100)
I0811 03:22:30.129276  4906 net.cpp:127] Top shape: 100 1 (100)
I0811 03:22:30.129292  4906 layer_factory.hpp:74] Creating layer ip1
I0811 03:22:30.129308  4906 net.cpp:90] Creating Layer ip1
I0811 03:22:30.129318  4906 net.cpp:410] ip1 <- data
I0811 03:22:30.129329  4906 net.cpp:368] ip1 -> ip1
I0811 03:22:30.129343  4906 net.cpp:120] Setting up ip1
I0811 03:22:30.129706  4906 net.cpp:127] Top shape: 100 40 (4000)
I0811 03:22:30.129727  4906 layer_factory.hpp:74] Creating layer relu1
I0811 03:22:30.129740  4906 net.cpp:90] Creating Layer relu1
I0811 03:22:30.129750  4906 net.cpp:410] relu1 <- ip1
I0811 03:22:30.129762  4906 net.cpp:357] relu1 -> ip1 (in-place)
I0811 03:22:30.129773  4906 net.cpp:120] Setting up relu1
I0811 03:22:30.129784  4906 net.cpp:127] Top shape: 100 40 (4000)
I0811 03:22:30.129794  4906 layer_factory.hpp:74] Creating layer ip2
I0811 03:22:30.129807  4906 net.cpp:90] Creating Layer ip2
I0811 03:22:30.129820  4906 net.cpp:410] ip2 <- ip1
I0811 03:22:30.129832  4906 net.cpp:368] ip2 -> ip2
I0811 03:22:30.129843  4906 net.cpp:120] Setting up ip2
I0811 03:22:30.133378  4906 net.cpp:127] Top shape: 100 9307 (930700)
I0811 03:22:30.133397  4906 layer_factory.hpp:74] Creating layer ip2_ip2_0_split
I0811 03:22:30.133422  4906 net.cpp:90] Creating Layer ip2_ip2_0_split
I0811 03:22:30.133445  4906 net.cpp:410] ip2_ip2_0_split <- ip2
I0811 03:22:30.133456  4906 net.cpp:368] ip2_ip2_0_split -> ip2_ip2_0_split_0
I0811 03:22:30.133469  4906 net.cpp:368] ip2_ip2_0_split -> ip2_ip2_0_split_1
I0811 03:22:30.133481  4906 net.cpp:120] Setting up ip2_ip2_0_split
I0811 03:22:30.133492  4906 net.cpp:127] Top shape: 100 9307 (930700)
I0811 03:22:30.133517  4906 net.cpp:127] Top shape: 100 9307 (930700)
I0811 03:22:30.133527  4906 layer_factory.hpp:74] Creating layer accuracy
I0811 03:22:30.133538  4906 net.cpp:90] Creating Layer accuracy
I0811 03:22:30.133548  4906 net.cpp:410] accuracy <- ip2_ip2_0_split_0
I0811 03:22:30.133558  4906 net.cpp:410] accuracy <- label_data_1_split_0
I0811 03:22:30.133569  4906 net.cpp:368] accuracy -> accuracy
I0811 03:22:30.133581  4906 net.cpp:120] Setting up accuracy
I0811 03:22:30.133600  4906 net.cpp:127] Top shape: (1)
I0811 03:22:30.133610  4906 layer_factory.hpp:74] Creating layer loss
I0811 03:22:30.133620  4906 net.cpp:90] Creating Layer loss
I0811 03:22:30.133630  4906 net.cpp:410] loss <- ip2_ip2_0_split_1
I0811 03:22:30.133641  4906 net.cpp:410] loss <- label_data_1_split_1
I0811 03:22:30.133651  4906 net.cpp:368] loss -> loss
I0811 03:22:30.133662  4906 net.cpp:120] Setting up loss
I0811 03:22:30.133674  4906 layer_factory.hpp:74] Creating layer loss
I0811 03:22:30.135438  4906 net.cpp:127] Top shape: (1)
I0811 03:22:30.135453  4906 net.cpp:129]     with loss weight 1
I0811 03:22:30.135485  4906 net.cpp:192] loss needs backward computation.
I0811 03:22:30.135509  4906 net.cpp:194] accuracy does not need backward computation.
I0811 03:22:30.135519  4906 net.cpp:192] ip2_ip2_0_split needs backward computation.
I0811 03:22:30.135531  4906 net.cpp:192] ip2 needs backward computation.
I0811 03:22:30.135540  4906 net.cpp:192] relu1 needs backward computation.
I0811 03:22:30.135550  4906 net.cpp:192] ip1 needs backward computation.
I0811 03:22:30.135576  4906 net.cpp:194] label_data_1_split does not need backward computation.
I0811 03:22:30.135586  4906 net.cpp:194] data does not need backward computation.
I0811 03:22:30.135598  4906 net.cpp:235] This network produces output accuracy
I0811 03:22:30.135608  4906 net.cpp:235] This network produces output loss
I0811 03:22:30.135623  4906 net.cpp:482] Collecting Learning Rate and Weight Decay.
I0811 03:22:30.135637  4906 net.cpp:247] Network initialization done.
I0811 03:22:30.135665  4906 net.cpp:248] Memory required for data: 11611208
I0811 03:22:30.135717  4906 solver.cpp:42] Solver scaffolding done.
I0811 03:22:30.135749  4906 solver.cpp:250] Solving
I0811 03:22:30.135762  4906 solver.cpp:251] Learning Rate Policy: step
I0811 03:22:30.136682  4906 solver.cpp:294] Iteration 0, Testing net (#0)
I0811 03:23:21.776976  4906 solver.cpp:343]     Test net output #0: accuracy = 6.66667e-05
I0811 03:23:21.777107  4906 solver.cpp:343]     Test net output #1: loss = 16.7236 (* 1 = 16.7236 loss)
I0811 03:23:21.786000  4906 solver.cpp:214] Iteration 0, loss = 28.4815
I0811 03:23:21.786046  4906 solver.cpp:229]     Train net output #0: accuracy = 0
I0811 03:23:21.786077  4906 solver.cpp:229]     Train net output #1: loss = 28.4815 (* 1 = 28.4815 loss)
I0811 03:23:21.786110  4906 solver.cpp:486] Iteration 0, lr = 0.01
I0811 03:23:30.299607  4906 solver.cpp:294] Iteration 1000, Testing net (#0)
I0811 03:24:21.176622  4906 solver.cpp:343]     Test net output #0: accuracy = 0.00012
I0811 03:24:21.176770  4906 solver.cpp:343]     Test net output #1: loss = 9.14511 (* 1 = 9.14511 loss)
I0811 03:24:21.184490  4906 solver.cpp:214] Iteration 1000, loss = 9.13952
I0811 03:24:21.184516  4906 solver.cpp:229]     Train net output #0: accuracy = 0
I0811 03:24:21.184547  4906 solver.cpp:229]     Train net output #1: loss = 9.13952 (* 1 = 9.13952 loss)
I0811 03:24:21.184563  4906 solver.cpp:486] Iteration 1000, lr = 0.01
I0811 03:24:29.575215  4906 solver.cpp:294] Iteration 2000, Testing net (#0)
```
And the accuracy is 0? Why is this? Does anybody know the reason? Please help me. Email: hit.yang.feng@gmail.com
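One thing worth noting from the log above: the data top comes up as `Top shape: 100 1 32 32`, which matches the file's reported Size of 32x32x1x75000, so each 1024-dim vector seems to have been stored as a 32x32 map rather than 1x1x1024. A quick way to double-check the layout Caffe will see is to open the file with h5py. This sketch builds a tiny stand-in file with the same layout (the real test_trial.h5 is on your disk; swap in its path):

```python
import h5py
import numpy as np

# Build a small stand-in with the same layout the log reports
# (the real file from the post is test_trial.h5).
with h5py.File("test_trial_standin.h5", "w") as f:
    f.create_dataset("data", data=np.zeros((100, 1, 32, 32), np.float32))
    f.create_dataset("label", data=np.zeros((100,), np.float32))

# h5py reports shapes in C order: the first axis is the sample axis,
# so Caffe sees this 'data' as N=100, C=1, H=32, W=32 -- not 1x1x1024.
with h5py.File("test_trial_standin.h5", "r") as f:
    shapes = {name: dset.shape for name, dset in f.items()}
print(shapes)
```

If your real file's `data` comes out as (75000, 1, 32, 32) instead of (75000, 1, 1, 1024), the vectors were written with the wrong layout and should be reshaped before training.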
hi, I have the same question as @leiucky. I have two questions:

1. What is the format of the raw data? Is it LIBSVM style, like `label1 1:fea1 2:fea2` and `label2 1:fea1 2:fea2`?
2. How do I convert raw data into HDF5? Are there any examples?

Thank you a lot!
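For the second question, here is a minimal conversion sketch, assuming h5py and NumPy are installed. The LIBSVM-style parsing, the helper name `libsvm_to_hdf5`, and the file name `toy.h5` are all illustrative, not part of Caffe:

```python
import h5py
import numpy as np

def libsvm_to_hdf5(lines, dim, h5_path):
    """Convert LIBSVM-style lines ('label idx:val idx:val ...', 1-based
    indices) into the (N, 1, 1, dim) layout Caffe's HDF5Data layer expects."""
    n = len(lines)
    data = np.zeros((n, 1, 1, dim), dtype=np.float32)
    labels = np.zeros(n, dtype=np.float32)
    for i, line in enumerate(lines):
        parts = line.split()
        labels[i] = float(parts[0])
        for tok in parts[1:]:
            idx, val = tok.split(":")
            data[i, 0, 0, int(idx) - 1] = float(val)  # 1-based -> 0-based
    with h5py.File(h5_path, "w") as f:
        f.create_dataset("data", data=data)
        f.create_dataset("label", data=labels)

# Toy example: two 4-dim samples.
libsvm_to_hdf5(["0 1:0.5 3:1.0", "1 2:0.25 4:0.75"], dim=4, h5_path="toy.h5")
```

The HDF5Data layer's `source` (e.g. train_list.txt in the prototxt above) should then list the path to the generated .h5 file, one file per line.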
Did you find a solution for converting a 1-D vector to HDF5 format? I have the same problem and I don't know how to solve it.
I want to use a 1-dim vector as input. I found some similar issues (#446, #690); however, it seems that none of them solves this problem.
I have two questions:
Thanks a lot