hello @kingrain213, as I said in #25, in this particular case we use crop_mean.npy just to keep the same pipeline as the original C3D Caffe code. If you build your new network in pure TF you don't really need crop_mean.npy; just use a TF data normalization method. As for your question, you can use Caffe's high-level API to get your dataset's mean.binaryproto and then convert it to Python. Some examples can be found at http://caffe.berkeleyvision.org/ or in this blog post: http://www.cnblogs.com/573177885qq/p/6049014.html
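For the binaryproto-to-Python step, a minimal sketch, assuming your mean file is a standard Caffe BlobProto (the file names and the saved layout are illustrative, not from this repo):

```python
import numpy as np
import caffe

# Parse the Caffe mean file into a BlobProto message.
blob = caffe.proto.caffe_pb2.BlobProto()
with open('mean.binaryproto', 'rb') as f:
    blob.ParseFromString(f.read())

# Convert it to a NumPy array (for an image mean this is (1, C, H, W)) and save it.
mean = caffe.io.blobproto_to_array(blob)[0]
np.save('my_mean.npy', mean)
print(mean.shape)
```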
BTW, if you are looking for the simplest approach:
just do image = image / 255.0
or image = tf.image.per_image_standardization(image)
for every single frame in each clip.
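If it helps, a minimal sketch of that per-frame normalization, assuming each clip is a tensor of shape [16, 112, 112, 3] (the function name is just illustrative):

```python
import tensorflow as tf

def normalize_clip(clip):
    """Normalize every frame of a clip of shape [frames, height, width, 3]."""
    clip = tf.cast(clip, tf.float32)
    # Option 1: simple rescaling to [0, 1].
    scaled = clip / 255.0
    # Option 2: zero mean, unit variance for each frame.
    standardized = tf.map_fn(tf.image.per_image_standardization, clip)
    return scaled, standardized
```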
@cc786537662 @hx173149 Thank you for your reply; I got the point now. Sorry to say I have run into another problem: 1) I want to train a 2-class model to classify video labels, so I changed NUM_CLASSES in c3d_model.py from 101 to 2 (each class has only 20 videos), but when I run python train_c3d_ucf101.py on my CPU-only machine it fails with:

2017-10-25 10:39:27.769221: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-10-25 10:39:27.769258: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-10-25 10:39:27.769265: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-10-25 10:39:27.769270: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-10-25 10:39:27.769276: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
Aborted
2) If I want to use more frames from the original video, is NUM_FRAMES_PER_CLIP the parameter I need to change? My videos are about ten minutes long in general; can I change the value from 16 to 160 or even larger?
thank you very much.
1) Does it work fine on the UCF101 dataset locally on your PC? It's odd, since the number of classes only affects the parameters of the last fc layer; check your data loading code as well. 2) The quick answer is no. If you change NUM_FRAMES_PER_CLIP you must change the C3D network as well, because pool5 and the first fc layer are designed specifically for NUM_FRAMES_PER_CLIP=16 (see the shape sketch below).
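To make that concrete, here is a rough sketch of the shape arithmetic, assuming the usual C3D layout (pool1 does not pool over time, pool2-pool5 each halve time/height/width, SAME padding); the helper name is just illustrative:

```python
import math

def c3d_flatten_size(num_frames=16, height=112, width=112, channels=512):
    """Size of the flattened tensor that feeds the first fc layer after pool5."""
    t, h, w = num_frames, height, width
    h, w = math.ceil(h / 2), math.ceil(w / 2)                        # pool1: 1x2x2
    for _ in range(4):                                               # pool2..pool5: 2x2x2
        t, h, w = math.ceil(t / 2), math.ceil(h / 2), math.ceil(w / 2)
    return channels * t * h * w

print(c3d_flatten_size(16))   # 8192  -> matches an fc1 weight matrix of [8192, 4096]
print(c3d_flatten_size(160))  # 81920 -> the existing fc1 weights no longer fit
```

So a longer clip changes the flattened size, and the first fc layer (and any pretrained checkpoint for it) has to be redesigned accordingly.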
@cc786537662 Thank you for your reply. 1) I can run your training code on a small UCF101 subset (10 classes, 54 videos for training and 19 for testing), but when I switched to my own videos (2 classes, 43 for training and 11 for testing) it failed with "terminate called after throwing an instance of 'std::bad_alloc'  what():  std::bad_alloc  Aborted". I will check whether there is still some code I should change to fit my own dataset and 2-class labels, and whether there really is a memory leak somewhere.
2) I also want to ask: if I make NUM_FRAMES_PER_CLIP longer, will more features of the original video be captured? Am I right?
So first I should make my program runnable, and then fine-tune the C3D net.
thank you very much for your reply
@cc786537662 Thank you very much. I changed the batch size to 2 and then the program could run; it also runs on another machine with more memory, so maybe the old machine with the old TensorFlow version also had a memory leak problem. Next I will try to learn the network and change NUM_FRAMES_PER_CLIP=16 in the code; I hope I can do it myself. Thank you for your support.
good to know!
If I want to train a new model on another dataset, do I have to calculate crop_mean.npy, replace your file, and then train and test? I saw that crop_mean.npy has shape (16, 112, 112, 3) and type float. Is it just a sum over all training clips divided by the number of clips? Do you have a Python script or another function to calculate the mean value? Can you share it with me? My email is wilson0213@126.com. Thank you very much.
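In case it is useful, here is a minimal sketch of that "sum then divide" idea, assuming each training clip has already been cropped/resized to (16, 112, 112, 3); the loader that yields the clips is up to you and not part of this repo:

```python
import numpy as np

def compute_crop_mean(clips, num_frames=16, height=112, width=112):
    """Element-wise average over an iterable of (16, 112, 112, 3) float clips."""
    total = np.zeros((num_frames, height, width, 3), dtype=np.float64)
    count = 0
    for clip in clips:
        total += clip
        count += 1
    return (total / max(count, 1)).astype(np.float32)

# crop_mean = compute_crop_mean(my_clip_iterator)   # my_clip_iterator: your own clip loader
# np.save('crop_mean.npy', crop_mean)
```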