IcedDoggie / Micro-Expression-with-Deep-Learning

Experimentation of deep learning on the subjects of micro-expression spotting and recognition.
267 stars 102 forks

how to run #1

Open 13293824182 opened 6 years ago

13293824182 commented 6 years ago

Hello, I have read through much of your code. Could you briefly introduce your project? How do I run your code, and what are your final results?

IcedDoggie commented 6 years ago

Hi, I have added a description of how to run a single DB and CDE. You may need to follow the file structure listed as well.

Simple run for single DB using optical flow: python main.py --dB 'CASME2_Optical' --batch_size=30 --spatial_epochs=100 --temporal_epochs=100 --train_id='default_test' --spatial_size=224 --flag='st'

Do not hesitate to let me know your technical difficulties or any confusion on the documentation.

13293824182 commented 6 years ago

Hello, when I wanted to run your code I modified the folder path: root_db_path = "/mnt/data1/lhh_data/". Then I encountered this problem:

Traceback (most recent call last):
  File "main.py", line 56, in main(args)
  File "main.py", line 13, in main
    train(args.batch_size, args.spatial_epochs, args.temporal_epochs, args.train_id, args.dB, args.spatial_size, args.flag, args.objective_flag, args.tensorboard)
  File "/home/lhh/micro-expression/scr-LSTM/Micro-Expression-with-Deep-Learning-master/train.py", line 51, in train
    os.mkdir(root_db_path + 'Weights/' + str(train_id))
FileNotFoundError: [Errno 2] No such file or directory: '/mnt/data1/lhh_data/Weights/default_test'

Another problem: have you given the "code for the data" in the project? Thank you very much for your answer.

IcedDoggie commented 6 years ago

1) Creating a directory called "Weights" in ../lhh_data/ should help. My bad. 2) Sorry, I don't get the question. What do you mean by "code for the data"?
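The mkdir failure can also be avoided by creating the directory tree defensively. A minimal sketch (the paths below are placeholders, not the repo's actual config):

```python
import os
import tempfile

# Example path standing in for /mnt/data1/lhh_data/; adjust to your setup.
root_db_path = os.path.join(tempfile.gettempdir(), "lhh_data")
train_id = "default_test"

# os.mkdir fails when the parent "Weights" folder does not exist;
# os.makedirs with exist_ok=True creates the whole chain and tolerates reruns.
weights_dir = os.path.join(root_db_path, "Weights", train_id)
os.makedirs(weights_dir, exist_ok=True)
print(os.path.isdir(weights_dir))  # True
```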

13293824182 commented 6 years ago

Thank you very much for your answer.

Traceback (most recent call last):
  File "main.py", line 56, in main(args)
  File "main.py", line 13, in main
    train(args.batch_size, args.spatial_epochs, args.temporal_epochs, args.train_id, args.dB, args.spatial_size, args.flag, args.objective_flag, args.tensorboard)
  File "/home/lhh/micro-expression/scr-LSTM/Micro-Expression-with-Deep-Learning-master/train.py", line 234, in train
    vgg_model = VGG_16(spatial_size = spatial_size, classes=n_exp, channels=3, weights_path='VGG_Face_Deep_16.h5')
  File "/home/lhh/micro-expression/scr-LSTM/Micro-Expression-with-Deep-Learning-master/models.py", line 104, in VGG_16
    model.add(MaxPooling2D((2,2), strides=(2,2)))
  File "/home/lhh/.pythonlib/lib/python3.5/site-packages/keras/models.py", line 522, in add
    output_tensor = layer(self.outputs[0])
  File "/home/lhh/.pythonlib/lib/python3.5/site-packages/keras/engine/topology.py", line 619, in __call__
    output = self.call(inputs, **kwargs)
  File "/home/lhh/.pythonlib/lib/python3.5/site-packages/keras/layers/pooling.py", line 158, in call
    data_format=self.data_format)
  File "/home/lhh/.pythonlib/lib/python3.5/site-packages/keras/layers/pooling.py", line 221, in _pooling_function
    pool_mode='max')
  File "/home/lhh/.pythonlib/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py", line 3663, in pool2d
    data_format=tf_data_format)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/nn_ops.py", line 1958, in max_pool
    name=name)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 2806, in _max_pool
    data_format=data_format, name=name)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2958, in create_op
    set_shapes_for_outputs(ret)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2209, in set_shapes_for_outputs
    shapes = shape_func(op)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2159, in call_with_requiring
    return call_cpp_shape_fn(op, require_shape_fn=True)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/common_shapes.py", line 627, in call_cpp_shape_fn
    require_shape_fn)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/common_shapes.py", line 691, in _call_cpp_shape_fn_impl
    raise ValueError(err.message)
ValueError: Negative dimension size caused by subtracting 2 from 1 for 'max_pooling2d_5/MaxPool' (op: 'MaxPool') with input shapes: [?,1,112,128].

IcedDoggie commented 6 years ago

Hi again. :)

  1. May I know the image dimension you are using? The input should be 224x224xn.
  2. Can you try running with the Theano backend to see if it works?
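For point 1, a quick way to check and coerce frames to the expected 224x224x3 shape. This is an illustrative sketch using a nearest-neighbour index trick, not the repo's actual preprocessing:

```python
import numpy as np

# Example frame with the wrong spatial size (the real pipeline reads images
# from the database folders; this array is just a stand-in).
img = np.zeros((280, 340, 3), dtype=np.uint8)

# Naive nearest-neighbour resize to 224x224 via index sampling.
rows = np.linspace(0, img.shape[0] - 1, 224).astype(int)
cols = np.linspace(0, img.shape[1] - 1, 224).astype(int)
resized = img[rows][:, cols]
print(resized.shape)  # (224, 224, 3)
```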

IcedDoggie commented 6 years ago

Try this suggestion: https://github.com/keras-team/keras/issues/3945

13293824182 commented 6 years ago

Hi, which file loads the pictures and the picture labels? Or, how do you handle the pictures and labels?

IcedDoggie commented 6 years ago

Hello.

Picture loading is in utilities.py, in Read_Input_Images(). Picture label loading is also in utilities.py, in label_matching(). The conversion from emotion labels to numbers is done in labelling.py.
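As an illustration of what that conversion amounts to (the class names and ordering below are assumptions, not the repo's exact table in labelling.py):

```python
# Hypothetical emotion-to-index table; the real mapping lives in labelling.py
# and may use different names and ordering.
EMOTIONS = ["happiness", "disgust", "repression", "surprise", "others"]
LABEL_TO_INDEX = {name: i for i, name in enumerate(EMOTIONS)}

def encode(labels):
    """Map emotion strings to integer class ids."""
    return [LABEL_TO_INDEX[l] for l in labels]

print(encode(["disgust", "others"]))  # [1, 4]
```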

Hope it helps.

13293824182 commented 6 years ago

Sorry to bother you again.

CASME2_Optical arrived
Loaded Images into the tray...
Loaded Labels into the tray...
/home/lhh/micro-expression/scr-LSTM/Micro-Expression-with-Deep-Learning-master/models.py:104: UserWarning: Update your MaxPooling2D call to the Keras 2 API: MaxPooling2D((2, 2), data_format="channels_first", strides=(2, 2))
  model.add(MaxPooling2D((2,2), strides=(2,2),dim_ordering="th"))
/home/lhh/micro-expression/scr-LSTM/Micro-Expression-with-Deep-Learning-master/models.py:112: UserWarning: Update your MaxPooling2D call to the Keras 2 API: MaxPooling2D((2, 2), data_format="channels_first", strides=(2, 2))
  model.add(MaxPooling2D((2,2), strides=(2,2),dim_ordering="th"))
/home/lhh/micro-expression/scr-LSTM/Micro-Expression-with-Deep-Learning-master/models.py:120: UserWarning: Update your MaxPooling2D call to the Keras 2 API: MaxPooling2D((2, 2), data_format="channels_first", strides=(2, 2))
  model.add(MaxPooling2D((2,2), strides=(2,2),dim_ordering="th"))
/home/lhh/micro-expression/scr-LSTM/Micro-Expression-with-Deep-Learning-master/models.py:128: UserWarning: Update your MaxPooling2D call to the Keras 2 API: MaxPooling2D((2, 2), data_format="channels_first", strides=(2, 2))
  model.add(MaxPooling2D((2,2), strides=(2,2),dim_ordering="th"))
# 33
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/common_shapes.py", line 686, in _call_cpp_shape_fn_impl
    input_tensors_as_shapes, status)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/errors_impl.py", line 473, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimension 0 in both shapes must be equal, but are 3 and 64 for 'Assign' (op: 'Assign') with input shapes: [3,3,224,64], [64,3,3,3].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main.py", line 56, in main(args)
  File "main.py", line 13, in main
    train(args.batch_size, args.spatial_epochs, args.temporal_epochs, args.train_id, args.dB, args.spatial_size, args.flag, args.objective_flag, args.tensorboard)
  File "/home/lhh/micro-expression/scr-LSTM/Micro-Expression-with-Deep-Learning-master/train.py", line 234, in train
  File "/home/lhh/micro-expression/scr-LSTM/Micro-Expression-with-Deep-Learning-master/models.py", line 139, in VGG_16
    model.load_weights(weights_path)
  File "/home/lhh/.pythonlib/lib/python3.5/site-packages/keras/models.py", line 720, in load_weights
    topology.load_weights_from_hdf5_group(f, layers)
  File "/home/lhh/.pythonlib/lib/python3.5/site-packages/keras/engine/topology.py", line 3048, in load_weights_from_hdf5_group
    K.batch_set_value(weight_value_tuples)
  File "/home/lhh/.pythonlib/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py", line 2183, in batch_set_value
    assign_op = x.assign(assign_placeholder)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/variables.py", line 573, in assign
    return state_ops.assign(self._variable, value, use_locking=use_locking)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/state_ops.py", line 276, in assign
    validate_shape=validate_shape)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gen_state_ops.py", line 57, in assign
    use_locking=use_locking, name=name)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2958, in create_op
    set_shapes_for_outputs(ret)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2209, in set_shapes_for_outputs
    shapes = shape_func(op)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2159, in call_with_requiring
    return call_cpp_shape_fn(op, require_shape_fn=True)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/common_shapes.py", line 627, in call_cpp_shape_fn
    require_shape_fn)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/common_shapes.py", line 691, in _call_cpp_shape_fn_impl
    raise ValueError(err.message)
ValueError: Dimension 0 in both shapes must be equal, but are 3 and 64 for 'Assign' (op: 'Assign') with input shapes: [3,3,224,64], [64,3,3,3].

IcedDoggie commented 6 years ago

Are you using "channels_first" in Keras?
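For context: with the TensorFlow backend Keras defaults to "channels_last", while this code expects "channels_first" (the old Theano "th" ordering). One way to switch is via ~/.keras/keras.json; a sketch of the relevant fields:

```python
import json

# Fields Keras reads from ~/.keras/keras.json; "image_data_format" is the
# one that matters here. At runtime you can also call
# keras.backend.set_image_data_format('channels_first').
keras_config = {
    "image_data_format": "channels_first",
    "backend": "tensorflow",
}
print(json.dumps(keras_config, indent=4))
```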

13293824182 commented 6 years ago

I'd found the cause of those errors, but I get a new error:

CASME2_Optical arrived
Loaded Images into the tray...
Loaded Labels into the tray...
2018-06-12 21:00:34.470576: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2018-06-12 21:00:34.816704: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties: name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582 pciBusID: 0000:03:00.0 totalMemory: 10.91GiB freeMemory: 10.61GiB
2018-06-12 21:00:34.816772: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:03:00.0, compute capability: 6.1)
Train_X_shape: (237, 9, 150528)
Train_Y_shape: (235, 5)
Test_X_shape: (9, 9, 150528)
Test_Y_shape: (11, 5)
X_shape: (2133, 3, 224, 224)
y_shape: (2115, 5)
test_X_shape: (81, 3, 224, 224)
test_y_shape: (99, 5)
b'GeForce GTX 1080 Ti': 3701.4 MB free, 7471.0 MB used, 11172.4 MB total
b'GeForce GTX 1080 Ti': 428.4 MB free, 10744.0 MB used, 11172.4 MB total
Traceback (most recent call last):
  File "main.py", line 57, in main(args)
  File "main.py", line 14, in main
    train(args.batch_size, args.spatial_epochs, args.temporal_epochs, args.train_id, args.dB, args.spatial_size, args.flag, args.objective_flag, args.tensorboard)
  File "/home/lhh/micro-expression/scr-LSTM/Micro-Expression-with-Deep-Learning-master/train.py", line 318, in train
    vgg_model.fit(X, y, batch_size=batch_size, epochs=spatial_epochs, shuffle=True, callbacks=[history, stopping])
  File "/home/lhh/.pythonlib/lib/python3.5/site-packages/keras/models.py", line 863, in fit
    initial_epoch=initial_epoch)
  File "/home/lhh/.pythonlib/lib/python3.5/site-packages/keras/engine/training.py", line 1358, in fit
    batch_size=batch_size)
  File "/home/lhh/.pythonlib/lib/python3.5/site-packages/keras/engine/training.py", line 1246, in _standardize_user_data
    _check_array_lengths(x, y, sample_weights)
  File "/home/lhh/.pythonlib/lib/python3.5/site-packages/keras/engine/training.py", line 237, in _check_array_lengths
    'and ' + str(list(set_y)[0]) + ' target samples.')
ValueError: Input arrays should have the same number of samples as target arrays. Found 2133 input samples and 2115 target samples.

IcedDoggie commented 6 years ago

There are certain files to be ignored because their classes are too small. As in another issue thread, the code that ignores them works on my PC but not on others.

Try deleting: ['sub09/EP02_02f/', 'sub24/EP02_07/']
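A sketch of removing those samples programmatically. The demo root below is a temporary stand-in for the CASME2_Optical/CASME2_Optical/ image folder described later in this thread:

```python
import os
import shutil
import tempfile

# Temporary stand-in for the CASME2_Optical/CASME2_Optical/ image root.
db_images = os.path.join(tempfile.mkdtemp(), "CASME2_Optical")
ignored = ["sub09/EP02_02f", "sub24/EP02_07"]

# Create dummy sample folders so the deletion below has something to act on.
for sample in ignored:
    os.makedirs(os.path.join(db_images, sample))

# Remove each ignored sample folder if it exists.
for sample in ignored:
    path = os.path.join(db_images, sample)
    if os.path.isdir(path):
        shutil.rmtree(path)

print(all(not os.path.isdir(os.path.join(db_images, s)) for s in ignored))  # True
```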

13293824182 commented 6 years ago

I tried deleting these files in CASME2_Optical, but:

X_shape: (2133, 3, 224, 224)
y_shape: (2106, 5)

ValueError: Input arrays should have the same number of samples as target arrays. Found 2133 input samples and 2106 target samples.

IcedDoggie commented 6 years ago

how about these: ['sub09/EP13_02/','sub09/EP02_02f/','sub10/EP13_01/','sub17/EP15_01/', 'sub17/EP15_03/','sub19/EP19_04/','sub24/EP10_03/','sub24/EP07_01/', 'sub24/EP07_04f/','sub24/EP02_07/','sub26/EP15_01/' ]

13293824182 commented 6 years ago

I have a question: which file should I modify? I think the file I modified was wrong.

IcedDoggie commented 6 years ago

CASME2_Optical/CASME2_Optical/sub0n/EPxx_xx

Delete the EPs that are listed.

13293824182 commented 6 years ago

Train_X_shape: (237, 9, 150528)
Train_Y_shape: (228, 5)
Test_X_shape: (9, 9, 150528)
Test_Y_shape: (7, 5)
X_shape: (2133, 3, 224, 224)
y_shape: (2052, 5)
test_X_shape: (81, 3, 224, 224)
test_y_shape: (63, 5)

ValueError: Input arrays should have the same number of samples as target arrays. Found 2133 input samples and 2052 target samples.

IcedDoggie commented 6 years ago

Something wrong with your Train_Y, and Test_Y.

13293824182 commented 6 years ago

Do you have any suggestions for this error? I didn't modify any details in the code. Thank you again for your kind help.

IcedDoggie commented 6 years ago

To clarify, the first dimension of train_x and train_y is the number of samples. What I usually face is the first dimension of train_x being smaller than that of train_y, because the code doesn't omit certain samples. This particular problem I have never faced before.
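The mismatch can be caught early with a shape check before calling fit. A small sketch with dummy arrays (tiny shapes, and check_samples is a hypothetical helper, not part of the repo):

```python
import numpy as np

# Dummy stand-ins for the spatial inputs and one-hot labels.
X = np.zeros((12, 3, 8, 8), dtype=np.float32)  # 12 input samples
y = np.zeros((10, 5), dtype=np.float32)        # 10 label rows: mismatch

def check_samples(X, y):
    """Return True when inputs and targets agree on the sample axis."""
    ok = X.shape[0] == y.shape[0]
    if not ok:
        print("mismatch: %d input vs %d target samples" % (X.shape[0], y.shape[0]))
    return ok

print(check_samples(X, y))  # False
```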

mg515 commented 6 years ago

I've found the problem: for me it is the get_subfolders_num function, especially how the IgnoredSamples_index gets calculated and then passed on. It makes no sense to me, and I cannot see how the code could work with it. I've written my own function to calculate the number of video samples per subject, and I no longer get stuck on the faulty data dimensions.
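A hedged sketch of the counting approach described above, inferred only from the sub*/EP* folder layout discussed in this thread (the real get_subfolders_num may take different arguments and return a different structure):

```python
import os
import tempfile

# Build a tiny fake database tree: sub01 has 3 samples, sub02 has 1.
db_images = tempfile.mkdtemp()
for sub, n in [("sub01", 3), ("sub02", 1)]:
    for i in range(n):
        os.makedirs(os.path.join(db_images, sub, "EP%02d_01" % i))

def videos_per_subject(root):
    """Count video-sample folders under each subject folder."""
    counts = []
    for sub in sorted(os.listdir(root)):
        sub_path = os.path.join(root, sub)
        if os.path.isdir(sub_path):
            counts.append(len(os.listdir(sub_path)))
    return counts

print(videos_per_subject(db_images))  # [3, 1]
```

Counting from the directory tree directly keeps the sample counts consistent with whatever folders actually exist, including after ignored samples have been deleted.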

IcedDoggie commented 6 years ago

@mg515 Great! I would appreciate it if you could create a pull request to fix that. Initially the subfolder numbers were all hardcoded; I changed to the current approach, but it has problems on other PCs.

mg515 commented 6 years ago

I will do a pull request once I get my code running successfully, at least for 'st' learning on CASME2, the basic example as I see it. One problem I also found relates to the temporal learning: the output layer is defined as data_dim, but it should be n_exp, i.e. the number of predicted classes.

IcedDoggie commented 6 years ago

@mg515 I added the classes "n_exp" to the temporal module in train.py and models.py. data_dim is the input to the first layer of the temporal module; it is the dimensionality of the spatial features extracted from the CNN.

IcedDoggie commented 6 years ago

@happy1111qwwe how do you call python main.py? I suspect your db name is wrong.

happy1111qwwe commented 6 years ago

Sorry, I don't understand what you mean.

IcedDoggie commented 6 years ago

how do you call the script?

happy1111qwwe commented 6 years ago

I run this code in PyCharm directly.

IcedDoggie commented 6 years ago

No idea. Try to follow the guide written in the repo:

  1. Create the database folder using the file structure I listed.
  2. Make sure you pass the correct parameters to main.py.

Or maybe you can provide more details so that we can narrow down the problem.

happy1111qwwe commented 6 years ago

I just run this code in PyCharm, and I have passed parameters to main.py as shown in the following figure.

Then I changed to the right path on my computer and ran main.py. That is when the error occurs. I have tried global r, but it doesn't work, and I also have other questions. I just want to ask whether I need to do something in main.py. Thanks.


happy1111qwwe commented 6 years ago

When I made all the variables global, it could run, but what is C_label.txt?

Traceback (most recent call last):
  File "/home/lhy/Desktop/ME/main.py", line 58, in main(args)
  File "/home/lhy/Desktop/ME/main.py", line 13, in main
    train(args.batch_size, args.spatial_epochs, args.temporal_epochs, args.train_id, args.dB, args.spatial_size, args.flag, args.objective_flag, args.tensorboard)
  File "/home/lhy/Desktop/ME/train.py", line 129, in train
    labelperSub = label_matching(db_home, dB, subjects, VidPerSubject)
  File "/home/lhy/Desktop/ME/utilities.py", line 95, in label_matching
    label=np.loadtxt(workplace+'Classification/'+ dB +'_label.txt')
  File "/usr/local/lib/python3.6/dist-packages/numpy/lib/npyio.py", line 917, in loadtxt
    fh = np.lib._datasource.open(fname, 'rt', encoding=encoding)
  File "/usr/local/lib/python3.6/dist-packages/numpy/lib/_datasource.py", line 260, in open
    return ds.open(path, mode, encoding=encoding, newline=newline)
  File "/usr/local/lib/python3.6/dist-packages/numpy/lib/_datasource.py", line 616, in open
    raise IOError("%s not found." % path)
OSError: /home/lhy/Desktop/media/ice/OS/Datasets/C/Classification/C_label.txt not found.


IcedDoggie commented 6 years ago

The dataset name should not be "C". Follow the guide: if you are working on CASME2_Optical, then your dataset name MUST be "CASME2_Optical".

IcedDoggie commented 6 years ago

The C_label.txt-not-found error occurs because the code cannot find a dataset called "C", and hence is unable to create the .txt file.
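The file name in the error is derived from the --dB argument, which is why a wrong name like "C" produces C_label.txt. A sketch of that path construction, mirroring the label_matching() call in the traceback (the workplace root here is an example):

```python
# Example path construction; "workplace" is a placeholder root, and the
# label file name is built from the --dB argument.
workplace = "/media/ice/OS/Datasets/CASME2_Optical/"
dB = "CASME2_Optical"
label_file = workplace + "Classification/" + dB + "_label.txt"
print(label_file)
```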

IcedDoggie commented 6 years ago

Hope it helps.

happy1111qwwe commented 6 years ago

db_images = db_path + db_name + "/" + db_name + "/"

Why is there a double db_name + "/", such as db_path/CASME2_Optical/CASME2_Optical?

IcedDoggie commented 6 years ago

Because the second db_name contains the images, whereas the first db_name is a general folder for the specified database.
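With an example root, the path works out as follows (the root path is illustrative):

```python
db_path = "/media/ice/OS/Datasets/"  # example root
db_name = "CASME2_Optical"

# Outer folder groups everything for the database; inner folder holds images.
db_images = db_path + db_name + "/" + db_name + "/"
print(db_images)  # /media/ice/OS/Datasets/CASME2_Optical/CASME2_Optical/
```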

happy1111qwwe commented 6 years ago

Traceback (most recent call last):
  File "/home/lhy/Desktop/ME/pynvml/pynvml.py", line 644, in _LoadNvmlLibrary
    nvmlLib = CDLL("libnvidia-ml.so.1")
  File "/usr/lib/python3.6/ctypes/__init__.py", line 348, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libnvidia-ml.so.1: cannot open shared object file: No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/lhy/Desktop/ME/main.py", line 62, in main(args)
  File "/home/lhy/Desktop/ME/main.py", line 15, in main
    train(args.batch_size, args.spatial_epochs, args.temporal_epochs, args.train_id, args.dB, args.spatial_size, args.flag, args.objective_flag, args.tensorboard)
  File "/home/lhy/Desktop/ME/train.py", line 205, in train
    gpu_observer()
  File "/home/lhy/Desktop/ME/utilities.py", line 515, in gpu_observer
    nvmlInit()
  File "/home/lhy/Desktop/ME/pynvml/pynvml.py", line 608, in nvmlInit
    _LoadNvmlLibrary()
  File "/home/lhy/Desktop/ME/pynvml/pynvml.py", line 646, in _LoadNvmlLibrary
    _nvmlCheckReturn(NVML_ERROR_LIBRARY_NOT_FOUND)
  File "/home/lhy/Desktop/ME/pynvml/pynvml.py", line 310, in _nvmlCheckReturn
    raise NVMLError(ret)
pynvml.pynvml.NVMLError_LibraryNotFound: NVML Shared Library Not Found

IcedDoggie commented 6 years ago

You need to get pynvml; it's in the repo. Just put pynvml in your working directory and see if it works.
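Note that this particular error comes from the missing NVIDIA driver library (libnvidia-ml.so.1), not from pynvml itself. If you cannot install the driver on that machine, one option is to guard the NVML call so the run degrades gracefully; a hedged sketch (gpu_observer_safe is a hypothetical name, not a function in the repo, and the gpu_observer() call in train.py could be guarded the same way):

```python
def gpu_observer_safe():
    """Try NVML initialisation; return False instead of crashing when the
    driver library (libnvidia-ml.so.1) or pynvml itself is missing."""
    try:
        from pynvml import nvmlInit  # the repo bundles a pynvml/ folder
        nvmlInit()
        return True
    except Exception:  # ImportError, NVMLError, missing shared library
        return False

print(gpu_observer_safe())
```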