Hi, I have added a description of how to run a single DB and CDE. You may need to follow the file structure listed as well.
A simple run for a single DB using optical flow:
python main.py --dB 'CASME2_Optical' --batch_size=30 --spatial_epochs=100 --temporal_epochs=100 --train_id='default_test' --spatial_size=224 --flag='st'
Do not hesitate to let me know about any technical difficulties or any confusion with the documentation.
Hello, when I tried to run your code I modified the folder path, root_db_path = "/mnt/data1/lhh_data/", and then I encountered this problem:
Traceback (most recent call last):
File "main.py", line 56, in
1) Creating a directory called "Weights" in ../lhh_data/ should help, my bad. 2) Sorry, I don't get the question. What do you mean by "code for the data"?
Thank you very much for your answer.
Traceback (most recent call last):
File "main.py", line 56, in
Hi again. :)
Try this suggestion: https://github.com/keras-team/keras/issues/3945
Hi, which file loads the pictures and the picture labels? Or how do you deal with pictures and labels?
Hello.
Loading pictures is done in utilities.py, in Read_Input_Images(). Loading picture labels is also in utilities.py, in label_matching(). The conversion from emotion labels to numbers is done in labelling.py.
Hope it helps.
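If it helps to see the idea, here is a rough sketch of the emotion-label-to-number step; the class names and numeric codes below are assumptions for illustration, not necessarily the exact mapping used in labelling.py.

```python
# Hypothetical sketch of the label conversion done in labelling.py.
# The emotion classes and their numeric codes are assumed for illustration.
import numpy as np

EMOTION_TO_CLASS = {
    "happiness": 0,
    "disgust": 1,
    "repression": 2,
    "surprise": 3,
    "others": 4,
}

def labels_to_onehot(emotion_labels):
    """Map a list of emotion strings to one-hot vectors for an n-class setup."""
    n_exp = len(EMOTION_TO_CLASS)
    onehot = np.zeros((len(emotion_labels), n_exp), dtype=np.float32)
    for i, name in enumerate(emotion_labels):
        onehot[i, EMOTION_TO_CLASS[name]] = 1.0
    return onehot

# Example: produces shape (3, 5), matching the 5-class label shapes used in this repo.
print(labels_to_onehot(["happiness", "others", "disgust"]).shape)
```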
Sorry to bother you again
CASME2_Optical
arrived
Loaded Images into the tray...
Loaded Labels into the tray...
/home/lhh/micro-expression/scr-LSTM/Micro-Expression-with-Deep-Learning-master/models.py:104: UserWarning: Update your MaxPooling2D
call to the Keras 2 API: MaxPooling2D((2, 2), data_format="channels_first", strides=(2, 2))
model.add(MaxPooling2D((2,2), strides=(2,2),dim_ordering="th"))
/home/lhh/micro-expression/scr-LSTM/Micro-Expression-with-Deep-Learning-master/models.py:112: UserWarning: Update your MaxPooling2D
call to the Keras 2 API: MaxPooling2D((2, 2), data_format="channels_first", strides=(2, 2))
model.add(MaxPooling2D((2,2), strides=(2,2),dim_ordering="th"))
/home/lhh/micro-expression/scr-LSTM/Micro-Expression-with-Deep-Learning-master/models.py:120: UserWarning: Update your MaxPooling2D
call to the Keras 2 API: MaxPooling2D((2, 2), data_format="channels_first", strides=(2, 2))
model.add(MaxPooling2D((2,2), strides=(2,2),dim_ordering="th"))
/home/lhh/micro-expression/scr-LSTM/Micro-Expression-with-Deep-Learning-master/models.py:128: UserWarning: Update your MaxPooling2D
call to the Keras 2 API: MaxPooling2D((2, 2), data_format="channels_first", strides=(2, 2))
model.add(MaxPooling2D((2,2), strides=(2,2),dim_ordering="th"))
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/common_shapes.py", line 686, in _call_cpp_shape_fn_impl
input_tensors_as_shapes, status)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/errors_impl.py", line 473, in exit
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimension 0 in both shapes must be equal, but are 3 and 64 for 'Assign' (op: 'Assign') with input shapes: [3,3,224,64], [64,3,3,3].
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "main.py", line 56, in
File "/home/lhh/micro-expression/scr-LSTM/Micro-Expression-with-Deep-Learning-master/models.py", line 139, in VGG_16 model.load_weights(weights_path) File "/home/lhh/.pythonlib/lib/python3.5/site-packages/keras/models.py", line 720, in load_weights topology.load_weights_from_hdf5_group(f, layers) File "/home/lhh/.pythonlib/lib/python3.5/site-packages/keras/engine/topology.py", line 3048, in load_weights_from_hdf5_group K.batch_set_value(weight_value_tuples) File "/home/lhh/.pythonlib/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py", line 2183, in batch_set_value assign_op = x.assign(assign_placeholder) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/variables.py", line 573, in assign return state_ops.assign(self._variable, value, use_locking=use_locking) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/state_ops.py", line 276, in assign validate_shape=validate_shape) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gen_state_ops.py", line 57, in assign use_locking=use_locking, name=name) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper op_def=op_def) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2958, in create_op set_shapes_for_outputs(ret) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2209, in set_shapes_for_outputs shapes = shape_func(op) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2159, in call_with_requiring return call_cpp_shape_fn(op, require_shape_fn=True) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/common_shapes.py", line 627, in call_cpp_shape_fn require_shape_fn) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/common_shapes.py", line 691, in _call_cpp_shape_fn_impl raise ValueError(err.message) ValueError: Dimension 0 in both shapes must be equal, but are 3 and 64 for 'Assign' (op: 'Assign') with input shapes: [3,3,224,64], [64,3,3,3].
Are you using "channels first" in Keras?
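For reference, a [3,3,224,64] vs [64,3,3,3] mismatch is the typical symptom of building the model with one channel ordering and loading VGG weights saved in the other. A minimal sketch of checking and forcing the ordering before the model is built; whether this alone fixes it here is an assumption:

```python
# Sketch: check and force the Keras image data format before building VGG_16.
# The same setting can also go into ~/.keras/keras.json as
# "image_data_format": "channels_first".
from keras import backend as K

print(K.image_data_format())               # 'channels_first' or 'channels_last'
K.set_image_data_format("channels_first")  # match channels-first ("th"-ordered) weights
```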
I found the cause of that error, but I get a new one:
CASME2_Optical
arrived
Loaded Images into the tray...
Loaded Labels into the tray...
2018-06-12 21:00:34.470576: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2018-06-12 21:00:34.816704: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:03:00.0
totalMemory: 10.91GiB freeMemory: 10.61GiB
2018-06-12 21:00:34.816772: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:03:00.0, compute capability: 6.1)
Train_X_shape: (237, 9, 150528)
Train_Y_shape: (235, 5)
Test_X_shape: (9, 9, 150528)
Test_Y_shape: (11, 5)
X_shape: (2133, 3, 224, 224)
y_shape: (2115, 5)
test_X_shape: (81, 3, 224, 224)
test_y_shape: (99, 5)
b'GeForce GTX 1080 Ti': 3701.4 MB free, 7471.0 MB used, 11172.4 MB total
b'GeForce GTX 1080 Ti': 428.4 MB free, 10744.0 MB used, 11172.4 MB total
Traceback (most recent call last):
File "main.py", line 57, in
There are certain files to be ignored due to small class sizes. As in another issue thread, the sample-ignoring part of the code works on my PC but not on others'.
Try deleting: ['sub09/EP02_02f/', 'sub24/EP02_07/']
I tried deleting these files in CASME2_Optical, but I still get X_shape: (2133, 3, 224, 224) and y_shape: (2106, 5):
ValueError: Input arrays should have the same number of samples as target arrays. Found 2133 input samples and 2106 target samples.
How about these: ['sub09/EP13_02/','sub09/EP02_02f/','sub10/EP13_01/','sub17/EP15_01/', 'sub17/EP15_03/','sub19/EP19_04/','sub24/EP10_03/','sub24/EP07_01/', 'sub24/EP07_04f/','sub24/EP02_07/','sub26/EP15_01/']
I have a question: which file should I modify? I think the file I modified is wrong.
CASME2_Optical/CASME2_Optical/sub0n/EPxx_xx
Delete the EPs that are listed.
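If it helps, something like this removes them in one go; db_images below is just an example path, adjust it to your own setup:

```python
# Sketch: delete (or move) the ignored samples from the inner image folder.
import os
import shutil

db_images = "/mnt/data1/lhh_data/CASME2_Optical/CASME2_Optical/"  # example path
ignored = ['sub09/EP13_02/', 'sub09/EP02_02f/', 'sub10/EP13_01/',
           'sub17/EP15_01/', 'sub17/EP15_03/', 'sub19/EP19_04/',
           'sub24/EP10_03/', 'sub24/EP07_01/', 'sub24/EP07_04f/',
           'sub24/EP02_07/', 'sub26/EP15_01/']

for rel in ignored:
    path = os.path.join(db_images, rel)
    if os.path.isdir(path):
        shutil.rmtree(path)   # or shutil.move(path, backup_dir) to keep a copy
        print("removed", path)
```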
Train_X_shape: (237, 9, 150528)
Train_Y_shape: (228, 5)
Test_X_shape: (9, 9, 150528)
Test_Y_shape: (7, 5)
X_shape: (2133, 3, 224, 224)
y_shape: (2052, 5)
test_X_shape: (81, 3, 224, 224)
test_y_shape: (63, 5)
ValueError: Input arrays should have the same number of samples as target arrays. Found 2133 input samples and 2052 target samples.
Something is wrong with your Train_Y and Test_Y.
Do you have any suggestions for this error? I didn't modify any details in the code. Thank you again for your kind help.
To clarify, the first dimension of train_x and train_y is the number of samples. What I usually face is the first dimension of train_x being less than that of train_y, because the code doesn't omit certain samples. This particular problem I have never faced before.
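A quick way to see where the counts diverge is to print the first dimension of every array right before training; a generic sketch (the dummy arrays are only there to keep the example self-contained):

```python
import numpy as np

def check_sample_counts(**arrays):
    """Print the sample count (first dimension) of each array and flag mismatches."""
    counts = {name: a.shape[0] for name, a in arrays.items()}
    print(counts)
    if len(set(counts.values())) > 1:
        raise ValueError("Sample-count mismatch: %s" % counts)

# Dummy shapes mirroring the log above; pairing 2133 with 2052 would raise here.
check_sample_counts(X=np.zeros((2133, 3, 224, 224)), y=np.zeros((2133, 5)))
```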
I've found the problem; for me it is with the get_subfolders_num function, especially with how IgnoredSamples_index gets calculated and then passed on. To me it makes no sense, and I cannot see how the code could work with it. I've written my own function to calculate the number of video samples per subject, and I no longer get stuck on the faulty data dimensions.
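A minimal illustration of that idea, just the counting of clip folders per subject rather than the actual fix:

```python
# Count video-sample (EPxx_xx) folders under each subject folder of the
# inner image directory, e.g. CASME2_Optical/CASME2_Optical/sub09/EP13_02/.
import os

def samples_per_subject(db_images):
    counts = {}
    for subject in sorted(os.listdir(db_images)):
        subject_dir = os.path.join(db_images, subject)
        if os.path.isdir(subject_dir):
            counts[subject] = sum(
                os.path.isdir(os.path.join(subject_dir, clip))
                for clip in os.listdir(subject_dir))
    return counts

# samples_per_subject(".../CASME2_Optical/CASME2_Optical/")
# returns a dict like {'sub01': n1, 'sub02': n2, ...}
```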
@mg515 Great! I would appreciate it if you could create a pull request to fix that. Initially the subfolder numbers were all hardcoded, and I decided to change to the current approach, but the current one has problems on other PCs.
I will do a pull request once I get my code running successfully, at least for 'st' learning on CASME2, which is the basic example as I see it. One problem I also found is related to the temporal learning: the output layer is defined with data_dim, but it should be n_exp, i.e. the number of predicted classes.
@mg515 I added the number of classes, n_exp, to the temporal module in train.py and models.py. data_dim is the input to the first layer of the temporal module; it is the dimension of the spatial features extracted from the CNN.
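For anyone following along, a minimal sketch of a temporal module wired that way; the layer sizes and the feature dimension are illustrative assumptions, not the exact values in models.py:

```python
# Illustrative temporal module: an LSTM over per-frame spatial (CNN) features.
# data_dim = dimension of the spatial feature vector per timestep (assumed 4096 here),
# n_exp   = number of expression classes predicted by the output layer.
from keras.models import Sequential
from keras.layers import LSTM, Dense

timesteps, data_dim, n_exp = 9, 4096, 5

model = Sequential()
model.add(LSTM(512, input_shape=(timesteps, data_dim)))
model.add(Dense(n_exp, activation="softmax"))  # output sized by n_exp, not data_dim
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```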
@happy1111qwwe How do you call python main.py? I suspect your DB name is wrong.
Sorry, I don't understand what you mean.
How do you call the script?
I run this code in PyCharm directly.
No idea. Try to follow the guide written in the repo:
1) Create the database folder using the file structure I listed.
2) Make sure you pass the correct parameters to main.py.
Or maybe you can provide more details so that we can narrow down the problem.
I just run this code in PyCharm, and I have passed the parameters to main.py as shown in the following figure.
Then I changed the path to the right one on my computer and ran main.py. That is when the error occurs. I have tried 'global r', but it doesn't work, and I have some other questions as well. I just want to ask whether I need to do something in main.py. Thanks.
When I make all the variables global, it can run, but what is C_label.txt?
Traceback (most recent call last):
File "/home/lhy/Desktop/ME/main.py", line 58, in
The dataset name should not be "C". Follow the guide: if you are working on CASME2_Optical, then your dataset name MUST be "CASME2_Optical".
The C_label.txt not-found error occurs because the code cannot find a dataset called "C" and is therefore unable to create the .txt file.
Hope it helps.
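In other words, the label .txt name follows the --dB argument; the exact string building in the code may differ slightly, but the idea is:

```python
# Hypothetical illustration of how the label file name tracks the dataset name.
dB = "CASME2_Optical"            # value passed via --dB
label_file = dB + "_label.txt"   # -> "CASME2_Optical_label.txt"
print(label_file)
# Passing --dB 'C' makes the code look for "C_label.txt",
# which is the file-not-found error reported above.
```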
db_images = db_path + db_name + "/" + db_name + "/"
Why is there a double db_name + "/", such as db_path/CASME2_Optical/CASME2_Optical?
Because the second db_name folder contains the images, whereas the first db_name is a general folder for the specified database.
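A concrete example of that layout (the root path here is only an example):

```python
# The double-folder layout described above; only CASME2_Optical is from the repo,
# the root path is illustrative.
db_path = "/mnt/data1/lhh_data/"
db_name = "CASME2_Optical"
db_images = db_path + db_name + "/" + db_name + "/"
print(db_images)
# -> /mnt/data1/lhh_data/CASME2_Optical/CASME2_Optical/
# The inner folder holds the image subfolders (sub01/EPxx_xx/...),
# while the outer folder is the general folder for that database.
```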
Traceback (most recent call last):
File "/home/lhy/Desktop/ME/pynvml/pynvml.py", line 644, in _LoadNvmlLibrary
nvmlLib = CDLL("libnvidia-ml.so.1")
File "/usr/lib/python3.6/ctypes/__init__.py", line 348, in __init__
self._handle = _dlopen(self._name, mode)
OSError: libnvidia-ml.so.1: cannot open shared object file: No such file or directory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/lhy/Desktop/ME/main.py", line 62, in
You need to get pynvml; it is in the repo. Just put pynvml in your working directory and see if it works.
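For reference, the "GeForce GTX 1080 Ti: ... MB free, ... MB used, ... MB total" lines earlier in the thread come from a pynvml-style query. A minimal standalone sketch, assuming the NVIDIA driver library (libnvidia-ml) is installed:

```python
# Minimal GPU-memory query via pynvml; fails with the OSError above if
# libnvidia-ml.so.1 (the NVIDIA driver library) is not present.
from pynvml import (nvmlInit, nvmlShutdown, nvmlDeviceGetCount,
                    nvmlDeviceGetHandleByIndex, nvmlDeviceGetName,
                    nvmlDeviceGetMemoryInfo)

nvmlInit()
for i in range(nvmlDeviceGetCount()):
    handle = nvmlDeviceGetHandleByIndex(i)
    mem = nvmlDeviceGetMemoryInfo(handle)
    print("%s: %.1f MB free, %.1f MB used, %.1f MB total" % (
        nvmlDeviceGetName(handle),
        mem.free / 1024.0 ** 2, mem.used / 1024.0 ** 2, mem.total / 1024.0 ** 2))
nvmlShutdown()
```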
Hello, I have read a lot of your code. Could you briefly introduce your project? How do I run your code, and what is your final result?