You need to make a folder named sklt_data_all and move the input csv files into it, or set DATA_PATH to the folder containing your input csv files. You also need to make a folder named sklt_npy_view for saving the npy files.
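For reference, a minimal sketch of that setup (the folder names and DATA_PATH are the ones above; everything else here is illustrative):

```python
import os

DATA_PATH = 'sklt_data_all'   # folder holding the input csv files
NPY_PATH = 'sklt_npy_view'    # folder where the generated npy files are saved

for path in (DATA_PATH, NPY_PATH):
    if not os.path.exists(path):
        os.makedirs(path)     # create the folder if it does not exist yet
```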
Thank you very much! I have executed it, but now I have another question: how can I use the code below? I just ran CS_Ensemble TS-LSTM v1_new.py, but I think maybe I did something wrong.
Looking forward to your reply!
I cannot tell what the error is from this. You need to debug according to the error message.
I just ran CS_Ensemble TS-LSTM v1_new.py, is that right?
I think there is a config.feature_size error. You need to change 2*config.feature_size in feature_only_diff_2 to config.feature_size. Please refer to https://github.com/InwoongLee/TS-LSTM/commit/1257f423f5279444e8dd61741effb668d966a75f. Ensemble v1 and v2 were modified.
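To illustrate the kind of change (this is a hypothetical allocation, not the exact body of feature_only_diff_2):

```python
import numpy as np

class Config(object):
    feature_size = 150   # per the thread below: 75 per person, 150 in total

config = Config()
max_length = 100         # hypothetical maximum sequence length

# Before the fix the buffer was sized with twice the feature dimension:
#   feature = np.zeros((max_length, 2 * config.feature_size))
# After the fix it is sized with the feature dimension itself:
feature = np.zeros((max_length, config.feature_size))
```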
Can you execute this code on your computer? And can you tell me your tensorflow and python versions?
I changed it, but now I have another problem.
I had no problem when executing the code just now. You need to check config.feature_size. Both config.feature_size and evalconfig.feature_size should be 150, not 75. Please check it.
In feature_only_diff_2, inside for batch_step in range(len(data)):, use print len(data[batch_step][0]). If the value is 150, it's okay. But if the value is 75, the data format has a problem.
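A small self-contained version of that check (the helper name is mine; it assumes data is indexed as data[batch_step][frame][feature] as described above):

```python
def check_feature_size(data, expected=150):
    # Print any batch whose per-frame feature width differs from the expected 150.
    for batch_step in range(len(data)):
        width = len(data[batch_step][0])
        if width != expected:
            print('batch %d has feature size %d (expected %d)'
                  % (batch_step, width, expected))
```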
I have printed len(data[batch_step]) and len(data[batch_step][0]), and I get this. I read the data from the files as follows. I didn't change anything.
I called the function as follows.
Looking forward to your answer.
I am so sorry, I wrote the wrong thing about print len(data[batch_step][0]); I executed the code again and got this.
I changed config.feature_size to 75 and got the above.
config.feature_size should be 150, not 75. I'm sorry, our file was wrong: make_csv_action_0149.m produces a 75-dimensional input, so zero padding has to be added to match 150. We modified the code. Please reuse make_csv_action_0149.m from https://github.com/InwoongLee/TS-LSTM/commit/bf57d5dedeffee5c50dd6e3e3768e5db201f9522. make_csv_action_5060.m doesn't have any problem.
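The real fix is in the .m file linked above; purely to illustrate the idea in numpy, padding a 75-dimensional one-person sequence to 150 looks like this:

```python
import numpy as np

frames = np.random.rand(60, 75)            # hypothetical one-person sequence: 60 frames x 75 features
padded = np.zeros((frames.shape[0], 150))  # target width expected by the network
padded[:, :75] = frames                    # copy the real joints, leave the second person as zeros
```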
Thank you very much! I hope I can run the code successfully.
I need your help. Can you help me change 4 GPUs to 3 GPUs in NTU_Code->CS_Ensemble_TS-LSTM_v1.py? I really need your help: I don't have 4 GPUs, only 3.
Looking forward to your answer.
If you see "with tf.device(sw_0):"
sw_0, sw_1, sw_2, and sw_3 are assigned by gpu0, gpu1, gpu2, and gpu3, respectively.
So, you can control the runner assign like this.
sw_0 = runner_assign[0]
sw_1 = runner_assign[1]
sw_2 = runner_assign[2]
sw_3 = runner_assign[3]
->
sw_0 = '/gpu:0'
sw_1 = '/gpu:1'
sw_2 = '/gpu:2'
sw_3 = '/gpu:2'

and another modification is needed like this:

gradient_device = ['/gpu:0','/gpu:1','/gpu:2','/gpu:3']
->
gradient_device = ['/gpu:0','/gpu:1','/gpu:2','/gpu:2']
This is an example.
You can handle it in the way you want.
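Putting the two changes together with the tf.device usage mentioned above (a sketch only; sw_0..sw_3 and gradient_device are the names from the script, the rest is illustrative):

```python
import tensorflow as tf

# Sketch: map the four runners onto three GPUs. The fourth runner and its
# gradients simply share '/gpu:2'.
sw_0, sw_1, sw_2, sw_3 = '/gpu:0', '/gpu:1', '/gpu:2', '/gpu:2'
gradient_device = ['/gpu:0', '/gpu:1', '/gpu:2', '/gpu:2']

with tf.device(sw_3):
    # ops built under this context are placed on the same GPU as sw_2's runner
    dummy = tf.constant(0.0)
```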
Thank you very much! I have changed it successfully! But I have another question: what CUDA version did you use with tensorflow-0.11.0 when you ran the NTU RGB+D code?
Maybe it was 7.5.
If you have a version problem, you can upgrade tensorflow and cuda.
Then edit some of the code, such as tf.initializer, tf.concat, etc., according to the new version of tensorflow.
The UCLA and UWA code is already modified; please refer to the code for the UCLA and UWA datasets.
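For example, two API changes you are likely to hit (a sketch assuming an upgrade to TensorFlow 1.x; check the release notes for your exact target version):

```python
import tensorflow as tf

a = tf.constant([[1.0, 2.0]])
b = tf.constant([[3.0, 4.0]])

# TensorFlow <= 0.12 wrote tf.concat(1, [a, b]); from 1.0 the argument order flipped:
merged = tf.concat([a, b], axis=1)

# tf.initialize_all_variables() was replaced by:
init = tf.global_variables_initializer()
```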
Thank you very much! I am very glad to have your help!
Where is the path "DATA_PATH = 'sklt_data_all'"? I can't find it; can you tell me? I also don't know whether you would be willing to tell me how to execute the NTU code. I have obtained .csv files from the txt files. Looking forward to your reply.
Now, the problem I face is the following.