Closed. hongge831 closed this issue 5 years ago.
Hi, it is the same. The weight processing is separated from the recurrent part, as shown in the following line:
from cuda_IndRNN_onlyrecurrent import IndRNN_onlyrecurrent as IndRNN
Here the IndRNN refers only to the recurrent part. Adding the weight processing with DI, it is the same as the whole IndRNN.
Thanks.
got it
@Sunnydreamrain Thanks for your excellent work. But how can I create the .npy files?
Hi,
Generate the data ndarray: download the NTU RGB+D dataset, save the skeletons into an ndarray, and keep the length and label of each data entry. You can read the data_reader and check which file and which dimension holds what information. Alternatively, use your own data reader; it only needs to feed the skeletons to the network for processing.
Thanks.
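For illustration, here is a minimal sketch of packing variable-length skeleton sequences into a fixed-size ndarray together with per-sample lengths and labels, then saving them as .npy files. The function name, file names, and the (frames, joints, xyz) layout are assumptions for this example, not the repository's actual data_reader format.

```python
import numpy as np

def build_dataset(skeleton_list, labels, max_len=300, num_joints=25, dims=3):
    """Pack variable-length skeleton sequences into one fixed-size ndarray.

    skeleton_list: list of arrays shaped (T_i, num_joints, dims).
    Returns (data, lengths, labels) where data is zero-padded to max_len frames.
    """
    n = len(skeleton_list)
    data = np.zeros((n, max_len, num_joints, dims), dtype=np.float32)
    lengths = np.zeros(n, dtype=np.int64)
    for i, seq in enumerate(skeleton_list):
        t = min(len(seq), max_len)   # truncate sequences longer than max_len
        data[i, :t] = seq[:t]
        lengths[i] = t
    return data, lengths, np.asarray(labels, dtype=np.int64)

# Two dummy sequences with different frame counts:
seqs = [np.ones((10, 25, 3)), np.ones((5, 25, 3))]
data, lengths, labels = build_dataset(seqs, [0, 1], max_len=20)

# File names here are illustrative only.
np.save('skeleton_data.npy', data)
np.save('lengths.npy', lengths)
np.save('labels.npy', labels)
```

Keeping the true lengths alongside the padded array lets the training loop mask out the zero-padded frames.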
@Sunnydreamrain Hi, thanks for your work. I was redoing the experiment on Google Colab and ran into many errors with the loaded numpy array; memory exploded when the file was large. When I tried a smaller file, the errors were all gone. Can you tell me how much GPU memory you used when training on the full NTU data?
Hi, it is not very large. As I recall, it only takes around 2 GB. The memory may grow if the network is large.
@Sunnydreamrain I think something went wrong during multi-threading, because this is what I get after running:
Exception in thread Thread-4:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 754, in run
self.target(*self.args, **self.kwargs)
File "
Exception in thread Thread-5:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 754, in run
self.target(*self.args, **self.kwargs)
File "
After the exceptions, a KeyError for the dict occurs; I think it is caused by the exceptions above.
KeyError: 'data'
The memory of each thread cannot be released, and then the program crashes. Do you have any suggestions for this situation?
The code is based on the SRU shown in the following link. Multiple GPUs are not supported yet. If you want to use multiple GPUs, please use the PyTorch version instead of the CUDA version.
@Sunnydreamrain Another question is about the ndarray data. After I converted the data to an ndarray, the result is actually np.array(list(), list() ...) because each file has a different number of frames. I was hoping to get a proper multidimensional np.array().
What format should we use to make the program run? Should we make every frame an np.array(), or is a plain list OK?
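For context on the question above: when the samples have different frame counts, NumPy can only build a 1-D object array of lists/arrays, not a dense 4-D tensor. Padding every sample to a common length restores a regular array. This is a generic NumPy sketch, not the repository's own preprocessing; the shapes are illustrative.

```python
import numpy as np

# Ragged input: two samples with different frame counts.
a = [np.zeros((10, 50, 3)), np.zeros((7, 50, 3))]
arr = np.asarray(a, dtype=object)   # object array of length 2, not a 4-D tensor
assert arr.dtype == object

# After zero-padding every sample to the same number of frames,
# np.stack yields a proper multidimensional array.
max_t = max(x.shape[0] for x in a)
padded = [np.pad(x, ((0, max_t - x.shape[0]), (0, 0), (0, 0))) for x in a]
stacked = np.stack(padded)
assert stacked.shape == (2, 10, 50, 3)
```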
@Sunnydreamrain Ah, I found the cause of the thread exception. It was silly of me to have accidentally left some empty frames in the ndarray while processing the raw data, so some samples ended up with shape (20,) instead of (20, 50, 3). That's why np.asarray() couldn't convert the type.
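A quick sanity check like the sketch below can catch such malformed samples before training; the function name and expected dimensionality are assumptions for this example.

```python
import numpy as np

def find_bad_samples(samples, expected_ndim=3):
    """Return indices of samples whose dimensionality is wrong,
    e.g. a (20,) object array caused by empty frames instead of (20, 50, 3)."""
    bad = []
    for i, s in enumerate(samples):
        if not isinstance(s, np.ndarray):
            s = np.asarray(s, dtype=object)
        if s.ndim != expected_ndim:
            bad.append(i)
    return bad

good = np.zeros((20, 50, 3))
broken = np.empty(20, dtype=object)  # mimics a sample containing empty frames
print(find_bad_samples([good, broken]))  # → [1]
```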
Hi, thanks a lot for your great implementation. I'm still in the process of understanding it. Can you kindly let me know the input dimensions of the dataset and what the length should be? I would really appreciate it if you could mention some more details about the dataset.
Hello, thanks for your excellent work. I notice that your implementation for action recognition differs from what the paper formulates. I want to know why you made those changes in your codebase. PS: your code is
return F.relu(input + hx * self.weight_hh.unsqueeze(0).expand(hx.size(0), len(self.weight_hh)))
Can you explain this code? Hoping for your reply.
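For context, the quoted line appears to implement the IndRNN recurrence h_t = relu(W x_t + u ⊙ h_{t-1}), where `input` is the already-projected W x_t (the weight processing handled outside the recurrent module, as noted earlier in this thread) and weight_hh is the per-neuron recurrent weight vector u; unsqueeze/expand just broadcasts u over the batch. A minimal sketch, with made-up sizes, assuming this reading:

```python
import torch
import torch.nn.functional as F

batch, hidden = 4, 8
input = torch.randn(batch, hidden)   # W x_t, precomputed outside the cell
hx = torch.randn(batch, hidden)      # h_{t-1}
weight_hh = torch.randn(hidden)      # u: one recurrent weight per neuron

# The line in question: broadcast u across the batch dimension explicitly.
out = F.relu(input + hx * weight_hh.unsqueeze(0).expand(hx.size(0), len(weight_hh)))

# Equivalent via implicit broadcasting; element-wise, not a matrix product,
# which is what makes each neuron's recurrence independent in IndRNN.
out2 = F.relu(input + hx * weight_hh)
assert torch.allclose(out, out2)
```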