Closed ravikirancap closed 5 years ago
Could you upgrade to pytorch 1.0 and let me know if this solves the issue? If so, I have to update the documentation.
Tried with pytorch 1.0.1 and it is working fine
Ok, thank you! I should change the documentation.
Another quick question (let me know if I should post this as a separate issue instead).
I am currently training a model (already a few hours into training) and I noticed that the number of epochs is too high. Is there a way to just freeze the current state of the model? I have used your pytorch-kaldi toolkit, which provides checkpointing and restarting from the closest checkpoint. Does this codebase also have that feature?
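As a general PyTorch pattern (not a claim about what speaker_id.py does), freezing and resuming training is usually done by saving a dict with the model and optimizer `state_dict()` plus the epoch counter. A minimal sketch; the model, optimizer, and file names here are illustrative, not taken from this repo:

```python
import os
import tempfile

import torch
import torch.nn as nn

# Hypothetical tiny model standing in for CNN_net/DNN1_net/DNN2_net.
model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Save a checkpoint: bundle everything needed to resume later.
ckpt_path = os.path.join(tempfile.mkdtemp(), "checkpoint.pkl")
torch.save({
    "epoch": 5,
    "model_state": model.state_dict(),
    "optimizer_state": optimizer.state_dict(),
}, ckpt_path)

# Later (possibly in a fresh process): rebuild the objects, restore state.
model2 = nn.Linear(4, 2)
optimizer2 = torch.optim.SGD(model2.parameters(), lr=0.1)
ckpt = torch.load(ckpt_path)
model2.load_state_dict(ckpt["model_state"])
optimizer2.load_state_dict(ckpt["optimizer_state"])
start_epoch = ckpt["epoch"] + 1  # resume from the next epoch
```

Saving the optimizer state matters when the optimizer keeps per-parameter buffers (e.g. momentum); restoring only the model weights would silently reset those.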
Hi, I am using torch 0.4.0 as mentioned in the README file. I get the following error. Is this because of a version problem, or do I need to install additional dependencies (apart from the ones mentioned in the README)?
Traceback (most recent call last):
  File "speaker_id.py", line 228, in <module>
    pout=DNN2_net(DNN1_net(CNN_net(inp)))
  File "/home/paperspace/anaconda3/envs/sincnet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/paperspace/SincNet/dnn_models.py", line 448, in forward
    x = self.drop[i](self.act[i](self.ln[i](F.max_pool1d(torch.abs(self.conv[i](x)), self.cnn_max_pool_len[i]))))
  File "/home/paperspace/anaconda3/envs/sincnet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/paperspace/SincNet/dnn_models.py", line 144, in forward
    band_pass_right= torch.flip(band_pass_left,dims=[1])
AttributeError: module 'torch' has no attribute 'flip'
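For reference, `torch.flip` did not exist in PyTorch 0.4.0 (it appears to have landed in 0.4.1), which is why the call in dnn_models.py fails there. A minimal sketch of what the call does, together with an `index_select`-based fallback that also works on older versions (variable names here are illustrative):

```python
import torch

# A 2-D tensor standing in for band_pass_left.
t = torch.tensor([[1., 2., 3.],
                  [4., 5., 6.]])

# What dnn_models.py does on recent PyTorch: reverse along dim 1.
flipped = torch.flip(t, dims=[1])

# Fallback for versions lacking torch.flip: build a reversed index
# [n-1, ..., 1, 0] and gather columns in that order.
idx = torch.arange(t.size(1) - 1, -1, -1).long()
flipped_old = t.index_select(1, idx)
```

Both produce the columns in reverse order, so upgrading (as suggested above) is the simpler fix, but the fallback shows nothing version-specific is going on conceptually.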