harryborison opened this issue 5 years ago
I think it is the same problem I posted in #101. I believe it happens because pytorch-gpgpu-sim launches some GPU kernels that are not part of cuDNN, and gpgpu-sim does not support intercepting multiple input libraries. You could look up the kernel name corresponding to 0x7f45731b4eb0 in the log file (terminal output) and verify that the kernel belongs to pytorch itself rather than cudnn.
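To locate that kernel name, one option is to scan the terminal output for the function-pointer address. A minimal sketch (the helper name `find_kernel_lines` and the sample log lines are hypothetical; real gpgpu-sim output will look different):

```python
def find_kernel_lines(log_lines, address):
    """Return every log line that mentions the given function-pointer address."""
    return [line for line in log_lines if address in line]

# Hypothetical log lines for illustration only; in practice, read your
# captured gpgpu-sim terminal output instead.
sample = [
    "GPGPU-Sim: launching kernel 0x7f45731b4eb0",
    "GPGPU-Sim: kernel done",
]
print(find_kernel_lines(sample, "0x7f45731b4eb0"))
```

Any line the search returns should name the kernel being launched at that address, which tells you whether it comes from cuDNN or from pytorch's own kernels.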
Yes, I saw your question too, and I really want to solve this problem. Is there any way to fix it? I want to use pytorch-gpgpu-sim.
Hi. This is a known issue: it has to do with kernels not being found in libcudnn.so. Please try the instructions in the link below and see if they help. https://docs.google.com/document/d/17fSM2vrWodP8rWR7ctpgaggVXEw0uD2VCAh0Gi4Gpb4/edit?usp=sharing
Hello, can you share your pytorch-gpgpu-sim repo? I have tried many versions and they all failed, so I really hope you can help me.
When I run GPGPU-Sim on plain CUDA programs, everything works well. But when I run pytorch-gpgpu-sim, it does not.
I don't know exactly what is wrong.
Here are my settings and my code. I use Docker:
Ubuntu: 16.04, gcc: 4.8.4, CUDA: 8.0, cuDNN: 6.0
![Untitled](https://user-images.githubusercontent.com/33567924/55298302-e01ef280-5467-11e9-9eae-9e2af8938afa.png)
Finally I get this error and the run stops. My PyTorch test code is very simple:
```python
import torch
from torch.autograd import Variable

a = torch.ones(2, 2)
b = torch.ones(2, 2)
print(a)

a = Variable(a, requires_grad=True).cuda()
b = Variable(b, requires_grad=True).cuda()
b = a + 2
print(b)
```
It works fine without gpgpu-sim, but with gpgpu-sim the error occurs at b = a + 2.
Besides my own code, all of the sample programs in /pytorch-gpgpu-sim/test/ generate errors like this. What is the problem? Please help me.