ghost closed this issue 5 years ago
I solved this problem by commenting out this line of code:
# self.res.setTempMemoryFraction(0.1)
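For reference, the error happens because newer faiss builds removed `StandardGpuResources.setTempMemoryFraction`; later versions expose `setTempMemory`, which takes an absolute byte count instead of a fraction. Rather than deleting the line, a small compatibility shim can call whichever setter the installed build has. This is only a sketch: the helper name and the assumed 1 GiB pool size are mine, not from the repo.

```python
def set_temp_memory_compat(res, fraction, assumed_pool_bytes=1 << 30):
    """Apply a faiss temp-memory limit with whichever API is available.

    Older faiss exposed StandardGpuResources.setTempMemoryFraction(fraction);
    newer builds replaced it with setTempMemory(num_bytes). The 1 GiB pool
    size used to convert the fraction is an illustrative assumption.
    """
    if hasattr(res, "setTempMemoryFraction"):  # old faiss API
        res.setTempMemoryFraction(fraction)
    elif hasattr(res, "setTempMemory"):        # new faiss API: absolute bytes
        res.setTempMemory(int(fraction * assumed_pool_bytes))
    # otherwise: leave faiss's default temp-memory policy untouched
```

In `lossess.py` this would replace the `self.res.setTempMemoryFraction(0.1)` call with `set_temp_memory_compat(self.res, 0.1)`.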
It's in RL-GAN-Net/models/lossess.py, line 201.
That's great. Can I ask which file this line of code is located in?
I solved this problem by commenting out this line of code.
# self.res.setTempMemoryFraction(0.1)
Unfortunately, it didn't work for me. I got this error instead:
ERROR:visdom:[Errno 111] Connection refused
/home/myname/anaconda3/envs/rlgan-venv/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:82: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
Traceback (most recent call last):
  File "main2.py", line 484, in <module>
    main()
  File "main2.py", line 285, in main
    train_loss, _, _ = train(train_loader, model, optimizer, epoch, args, chamfer, visualizer, train_writer)
  File "main2.py", line 444, in train
    loss_1 = chamfer(trans_input, pc_1)
  File "/home/myname/Documents/Git/RL-GAN-Net/models/lossess.py", line 297, in __call__
    loss = self.forward(predict_pc, gt_pc)
  File "/home/myname/Documents/Git/RL-GAN-Net/models/lossess.py", line 256, in forward
    selected_gt_by_predict = selected_gt_by_predict.to(self.opt.device)
RuntimeError: CUDA error: invalid device ordinal
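The final `invalid device ordinal` line usually means the configured device (`self.opt.device` here) names a CUDA index that doesn't exist on the machine, e.g. `cuda:1` on a single-GPU box. A hedged sketch of guarding against that, clamping the requested device string against the actual GPU count (the helper name is mine; in the repo you would pass in `torch.cuda.device_count()`):

```python
def clamp_cuda_ordinal(device_str, num_gpus):
    """Map a requested device string to one that actually exists.

    'CUDA error: invalid device ordinal' is raised when a tensor is moved
    to 'cuda:N' with N >= the number of visible GPUs. Clamp such requests
    to 'cuda:0', or fall back to 'cpu' when no GPU is visible at all.
    """
    if not device_str.startswith("cuda"):
        return device_str                       # 'cpu' etc. pass through
    ordinal = int(device_str.split(":")[1]) if ":" in device_str else 0
    if num_gpus == 0:
        return "cpu"
    return device_str if ordinal < num_gpus else "cuda:0"
```

With this, `args.device = clamp_cuda_ordinal(args.device, torch.cuda.device_count())` before constructing the loss would avoid the crash on machines with fewer GPUs than the config assumes.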
Silly me, I forgot to start the visdom server.
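For anyone else hitting the same `ERROR:visdom:[Errno 111] Connection refused`: the training script expects a visdom server already listening on localhost (port 8097 by default). Starting one in a separate terminal before launching training is enough:

```shell
# start the visdom server in its own terminal (listens on port 8097 by default)
python -m visdom.server
# then launch training as usual in another terminal, e.g.:
# python main2.py
```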
Have you finished running the code?
Hello, thank you for providing this to accompany your work. Really cool stuff here.
I am trying to train the autoencoder and I encountered this error. I was wondering what the solution could be:
ERROR:visdom:[Errno 111] Connection refused
Traceback (most recent call last):
  File "main2.py", line 484, in <module>
    main()
  File "main2.py", line 267, in main
    chamfer = ChamferLoss(args)
  File "/home/Documents/Git/RL-GAN-Net/models/lossess.py", line 201, in __init__
    self.res.setTempMemoryFraction(0.1)
  File "/home/anaconda3/envs/pytorch-venv/lib/python3.7/site-packages/faiss/swigfaiss.py", line 1195, in <lambda>
    __getattr__ = lambda self, name: _swig_getattr(self, StandardGpuResources, name)
  File "/home/paolo/anaconda3/envs/pytorch-venv/lib/python3.7/site-packages/faiss/swigfaiss.py", line 80, in _swig_getattr
    raise AttributeError("'%s' object has no attribute '%s'" % (class_type.__name__, name))
AttributeError: 'StandardGpuResources' object has no attribute 'setTempMemoryFraction'