When I ran `train_expert.py`, I got this error:
RuntimeError: Output 1 of SplitBackward0 is a view and is being modified inplace. This view is the output of a function that returns multiple views. Such functions do not allow the output views to be modified inplace. You should replace the inplace operation by an out-of-place one.
This happened here: https://github.com/toshikwa/gail-airl-ppo.pytorch/blob/4e13a23454600a16d5aeeeb4c09338308115455e/gail_airl_ppo/network/policy.py#L49

After I looked up the relevant information, I found that the error occurs in `log_stds.clamp_(-20, 2)`. From what I read, `clamp_()` is an in-place operation because of the trailing `_` suffix. It tries to modify `log_stds` in place, but `log_stds` is a view produced by `means, log_stds = self.net(states).chunk(2, dim=-1)`, and PyTorch does not allow in-place modification of the views returned by functions like `chunk()` that return multiple views. When I change `clamp_` to `clamp`, the program runs normally. So why did you originally use `clamp_` instead of `clamp`? What were the considerations here?
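For reference, here is a minimal sketch (not from the repo; the tensor and its shape are invented stand-ins for `self.net(states)`) that reproduces the in-place error and shows the out-of-place fix:

```python
import torch

# Hypothetical network output: a non-leaf tensor tracked by autograd.
net_out = torch.randn(4, 6, requires_grad=True) * 1.0

# chunk() returns *views* of net_out (backed by SplitBackward0 in autograd).
means, log_stds = net_out.chunk(2, dim=-1)

raised = False
try:
    log_stds.clamp_(-20, 2)  # in-place op on a multi-output view -> RuntimeError
except RuntimeError:
    raised = True  # PyTorch forbids in-place edits of chunk/split outputs

# The out-of-place version allocates a new tensor, so autograd accepts it.
log_stds = log_stds.clamp(-20, 2)
print("in-place clamp_ raised:", raised)
```

The out-of-place `clamp` costs one extra allocation but keeps the autograd graph valid, which is why swapping `clamp_` for `clamp` makes the script run.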