carpedm20 / ENAS-pytorch

PyTorch implementation of "Efficient Neural Architecture Search via Parameters Sharing"
Apache License 2.0

RuntimeError: grad can be implicitly created only for scalar outputs #58

Open Shn9909 opened 1 year ago

Shn9909 commented 1 year ago

I encountered this strange error; the output is below, thank you. Previously it reported that tensors could not be on the CPU and GPU at the same time, so I added `.cuda()` to the loss, and now it shows this error instead.

Traceback (most recent call last):
  File "D:/xiangmu/ENAS-pytorch-master/main.py", line 56, in <module>
    main(args)
  File "D:/xiangmu/ENAS-pytorch-master/main.py", line 35, in main
    trnr.train()
  File "D:\xiangmu\ENAS-pytorch-master\trainer.py", line 223, in train
    self.train_shared(dag=dag)
  File "D:\xiangmu\ENAS-pytorch-master\trainer.py", line 317, in train_shared
    loss.backward()
  File "C:\Users\sunhaonan\.conda\envs\enas\lib\site-packages\torch\_tensor.py", line 307, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "C:\Users\sunhaonan\.conda\envs\enas\lib\site-packages\torch\autograd\__init__.py", line 150, in backward
    grad_tensors_ = _make_grads(tensors, grad_tensors_)
  File "C:\Users\sunhaonan\.conda\envs\enas\lib\site-packages\torch\autograd\__init__.py", line 51, in _make_grads
    raise RuntimeError("grad can be implicitly created only for scalar outputs")
RuntimeError: grad can be implicitly created only for scalar outputs
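For context, PyTorch raises this error whenever `.backward()` is called on a tensor that is not a scalar (0-dimensional), because autograd cannot implicitly create the output gradient in that case. Without seeing the modified `trainer.py` it is hard to say exactly what changed, but a likely cause is that the loss is no longer being reduced to a scalar before `loss.backward()`. A minimal standalone sketch of the failure and the usual fix (reducing the loss with `.mean()` or `.sum()`):

```python
import torch

# A per-element loss has shape (4,), so it is NOT a scalar.
pred = torch.randn(4, requires_grad=True)
target = torch.zeros(4)
loss = (pred - target) ** 2

try:
    # backward() on a non-scalar tensor without an explicit
    # gradient argument reproduces the reported error.
    loss.backward()
except RuntimeError as e:
    print(e)  # grad can be implicitly created only for scalar outputs

# Fix: reduce the loss to a 0-dim scalar before backward().
loss = ((pred - target) ** 2).mean()
loss.backward()
print(pred.grad.shape)
```

Alternatively, if a per-element gradient is really intended, pass an explicit gradient tensor, e.g. `loss.backward(torch.ones_like(loss))`.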