Technically I ran into two issues:

1) installing torch

To address this, I modified `requirements-torch` so that the last line, `pytorch=1.12.1`, is instead `torch==1.12.1` (note that this installs the CPU version, not the GPU version). It might be better to just direct users to the official PyTorch install page and have them install it that way.
I've addressed this in #38
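For what it's worth, a quick way to double-check which torch build ends up installed (just a sanity check, not part of the fix in #38):

```python
# Sanity check after installing from the corrected requirements-torch:
# prints the resolved torch version and whether a CUDA build is usable.
import torch

print(torch.__version__)          # CPU-only wheels typically report e.g. "1.12.1+cpu"
print(torch.cuda.is_available())  # False if the install has no usable CUDA support
```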
2) running the example pytorch script
The error I'm encountering is `RuntimeError: rnn: hx is not contiguous`
and the full stacktrace is
Traceback (most recent call last):
  File "/tmp/fixed-point-finder/examples/torch/run_FlipFlop.py", line 140, in <module>
    main()
  File "/tmp/fixed-point-finder/examples/torch/run_FlipFlop.py", line 125, in main
    model, valid_predictions = train_FlipFlop()
  File "/tmp/fixed-point-finder/examples/torch/run_FlipFlop.py", line 62, in train_FlipFlop
    losses, grad_norms = model.train(train_data, valid_data,
  File "/tmp/fixed-point-finder/examples/torch/FlipFlop.py", line 245, in train
    avg_loss, avg_norm = self._train_epoch(dataloader, optimizer)
  File "/tmp/fixed-point-finder/examples/torch/FlipFlop.py", line 278, in _train_epoch
    step_summary = self._train_step(batch_data, optimizer)
  File "/tmp/fixed-point-finder/examples/torch/FlipFlop.py", line 305, in _train_step
    batch_pred = self.forward(batch_data)
  File "/tmp/fixed-point-finder/examples/torch/FlipFlop.py", line 155, in forward
    hiddens_bxtxd, _ = self.rnn(inputs_bxtxd, initial_hiddens_1xbxd)
  File "/home/iq/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/iq/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/iq/.local/lib/python3.10/site-packages/torch/nn/modules/rnn.py", line 586, in forward
    result = _VF.rnn_tanh(input, hx, self._flat_weights, self.bias, self.num_layers,
RuntimeError: rnn: hx is not contiguous
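Judging from the trace, the check that fails lives inside PyTorch's RNN kernel, which requires the initial hidden state (hx) to be contiguous. A minimal sketch of how that can happen, assuming (hypothetically, this is not the FlipFlop code) an initial state tensor that is an expand()-ed view:

```python
# Hypothetical minimal reproduction (not the FlipFlop code): nn.RNN raises
# "rnn: hx is not contiguous" when the initial hidden state is a
# non-contiguous view, e.g. a single initial state expand()-ed across the batch.
import torch

rnn = torch.nn.RNN(input_size=3, hidden_size=16, batch_first=True)
inputs_bxtxd = torch.randn(8, 10, 3)                # batch x time x features
h0_1xbxd = torch.zeros(1, 1, 16).expand(1, 8, 16)   # stride-0 view over the batch dim -> not contiguous
# rnn(inputs_bxtxd, h0_1xbxd)                       # RuntimeError: rnn: hx is not contiguous
out, hn = rnn(inputs_bxtxd, h0_1xbxd.contiguous())  # materializing a copy sidesteps the error
```

If the cause in FlipFlop.py is the same, calling .contiguous() on initial_hiddens_1xbxd before the self.rnn(...) call at line 155 would presumably avoid the error.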
OS Information from `lsb_release -a`:
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.4 LTS
Release: 22.04
Codename: jammy