hohe12ly opened 9 months ago
Hey @hohe12ly, I'll look into this soon and get back!
BTW, my software environment has CentOS 8 and NVIDIA V100S-PCIE-32GB GPUs, if this info would be helpful.
Hi, sorry for the delay.
Seems like you don't get errors, you only get warnings. Those warnings are normal.
Anyway, I use the following for my requirements.txt:

```
gluonts==0.13.3
numpy==1.23.5
pytorch_lightning==2.0.4
torch==2.0.0+cu118
wandb
scipy
```
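For anyone reproducing this environment, a quick way to see whether your installed versions match the pins above is to compare them against `importlib.metadata`. This is a minimal sketch (the `parse_pins` / `check_pins` helpers are hypothetical, not part of lag-llama), assuming exact `==` pins:

```python
from importlib import metadata

# The pinned requirements from this thread, inlined for the sketch.
PINS = """\
gluonts==0.13.3
numpy==1.23.5
pytorch_lightning==2.0.4
torch==2.0.0+cu118
wandb
scipy
"""

def parse_pins(text):
    """Return {package: version} for lines carrying an exact '==' pin."""
    pins = {}
    for line in text.splitlines():
        line = line.strip()
        if "==" in line and not line.startswith(("#", "--")):
            name, _, version = line.partition("==")
            pins[name] = version
    return pins

def check_pins(pins):
    """Yield (name, pinned, installed) for packages that differ or are missing."""
    for name, pinned in pins.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            installed = None
        if installed != pinned:
            yield name, pinned, installed

if __name__ == "__main__":
    for name, pinned, installed in check_pins(parse_pins(PINS)):
        print(f"{name}: pinned {pinned}, installed {installed}")
```

Unpinned entries like `wandb` and `scipy` are skipped on purpose, since any installed version satisfies them.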
Thanks, Arjun. I tested your configuration. It works. I still see the divide by zero warnings. As you mentioned, it's normal.
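To illustrate why these messages are benign: a `RuntimeWarning` (which is what NumPy emits on divide-by-zero in array code) is reported but does not interrupt execution, unlike an exception. A minimal sketch, with a hypothetical `scaled_mean` helper standing in for the gluonts internals:

```python
import warnings

def scaled_mean(total, count):
    """Hypothetical helper: return total/count, warning (not raising) on empty input."""
    if count == 0:
        warnings.warn("mean of empty slice", RuntimeWarning)
        return 0.0
    return total / count

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = scaled_mean(10.0, 0)  # emits a RuntimeWarning but still returns

print(result)                       # 0.0 -- execution continued past the warning
print(caught[0].category.__name__)  # RuntimeWarning
```

A divide-by-zero *error* that aborts a run, like the one in the original post, is a different thing from these warnings.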
Since torch==2.0.0+cu118 no longer works on pip's default index server, I had to modify requirements.txt for pip install:
```
--index-url https://download.pytorch.org/whl/cu118
--extra-index-url https://pypi.org/simple
torch==2.0.0
torchvision==0.15.1
torchaudio==2.0.1
numpy==1.23.5
gluonts==0.13.3
pytorch_lightning==2.0.4
datasets
xformers
git+https://github.com/kashif/hopfield-layers@pytorch-2
etsformer-pytorch
reformer_pytorch
einops
opt_einsum
pykeops
scipy
apex
git+https://github.com/microsoft/torchscale
wandb
orjson
```
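With the index lines inside the requirements file, a plain `pip install -r` is enough; passing the same flags on the command line is an equivalent alternative. A sketch, assuming the file above is saved as `requirements.txt`:

```shell
# Resolve torch from the CUDA 11.8 wheel index first, everything else
# from PyPI via the extra index. Equivalent to the two header lines
# inside requirements.txt.
python -m pip install \
    --index-url https://download.pytorch.org/whl/cu118 \
    --extra-index-url https://pypi.org/simple \
    -r requirements.txt
```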
I reported in another issue that the most recent pytorch-lightning does not work with lag-llama. I also tried a few version combinations among pytorch, pytorch-lightning, and gluonts. Eventually I could get the code to run for 385 epochs with the following requirements.txt:

But the run still failed due to a divide-by-zero error in gluonts. Before I try more, I thought it'd be more efficient to ask the question here: could you share a working requirements.txt with version numbers specified?

BTW, the error I got with my requirements.txt is:

Thanks a lot.
Yan