zjukg / NeuralKG

[Tool] For Knowledge Graph Representation Learning
http://neuralkg.zjukg.org/
Apache License 2.0

Error when running under torch 2.0.0+cu118 #48

Open abbydev opened 1 month ago

abbydev commented 1 month ago
(base) root@:~/neuralkg/NeuralKG-main# pip show torch
Name: torch
Version: 2.0.0+cu118
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Home-page: https://pytorch.org/
Author: PyTorch Team
Author-email: packages@pytorch.org
License: BSD-3
Location: /root/miniconda3/lib/python3.8/site-packages
Requires: filelock, typing-extensions, networkx, triton, jinja2, sympy
Required-by: triton, torchvision, torchmetrics, pytorch-lightning
(base) root@:~/neuralkg/NeuralKG-main# python demo.py 
This demo is powered by NeuralKG 
Global seed set to 321
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Traceback (most recent call last):
  File "demo.py", line 121, in <module>
    main(arg_path = 'config/TransE_demo_kg.yaml')
  File "demo.py", line 99, in main
    trainer.fit(lit_model, datamodule=kgdata)
  File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 740, in fit
    self._call_and_handle_interrupt(
  File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 685, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 777, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1145, in _run
    self.accelerator.setup(self)
  File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/accelerators/gpu.py", line 46, in setup
    return super().setup(trainer)
  File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 93, in setup
    self.setup_optimizers(trainer)
  File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 354, in setup_optimizers
    optimizers, lr_schedulers, optimizer_frequencies = self.training_type_plugin.init_optimizers(
  File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 245, in init_optimizers
    return trainer.init_optimizers(model)
  File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/optimizers.py", line 44, in init_optimizers
    lr_schedulers = self._configure_schedulers(lr_schedulers, monitor, not pl_module.automatic_optimization)
  File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/optimizers.py", line 192, in _configure_schedulers
    raise ValueError(f'The provided lr scheduler "{scheduler}" is invalid')
ValueError: The provided lr scheduler "<torch.optim.lr_scheduler.MultiStepLR object at 0x7f07e75aad90>" is invalid
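Some context on the failing check (my reading of the stack trace, not from the thread): the `_configure_schedulers` frame in this pytorch_lightning version validates each scheduler with an `isinstance` test against `torch.optim.lr_scheduler._LRScheduler`. In torch 2.0 the schedulers were rebased onto a new public `LRScheduler` base class, with `_LRScheduler` kept only as a backward-compatibility subclass, so a `MultiStepLR` instance no longer passes the old check. A minimal stand-in sketch (no torch required, class names mimic the assumed torch 2.0 hierarchy):

```python
# Stand-in classes mimicking torch 2.0's lr_scheduler hierarchy.
# Assumption: torch 2.0 keeps _LRScheduler only as an empty subclass
# of the new public LRScheduler base class.
class LRScheduler:                # new public base class in torch 2.0
    pass

class _LRScheduler(LRScheduler): # retained only for backward compatibility
    pass

class MultiStepLR(LRScheduler):  # concrete schedulers subclass LRScheduler
    pass

sched = MultiStepLR()
# Older pytorch_lightning validates schedulers roughly like this:
print(isinstance(sched, _LRScheduler))  # False -> "provided lr scheduler ... is invalid"
```

Under torch 1.12.1, `MultiStepLR` still derives from `_LRScheduler` itself, so the same check passes, which is consistent with the downgrade advice below.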
hzwy3c commented 1 month ago

Hello,

This error occurs because your PyTorch version is too new. Please try downgrading PyTorch to 1.12.1. See issue #38 for reference.
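A possible way to apply the suggested downgrade (the `cu113` wheel tag is an assumption for a CUDA 11.x machine; pick the tag matching your local CUDA toolkit from the PyTorch previous-versions page, and note that `torchvision` must be downgraded in lockstep):

```shell
# Downgrade to the recommended torch 1.12.1; 0.13.1 is the matching torchvision release.
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 \
    --extra-index-url https://download.pytorch.org/whl/cu113
```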