BlinkDL / RWKV-LM

RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it combines the best of RNN and transformer: great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
Apache License 2.0

AttributeError: type object 'Trainer' has no attribute 'add_argparse_args' #48

Closed · dumpsters closed this 1 year ago

dumpsters commented 1 year ago

Using https://github.com/resloved/RWKV-notebooks/blob/master/RWKV_v4neo_Fine_Tuning.ipynb, which uses https://github.com/BlinkDL/RWKV-LM/tree/main/RWKV-v4neo, I get an error when it reaches the training step:

########## work in progress ##########
Traceback (most recent call last):
  File "/content/RWKV-LM/RWKV-v4neo/train.py", line 109, in <module>
    parser = Trainer.add_argparse_args(parser)
AttributeError: type object 'Trainer' has no attribute 'add_argparse_args'

Edit: I had to downgrade to pytorch-lightning==1.9.0 to get past this. On another note, I was under the impression that n_epochs would cap training at that number of epochs, but it just keeps going past it?
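
For anyone hitting this on a newer environment: Trainer.add_argparse_args was removed in pytorch-lightning 2.0, so train.py as written needs a 1.x release. A minimal sketch of a version guard, if you would rather patch the script than pin the package (the 2.x flag names below are assumptions, not a complete list; check which arguments train.py actually reads):

```python
# Sketch only: keep the 1.x behavior when available, otherwise declare the
# few Trainer flags you need by hand (flag names here are assumptions).
import argparse
import pytorch_lightning as pl

parser = argparse.ArgumentParser()
if hasattr(pl.Trainer, "add_argparse_args"):
    # pytorch-lightning 1.x: the call RWKV-v4neo/train.py makes
    parser = pl.Trainer.add_argparse_args(parser)
else:
    # pytorch-lightning >= 2.0 removed the helper; add flags manually
    parser.add_argument("--accelerator", default="gpu")
    parser.add_argument("--devices", type=int, default=1)
    parser.add_argument("--precision", default="bf16")
args = parser.parse_args()
```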

exat500g commented 1 year ago

I downgraded to pytorch-lightning==1.9.4 and that fixed the Trainer error, but then the CUDA kernel build failed on Windows:

bin\nvcc --generate-dependencies-with-compile --dependency-output wkv_cuda_bf16.cuda.o.d -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcompiler /EHsc -Xcompiler /wd4190 -Xcompiler /wd4018 -Xcompiler /wd4275 -Xcompiler /wd4267 -Xcompiler /wd4244 -Xcompiler /wd4251 -Xcompiler /wd4819 -Xcompiler /MD -DTORCH_EXTENSION_NAME=wkv_512_bf16 -DTORCH_API_INCLUDE_EXTENSION_H -IC:\Users\exat500g\miniconda3\envs\cuda1\lib\site-packages\torch\include -IC:\Users\exat500g\miniconda3\envs\cuda1\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\exat500g\miniconda3\envs\cuda1\lib\site-packages\torch\include\TH -IC:\Users\exat500g\miniconda3\envs\cuda1\lib\site-packages\torch\include\THC -IC:\Users\exat500g\miniconda3\envs\cuda1\include -IC:\Users\exat500g\miniconda3\envs\cuda1\Include -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_61,code=compute_61 -gencode=arch=compute_61,code=sm_61 "-t 4" -std=c++17 -res-usage "--maxrregcount 60" --use_fast_math -O3 "-Xptxas -O3" --extra-device-vectorization -DTmax=512 -c D:\ChatGPT\RWKV-LM\RWKV-v4neo\cuda\wkv_cuda_bf16.cu -o wkv_cuda_bf16.cuda.o

nvcc fatal : Unknown option '--maxrregcount 60'
ninja: build stopped: subcommand failed.

BlinkDL commented 1 year ago

remove "--maxrregcount 60" (not supported on win10) from src/model.py
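For reference, that flag sits in the extra_cuda_cflags list passed to torch.utils.cpp_extension.load in RWKV-v4neo/src/model.py. A rough sketch of the edit (the exact source file names, remaining flags, and T_MAX value may differ from your local copy):

```python
# Rough sketch of the CUDA-extension build call in src/model.py with the
# Windows-incompatible register flag dropped; compare against your file.
from torch.utils.cpp_extension import load

T_MAX = 512  # compile-time context length the kernel is built for (assumption)
wkv_cuda = load(
    name=f"wkv_{T_MAX}_bf16",
    sources=["cuda/wkv_op_bf16.cpp", "cuda/wkv_cuda_bf16.cu"],
    verbose=True,
    extra_cuda_cflags=[
        "-t 4",
        "-std=c++17",
        "-res-usage",
        # "--maxrregcount 60",  # removed: nvcc on Windows rejects this form
        "--use_fast_math",
        "-O3",
        "-Xptxas -O3",
        "--extra-device-vectorization",
        f"-DTmax={T_MAX}",
    ],
)
```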

BlinkDL commented 1 year ago

> On another note, I was under the impression that n_epochs would limit the number of epochs to that number but it just keeps going past it?

Yes, that's expected. It only affects the LR schedule; training keeps going past it.
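
In other words, the epoch count feeds the learning-rate decay rather than acting as a stop condition. A toy illustration of that idea (function and parameter names are made up here, not the actual code in train.py):

```python
# Illustrative only: LR decays from lr_init to lr_final over n_epochs worth
# of steps, then stays at lr_final while the training loop keeps running.
import math

def lr_at(step, steps_per_epoch, n_epochs, lr_init=6e-4, lr_final=1e-5):
    decay_steps = steps_per_epoch * n_epochs
    if step >= decay_steps:
        return lr_final  # schedule finished, but training does not stop
    progress = step / decay_steps
    # exponential interpolation in log space between lr_init and lr_final
    return math.exp(math.log(lr_init) + progress * (math.log(lr_final) - math.log(lr_init)))
```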