Closed: dumpsters closed this issue 1 year ago
I downgraded to pytorch-lightning==1.9.4 and that fixed it, but then:
bin\nvcc --generate-dependencies-with-compile --dependency-output wkv_cuda_bf16.cuda.o.d -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcompiler /EHsc -Xcompiler /wd4190 -Xcompiler /wd4018 -Xcompiler /wd4275 -Xcompiler /wd4267 -Xcompiler /wd4244 -Xcompiler /wd4251 -Xcompiler /wd4819 -Xcompiler /MD -DTORCH_EXTENSION_NAME=wkv_512_bf16 -DTORCH_API_INCLUDE_EXTENSION_H -IC:\Users\exat500g\miniconda3\envs\cuda1\lib\site-packages\torch\include -IC:\Users\exat500g\miniconda3\envs\cuda1\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\exat500g\miniconda3\envs\cuda1\lib\site-packages\torch\include\TH -IC:\Users\exat500g\miniconda3\envs\cuda1\lib\site-packages\torch\include\THC -IC:\Users\exat500g\miniconda3\envs\cuda1\include -IC:\Users\exat500g\miniconda3\envs\cuda1\Include -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_61,code=compute_61 -gencode=arch=compute_61,code=sm_61 "-t 4" -std=c++17 -res-usage "--maxrregcount 60" --use_fast_math -O3 "-Xptxas -O3" --extra-device-vectorization -DTmax=512 -c D:\ChatGPT\RWKV-LM\RWKV-v4neo\cuda\wkv_cuda_bf16.cu -o wkv_cuda_bf16.cuda.o
nvcc fatal : Unknown option '--maxrregcount 60'
ninja: build stopped: subcommand failed.
Fix: remove "--maxrregcount 60" (not supported in this quoted form on Windows 10) from the nvcc flag list in src/model.py.
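A minimal sketch of that fix: build the nvcc flag list conditionally instead of hardcoding it. The helper name `nvcc_flags` is made up for illustration; the flags themselves come from the build command above. The resulting list would be passed as `extra_cuda_cflags` to `torch.utils.cpp_extension.load`.

```python
import platform

def nvcc_flags(tmax=512, is_windows=None):
    # Hypothetical helper (not RWKV's actual code): assemble the nvcc
    # flags for the WKV kernel, skipping "--maxrregcount 60" on Windows,
    # where nvcc rejects this quoted single-token form as an unknown option.
    if is_windows is None:
        is_windows = platform.system() == "Windows"
    flags = [
        "-res-usage",
        "--use_fast_math",
        "-O3",
        "--extra-device-vectorization",
        f"-DTmax={tmax}",
    ]
    if not is_windows:
        # Only cap the register count where nvcc accepts the flag as written.
        flags.append("--maxrregcount 60")
    return flags
```

On Linux the register cap stays; on Windows the list simply omits it, which matches the workaround in this thread.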
On another note, I was under the impression that n_epochs would cap training at that number of epochs, but it just keeps going past it?
Yes, that's expected. n_epochs only shapes the LR schedule; it doesn't stop training.
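A sketch of what "only related to the LR schedule" means in practice, assuming an exponential decay from an initial to a final learning rate (the function name and default rates here are illustrative, not RWKV's exact code):

```python
import math

def lr_at_epoch(epoch, n_epochs, lr_init=6e-4, lr_final=1e-5):
    # Illustrative sketch: n_epochs only controls how fast the LR decays.
    # Past n_epochs, progress is clamped at 1.0, so training continues
    # indefinitely at lr_final rather than stopping.
    progress = min(epoch / n_epochs, 1.0)
    # Exponential interpolation from lr_init down to lr_final.
    return math.exp(math.log(lr_init) + progress * (math.log(lr_final) - math.log(lr_init)))
```

So after epoch n_epochs the learning rate simply stays flat at lr_final while training keeps running; you stop it manually or via your own callback.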
Using https://github.com/resloved/RWKV-notebooks/blob/master/RWKV_v4neo_Fine_Tuning.ipynb (which uses https://github.com/BlinkDL/RWKV-LM/tree/main/RWKV-v4neo) gives me an error when it gets to the training part.
edit: I had to downgrade to pytorch-lightning==1.9.0 to get past it.
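For reference, the pin looks like this (1.9.0 per this comment; 1.9.4 was also reported to work earlier in the thread):

```shell
# Pin pytorch-lightning to a 1.9.x release compatible with RWKV-v4neo
pip install pytorch-lightning==1.9.0
```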