Fill-in-the-middle fine-tuning for the Code Llama model 🦙
MIT License
Issue with implementing #4

RuntimeError: Failed to import transformers.models.llama.modeling_llama because of the following error (look up to see its traceback): /home/gpuadmin/anaconda3/envs/venv1/lib/python3.11/site-packages/flash_attn_2_cuda.cpython-311-x86_64-linux-gnu.so: undefined symbol: _ZN3c1021throwNullDataPtrErrorEv
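The undefined symbol in the traceback is an Itanium-ABI-mangled C++ name. Decoding it shows it belongs to PyTorch's `c10` library, which usually means the prebuilt `flash-attn` wheel was compiled against a different PyTorch version than the one installed in the environment (this diagnosis is an assumption, not confirmed in the issue). A minimal sketch of the decoding; the helper name is hypothetical:

```python
import re

def demangle_nested_name(mangled: str) -> str:
    """Tiny Itanium-ABI decoder for simple nested names like
    _ZN3c1021throwNullDataPtrErrorEv -> c10::throwNullDataPtrError().
    Only handles _ZN <len><chars>... E v; a sketch, not a full demangler."""
    # Strip the _ZN prefix and the trailing 'Ev' (end of nested name, void args).
    body = mangled.removeprefix("_ZN").removesuffix("Ev")
    parts = []
    i = 0
    while i < len(body):
        # Each component is a decimal length followed by that many characters.
        m = re.match(r"\d+", body[i:])
        n = int(m.group())
        i += m.end()
        parts.append(body[i:i + n])
        i += n
    return "::".join(parts) + "()"

print(demangle_nested_name("_ZN3c1021throwNullDataPtrErrorEv"))
# -> c10::throwNullDataPtrError()
```

Since `c10::throwNullDataPtrError` lives in libtorch, a common remedy (again, an assumption about this setup) is to rebuild `flash-attn` against the installed torch, e.g. `pip uninstall flash-attn` followed by `pip install flash-attn --no-build-isolation`, or to upgrade torch to the version the wheel was built for.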
I am doing a university assignment and would be very grateful for any kind of assistance. You can contact me at mimalik06@gmail.com.