-
Hi,
I'm sorry, but I can't figure out how to unload a model. I load a model, delete the object, and call the garbage collector, but it does nothing.
How are we supposed to unload a model?
I want to load a mo…
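In plain CPython, dropping every strong reference and running the collector does free the object; what usually keeps a model alive is a hidden reference somewhere else (a pipeline, a cache, a notebook output cell). A minimal sketch with a stand-in `Model` class (hypothetical, not a real library object) shows the pattern; with a GPU framework such as PyTorch you would additionally call `torch.cuda.empty_cache()` afterwards to release cached device memory:

```python
import gc
import weakref

class Model:
    """Stand-in for a loaded model object (hypothetical)."""
    pass

model = Model()
ref = weakref.ref(model)  # observe the object without keeping it alive

# Delete the only strong reference, then force a collection pass.
del model
gc.collect()

print(ref() is None)  # True: the object really was freed
```

If `ref()` still returned the object after this, something else was holding a reference to it, which is the usual reason "unloading" appears to do nothing.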
-
-
I am trying to fine-tune llama3-70B on trn1.32xlarge using distributed training. It failed with the following error:
Container image: f"763104351884.dkr.ecr.{region}.amazonaws.com/pytorch-training-neur…
-
conda activate swift
CUDA_VISIBLE_DEVICES=0,1,2,3 swift sft --model_type llava1_6-mistral-7b-instruct --dataset dataset/abc.jsonl \
This is the command I am using; I don't know what's wrong with it.
…
-
Here is the Google Colab link I used for fine-tuning:
https://colab.research.google.com/drive/1kiALBR1UarPobiftZmiHfwFyk7hTCDnV?usp=sharing
When I fine-tune the LLM-embed for tool retriev…
-
### Your current environment
```text
# Using pip install vllm
vllm==v0.5.1
```
### 🐛 Describe the bug
```text
# My python script to test long text
def run_Mixtral():
    tokenizer = AutoTok…
-
-
File "/home/jovyan/test_inference/llama3-main/inference_DDP_table_description.py", line 1, in
from transformers import AutoTokenizer, AutoModelForCausalLM, Accelerator
ImportError: cannot imp…
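The traceback above arises because `Accelerator` is not exported by `transformers`; it is defined in the separate `accelerate` package, so the import would be `from accelerate import Accelerator`. A stdlib stand-in reproduces the same class of error (here `sqrt`, which lives in `math`, requested from `json`):

```python
# Requesting a name from a module that does not define it raises ImportError,
# exactly like `from transformers import ... Accelerator` does.
try:
    from json import sqrt  # wrong module: sqrt is defined in math, not json
except ImportError as exc:
    print(f"ImportError: {exc}")

from math import sqrt  # correct module

print(sqrt(9.0))  # 3.0
```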
-
Here's the call I'm using to run the script:
```
ACCELERATE_LOG_LEVEL=info accelerate launch --config_file examples/hf-alignment-handbook/configs/accelerate_configs/deepspeed_zero3.yaml --num_proces…
```
-
I encountered the following error while training with 'vicuna-13b-v1.1':
File "/root/MiniGPT-4/minigpt4/models/minigpt_base.py", line 41, in __init__
self.llama_model, self.llama_tokenizer = …