-
**命令行:**
/mnt/workspace/Qwen2/examples/sft# bash finetune.sh -m /mnt/workspace/.cache/modelscope/hub/qwen/Qwen2-1___5B-Instruct-GPTQ-Int4 -d ../../../data.jsonl --deepspeed ds_config_zero2.json --use…
-
**Describe the bug**
What the bug is and how to reproduce it, ideally with screenshots.
`CUDA_VISIBLE_DEVICES=0 swift export --ckpt_dir 'output/qwen2-7b-instruct/v2-20240731-10244…
-
I used this code and trained it on Korean ko-snil data.
adapter_config.json, adapter_model.safetensors, special_tokens_map.json, tokenizer_config.json, tokenizer.json, tokenizer.model
5 files wer…
-
@nguyenhoanganh2002 @anhnhorai
CUDA_VISIBLE_DEVICES=0 python train_dvae_xtts.py --output_path=checkpoints/ --train_csv_path=datasets/metadata_train.csv --eval_csv_path=datasets/metadata_eval.csv --l…
-
### What happened?
I tried to run this command:
`./llama-cli -m phi3:latest.gguf -p "I believe the meaning of life is" -n 128`
and it fails to load the model with the following error:
`llama_i…
-
I hope this message finds you well. I recently had the opportunity to experiment with the Codellama-7b-Instruct model from its GitHub repository and was pleased to observe its promising performance. Encou…
-
### Question
The loss is 0 when training on the Qwen2 backend:
{'loss': 0.0, 'learning_rate': 0.00015267175572519084, 'epoch': 0.0} …
-
- [ ] [Best way to add knowledge to a llm : r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/comments/1ao2bzu/best_way_to_add_knowledge_to_a_llm/)
-
Could the following options be added to the following sections?
### Argument parser:
1. All settings in the `Settings` tab
This image can be interpreted as:
```bash
!python …
-
Traceback (most recent call last):
  File "/content/StableSR/scripts/sr_val_ddpm_text_T_vqganfin_old.py", line 319, in <module>
    main()
  File "/content/StableSR/scripts/sr_val_ddpm_text_T_vqganfin_old.…