-
Hi, when training the tiny model, have you tried freezing the LLM? How did it perform? Also, in the code, why does freezing the LLM not include get_input_embeddings?
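For reference, here is a minimal sketch (not the repo's actual code) of the freezing pattern the question refers to: freeze every LLM weight except the input embedding table, assuming a Hugging Face-style model that exposes `get_input_embeddings()`.

```python
import torch.nn as nn

def freeze_llm_except_input_embeddings(llm: nn.Module) -> None:
    # Freeze all LLM parameters first.
    for param in llm.parameters():
        param.requires_grad = False
    # Leave the input embedding table trainable, e.g. so newly added
    # special/multimodal tokens can still be learned.
    for param in llm.get_input_embeddings().parameters():
        param.requires_grad = True
```

Whether keeping the embeddings trainable actually helps is exactly what the question is asking; the sketch only illustrates the code path being discussed.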
-
Is something wrong with my LLaMA-Factory version, or did I not install it properly?
The full error output is as follows:
```
(langchain) zeng@zeng:~/llm/medical-chatbot$ sh run_training.sh
04/29/2024 15:43:19 - WARNING - llmtuner.hparams.parser - We recommend enable mixed p…
```
-
@Yuliang-Liu Nice work! I ran into a fine-tuning issue as follows:
![e20823216a167f50ed234a1468f8a51](https://github.com/user-attachments/assets/48f6316a-0098-4d9c-a34f-ec32c60a48d1)
2 GPUs of NVIDIA A8…
-
Set up a working session to discuss how ORCA should adopt and use AI. Questions that might be helpful:
- Should ORCA members use LLMs to generate code?
- Should ORCA members get onboarding tra…
-
## Collaboration Call to Action
### Description of Problem:
The AI4Finance team has the domain expertise and resources to train, fine-tune, and benchmark LLMs on CDM, but they do not have domain e…
-
Hi! Thanks for the repository. I wrote a blog post going over some of these papers [here](https://isamu-website.medium.com/understanding-ai-for-stories-d0c1cd7b7bdc), and my general conclusions were:
1. LLMs…
-
For #4 (Milestone: 1)
Contribute DevOps Roadmap data in the format of [frontend.json](https://github.com/Open-Source-Chandigarh/sadakAI/blob/main/finetune_data/frontend_data.json); the file should be…
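If it helps contributors, here is a small, hypothetical helper (not part of sadakAI) that checks whether a contributed file has the same top-level JSON structure and per-entry keys as the existing frontend_data.json; it assumes nothing about the specific field names.

```python
import json

def check_same_shape(reference_path: str, contributed_path: str) -> None:
    """Compare a contributed JSON file against a reference file (e.g. frontend_data.json)."""
    with open(reference_path, encoding="utf-8") as f:
        reference = json.load(f)
    with open(contributed_path, encoding="utf-8") as f:
        contributed = json.load(f)

    # Top-level containers should match (e.g. both are lists of entries).
    assert type(contributed) is type(reference), "top-level JSON type differs"

    # If both are non-empty lists of objects, their entries should use the same keys.
    if isinstance(reference, list) and reference and isinstance(contributed, list) and contributed:
        ref_keys = set(reference[0]) if isinstance(reference[0], dict) else None
        new_keys = set(contributed[0]) if isinstance(contributed[0], dict) else None
        assert ref_keys == new_keys, f"entry keys differ: {ref_keys} vs {new_keys}"
```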
-
Even though I think LLMs generally do not work like this, I still wonder whether we could guard against some otherwise rather dumb LLM that simply learns our repo by heart and then achieves great results…
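One rough, hypothetical way to guard against that (not an existing tool here) would be to flag outputs that share long verbatim character n-grams with files already in the repository:

```python
from pathlib import Path

def char_ngrams(text: str, n: int = 50) -> set[str]:
    # Overlapping character windows; long windows make accidental matches unlikely.
    return {text[i:i + n] for i in range(max(0, len(text) - n + 1))}

def looks_memorized(output: str, repo_root: str, n: int = 50, threshold: float = 0.3) -> bool:
    """Return True if a large share of the output's n-grams appear verbatim in repo files."""
    out_grams = char_ngrams(output, n)
    if not out_grams:
        return False
    repo_grams: set[str] = set()
    for path in Path(repo_root).rglob("*.py"):  # extend the glob to other file types as needed
        repo_grams |= char_ngrams(path.read_text(errors="ignore"), n)
    overlap = len(out_grams & repo_grams) / len(out_grams)
    return overlap >= threshold
```

This only catches near-verbatim copying, not paraphrased memorization, but it is cheap to run as a first filter.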
-
Is the intended way to use OmniBal in the InternVL codebase to add the use_fast_dataset=True setting in the bash script? For example, if you add use_fast_dataset=True in this file: h…
-
Deployed locally on my MacBook Pro (M2 chip).
Results:
[cozyvoice_results.zip](https://github.com/user-attachments/files/16685420/cozyvoice_results.zip)
1) Input text:
自民党は、岸田総理大臣の後任を**選ぶ**総裁選挙に…