-
Hello!
Great project, much easier to understand and hack than the Microsoft one.
Is there any plan to support the prompt tuning feature? https://microsoft.github.io/graphrag/posts/prompt_tuning/ov…
-
### Question
Hi, I cannot reproduce the MME results following [finetune.sh](https://github.com/haotian-liu/LLaVA/blob/main/scripts/v1_5/finetune.sh) on the 665k instruction tuning dataset and evaluation scri…
-
**Describe the bug**
I am running ludwig serve with
'ludwig serve --model_path=/home/ubuntu/ludwig/api_experiment_run/model'
Everything loads as expected, but when I try to curl the API with
'cu…
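For reference, a request against `ludwig serve` generally posts one form field per input feature to the `/predict` endpoint. This is only a sketch: the port is Ludwig's default, and `text_feature` is a placeholder for whatever input feature the served model actually declares.

```shell
# Sketch of a predict request against a running `ludwig serve` instance.
# "text_feature" is a hypothetical input feature name; substitute the
# feature names from your own model's config.
curl http://0.0.0.0:8000/predict -X POST -F "text_feature=hello world"
```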
-
- [ ] [RichardAragon/MultiAgentLLM](https://github.com/richardaragon/multiagentllm)
# RichardAragon/MultiAgentLLM
**DESCRIPTION:** "Multi Agent Language Learning Machine (Multi Agent LLM)
(Update)…
-
### Question
I know there are existing issues on fine-tuning, but I feel some information is still lacking.
I have a custom dataset that I curated:
```json
[{
"image_id": "data/com…
-
A relatively simple question that I couldn't quite clarify by looking through the tech report...
During your pretraining (report section 3.1) or instruction tuning phases (report section 3.2), any…
-
### Model description
LLaMA-VID is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data. LLaMA-VID empowers existing frameworks to support…
-
Thanks for sharing your work on quasi-Givens Orthogonal Fine Tuning! I'm excited to try it out but couldn't find instructions on how to use the code. Could you please provide some guidance on:
1. I…
-
## Explaining Data Patterns in Natural Language with Language Models
2023 BlackboxNLP Workshop at ACL | MSR & Cornell U
Iteratively generate explanations and rank them to find the single most explanatory prompt.
Explanation: symbolic regression,
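The generate-and-rank loop described above can be sketched as follows. This is a hedged illustration, not the paper's implementation: `propose` and `score` are stand-ins for the LLM calls, which in the real system generate candidate explanation strings and score each one by how well it predicts the data (e.g. the LM likelihood of the outputs given the prompt).

```python
# Sketch of an iterative generate-then-rank search for the most
# explanatory prompt. `propose` and `score` are toy stand-ins for
# the LLM-based proposer and scorer described in the paper.

def propose(explanation):
    """Stand-in proposer: mutate the current best explanation.
    A real system would ask an LM for new candidate explanations."""
    return [explanation + " (refined)"]

def score(explanation):
    """Stand-in scorer: here, longer (more specific) explanations win.
    A real system would score by how well the explanation predicts the data."""
    return len(explanation)

def search_best_explanation(seed, rounds=3):
    """Repeatedly generate candidates, rank them by score, keep the best."""
    best = seed
    for _ in range(rounds):
        candidates = [best] + propose(best)
        best = max(candidates, key=score)
    return best

print(search_best_explanation("the output is the sum of the inputs"))
```

With real LM calls swapped in for the two stubs, the loop structure stays the same: only the proposal and scoring functions change.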
## Auto…