-
Thanks for your great work! LLaMA-VID supports single-image input and video input, but does it support multi-image input? What's the quickest way to adapt it to this kind of input?
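One quick adaptation people often try (an assumption on my part, not a confirmed LLaMA-VID API): encode each image independently and concatenate the resulting visual tokens ahead of the text prompt, as a toy sketch with placeholder tokens:

```python
# Toy sketch of the "concatenate visual tokens" pattern; the tokens here
# are placeholder strings, not real LLaMA-VID features.
def pack_multi_image(per_image_tokens, text_tokens):
    """Flatten each image's token list, in order, ahead of the text tokens."""
    packed = []
    for tokens in per_image_tokens:
        packed.extend(tokens)
    packed.extend(text_tokens)
    return packed

images = [["img0_t0", "img0_t1"], ["img1_t0"]]
prompt = ["what", "changed", "?"]
print(pack_multi_image(images, prompt))
# ['img0_t0', 'img0_t1', 'img1_t0', 'what', 'changed', '?']
```

In a real model the placeholder lists would be per-image embedding tensors, and the context budget limits how many images fit.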
Thanks in advance!
-
### Question
Hello, where can I download the images from in the dataset?
[https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-15…
-
Yair,
Have you released any information about how to use the fine-tuned models for inference?
L
-
### **Is your feature request related to a problem? Please describe.**
Cursor IDE is revolutionary in its integrated AI support and functionality. It is unbeaten; all other extensions and add-ons are…
-
Hello, thanks for your great work!
In `blip2_vicuna_instruct.py`, the `bos_token` of the LLM has been changed. Originally it is `<s>` with id 1, but after the following code:
```
self.llm_tokenize…
```
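For anyone else hitting this, here is a toy stand-in (not the actual HuggingFace/LAVIS implementation) illustrating how a call like `add_special_tokens` can re-point `bos_token` away from `<s>` (id 1) to another existing token:

```python
class ToyTokenizer:
    """Minimal stand-in mimicking how add_special_tokens remaps special tokens."""
    def __init__(self):
        self.vocab = {"<unk>": 0, "<s>": 1, "</s>": 2}
        self.bos_token = "<s>"

    @property
    def bos_token_id(self):
        return self.vocab[self.bos_token]

    def add_special_tokens(self, mapping):
        # Unknown strings are appended to the vocab; the named special
        # slot (bos_token, eos_token, ...) is re-pointed to the new string.
        for name, token in mapping.items():
            if token not in self.vocab:
                self.vocab[token] = len(self.vocab)
            setattr(self, name, token)

tok = ToyTokenizer()
print(tok.bos_token, tok.bos_token_id)   # <s> 1
tok.add_special_tokens({"bos_token": "</s>"})
print(tok.bos_token, tok.bos_token_id)   # </s> 2
```

So if the model code re-assigns `bos_token` after loading, generation will be primed with a different id than the LLM was pretrained with, which is exactly the behavior worth double-checking here.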
-
Hello, thank you very much for open-sourcing such an excellent project. When reproducing the experiments in your paper, my out-of-domain evaluation results are close to the numbers reported, but the in-domain supervised results differ considerably: the average English F1 is only around 70 (the paper reports 83.85). I used the B2NER_all data, the language model InternLM2-7b (internlm/internlm2-7b), and the training script …
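To rule out a scoring mismatch, it may help to compare against a reference metric. Below is a minimal sketch of span-level micro-F1, the usual NER metric; this is my assumption of what "F1" means here, and B2NER's exact scorer may differ:

```python
def span_f1(gold, pred):
    """Micro-averaged span-level F1.
    gold/pred: lists of sets of (start, end, type) spans, one set per sentence.
    A prediction counts as correct only if span boundaries AND type match."""
    tp = fp = fn = 0
    for g, p in zip(gold, pred):
        tp += len(g & p)
        fp += len(p - g)
        fn += len(g - p)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

gold = [{(0, 2, "PER"), (5, 7, "LOC")}]
pred = [{(0, 2, "PER"), (5, 7, "ORG")}]  # right span, wrong type -> not a match
print(round(span_f1(gold, pred), 3))  # 0.5
```

A 13-point gap can also come from prompt-format or label-parsing differences rather than training itself, so checking a few raw model outputs against the gold spans is worthwhile.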
-
Hi!
Could you please provide guidelines on using the container, especially for Modules 2 and 3?
I see that you have provided instructions for predictions in the Readme, but not for fine-tuning…
-
**Link to the notebook**
[Fine-tune LLaMA 2 models on SageMaker JumpStart](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart-foundation-models/lla…
-
Hello, thanks for your nice work.
I would like to propose some fine-tuning; it's up to you whether to include it or not:
```
static int handle_rdtsc(struct kvm_vcpu *vcpu)
{
	// Static variables to keep track …
```
-
After several unsuccessful attempts at fine-tuning, where the output was a still frame of noise or a green field, I followed the instructions and skipped ahead to inference to test the base model. It reacted…