-
### Describe the bug
I'm trying to use Flydrive with SvelteKit using any of the Cloudflare adapters, but none of them work. The same imports work with Hono, so I'm guessing there's something going…
-
https://github.com/lm-sys/FastChat/#vicuna-weights
-
Hi,
I'm trying to get the eval results. First, I run `bash run_gcg_individual.sh vicuna behaviors` and get the result file `ndividual_behaviors_vicuna_gcg_offset0_20231007-17:37:56.json`. And after getti…
-
I read the README.md, then created an `llm` folder, downloaded vicuna-7b-v1.1 using huggingface-cli, and renamed it to vicuna-7b like this.
However, when I run the cells in the demo.ipynb file, I get this e…
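For reference, that download-and-rename step can be done in one command; the repo id and target directory here are assumptions based on the description above, since `--local-dir` writes the files straight into the chosen folder:

```shell
# Download vicuna-7b-v1.1 directly into llm/vicuna-7b
# (repo id and paths are assumptions from the issue text).
huggingface-cli download lmsys/vicuna-7b-v1.1 --local-dir llm/vicuna-7b
```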
-
I am trying to calculate the acceptance rate for evaluation. I ran:
`python -m eagle.evaluation.gen_ea_alpha_vicuna --ea-model-path ~/models/EAGLE-Vicuna-7B-v1.3/ --base-model-path ~/models/vicuna-…
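As background, a minimal sketch of how an acceptance rate is commonly defined in speculative decoding: drafted tokens accepted by the target model divided by tokens proposed. EAGLE's evaluation script may compute its metric differently; the function name and inputs here are assumptions, not the repo's code:

```python
def acceptance_rate(accepted_per_step, draft_len):
    """Fraction of drafted tokens the target model accepted.

    accepted_per_step: draft tokens accepted at each decoding step
    draft_len: tokens the draft model proposes per step
    """
    proposed = len(accepted_per_step) * draft_len
    return sum(accepted_per_step) / proposed

# e.g. 3, 1, and 4 tokens accepted out of 5 proposed per step
rate = acceptance_rate([3, 1, 4], draft_len=5)
```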
-
I'm a beginner asking for advice. Since I couldn't find LLaMA 1 on Meta's Hugging Face page and only found versions 2 and 3, I used the [Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) model and, correspondingly, [vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5). imagebind…
-
from easyjailbreak.models.huggingface_model import (HuggingfaceModel, from_pretrained)
from easyjailbreak.models.openai_model import OpenaiModel
…
-
Hi,
Is it possible to load InstructBLIP (Vicuna 13B) across multiple (e.g. 4x16GB) GPUs?
LLaVA (which also uses Vicuna 13B) lets you specify the number of GPUs. InstructBLIP's load_model_…
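Not InstructBLIP's own API, but as a general sketch: Hugging Face transformers can shard a large checkpoint across several GPUs with `device_map="auto"` plus a per-device memory cap. The repo id and the 15GiB cap (leaving headroom on 16GB cards) are assumptions:

```python
def max_memory_map(num_gpus, per_gpu="15GiB"):
    # Cap each card below its full 16GB to leave room for activations.
    return {i: per_gpu for i in range(num_gpus)}

if __name__ == "__main__":
    # Requires `pip install transformers accelerate`; downloads the weights.
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained(
        "lmsys/vicuna-13b-v1.5",       # assumed checkpoint id
        device_map="auto",             # accelerate places layers across GPUs
        max_memory=max_memory_map(4),  # e.g. 4x16GB cards
    )
```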
-
Hello, when I reproduce the results on Vicuna-13B and Llama-2-7B, I cannot get any model output, and the code prints the warning: "Prompt exceeds max length and return an empty string as answer. If t…
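That warning usually means the tokenized prompt is longer than the model's context window, so the harness returns an empty string instead of generating. A minimal sketch of left-truncation as a workaround (the function name and `max_len` handling are assumptions, not the repo's actual code):

```python
def truncate_left(token_ids, max_len):
    """Keep only the last max_len tokens so the most recent context survives."""
    if len(token_ids) <= max_len:
        return token_ids
    return token_ids[-max_len:]
```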
-
Hello authors, thanks again for the excellent work. Say I have a complete model checkpoint and want to load it as:
model = LlavaLlamaForCausalLM.from_pretrained("./checkpoints/llava-v1.5-vicuna-13b-v…