-
Connected to the pydev debugger (build 221.5787.24) 09/02/2024 19:33:36 - INFO - easyeditor.trainer.BaseTrainer - Config: SERACMultimodalTrainingHparams(qformer_name_or_path='bert-base-uncased', state_dict_file='huggi…
-
### Describe the issue
Using shape_inference.quant_pre_process to preprocess the model results in an error, even if I set skip_optimization=True.
![image](https://github.com/microsoft/onnxruntime/assets/12644192…
-
### System Info
I am trying to run this: **bash decode_wavlm_large_linear_vicuna_7b.sh**
But I am not sure what to provide for ckpt_path; I currently do not have a model.pt. Where do I get this…
-
While CogVLM is trained, the LM weights are frozen.
From my observation, however, the LM weights of CogVLM differ from Vicuna's:
Vicuna: https://huggingface.co/lmsys/vicuna-7b-v1.5/tree/main
Co…
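One way to check an observation like this is to diff the two checkpoints parameter by parameter. The sketch below uses plain name-to-list mappings as stand-ins for the real state dicts (with actual checkpoints you would load them via `torch.load` and compare tensors the same way); the checkpoint names and values are hypothetical.

```python
# Hypothetical illustration: report which shared parameters differ
# between two checkpoints, by maximum absolute elementwise difference.

def max_abs_diff(a, b):
    """Largest elementwise |a - b| between two flat weight lists."""
    return max(abs(x - y) for x, y in zip(a, b))

def compare_state_dicts(sd_a, sd_b, tol=1e-6):
    """Return {name: diff} for shared parameters differing beyond tol."""
    shared = sd_a.keys() & sd_b.keys()
    return {name: max_abs_diff(sd_a[name], sd_b[name])
            for name in shared
            if max_abs_diff(sd_a[name], sd_b[name]) > tol}

# Toy stand-ins for the CogVLM LM weights and the Vicuna weights.
cogvlm_lm = {"layers.0.weight": [0.10, 0.20], "embed.weight": [1.0, 2.0]}
vicuna    = {"layers.0.weight": [0.10, 0.25], "embed.weight": [1.0, 2.0]}

# Flags layers.0.weight as differing; embed.weight matches.
print(compare_state_dicts(cogvlm_lm, vicuna))
```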
-
Dear @salman-h-khan,
Thanks for your fantastic work on GeoChat; I am really interested in it, and the checkpoint you provided works for me.
However, when I tried to reproduce it as a beginner in the …
-
in demo.py
model_base = 'ckpts/instructblip-vicuna-7b'
sampler_model_base = 'ckpts/bert-base-uncased'
Are these two weight files downloaded from https://huggingface.co/Salesforce/instructblip-vicun…
-
### Describe the issue
Below is a snippet of my code; I want to generate captions for my images.
```python
def gen_image_caption(self, imgs, temperature=0.2, top_p=0.7, num_beams=1, qs=None, ma…
-
When I ask a question, it is so slow that it takes forever to write one sentence. How can I make it faster? By the way, I am using Vicuna 7B to keep it lightweight, and I am on a Mac with the M2 chip, and that doesn…
-
### System Info
CPU X86
GPU A100
OS Redhat
Driver 535.154.05
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] …
-
Hi Authors,
Thank you for sharing your code! However, I run into an OOM error on larger models (such as Llama-7b or Vicuna-7b). I am using 80GB A100 GPUs. Could you share your configurations on the…