-
groq and openai work fine, but gemini returns an error. Using your API address does succeed, though; I tested 5 times and got an error every time.
![Clip_2024-07-10_22-44-20](https://github.com/ultrasev/llmproxy/assets/170180789/3a3be496-1893-4cfc-aad7-2ca7a70594d2)
-
### The bug
See the title
### The OS that Immich Server is running on
Ubuntu 22.04
### Version of Immich Server
v1.99.0
### Version of Immich Mobile App
nope
### Platform with the issue
- [X]…
-
In SAC.py and SAC_BipedalWalker-v2.py, the code is:
```python
class NormalizedActions(gym.ActionWrapper):
    def _action(self, action):
        low = self.action_space.low
        high = self.action_…
```
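For reference, the wrapper above typically rescales an agent's tanh-squashed actions from [-1, 1] into the environment's [low, high] range. A minimal sketch of that rescaling math as standalone functions (the function names here are hypothetical, not from the repo):

```python
def rescale_action(a, low, high):
    """Map an action from [-1, 1] to [low, high], then clip to bounds."""
    x = low + (a + 1.0) * 0.5 * (high - low)
    return max(low, min(x, high))

def normalize_action(a, low, high):
    """Inverse mapping: from [low, high] back to [-1, 1], then clip."""
    x = 2.0 * (a - low) / (high - low) - 1.0
    return max(-1.0, min(x, 1.0))
```

For example, `rescale_action(0.0, -2.0, 2.0)` gives the midpoint `0.0`, and `normalize_action` undoes the mapping.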
-
Is there a fundamental technical limitation preventing TensorRT support for openai/clip-vit-large-patch14-336? I just want to understand why most 768-dim embedding models are not supported acco…
-
Because I cannot directly access 'https://huggingface.co/models' to download 'openai/clip-vit-large-patch14',
I copied 'clip-vit-large-patch14' to the corresponding '.cache/huggingface/transformers' af…
-
An exception error is thrown. Here is the command-line output:
```
Loading CLIP Interrogator 0.5.4...
load checkpoint from C:\Users\Z5050\Downloads\sd.webui\webui\models\BLIP\model_base_caption_capfilt_large.pth
Loadi…
```
-
Why is the quality of `stable-ts` transcription much worse than that of `openai/whisper`? New lines of text are added where they should not be, and numbers like `0.003` and `0.05` are transcribed as `0 0 3` a…
qo4on updated
1 month ago
-
Executing this line of code:
```python
num_tokens = image_attn_blocks[0].attn_probs.shape[-1]
```
produces an error message:
```
torch.nn.modules.module.ModuleAttributeError: 'ResidualAttentionBlock' object has no attribu…
```
-
Hi,
I hope you are doing well. Actually, I'm confused about one thing regarding the CLIP-G models: your training data looks more like G prompts than L prompts, which I like comma-separated th…
-
Are there any zero-shot classification results? In addition, are there more VLM evaluation results? The current experimental results do not seem convincing enough.