-
Hi @haesleinhuepf, I work at the Max Planck Computing and Data Facility and collaborate closely with @nscherf's group. I was wondering if you would be interested in running the benchmark against some b…
-
## ❓Question
I want to use coremltools to combine some separate Core ML models into one bigger model.
For example, **I have models: main.mlmodel, if_branch.mlmodel, else_branch.mlm…
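Not an answer from the coremltools maintainers, but one possible direction: a minimal sketch of expressing an if/else inside a single Core ML model with the MIL builder's `cond` op. It does not stitch existing `.mlmodel` files together; the shapes, the predicate, and the add/sub branch bodies below are placeholders standing in for the real branch computations.

```python
import coremltools as ct
from coremltools.converters.mil import Builder as mb

# One MIL program containing an if/else (cond) block.
@mb.program(input_specs=[mb.TensorSpec(shape=(1,))])
def prog(x):
    # Scalar boolean predicate (here: "is the input positive?")
    pred = mb.greater(x=mb.squeeze(x=x), y=0.0)

    def true_fn():   # stands in for the if_branch computation
        return mb.add(x=x, y=1.0)

    def false_fn():  # stands in for the else_branch computation
        return mb.sub(x=x, y=1.0)

    return mb.cond(pred=pred, _true_fn=true_fn, _false_fn=false_fn)

model = ct.convert(prog, convert_to="mlprogram")
model.save("combined.mlpackage")
```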
-
**Is your feature request related to a problem? Please describe.**
Anthropic models are hard-coded and buried in a source file. Right now the latest models are not listed (big drop in performance). A…
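A minimal sketch of the kind of change being asked for, assuming a hypothetical `ANTHROPIC_MODELS` environment variable and a `DEFAULT_MODELS` constant; the actual file, names, and configuration mechanism in the project will differ.

```python
import os

# Hypothetical fallback list; in the real project this is the hard-coded list
# buried in the source file.
DEFAULT_MODELS = [
    "claude-3-5-sonnet-latest",
    "claude-3-5-haiku-latest",
]

def available_anthropic_models() -> list[str]:
    """Return model IDs from an env var override, falling back to the defaults."""
    override = os.environ.get("ANTHROPIC_MODELS", "")
    if override.strip():
        return [m.strip() for m in override.split(",") if m.strip()]
    return DEFAULT_MODELS
```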
-
### Misc discussion on performance
Hi all, I'm having trouble maximizing the performance of batch inference for big models on vLLM 0.6.3
(Llama 3.1 70B, 405B, Mistral Large)
My command…
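Not the poster's command (which is cut off above), but a minimal sketch of offline batched inference with vLLM's Python API and the knobs that usually matter for throughput (`tensor_parallel_size`, `gpu_memory_utilization`, `max_num_seqs`); the model name and values are placeholders.

```python
from vllm import LLM, SamplingParams

# Placeholder settings for illustration; tune for your hardware and model.
llm = LLM(
    model="meta-llama/Llama-3.1-70B-Instruct",  # assumed model id
    tensor_parallel_size=4,          # split the model across 4 GPUs
    gpu_memory_utilization=0.90,     # leave a little headroom for activations
    max_num_seqs=256,                # cap on concurrently scheduled sequences
)

params = SamplingParams(temperature=0.0, max_tokens=256)
prompts = ["Summarize the following text: ..."] * 1024  # large batch

# vLLM schedules the whole batch with continuous batching internally.
outputs = llm.generate(prompts, params)
print(outputs[0].outputs[0].text)
```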
-
Problem - When you use many different providers with many different models in the bot, it is very hard to find the one you need without sorting and filtering a single big shared list of models.
Enhanceme…
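Purely as an illustration of the requested grouping (the provider and model names below are made up): the flat model list could be bucketed by provider before it is rendered in the selection menu.

```python
from collections import defaultdict

# Hypothetical flat list as (provider, model) pairs.
models = [
    ("openai", "gpt-4o"),
    ("anthropic", "claude-3-5-sonnet-latest"),
    ("openai", "gpt-4o-mini"),
]

# Group by provider so the menu can show one section per provider.
by_provider: dict[str, list[str]] = defaultdict(list)
for provider, model in models:
    by_provider[provider].append(model)

for provider, names in sorted(by_provider.items()):
    print(provider)
    for name in sorted(names):
        print("  -", name)
```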
-
Asking this because you and I both know very well that even if it is possible to run big models with lower VRAM usage, it is still not practical to use the models like that, because your brain cells wil…
-
### Current Situation
### Proposed Change
---
This (meshery) code repository is really big. Is there any way to optimize it?
![image](https://github.com/user-attachments/assets/b3ffb815-e8…
-
Hi, I am encountering this error while processing images with StarDist. Any idea what might be causing it?
Traceback (most recent call last):
File "/rsrch5/home/neuro_rsrch/apirani/2024_11_0…
-
### Feature request
When initializing a model with SentenceTransformers, we can use the `truncate_dim` argument, like below:
`model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1", truncate_dim=d…
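For reference, a self-contained sketch of the SentenceTransformers behaviour being referenced, with an assumed truncation dimension of 512:

```python
from sentence_transformers import SentenceTransformer

# truncate_dim keeps only the first 512 dimensions of each embedding
# (Matryoshka-style truncation); 512 is an assumed value here.
model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1", truncate_dim=512)

embeddings = model.encode(["example sentence"])
print(embeddings.shape)  # (1, 512)
```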
-
**Describe the issue**
It looks like dbt models are currently run on all the synced CouchDB databases. While there's an issue in cht-sync with syncing multiple databases (#165), if you specify any databas…