-
(allegro) D:\PyShit\Allegro>python single_inference.py ^
More? --user_prompt "A seaside harbor with bright sunlight and sparkling seawater, with many boats in the water. From an aerial view, the boats…
-
Recently, I noticed that the `SentenceTransformers` class has gained the ability to use the ONNX backend, which is very helpful for improving performance, especially on CPUs.
I would like …
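For reference, a minimal sketch of how the ONNX backend can be selected (assuming Sentence Transformers >= 3.2, where the `backend` argument was introduced; the model name below is just an example):

```python
from sentence_transformers import SentenceTransformer

# Select the ONNX backend instead of the default PyTorch one
# (may require the optional extras, e.g. `pip install sentence-transformers[onnx]`).
model = SentenceTransformer("all-MiniLM-L6-v2", backend="onnx")

# Encoding works as usual; ONNX Runtime handles the forward pass on CPU.
embeddings = model.encode(["An example sentence to embed."])
print(embeddings.shape)
```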
-
## Model Zoo (we generally first implement USP and then PipeFusion for a new model)
Waiting for your comments.
## Scheduler
- [ ] Decouple VAE and DiT backbone. They can have different parallel …
-
A lot of the OpenNMT-py ecosystem encourages the use of CTranslate2 downstream for efficient inference. I would really love to see this added to the new eole.
Doing some retraining of some custom multi…
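For context, a rough sketch of the CTranslate2 workflow that OpenNMT-py users rely on today (paths and tokens are placeholders); the ask is an equivalent conversion path for eole checkpoints:

```python
import ctranslate2

# A model previously exported with CTranslate2's OpenNMT-py converter
# (`ct2-opennmt-py-converter`); "ct2_model" is a placeholder directory.
translator = ctranslate2.Translator("ct2_model", device="cpu")

# translate_batch expects pre-tokenized input (a list of token lists).
results = translator.translate_batch([["▁Hello", "▁world"]])
print(results[0].hypotheses[0])
```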
-
Hi @HL-hanlin,
Thank you for your amazing work on Ctrl-Adapter! I was trying to run the code on a single NVIDIA 3090 GPU, but I ran into an OOM error. Could you please enlighten me what GPU resou…
-
### Issue
We are preparing for a TorchGeo tutorial at AGU and need to greatly expand our existing list of tutorials. This issue lists the tutorials that still need to be added and tracks progress tow…
-
**Is your feature request related to a problem? Please describe.**
I’m facing an issue when deploying large models in Kubernetes, especially when the pod’s ephemeral storage is limited. Triton Infere…
-
There is a typo here:
https://github.com/alimama-creative/FLUX-Controlnet-Inpainting/blob/7c00862a8341ab8163e297552cb36627a260fccb/main.py#L17
**The fix**
The fix is to replace `torch_dytpe` wit…
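For illustration only, this is what the correctly spelled keyword looks like in a diffusers-style `from_pretrained` call (the model id below is just an example, not the repository's exact line 17):

```python
import torch
from diffusers import FluxControlNetModel

# `torch_dtype` is the keyword diffusers expects; a misspelled variant would
# be ignored and the dtype would fall back to the default.
controlnet = FluxControlNetModel.from_pretrained(
    "alimama-creative/FLUX.1-dev-Controlnet-Inpainting-Alpha",  # example id
    torch_dtype=torch.bfloat16,
)
```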
-
I think Intel CPUs/GPUs now support more efficient inference with OpenVINO. See example here with LLAVA: https://docs.openvino.ai/2023.2/notebooks/257-llava-multimodal-chatbot-with-output.html
It …
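As a rough, text-only sketch of the optimum-intel route (the linked notebook covers the full LLaVA multimodal pipeline; the model id below is a placeholder):

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "example-org/example-llm"  # placeholder checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
# export=True converts the PyTorch weights to OpenVINO IR on the fly.
model = OVModelForCausalLM.from_pretrained(model_id, export=True)

inputs = tokenizer("What can OpenVINO accelerate?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```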
-
### Description
When attempting to load a GitHub repo into long-term memory, after reading it and saving it to collections, it doesn't get all the files and crashes somewhere along the way.
Logs
```
b" Runni…