-
Hi there, thanks for porting this to Cog/Replicate.
Would you be willing to add auto-transcription via Whisper? See this Hugging Face demo where it has already been implemented:
https://huggi…
-
Please try https://huggingface.co/RWKV/rwkv-6-world-7b and https://huggingface.co/RWKV/v6-Finch-7B-HF (these are slightly different models)
-
I would be very grateful for support for BGE-Multilingual-Gemma2, an LLM-based multilingual embedding model.
https://huggingface.co/BAAI/bge-multilingual-gemma2
-
Hello @aryopg 🤗
I'm Niels and I work as an ML engineer at Hugging Face. I discovered your work when it got featured in AK's daily papers: https://huggingface.co/papers/2410.18860. The paper page lets peop…
-
Hello,
Thank you for your hard work.
I tried to run the benchmark code locally (on an RTX 3060 12 GB) but ran into issues; I know, however, that it is possible to use Hugging Face hub inferenc…
-
Traceback (most recent call last):
  File "app.py", line 79, in <module>
    main()
  File "app.py", line 56, in main
    torch.cuda.init()
  File "/home/user/.local/lib/python3.8/site-packages/torch/cud…
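The crash happens because `torch.cuda.init()` raises when no usable GPU/driver is visible (e.g. on a CPU-only machine). A minimal sketch of a guard, assuming you control the app's startup code (`pick_device` is a hypothetical helper, not from the original `app.py`):

```python
import importlib.util


def pick_device():
    # Hypothetical guard: only touch torch.cuda when a GPU is actually
    # available, falling back to CPU instead of crashing at startup.
    if importlib.util.find_spec("torch") is None:
        return "cpu"  # torch not installed at all
    import torch
    if torch.cuda.is_available():
        torch.cuda.init()
        return "cuda"
    return "cpu"


print(pick_device())
```

With this in place the app starts on CPU-only hosts instead of dying in `torch.cuda.init()`.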
-
The ControlNet models have been released; can you support them? I really need them, please:
canny: https://huggingface.co/XLabs-AI/flux-controlnet-canny-v3
depth: https://huggingface.co/XLabs-AI/flux-controlne…
-
This can be done by adding `"@huggingface/inference": "workspace:^"` to the dependencies and running `pnpm install`,
and then using `@huggingface/inference` everywhere :)
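For reference, the resulting `package.json` entry would look roughly like this (surrounding fields omitted; exact placement in the file is assumed):

```json
{
  "dependencies": {
    "@huggingface/inference": "workspace:^"
  }
}
```

With the `workspace:^` protocol, pnpm links the in-repo `@huggingface/inference` package instead of fetching it from the registry.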
-
| | Name | Publication Date | Model Type | Sizes | URL |
|---|------|------------------|------------|-------|-----|
| - | CodeGen | 03/22 | Decoder | 350M, 2B, 6B, 16B | https://hugg…
-
I patched the model meta to the latest version with `mteb create_meta --results_folder results/{my model}/{my revision} --output_path model_card.md --from_existing jina_embeddings-v3.md`, this …