-
any way to get support for XLM-RoBERTa, DeBERTa, and similar models? (asking for those specific models mostly because they have 1B+ parameter versions, but this could be extended to all BERT and encoder-on…
-
**LocalAI version:**
2.19.3
**Environment, CPU architecture, OS, and Version:**
Win 11, AMD Ryzen 5 4500 6-Core Processor, RTX 3090.
**Describe the bug**
I am trying to use a custom embedding…
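For context, LocalAI models are usually declared via a YAML config file; a minimal sketch of an embedding model definition, assuming the `sentencetransformers` backend (the model name and served name below are illustrative, not taken from this report):

```yaml
# Illustrative LocalAI model config (e.g. models/my-embedder.yaml)
name: my-embedder            # name clients use in API requests
backend: sentencetransformers
embeddings: true             # expose the model on the embeddings endpoint
parameters:
  model: all-MiniLM-L6-v2    # assumed example model, replace with yours
```

With a config like this in place, the model should respond on the OpenAI-compatible `/embeddings` endpoint under the configured `name`.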
-
```
SeeSR-main/test_seesr.py", line 125, in load_tag_model
    model = ram(pretrained='preset/models/ram_swin_large_14m.pth',
SeeSR-main/ram/models/ram_lora.py", line 319, in ram
    model = RAMLora(…
```
-
My server cannot connect to the Hugging Face website, so I manually downloaded the pretrained model used in the code and placed it in the `img2img-turbo-main` folder. After executing the command `pyth…
-
Research and evaluate different LLM models (e.g., BERT, RoBERTa, XLNet) for their suitability in the bioinformatics domain.
-> Research and document the strengths and weaknesses of each model. Crea…
-
**Description**
I deployed a bert_base model from Hugging Face's Transformers library via TorchScript and Triton's PyTorch backend.
But I found **the GPU utilization is around 0**, and performance is…
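One common cause of near-zero GPU utilization with the PyTorch backend is the model instance being scheduled on CPU. A hedged sketch of a `config.pbtxt` that pins one instance to GPU 0 (the model name, batch size, and GPU index here are illustrative assumptions):

```
# Illustrative Triton model configuration (config.pbtxt)
name: "bert_base"
backend: "pytorch"
max_batch_size: 8
instance_group [
  {
    count: 1
    kind: KIND_GPU   # run the instance on GPU rather than CPU
    gpus: [ 0 ]
  }
]
```

It is also worth checking that the TorchScript model was traced/saved on the intended device, since a CPU-traced model can silently keep tensors on CPU.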
-
Hi,
Currently trying to use SOAP for fine-tuning an HF base model, but compilation takes too long. Is this expected?
-
### Feature request
Currently, Transformers.js V3 defaults to using the CPU (WASM) instead of the GPU (WebGPU) due to lack of support and instability across browsers (specifically Firefox and Safari, and Chrom…
-
I am trying to plot a BERT model using this package, but I am unable to do it.
Code:
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("bert-base-uncased")
to…
-
I have gone through the example: opensearch-py-ml/examples/demo_deploy_cliptextmodel.html
The model is correctly registered in the OpenSearch cluster, but the final command of the example:
ml_client.depl…