-
Hello,
I want to run:
```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("NVEagle/Eagle-X5-13B-Chat")
```
But I get:
ValueError: The checkpoint you are tr…
flehn updated 2 weeks ago
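This `ValueError` usually means the checkpoint's `config.json` declares a `model_type` that the installed Transformers version does not recognize. A minimal sketch of the lookup mechanism (the registry below is an illustrative toy subset, not the real Transformers mapping):

```python
# Illustrative sketch: AutoModel-style classes map a config's model_type
# string to an architecture class; an unknown type raises ValueError.
_MODEL_REGISTRY = {
    "llama": "LlamaForCausalLM",
    "gpt2": "GPT2LMHeadModel",
}  # toy subset for illustration only

def resolve_model_class(model_type: str) -> str:
    """Return the architecture name registered for model_type, or raise."""
    if model_type not in _MODEL_REGISTRY:
        raise ValueError(
            f"The checkpoint you are trying to load has model type "
            f"'{model_type}' that is not recognized by this registry."
        )
    return _MODEL_REGISTRY[model_type]
```

Common remedies are upgrading `transformers`, or, for checkpoints that ship custom modeling code, passing `trust_remote_code=True` to `from_pretrained`.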
-
Hi TensorRT-LLM team, your work is incredible.
By following the README for [multimodal models](https://github.com/NVIDIA/TensorRT-LLM/blob/main/examples/multimodal/README.md), we successfully ran…
-
Is it possible to use the `musicgen-melody` model in the [Transformers library](https://github.com/huggingface/transformers) in the same way as the [`musicgen-small` model](https://github.com/facebookresearch/audio…
-
## 🐛 Bug
The issue looks related to **lifted constants** during `torch.export`. I found a commit, https://github.com/pytorch/xla/commit/d8d7e58b78664aff2713e5f25adb3d61c42d44e7, that might be related, but…
-
https://outlines-dev.github.io/outlines/reference/models/transformers/
```python
outlines_tokenizer = outlines.models.TransformerTokenizer(
transformers.AutoTokenizer.from_pretrained(model_…
-
What should I specify as the `model_type` in the JSON file?
```python
from transformers import AutoModel

model = AutoModel.from_pretrained("zxhezexin/openlrm-obj-base-1.1")
```
ValueError: Unrecogniz…
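For context, a checkpoint's `config.json` normally carries a `model_type` key that `AutoModel` uses to select an architecture. A hypothetical minimal fragment (the values below are placeholders, not OpenLRM's actual config):

```json
{
  "model_type": "my-custom-model",
  "hidden_size": 768,
  "num_hidden_layers": 12
}
```

The `model_type` value must match one registered with Transformers (built in, or added via `AutoConfig.register` / `trust_remote_code`); an unregistered value is what triggers this kind of `Unrecognized` ValueError.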
-
We are trying to fine-tune the GLiNER model. However, we get an error when loading the model because it cannot find the config.json file. When we load the config.json file manuall…
-
`(ht240815) PS G:\project\ht\240815\LongWriter> python .\trans_web_demo.py
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████…
-
The loading of Hugging Face models adheres to the following logic: models using `device_map` must forcibly enable `low_cpu_mem_usage`. (Or a user might have manually enabled `low_cpu_mem_usage` in `from_pret…
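The rule described here can be sketched as a small helper. This is a hypothetical reconstruction of the described behavior for illustration, not the actual Transformers source:

```python
def resolve_low_cpu_mem_usage(device_map=None, low_cpu_mem_usage=None):
    """Sketch of the described loading rule: a device_map forces
    low_cpu_mem_usage on; otherwise the user's explicit choice
    (defaulting to False) is respected."""
    if device_map is not None:
        # device_map requires meta-device loading, so this is forced on.
        return True
    return bool(low_cpu_mem_usage)
```

For example, `resolve_low_cpu_mem_usage(device_map="auto")` returns `True` regardless of what the user passed for `low_cpu_mem_usage`.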
-
### Prerequisites
- [X] I am running the latest code. Mention the version if possible as well.
- [X] I carefully followed the [README.md](https://github.com/ggerganov/llama.cpp/blob/master/README.md)…