-
My code is throwing the error below:
```
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
/net/scratch/user/miniconda3/envs/vl…
```
-
Traceback (most recent call last):
File "F:\ComfyUI\ComfyUI\nodes.py", line 2012, in load_custom_node
module_spec.loader.exec_module(module)
File "", line 940, in exec_module
File "", li…
-
Hello,
It would be helpful to include documentation on how to trace a decoder-only transformer model for hosting on Inferentia. Currently, the only documentation that exists is for Encoder-Decoder …
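For context, the kind of workflow such documentation might cover is roughly the following, sketched with optimum-neuron's `NeuronModelForCausalLM` (the model id and compile settings below are placeholder assumptions, not a confirmed recipe):
```python
# Sketch only: assumes an inf2/trn1 instance with optimum-neuron installed.
# Model id, batch size, sequence length and core count are placeholder values.
from optimum.neuron import NeuronModelForCausalLM
from transformers import AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder decoder-only model

# export=True compiles (traces) the decoder for the Neuron cores
model = NeuronModelForCausalLM.from_pretrained(
    model_id,
    export=True,
    batch_size=1,
    sequence_length=2048,
    num_cores=2,
    auto_cast_type="fp16",
)
model.save_pretrained("llama2-neuron")  # compiled artifacts can be reloaded later

tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
Even a short page along these lines, plus guidance on choosing the compile-time batch size and sequence length, would cover the decoder-only case.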
-
**Is your feature request related to a problem? Please describe.**
It would be nice to integrate https://llama-cpp-python.readthedocs.io/en/stable/#embeddings because of the speed of the default `senten…
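For reference, the embedding API from the linked docs is roughly this (the GGUF model path below is a placeholder):
```python
# Sketch of the llama-cpp-python embedding API linked above.
# The GGUF model path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./models/embedding-model.Q4_K_M.gguf", embedding=True)

# embed() returns the embedding vector for the input text
vector = llm.embed("The quick brown fox jumps over the lazy dog")
print(len(vector))  # embedding dimension
```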
-
### System Info
latest commit
### Environment/Platform
- [ ] Website/web-app
- [ ] Browser extension
- [ ] Server-side (e.g., Node.js, Deno, Bun)
- [ ] Desktop app (e.g., Electron)
- [ ] Other (e.g…
-
I want to run the [sft](https://github.com/huggingface/peft/tree/main/examples/sft) example and I get some errors. Can you help me find the problem?
I run [run_peft_fsdp.sh](https://github.com/huggin…
-
Hello,
I successfully downloaded the model to this directory /root/.llama/checkpoints/Llama3.2-1B-Instruct
When I call AutoModelForCausalLM.from_pretrained with the path above, I get the f…
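Roughly what I am calling (a minimal sketch; everything except the checkpoint path is incidental):
```python
# Minimal sketch of the failing call; only the path comes from my setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt_dir = "/root/.llama/checkpoints/Llama3.2-1B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(ckpt_dir)
model = AutoModelForCausalLM.from_pretrained(ckpt_dir)
```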
-
### Description
We would like to add some frontend developer content for the "VA Forms Library - How to work with Pre-Fill" page of the developer docs.
Within the 'Vets-Website work', and probably b…
-
### System Info
When using DeepSpeed, the RLOOTrainer reports an error: "ValueError: Please make sure to properly initialize your accelerator via accelerator = Accelerator() before using any function…
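For what it's worth, the initialization pattern the error message refers to is accelerate's standard one, roughly as below (a generic sketch with a dummy model, not RLOOTrainer internals):
```python
# Generic sketch of the accelerate pattern named in the error message.
# The model, optimizer and data below are dummies for illustration.
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()  # must exist before using accelerate utilities

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loader = DataLoader(TensorDataset(torch.randn(8, 4), torch.randn(8, 2)), batch_size=2)

model, optimizer, loader = accelerator.prepare(model, optimizer, loader)
```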
-
I want to know how we can run speculative decoding (assisted generation) to increase tokens/sec for a llama2-based model with optimum.neuron on inf2, similar to what transformers have done f…
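For reference, the transformers feature I mean is assisted generation via `assistant_model`, roughly as follows (model ids below are placeholders; the draft model must be compatible with the target's tokenizer):
```python
# Sketch of assisted generation in plain transformers (not optimum.neuron).
# Model ids are placeholders; the small draft model shares the Llama 2 tokenizer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

target_id = "meta-llama/Llama-2-7b-hf"
draft_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

tokenizer = AutoTokenizer.from_pretrained(target_id)
model = AutoModelForCausalLM.from_pretrained(
    target_id, torch_dtype=torch.float16, device_map="auto"
)
assistant = AutoModelForCausalLM.from_pretrained(
    draft_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("Speculative decoding works by", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, assistant_model=assistant, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
What I'd like to know is whether an equivalent draft-model path exists (or is planned) for optimum.neuron on inf2.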