-
### Bug Description
I tried to use the sample code from the docs: [sample](https://docs.llamaindex.ai/en/stable/examples/customization/llms/SimpleIndexDemo-Huggingface_camel.html)
However, I encountered t…
-
@shelby3 [wrote](https://github.com/keean/traitscript/issues/2#issuecomment-248102324):
> Also, the compiler can use a much simpler form of path analysis (independent of run-time state) to do that pe…
-
# 🚀 Feature request
This is a discussion issue for training/fine-tuning very large transformer models. Recently, model parallelism was added for gpt2 and t5. The current implementation is for PyTor…
-
# _Note (Jan 2023):_
_This thread is mainly the **very early research notes** and discussions that led to the development of certain perceptually uniform contrast methods. While of interest from a h…
-
**Describe the bug**
What the bug is, and how to reproduce it; screenshots preferred.
![image](https://github.com/user-attachments/assets/0790fb8f-e403-4fce-9cb0-1ecaf16bb57e)
**…
-
This issue shall be used to log (weekly) meeting agendas and the corresponding resolution of the topics addressed (minutes). This will help us maintain a record of topics discussed and the resolution …
-
**Describe the bug**
https://docs.nvidia.com/nemo-framework/user-guide/latest/playbooks/llama2sft.html
docker image: nvcr.io/nvidia/nemo:24.01.01.framework
converted llama2-70B hf model to Nemo…
-
### 🐛 Describe the bug
Code to reproduce
``` python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
path = "gpt2"  # any LM would give the same result
tokenizer = AutoTok…
-
### System Info
```
- `transformers` version: 4.41.0.dev0
- Platform: Linux-5.15.0-92-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.22.2
- Safetensors vers…
-
Now consider whether we are executing the proof of this statement right now or not: we have constructed a neat logical box that can be true or false in a very self-referential manner!
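The "logical box" gestured at above can be made precise with the standard fixed-point construction from provability logic. A minimal sketch (the symbols `S`, `H`, and `Prov` are the textbook names from the diagonal lemma, not terms used earlier in this thread):

```latex
% Diagonal (fixed-point) lemma: for any formula \varphi(x) there is a
% sentence S that provably "says \varphi of itself":
S \;\leftrightarrow\; \varphi(\ulcorner S \urcorner)

% Taking \varphi = \mathrm{Prov} yields the Henkin sentence H, a
% sentence asserting its own provability:
H \;\leftrightarrow\; \mathrm{Prov}(\ulcorner H \urcorner)
```

By Löb's theorem, such an `H` is in fact provable, so a sentence that asserts "a proof of me exists" resolves to true rather than remaining an open self-referential loop.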