-
### Prerequisite
- [X] I have searched [Issues](https://github.com/open-compass/opencompass/issues/) and [Discussions](https://github.com/open-compass/opencompass/discussions) but cannot get the expe…
-
### System Info
Hello, we are using the TrOCR model exported to ONNX. We noticed a problem with the large checkpoints, for both printed and handwritten: when we run inference using the onnxruntime j…
-
I created an NPZ file via this site:
https://huggingface.co/spaces/fffiloni/clone-voice-for-bark
Then I put it in /assets/prompts/v2/ as ragesh.npz and loaded it like this:
audio_array …
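For reference, here is a runnable sketch of what a Bark history-prompt file looks like on disk (assumption: the three array names below match the shipped v2 voice presets; the shapes are dummy placeholders, not real prompt data):

```python
import numpy as np

# Build a dummy prompt with the three arrays a Bark voice preset .npz contains
# (assumed names: semantic_prompt, coarse_prompt, fine_prompt, as in the bundled presets).
np.savez(
    "ragesh.npz",  # stand-in name; the real file goes in assets/prompts/v2/
    semantic_prompt=np.zeros(256, dtype=np.int64),
    coarse_prompt=np.zeros((2, 512), dtype=np.int64),
    fine_prompt=np.zeros((8, 512), dtype=np.int64),
)

# Quick sanity check before handing the file to generate_audio(history_prompt=...):
loaded = np.load("ragesh.npz")
print(sorted(loaded.files))
# → ['coarse_prompt', 'fine_prompt', 'semantic_prompt']
```

If a cloned-voice file is missing one of these arrays, or stores them under different names, Bark will fail to use it as a history prompt even though the file itself loads fine.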
-
Hello. I simply ran the example.py and hit this error in the "=====**SelfExtend using Torch**======" part:
```
Traceback (most recent call last):
File "./LongLM/example.py", line 112, in …
```
-
I am installing sentence-transformers-2.2.2.tar.gz, and it pulls in the following NVIDIA packages:
nvidia_cublas_cu12-12.1.3.1-py3-none-manylinux1_x86_64.whl
nvidia_cuda_cupti_cu12-12.1.105-py3-none-manyl…
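One common workaround, sketched below (assumption: the `nvidia_*` CUDA wheels come in transitively via `torch`; installing a CPU-only PyTorch build first keeps pip from resolving them):

```shell
# Install a CPU-only PyTorch build first, then sentence-transformers,
# so the nvidia_* CUDA wheels are never pulled in as torch dependencies.
pip install torch --index-url https://download.pytorch.org/whl/cpu
pip install sentence-transformers==2.2.2
```

This is an environment-setup fragment rather than verified code; adjust the pinned versions to whatever your project actually requires.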
-
I found that the current version of LongLM cannot load Gemma 1 or Gemma 2 models successfully. I wrote a minimal test to reproduce the issue:
```python
# transformers version 4.38.2
# this exa…
```
-
Installing collected packages: pytz, mpmath, xxhash, urllib3, tzdata, tqdm, sympy, safetensors, regex, pyyaml, python-dateutil, pyarrow-hotfix, pyarrow, nvidia-nvtx-cu12, nvidia-nvjitlink-cu12, nvidia…
-
Could you please release the example script for Phi-2? Thanks.
-
I hit the following situation when training T5: "ValueError: bfloat16.enabled not found in kwargs. Please specify bfloat16.enabled without auto(set to correct value) in the DeepSpeed config file or pass i…
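A minimal sketch of the kind of config change this error asks for (assumptions: you are using the HF Trainer with DeepSpeed; the modern section name in DeepSpeed configs is `bf16`, while older releases used `bfloat16` — check which one your DeepSpeed version expects):

```python
# Hypothetical DeepSpeed config dict: set the bfloat16 flag to an explicit value
# instead of leaving it as "auto", which is what the ValueError complains about.
ds_config = {
    "bf16": {"enabled": True},  # older DeepSpeed versions name this section "bfloat16"
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}

# Pass the dict (or a JSON file with the same content) to the trainer,
# e.g. TrainingArguments(deepspeed=ds_config) with the HF Trainer.
print(ds_config["bf16"]["enabled"])
# → True
```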
-
I'm getting inconsistent results between HF and vLLM with llama2-7b when running greedy decoding:
HF version:
```
from transformers import LlamaForCausalLM, LlamaTokenizer
MODEL_DIR = '/home/owne…