-
`[nltk_data] Downloading package wordnet to /home/ahojel/nltk_data...
[nltk_data] Package wordnet is already up-to-date!
[nltk_data] Downloading package omw-1.4 to /home/ahojel/nltk_data...
[nltk…
-
Flash-attn 2.5.7 always complains about the input data type, even when the input is clearly of a supported type.
I'm using the base image `nvcr.io/nvidia/pytorch:24.03-py3`
```
>>> import torch, flash_attn
>>>…
```
-
## 🐛 Bug
**Cell 12:**
```python
%%time
max_epochs = 50
metric = dc.metrics.Metric(dc.metrics.score_function.rms_score)
step_cutoff = len(train)//12
def val_cb(model, step):
    if step%step_cutoff!=0…
```
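The gating logic in `val_cb` above can be sketched in plain Python. Only the `step % step_cutoff` trigger comes from the cell; `make_periodic_callback` and the usage below are illustrative, not part of DeepChem:

```python
# Sketch of the periodic-callback pattern used in val_cb: the callback
# fires only when the step counter is a multiple of step_cutoff.

def make_periodic_callback(step_cutoff, action):
    """Return a callback that runs `action` every `step_cutoff` steps."""
    def callback(model, step):
        if step % step_cutoff != 0:
            return None          # skip all intermediate steps
        return action(model, step)
    return callback

# Hypothetical usage: record which steps would trigger validation.
triggered = []
cb = make_periodic_callback(4, lambda model, step: triggered.append(step))
for step in range(1, 13):
    cb(model=None, step=step)

print(triggered)  # → [4, 8, 12]
```

With `step_cutoff = len(train)//12`, the real callback would validate roughly twelve times per pass over the training set.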
-
View details in Rollbar: [https://rollbar.com/WikiWatershed/ModelMyWatershed/items/9/](https://rollbar.com/WikiWatershed/ModelMyWatershed/items/9/)
```
Traceback (most recent call last):
File "…
```
-
View details in Rollbar: [https://rollbar.com/WikiWatershed/ModelMyWatershed/items/44/](https://rollbar.com/WikiWatershed/ModelMyWatershed/items/44/)
```
Traceback (most recent call last):
File "/u…
```
-
Hi,
I have an issue when loading the model, which causes this error:
WARNING:auto_gptq.nn_modules.fused_llama_mlp:skip module injection for FusedLlamaMLPForQuantizedModel not support integrate without…
-
### Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a …
-
Thanks for publishing your code.
I encountered a problem when running it as described in the Usage section.
My code is as follows:
```python
import torch
from transformers import AutoTokenizer,…
-
When I executed the command `bash scripts/gsm8k/generate.sh`, I used `set_trace` to debug the `_sample_tokens_with_calculator` function. An error occurs when the following line is executed:
```
../at…
```
-
I am using TensorRT-LLM 0.8.0 (with MoE support added following Llama's implementation). We serve models with trtllm_backend (Docker image triton-trtllm-24.02).
[qwen2-moe-57B-A14B](https://huggingface.co/Qwe…