-
I'm using a V100, so I can only use the float32 model, but I get this error. Can you solve this problem?
![Snipaste_2024-02-22_23-24-48](https://github.com/Stability-AI/StableCascade/assets/134043397/4bb73ac8-a165…
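For reference, a minimal sketch of forcing float32 end to end with the diffusers port of Stable Cascade (the checkpoint and pipeline names here are assumptions; the point is that a V100 has no hardware bfloat16, so everything must stay in float32):

```python
import torch
from diffusers import StableCascadePriorPipeline, StableCascadeDecoderPipeline

# Assumption: the official diffusers checkpoints; a V100 (Volta) has no
# hardware bfloat16 support, so both stages are loaded in float32.
prior = StableCascadePriorPipeline.from_pretrained(
    "stabilityai/stable-cascade-prior", torch_dtype=torch.float32
).to("cuda")
decoder = StableCascadeDecoderPipeline.from_pretrained(
    "stabilityai/stable-cascade", torch_dtype=torch.float32
).to("cuda")

prompt = "a photo of an astronaut riding a horse"
prior_output = prior(prompt=prompt, num_inference_steps=20, guidance_scale=4.0)
image = decoder(
    image_embeddings=prior_output.image_embeddings,
    prompt=prompt,
    num_inference_steps=10,
    guidance_scale=0.0,
).images[0]
image.save("astronaut.png")
```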
-
Platforms: asan
This test was disabled because it is failing on master ([recent examples](http://torch-ci.com/failure/test_jit_fuser_te.py%3A%3ATestNNCOpInfoCPU%3A%3Atest_nnc_correctness_frac_cpu_b…
-
The InstructLab backend currently focuses on Mistral fine-tuning, and I'm trying to maximize throughput for that. If anyone notices anything obvious or has any suggestions, I'd truly appreciate it. @raghuki…
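As a point of comparison, a minimal sketch of the usual throughput levers in a plain Hugging Face `Trainer` setup (an illustrative config, not InstructLab's actual backend; every value is an assumption to tune per GPU):

```python
from transformers import TrainingArguments

# Illustrative knobs that usually dominate fine-tuning throughput.
args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=8,   # raise until just below OOM
    gradient_accumulation_steps=4,   # keeps the effective batch size up
    gradient_checkpointing=True,     # trades recompute for memory headroom
    bf16=True,                       # Ampere or newer; use fp16=True on V100
    dataloader_num_workers=4,        # keep the GPU fed
    logging_steps=10,
)
```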
-
### Checklist
- [X] 1. I have searched related issues but cannot get the expected help.
- [X] 2. The bug has not been fixed in the latest version.
- [X] 3. Please note that if the bug-related issue y…
-
Just want to say thanks for this! I've been trying to use other people's code that all uses the `transformer_lens` library, and it has a bug that stops you from loading models in 4-bit; it seems to have loads of…
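For context, a minimal sketch of what loading a model in 4-bit typically looks like with plain `transformers` and bitsandbytes (the model ID is a placeholder):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# NF4 quantization with bf16 compute is the common 4-bit recipe.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",  # placeholder model ID
    quantization_config=bnb_config,
    device_map="auto",
)
```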
-
I used the following code to SFT Llama 3:
```python
import os
import wandb
os.environ["WANDB_PROJECT"] = "unsloth-mimic-20240814" # name your W&B project
os.environ["WANDB_LOG_MODEL"] = "checkpoint" …
-
```python
TensorBase.bfloat16
_set_grad_enabled of torch._C
_VariableFunctionsClass.empty of torch
TensorBase.long
TensorBase.type
TensorBase.__setitem__
_VariableFunctionsClass.lerp of torch
…
-
Hi guys,
I tried to run the following code, but the pipeline components couldn't be loaded and no error was raised.
```python
import torch
from diffusers import FluxPipeline
device = (
    "mps"
    if torch…
```
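For reference, a minimal sketch of the standard device-selection idiom the snippet appears to be building, plus a Flux load (the checkpoint name and dtype are assumptions):

```python
import torch
from diffusers import FluxPipeline

# Standard fallback chain: Apple MPS, then CUDA, then CPU.
device = (
    "mps"
    if torch.backends.mps.is_available()
    else "cuda" if torch.cuda.is_available() else "cpu"
)

# Assumption: the Schnell checkpoint; bf16 halves memory on supported GPUs.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
).to(device)
```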
-
## ❓ Questions and Help
When doing `torch.matmul(in, other, out=c)` with a dtype of `c` that differs from the inputs, the dtype of `c` is only respected on native torch but not on XLA.
Is this the expected behavior or a bug?
### Example
```python
…
```
-
When I was trying to evaluate HellaSwag using:
```
lm_eval --model hf \
    --model_args pretrained=HuggingFaceH4/zephyr-7b-beta,dtype="bfloat16" \
    --tasks hellaswag \
    --device cuda:0 \
    --…
```
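Equivalently, a minimal sketch via the lm-evaluation-harness Python API (assuming a recent `lm_eval` release that exposes `simple_evaluate`; the flags elided above would map to further keyword arguments):

```python
import lm_eval

# Mirrors the CLI invocation above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=HuggingFaceH4/zephyr-7b-beta,dtype=bfloat16",
    tasks=["hellaswag"],
    device="cuda:0",
)
print(results["results"]["hellaswag"])
```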