-
# ❓ Questions and Help
Hi! Thanks for the cool library. I get the following error when I enable xformers in my code:
```
FATAL: kernel `fmha_cutlassF_f32_aligned_32x128_gmem_sm80` is for sm80-sm90,…
```
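The kernel name ends in `sm80`, so whether it can run depends on the GPU's compute capability. A minimal sketch (my own check, not part of xformers) for seeing what the current device reports, assuming PyTorch with CUDA support is installed:

```python
# Sketch: check whether the current GPU falls in the sm80-sm90 range that the
# cutlass FMHA kernel in the error message targets. Uses only the public
# PyTorch API; the 80-90 bounds come from the error text above.
import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability()
    sm = major * 10 + minor
    print(f"Detected compute capability sm{sm}")
    print("In sm80-sm90 range:", 80 <= sm <= 90)
else:
    print("No CUDA device visible to PyTorch")
```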
-
# ❓ Questions and Help
Hello, thanks for developing this great library!
I'm currently trying to install xformers on an AWS Deep Learning Container, but when running `python -m xformers.info` to …
-
Trying to run `app_flux.py` results in the following error:
```
$ python3.12 app_flux.py --offload --fp8
INFO:albumentations.check_version:A new version of Albumentations is available: 1.4.18 (yo…
```
-
# 🐛 Bug
## Command
start wunjo V2
## To Reproduce
Steps to reproduce the behavior:
`briefcase dev` # starts wunjo AI V2
1. Go to the generation tab
2. Start image generation
3. In the c…
-
# 🚀 Feature
I was sad to see how many things in `python -m xformers.info` weren't enabled on Windows, so I set out to do something about it.
Literally all that needs to be done is an expansion of…
-
Hi Dustin,
thanks for your great work.
I was trying to run [mistral-7b](https://github.com/mistralai/mistral-src) on Jetson ORIN with Jetpack (# R35 (release), REVISION: 4.1, GCID: 33958178, B…
-
```
(controlgif) G:\controlGIF>python app.py
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 1.13.1+cu117 with CUDA 1107 (you have 1.13.1+cpu)
Python…
```
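The warning points at a build mismatch: xformers was compiled against a CUDA build of PyTorch, while the installed torch is a CPU wheel. A small sketch (my own check, not xformers code) for confirming which torch build is actually in the environment:

```python
# Sketch: confirm whether the installed PyTorch is a CUDA or a CPU-only build.
# `torch.version.cuda` is None on CPU-only wheels, which matches the
# "1.13.1+cpu" shown in the warning above.
import torch

print("torch:", torch.__version__)             # e.g. 1.13.1+cu117 or 1.13.1+cpu
print("built with CUDA:", torch.version.cuda)  # None on a CPU-only build
print("CUDA runtime available:", torch.cuda.is_available())
```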
-
# ❓ Questions and Help
```
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
query : shape=(1, 257, 6, 64) (torch.float32)
key : shape…
```
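For context, a hypothetical minimal repro of the call that produces this error, assuming the inputs are float32 tensors with the shapes shown (the actual model code is not included in the snippet):

```python
# Hypothetical repro for the shapes in the error message:
# (batch=1, seq=257, heads=6, head_dim=64) in float32. Whether an operator is
# found depends on the GPU, the dtype, and how the xformers build was made.
import torch
import xformers.ops as xops

q = torch.randn(1, 257, 6, 64, dtype=torch.float32, device="cuda")
k = torch.randn(1, 257, 6, 64, dtype=torch.float32, device="cuda")
v = torch.randn(1, 257, 6, 64, dtype=torch.float32, device="cuda")

# Raises NotImplementedError when no kernel supports this dtype/device combo.
out = xops.memory_efficient_attention(q, k, v)
print(out.shape)
```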
-
I want to convert this small 1.1B llama2 architecture model [PY007/TinyLlama-1.1B-intermediate-step-240k-503b](https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-240k-503b) to llama2.c vers…
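Before converting, one way to sanity-check that the checkpoint really is a plain Llama-2-style architecture is to inspect its Hugging Face config; this is only an illustrative pre-check, not part of llama2.c's own export tooling:

```python
# Illustrative pre-check (assumes transformers is installed and the Hub is
# reachable). Prints the model type and the dimensions an exporter to the
# llama2.c format would need to know about.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("PY007/TinyLlama-1.1B-intermediate-step-240k-503b")
print(cfg.model_type)                              # expected: "llama"
print(cfg.hidden_size, cfg.intermediate_size)
print(cfg.num_hidden_layers, cfg.num_attention_heads)
print(getattr(cfg, "num_key_value_heads", None))   # grouped-query attention,
                                                   # which the exporter must handle
```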
-
Issue:
SPHINX Tiny 1k seems to output gibberish responses for any image. Raising the temperature produces even more gibberish, while lowering it reduces it.
Initially tried all the model checkpoi…
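For reference, temperature only rescales the logits before sampling, so a higher value flattens the output distribution; a tiny generic sketch of that mechanism (not SPHINX's own sampling code):

```python
# Generic temperature-sampling sketch: dividing logits by a larger temperature
# flattens the softmax, which is consistent with higher temperature producing
# more random ("gibberish") output and lower temperature being near-greedy.
import torch

def sample(logits: torch.Tensor, temperature: float) -> int:
    probs = torch.softmax(logits / max(temperature, 1e-5), dim=-1)
    return int(torch.multinomial(probs, num_samples=1))

logits = torch.tensor([3.0, 1.0, 0.2])
print(sample(logits, 0.1))   # near-greedy: almost always index 0
print(sample(logits, 2.0))   # flatter distribution: more random picks
```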