VectorSpaceLab / OmniGen
OmniGen: Unified Image Generation. https://arxiv.org/pdf/2409.11340
MIT License · 2.89k stars · 228 forks
Issues
#95 feat: rm unused character (a1364533187, closed 3 weeks ago, 2 comments)
#94 Add Replicate demo and API (chenxwh, closed 3 weeks ago, 1 comment)
#93 Are there any ways to make this run faster on low-end GPUs? (runshouse, opened 3 weeks ago, 0 comments)
#92 Stuck on either loading safetensors or fetching the files. (tedcar, opened 3 weeks ago, 2 comments)
#91 OmniGenCache makes Omnigen CUDA only is there an option to not use it. (Vargol, opened 3 weeks ago, 7 comments)
#90 CUDA out of memory (TanvirHafiz, opened 3 weeks ago, 2 comments)
#89 RuntimeError: OffloadedCache can only be used with a GPU (TanvirHafiz, opened 3 weeks ago, 2 comments)
#88 1Torch was not compiled with flash attention. (ialhabbal, opened 3 weeks ago, 4 comments)
#87 The performance on DreamBooth benchmark (XavierCHEN34, opened 3 weeks ago, 3 comments)
#86 Excellent Work - Made auto installers for Windows (run locally - using 5.5 GB VRAM), RunPod and Massed Compute (run on cloud) - installs into venv, Python 3.10 (FurkanGozukara, opened 3 weeks ago, 5 comments)
#85 allow custom device specification inside OmniGenPipeline (bghira, closed 3 weeks ago, 1 comment)
#84 update error message when attention mask is not supplied (bghira, closed 3 weeks ago, 1 comment)
#83 gradio is missing, but installed (anigno, opened 3 weeks ago, 5 comments)
#82 Is it possible to save the image in any format other than WEBP? (amoebatron, opened 3 weeks ago, 2 comments)
#81 remove duplicate from requirements (Tialo, closed 3 weeks ago, 1 comment)
#80 cannot import name 'OmniGenPipeline' (runshouse, closed 3 weeks ago, 0 comments)
#79 Multi gpu feasible? (matbee-eth, opened 3 weeks ago, 3 comments)
#78 Update PEFT to v0.13.2 (bghira, closed 3 weeks ago, 1 comment)
#77 Python version and conda environment? (PixelArmony, opened 3 weeks ago, 5 comments)
#76 app.py Typo on Line 365 (diegonunez77, opened 3 weeks ago, 1 comment)
#75 app.py gives RuntimeError: Numpy is not available (rzoun, opened 3 weeks ago, 2 comments)
#74 OmniGen would be more useful if it only editted images instead of regenerating them (fluthru, opened 3 weeks ago, 4 comments)
#73 I am getting Error OffloadedCache can only be used with a GPU (chnisar515, opened 3 weeks ago, 10 comments)
#72 Extremely slow while loading OmniGen (zc1023, opened 3 weeks ago, 4 comments)
#71 ValueError: Default process group has not been initialized, please make sure to call init_process_group. (WuYeeh, opened 3 weeks ago, 4 comments)
#70 Output image error (vcoopers, opened 3 weeks ago, 5 comments)
#69 delete npu (staoxiao, closed 3 weeks ago, 0 comments)
#68 After pip install -e . I'm getting this Error, I have python in path, and I'm using miniconda (LeoRibkin, opened 3 weeks ago, 1 comment)
#67 NameError: name 'is_torch_npu_available' is not defined. Did you mean: 'is_torch_xla_available'? (fbauer-kunbus, opened 3 weeks ago, 21 comments)
#66 NameError: name 'is_torch_npu_available' is not defined. Did you mean: 'is_torch_xla_available'? (larini, closed 3 weeks ago, 1 comment)
#65 What does the num_cfg do? (MoonBlvd, opened 3 weeks ago, 2 comments)
#64 8bit (werruww, opened 3 weeks ago, 1 comment)
#63 great job, but why not vae of sd3 (Robootx, opened 3 weeks ago, 1 comment)
#62 Image editing loss function (brycegoh, opened 3 weeks ago, 9 comments)
#61 How to deal with "Numpy is not available" (freemank1224, opened 3 weeks ago, 6 comments)
#60 fix: typo on saving of ema state dict (brycegoh, closed 3 weeks ago, 1 comment)
#59 Saving of EMA state dict in train.py (brycegoh, closed 3 weeks ago, 2 comments)
#58 Is it possible to try to support llm of other architectures as the backbone? (win10ogod, opened 3 weeks ago, 1 comment)
#57 Open weights or frozen? (DarkAlchy, closed 3 weeks ago, 2 comments)
#56 How/where to change host IP settings? (NicodemPL, opened 3 weeks ago, 4 comments)
#55 Errors when running on Macos Sonoma - M1 RuntimeError: OffloadedCache can only be used with a GPU (adamreading, opened 3 weeks ago, 7 comments)
#54 Why is my image output just a bunch of noise? I'm sure the settings are correct, and the GPU is indeed running. (dzbb2, opened 3 weeks ago, 7 comments)
#53 Add project page (staoxiao, closed 4 weeks ago, 0 comments)
#52 Would it be possible to do OmniGen type prompting with FLUX? (nikshepsvn, closed 4 weeks ago, 1 comment)
#51 Thank you (nitinmukesh, closed 3 weeks ago, 1 comment)
#50 Update Readme (staoxiao, closed 4 weeks ago, 0 comments)
#49 new inference code (staoxiao, closed 4 weeks ago, 0 comments)
#48 please allow fp16 in model to reduce VRAM as option (gavytron, opened 4 weeks ago, 3 comments)
#47 Please improve the fine-tuning script! (win10ogod, opened 4 weeks ago, 10 comments)
#46 Can I replace phi3 llm? (win10ogod, opened 4 weeks ago, 3 comments)