-
### Describe the issue
FP16 model inference is slower than FP32. Does FP16 inference require additional configuration, or is converting the model to FP16 sufficient?
### To reproduce
convert onnx …
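A note on the question above: FP16 only speeds things up when the execution hardware has native FP16 kernels; otherwise each op is typically emulated via FP32 casts and can be slower than plain FP32. Independent of any runtime, FP16's narrow range and precision are easy to demonstrate (a generic numpy sketch, not ONNX Runtime-specific code):

```python
import numpy as np

# FP16 max finite value is 65504, so 1e6 overflows to infinity.
x16 = np.float16(np.float32(1e6))
print(np.isinf(x16))  # True

# FP16 spacing near 1.0 is 2**-10 (~0.000977), so adding 1e-4 rounds away.
y16 = np.float16(1.0) + np.float16(1e-4)
print(y16 == np.float16(1.0))  # True
```

This is why FP16 conversion alone does not guarantee a speedup, and why some layers are often kept in FP32 for numerical safety.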
-
Dear authors,
I am trying to run image-sculpting on my own images. The editing script seems to require LoRA-finetuned models, but the README has no instructions on how to proceed suc…
-
Hi,
I am reading your code and have a problem with `evaluate_actions` when updating PPO:
+ https://github.com/electronicarts/character-motion-vaes/blob/main/algorithms/ppo.py#L95
I notice that y…
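For context (a generic sketch, not the linked repository's exact code): in most PPO implementations, `evaluate_actions` re-evaluates the stored actions under the current policy to get fresh log-probabilities and an entropy term, and those log-probabilities feed the clipped surrogate objective:

```python
import numpy as np

def clipped_surrogate(new_log_probs, old_log_probs, advantages, clip_eps=0.2):
    # Probability ratio r = pi_new(a|s) / pi_old(a|s), from log-probs.
    ratio = np.exp(new_log_probs - old_log_probs)
    unclipped = ratio * advantages
    # Clip the ratio to [1 - eps, 1 + eps] to limit the policy update.
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # PPO takes the elementwise minimum (a pessimistic lower bound).
    return np.minimum(unclipped, clipped).mean()

# With identical old/new log-probs the ratio is 1, so the objective
# is just the mean advantage.
obj = clipped_surrogate(np.zeros(3), np.zeros(3), np.array([1.0, 2.0, 3.0]))
print(obj)  # 2.0
```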
-
It appears that the app expects two tokenizer_config.json files in the same folder, which is impossible. On presentation/src/main/java/com/shifthackz/aisdv1/presentation/screen/setup/ServerSetupScreen.kt line 6…
Ar57m updated 9 months ago
-
Thanks for the code! I'm trying to learn Julia and Flux.jl, but I'm having trouble finding an example of a VAE with the gumbel-softmax trick. Chances are you have already done this. Could you provide …
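For reference, the gumbel-softmax reparameterization itself is only a few lines; here is a framework-agnostic numpy sketch (the same math applies to encoder logits in Flux.jl):

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    # Sample Gumbel(0, 1) noise: g = -log(-log(u)), u ~ Uniform(0, 1).
    rng = rng or np.random.default_rng()
    u = rng.uniform(1e-9, 1.0, size=logits.shape)
    g = -np.log(-np.log(u))
    # Softmax over noised logits; lower tau pushes samples toward one-hot.
    z = (logits + g) / tau
    z = np.exp(z - z.max(axis=-1, keepdims=True))  # stable softmax
    return z / z.sum(axis=-1, keepdims=True)

# Each row is a relaxed categorical sample that sums to 1.
samples = gumbel_softmax(np.zeros((4, 5)), tau=0.5)
```

In a VAE, this replaces direct sampling from the categorical latent so gradients can flow through the sample; the annealing schedule for `tau` is a separate design choice.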
-
Currently, the inference code in the codebase has not yet been updated to include the extraction process for text prompts and T5 features. The current code only supports super-resolution from 256 to 1…
-
### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [X] The issue is caused by an extension, but I believe it is caused b…
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Contact Details
_No response_
### What should this feature add?
There are two associated features …
-
Greetings to the team!
Your code is beautiful, but I'm lost in its implementation.
I have the latest version of PyTorch: https://pytorch.org/get-started/locally/ & Python
- Stable (1.8.1)
- …