-
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
We need to integrate it in a C++ environment.
**…
-
Add the option to load models in bfloat16 and float16. This is especially important for large models like GPT-J and GPT-NeoX.
Ideally, load from HuggingFace in this low precision, do weight processing on the CPU,…
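The requested flow can be sketched with plain PyTorch. In HuggingFace `transformers`, the equivalent entry point is the `torch_dtype` argument of `from_pretrained` (e.g. `torch_dtype=torch.bfloat16`); the helper and the in-place "weight processing" step below are placeholders for illustration, not the library's actual API.

```python
import torch
import torch.nn as nn

def load_low_precision(model: nn.Module,
                       dtype: torch.dtype = torch.bfloat16) -> nn.Module:
    """Cast a model's weights to a low-precision dtype on the CPU,
    run any CPU-side weight processing, and return the model.
    (With transformers: AutoModelForCausalLM.from_pretrained(
        name, torch_dtype=torch.bfloat16) loads directly in bf16.)"""
    model = model.to(dtype)           # cast parameters in place, still on CPU
    with torch.no_grad():
        for p in model.parameters():  # placeholder for real weight processing
            p.mul_(1.0)
    return model

# Tiny stand-in module so the sketch is runnable without a model download.
model = load_low_precision(nn.Linear(8, 8))
print(model.weight.dtype)  # torch.bfloat16
```

After processing on the CPU, the model can be moved to the GPU with `model.to("cuda")` without ever materializing float32 weights.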
-
In the file Oryx/oryx/model/oryx_arch.py:
```python
for idx in range(len(modalities)):
    img_feat_highres, img_size_highres = self.get_model().vision_resampler(highres_img_features[…
```
-
### Your current environment
The output of `python collect_env.py`
```text
Your output of `python collect_env.py` here
```
### Model Input Dumps
_No response_
### 🐛 Describe the bug
…
-
Hi, I want to run inference on my image using your pretrained weights, but the link https://drive.google.com/drive/folders/168ijUQyvGLhHoQUQMlFS2fVt2p5ZV2bD?usp=sharing seems to be broken; I can't open…
-
In Scala 3 we have type-level pattern matching (match types), which makes the mui facades more ergonomic.
It looks like this:
```scala
type Elem[X] = X match
case String => Char
case Array[t] => t
case Iterab…
```
-
It seems that the model is offloaded as soon as inference finishes.
Is there a way to configure this so the model stays resident for a while, and is only offloaded if no further requests arrive?
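The behavior being asked for is a standard idle-timeout pattern, sketched below with a `threading.Timer`. `IdleOffloader` and its `loaded` flag are hypothetical; in a real server the placeholder comments would be replaced by the framework's actual load/offload calls (e.g. moving weights between GPU and CPU).

```python
import threading
import time

class IdleOffloader:
    """Keep a model resident and only offload after an idle period.

    Each request resets the idle clock; the offload only fires if no
    new request arrives within `idle_seconds`. The load/offload bodies
    are placeholders for whatever the serving framework actually does.
    """

    def __init__(self, idle_seconds: float):
        self.idle_seconds = idle_seconds
        self.loaded = False
        self._timer = None
        self._lock = threading.Lock()

    def _offload(self):
        with self._lock:
            self.loaded = False  # placeholder: release GPU memory here

    def handle_request(self):
        with self._lock:
            self.loaded = True   # placeholder: load weights if needed
            if self._timer is not None:
                self._timer.cancel()  # a new request resets the idle clock
            self._timer = threading.Timer(self.idle_seconds, self._offload)
            self._timer.daemon = True
            self._timer.start()

# Short timeout just to make the sketch observable.
server = IdleOffloader(idle_seconds=0.2)
server.handle_request()
print(server.loaded)  # True right after a request
time.sleep(0.4)
print(server.loaded)  # False once the idle timeout has elapsed
```

A configurable `idle_seconds` (with `0` meaning "offload immediately", the current behavior) would cover both use cases.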
-
### System Info / 系統信息
I wrote an I2V full fine-tuning training script based on your T2V script and trained for 8000 steps, but the inference results are worse.
https://github.com/user-attachm…
-
Tasks may want to have their output exposed as a stream, and in some cases exposed as public APIs.
In those scenarios (and probably others), we might want to expose our output in an OpenAI-compatible f…
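For reference, an OpenAI-compatible stream is a sequence of server-sent events, each carrying a `chat.completion.chunk` JSON object, terminated by `data: [DONE]`. The field names below follow the public OpenAI streaming schema; the id and model name are placeholders.

```python
import json
import time

def sse_chunk(model: str, delta_text: str, finish: bool = False) -> str:
    """Format one piece of task output as an OpenAI-style
    chat.completion.chunk delivered over server-sent events (SSE)."""
    payload = {
        "id": "chatcmpl-task-output",        # any stable identifier
        "object": "chat.completion.chunk",
        "created": int(time.time()),
        "model": model,
        "choices": [{
            "index": 0,
            # the final chunk carries an empty delta and a finish_reason
            "delta": {} if finish else {"content": delta_text},
            "finish_reason": "stop" if finish else None,
        }],
    }
    return f"data: {json.dumps(payload)}\n\n"

# Streaming a task's output is then just a sequence of chunks
# followed by the terminator that OpenAI clients expect:
for piece in ["hello", " world"]:
    print(sse_chunk("my-task", piece), end="")
print(sse_chunk("my-task", "", finish=True), end="")
print("data: [DONE]\n\n", end="")
```

Emitting this shape means existing OpenAI client libraries can consume the task's output stream unchanged.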
-
```text
Traceback (most recent call last):
  File "F:\AI\ComfyUI-aki-v1.4\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_…
```