-
Thank you for your great work on this project. I am currently working on my own indoor dataset in USDZ format, and I'd like to test it with your pre-trained model. However, I'm not sure what the next …
-
### 🐛 Describe the bug
This problem only occurs when I use an RTX 4090.
```python
import torch
a = torch.tensor([2, 2, 3]).cuda(0)
print(a.prod())
```
```
Traceback (most recent call last):
…
-
### Describe the bug
A800 80 GB
32 GB of system RAM
I have already deployed a Qwen-57B-q5 model, which uses 40 GB of VRAM. Deploying deepseek-coder-6.7b-instruct on top of it then fails with an insufficient-resources error.
![deeseek38](https://github.com/xorbitsai/inference/assets/36878412/6fa0e127-94ea-4a2d-b7…
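When stacking a second model onto an already-loaded GPU, it can help to check free VRAM before deploying. A sketch using `torch.cuda.mem_get_info`; the 14 GiB threshold is my rough assumption for a ~7B fp16 model, not a figure from Xinference:

```python
import torch

def enough_vram(device: int = 0, needed_gib: float = 14.0) -> bool:
    """Rough pre-flight check before deploying another model on one GPU.

    The default threshold (14 GiB) is an assumption for a ~7B fp16 model;
    quantized variants (e.g. q4/q5) need proportionally less.
    """
    if not torch.cuda.is_available():
        return False
    free_bytes, _total_bytes = torch.cuda.mem_get_info(device)
    return free_bytes / 1024**3 >= needed_gib
```

If this returns `False` after the first model is loaded, the second deployment will likely hit the same resource error regardless of the serving framework.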
-
In [this colab](https://colab.research.google.com/drive/17XEqL1JcmVWjHkT-WczdYkJlNINacwG7?usp=sharing#scrollTo=2QK51MtdsMLu) you show how to load an adapter and merge it with the initial model. Notice it loa…
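For intuition, merging a LoRA adapter just folds the low-rank update back into the base weight. A toy numeric sketch of the arithmetic, not the peft API, with all shapes and names illustrative:

```python
import torch

# Toy LoRA merge: W_merged = W + (alpha / r) * B @ A
d_out, d_in, r, alpha = 8, 8, 2, 16
W = torch.randn(d_out, d_in)      # frozen base weight
A = torch.randn(r, d_in) * 0.01   # LoRA down-projection
B = torch.randn(d_out, r) * 0.01  # LoRA up-projection
scaling = alpha / r

W_merged = W + scaling * (B @ A)

# After merging, one matmul reproduces base-plus-adapter outputs exactly.
x = torch.randn(d_in)
assert torch.allclose(W_merged @ x, W @ x + scaling * (B @ (A @ x)), atol=1e-5)
```

This is why a merged checkpoint runs at the base model's inference cost: the adapter's extra matmuls disappear into the merged weight.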
-
@honghuis @SkalskiP thanks for sharing the source code. I just wanted to know: can we convert this model to TensorRT or ONNX format? If so, please share the conversion and inference script.
Thanks i…
-
I've been trying to build a docker image by following the steps from INSTALL.md, but I'm stuck on this:
```
# Setup MSDeformAttn
cd oneformer/modeling/pixel_decoder/ops
sh make.sh
```
I trie…
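A common cause of CUDA-extension build failures inside Docker is a missing compiler toolchain or unset build variables. A sketch of the environment I would try before rerunning the step above; the base-image choice and architecture list are assumptions, not values from INSTALL.md:

```shell
# Assumes an nvidia/cuda *devel* base image (runtime images lack nvcc).
export CUDA_HOME=/usr/local/cuda
# No GPU is visible during `docker build`, so target architectures
# must be listed explicitly for the extension compiler.
export TORCH_CUDA_ARCH_LIST="7.0;7.5;8.0;8.6"
cd oneformer/modeling/pixel_decoder/ops
sh make.sh
```

Checking that `nvcc --version` succeeds inside the build stage is a quick way to tell a toolchain problem apart from a code problem.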
-
I followed the description in your paper, using the OneFormer model trained on COCO with the DiNAT-L backbone to obtain the segmentation map, but the results are different from the ones you provide. Can y…
-
### Expected behavior
Thank you for your contribution! I have a few questions I'd like answered.
Using AutoSAM throws an error:
![微信截图_20230626161952](https://github.com/continue-revolution/sd-webui-segment-anything/assets/131261520/5e11ab87-f81f-4bbb-b62f-1662a428…
-
(IMPORT FAILED) [ComfyUI's ControlNet Auxiliary Preprocessors](https://github.com/Fannovel16/comfyui_controlnet_aux) This is a rework of comfyui_controlnet_preprocessors based on ControlNet auxiliary …
-
I have, let's say, a set of 1,000 heavily annotated panoptic domain-specific images that I would like to fine-tune on.
I have seen that the training of this model was based on a large number of A100s (8 …
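With far fewer GPUs than the original training run, the usual trade is batch size for gradient accumulation, which emulates a large effective batch on one small device. A minimal PyTorch sketch with a toy model standing in for the real one; the step count and sizes are illustrative:

```python
import torch

# Gradient accumulation: sum gradients over several micro-batches,
# then apply one optimizer step, emulating a larger batch.
model = torch.nn.Linear(16, 4)          # toy stand-in for the real model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
accum_steps = 4                          # effective batch = micro_batch * accum_steps

data = [(torch.randn(2, 16), torch.randint(0, 4, (2,))) for _ in range(8)]

optimizer.zero_grad()
for step, (x, y) in enumerate(data):
    loss = torch.nn.functional.cross_entropy(model(x), y)
    (loss / accum_steps).backward()      # scale so summed grads average correctly
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```

Combined with mixed precision and a frozen backbone for the first epochs, this is often enough to fine-tune on a single consumer GPU, at the cost of longer wall-clock time per effective batch.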