-
Not really an issue; I just want to share my training code, since some people still have difficulty writing it. Just modify the code to suit your use case.
Feel free to ask or poi…
-
I'd like to know whether the pretrain_graph_model_path added to the vicuna JSON and the pretra_gnn in graphgpt_stage1 refer to the same path, i.e., the path to clip_gt_arxiv? Following the approach mentioned in other issues, I created a clip_gt_arxiv folder directly under GraphGPT to store the model.
![image](https://github.com/HKUDS/GraphGPT…
-
Is it possible to keep the memory usage from spiking to roughly 15 GB when doing text2img? I'm currently following [this guide](https://github.com/leejet/stable-diffusion.cpp/blob/master/docs/flux.md) an…
-
Once I get the "you don't have state dict" error, I can't generate an image with the SD model that is set, even after I complete the state dict, due to "'NoneType' object has no attribute 'sd_checkpoint_info'".
on…
-
Hello, I couldn't find any explicit information about the backbone architecture used for CLIP in the papers. I'm unsure whether it is based on ViT or ResNet, and which specific model …
-
It would be good to add gradient clipping to the trainers created by `create_supervised_trainer`. This is already provided by `torch.nn.utils.clip_grad_norm_`.
One possible implementation could be:…
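Independent of the truncated snippet above, here is a minimal sketch of the idea in plain PyTorch (the Ignite-specific wiring inside `create_supervised_trainer` is omitted, and the `max_grad_norm` parameter name is an assumption for illustration):

```python
import torch
import torch.nn as nn


def training_step(model, optimizer, loss_fn, batch, max_grad_norm=1.0):
    """One supervised update with gradient-norm clipping before the step.

    Sketch only: a real Ignite trainer would wrap this as its update
    function rather than call it directly.
    """
    x, y = batch
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    # Rescale gradients so their total norm does not exceed max_grad_norm.
    # Note: clip_grad_norm_ returns the *pre-clip* total norm.
    pre_clip_norm = nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
    optimizer.step()
    return loss.item(), pre_clip_norm.item()
```

Returning the pre-clip norm is handy for logging, since a norm consistently far above the threshold suggests the clip value (or the learning rate) needs revisiting.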
-
How do I fix this?
Please help me :)
# ComfyUI Error Report
## Error Details
- **Node Type:** LLavaSamplerSimple
- **Exception Type:** OSError
- **Exception Message:** exception: access violatio…
-
When I run the following command:

```
python inference.py \
--input inputs/demo/general \
--config configs/model/cldm.yaml \
--ckpt weights/general_full_v1.ckpt \
--reload_swinir --swinir_ckpt weight…
```
-
The code shows it loads the visual encoder from a CLIP model (clip-vit-b16.pth). I did not find any mention of where this file comes from. I tried to load clip-vitb16 from OpenAI's Hugging Face, but it has …
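When a checkpoint's keys don't line up with a model's, diffing the two key sets usually reveals whether it is a naming/prefix mismatch or genuinely different weights. A generic sketch (the nesting under a "state_dict" key is a common convention, not something confirmed for this repo):

```python
import torch


def diff_state_dict_keys(model, ckpt_path):
    """Return (missing, unexpected) key lists for a checkpoint vs. a model."""
    ckpt = torch.load(ckpt_path, map_location="cpu")
    # Many checkpoints nest the weights under a key such as "state_dict".
    state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
    model_keys = set(model.state_dict())
    ckpt_keys = set(state_dict)
    missing = sorted(model_keys - ckpt_keys)      # expected by model, absent in ckpt
    unexpected = sorted(ckpt_keys - model_keys)   # present in ckpt, unknown to model
    return missing, unexpected
```

If the diff shows a consistent prefix (e.g. `visual.` vs. no prefix), renaming the keys before `load_state_dict` is often enough; entirely disjoint key sets mean it is a different architecture.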
-
Start with the `jina-clip-v1-api` model.