-
Hello,
Excellent work! I'm trying to adapt it to another dataset, but I'm struggling to figure out how to load the pretrained weights.
Specifically, I'm looking at the fully fine-tuned weights on …
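For reference, the generic PyTorch pattern I have in mind is roughly the sketch below (the module and file name are placeholders, not this repo's actual checkpoint):
```python
import torch
import torch.nn as nn

# Placeholder model and file name; the real checkpoint would be whatever
# the authors release, loaded with load_state_dict the same way.
model = nn.Linear(10, 2)
torch.save(model.state_dict(), "pretrained.pt")

restored = nn.Linear(10, 2)
state = torch.load("pretrained.pt", map_location="cpu")
restored.load_state_dict(state)
print(torch.equal(model.weight, restored.weight))  # True
```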
-
Hello and thank you for sharing your work!
Could you please provide the pretrained model for your work?
Thank you in advance!
-
Hi, I was wondering whether there is a way to reset the model weights.
-
I noticed that `FullLaplace` produces float results even when only double-precision objects are used. Here is a quick example:
```python
import torch
import torch.nn as nn
from torch.utils.data…
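# For comparison, a minimal sketch of the expectation (an assumption on my
# side, independent of the library's internals): when everything is created
# in double precision, a plain module returns float64.
torch.set_default_dtype(torch.float64)
model = nn.Linear(3, 1)
out = model(torch.randn(4, 3))
print(out.dtype)  # torch.float64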
-
I already did everything in the README, but when I run video.py it still always gets stuck on `loadnet = torch.load(model_path)`.
Here is the full command line:
C:\Users\ellie\Downloads\GFPGAN-master\GFPG…
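One thing worth checking first (an assumption on my side, not confirmed from any logs): `torch.load` can appear to hang while mapping tensors onto a GPU, and forcing CPU deserialization sidesteps that. A minimal sketch with a stand-in checkpoint:
```python
import torch

# Hypothetical stand-in checkpoint; in the real run, model_path points at
# the downloaded .pth file.
torch.save({"params": {"w": torch.zeros(2)}}, "model.pth")

# map_location forces all tensors onto the CPU during deserialization,
# which rules out GPU initialization as the source of the hang.
loadnet = torch.load("model.pth", map_location=torch.device("cpu"))
print(sorted(loadnet.keys()))  # ['params']
```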
-
Thanks for your great work! I would like to deploy Qwen2-7B-Instruct with vLLM; my current command is:
`python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--trust-remote-code \
…
-
Following the sequence-tagging ChemProt example gives errors on newer versions of HF transformers (currently on 4.41.2). Apparently `model.save_pretrained('folder/')` now saves the model as `.safetensors` ra…
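If that diagnosis is right, newer transformers versions still accept the `safe_serialization` flag of `save_pretrained` to restore the old `.bin` output. A sketch using a tiny randomly initialized BERT so it runs without downloads (the real call would be on the fine-tuned tagger):
```python
import os
from transformers import BertConfig, BertModel

# Tiny randomly initialized model so the sketch runs offline; substitute
# the actual fine-tuned model in practice.
model = BertModel(BertConfig(hidden_size=16, num_hidden_layers=1,
                             num_attention_heads=2, intermediate_size=32))

# safe_serialization=False writes the legacy pytorch_model.bin instead of
# the newer model.safetensors default.
model.save_pretrained("folder/", safe_serialization=False)
print(os.listdir("folder/"))
```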
-
### Describe the issue
Issue:
I used finetune_lora.sh to finetune vicuna-v1.5-13b with custom data. Afterwards, I got a folder with adapter_model.safetensors and non_lora_trainables.bin. Then I merged it…
-
Thanks for making the code available!
I was wondering: apart from the ResNet-18 checkpoint you mentioned in the README (the link seems to be broken, by the way), are any of the model weights in your experime…
-
```python
import torch
from optimum.quanto import qint2, qint4, qint8, quantize, freeze
from diffusers import StableDiffusion3Pipeline
pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/sta…
```