-
### Describe the issue
I tried to run an LLM on my Arc A770 16 GB (with an i5-13500T CPU) and stumbled upon this error.
From the stack trace and the error message alone, I'm not totally sure if this is…
-
hello,
After finishing QAT, I call the function `fold_all_batch_norms_to_scale` to fold the batch-norm layers and then export the model, but a WARNING is emitted and the exported model is not folded.
I've tried tha…
-
My model training appears to converge poorly: the validation loss does not decrease, while the validation accuracy reaches 0.75-0.8 after just a couple of epochs and then just bounces around that range wi…
-
Hi, when I run the training code:
python tools/train_net.py --name=/mnt/codes/ckpts/trains --model=ir_csn_152 --resume_from_model=/mnt/codes/weights/pre_trained_weights/irCSN_152_ig65m_from_scratch_f1…
-
**What are you trying to do?**
I trained on my own data using the following command:
```
chemprop train --data-path ../train.csv \
--task-type regression \
--output-dir test_fp_k5_drop02_scaf_…
```
-
## 🐞Describe the bug
When converting the attached model, Xcode complains about duplicate output names during validation:
validator error: Layer '27_ip_expand' produces an output named '27_expand…
-
Great job: as far as I know, you achieve the best quantization accuracy.
I'm very interested in your paper and code, but I have some questions about the paper.
To my knowledge, for a fully quantized model,…
-
Hello @fox0430,
I follow this project religiously because I want to use a Nim-focused, vim-like editor that is written in Nim and can hopefully be extended with Nim.
I made a cheatsheet for mys…
-