-
Marbles parameterizes its X outputs using "bias and spread", and I assume it's using a Beta distribution. Whatever it uses, I suggest you create a module that breaks this out into a smaller panel allo…
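For concreteness, here is a minimal sketch of the mapping I have in mind, where bias sets the Beta distribution's mean and spread sets its width; this is my assumption about the bias/spread semantics, not Marbles' actual firmware math:
```py
import numpy as np

def sample_x(bias, spread, n=1000):
    # Assumed mapping: mean = bias, total concentration a + b = 1/spread,
    # so a low spread yields samples tightly clustered around the bias.
    rng = np.random.default_rng()
    concentration = 1.0 / max(spread, 1e-3)
    a = max(bias * concentration, 1e-3)
    b = max((1.0 - bias) * concentration, 1e-3)
    return rng.beta(a, b, size=n)

samples = sample_x(bias=0.7, spread=0.2)
print(samples.mean())  # close to 0.7, since Beta(a, b) has mean a / (a + b)
```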
-
Hi everyone,
I fine-tuned the base model using the bash script provided at `peft/finetune.sh` with these parameters:
```
python3 finetune.py \
--model-name="google/timesfm-1.0-200m" \
…
-
Hello,
thank you so much for the great work! I have tried to resume training from a previous checkpoint:
```
python3 -m piper_train --dataset-dir /home/dataset/fr --accelerator 'gpu' -…
```
-
### Checklist
- [X] I have searched related issues but cannot get the expected help.
- [X] I have read the [FAQ documentation](https://github.com/open-mmlab/mmdeploy/tree/main/docs/en/faq.md) but …
-
I found a strange use of `Conv2D` in one of its tests.
https://github.com/tensorflow/swift-apis/blob/3304db3e728120b55674cca06894b6ea5083b5e8/Tests/TensorFlowTests/LayerTests.swift#L101-L111
The filter has …
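For reference, the filter layout convention (shared with Python TensorFlow) is `[height, width, inChannels, outChannels]`; a quick Python sketch of the shapes I would expect:
```py
import tensorflow as tf

x = tf.random.normal([1, 8, 8, 3])       # NHWC input: one 8x8 image, 3 channels
filt = tf.random.normal([3, 3, 3, 16])   # 3x3 kernel mapping 3 -> 16 channels
y = tf.nn.conv2d(x, filt, strides=1, padding="SAME")
print(y.shape)  # (1, 8, 8, 16)
```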
-
Hi Team,
I have successfully fine-tuned a QLoRA adapter on a custom dataset. When I try to load it in full precision, it loads and works well.
But this takes too much time and GPU memory to …
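Is the right approach to load the base model 4-bit quantized (via bitsandbytes) and attach the adapter on top? Something like this sketch, with placeholder model and adapter paths:
```py
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the base model quantized to 4-bit, then attach the LoRA adapter.
base = AutoModelForCausalLM.from_pretrained(
    "base-model-id",                 # placeholder
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "./qlora-adapter")  # placeholder path
model.eval()
```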
-
## 🐞Describing the bug
I'm experiencing extremely long loading times when using the MLModel API to load a converted Core ML model. The loading process hangs indefinitely. When changing compute_units …
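For reference, this is roughly how I load the model and select compute units via coremltools (the model path is a placeholder):
```py
import coremltools as ct

# Restricting compute units at load time, to narrow down which
# backend is responsible for the hang.
model = ct.models.MLModel(
    "converted_model.mlpackage",
    compute_units=ct.ComputeUnit.CPU_ONLY,
)
```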
-
## Instructions To Reproduce the Issue:
1. Full runnable code or full changes you made:
```
no changes
```
2. What exact command you run:
`tools/lazyconfig_train_net.py --config-file project…
-
I have managed to convert a model using the conversion script, which I modified as follows:
```py
from __future__ import absolute_import
from __future__ import division
from __future__ import print_fun…
-
Hello, thanks for your great work. When I use the training code with the depth condition, I download the depth checkpoint from [dpt_swin2_large_384](https://github.com/isl-org/MiDaS/releases/download/v3_1…