-
**Describe**
Model I am using: LayoutReader
First of all, thank you for the outstanding contribution to defining reading order. Currently, I am trying to fine-tune the pre-trained model. So is there any recomm…
-
I am using my own dataset, and while reading the repository issues I saw people saying that they got better results without loading the coco_dla_1x or 2x model.
I did indeed train with and without fine-tun…
-
I have a question. I can't find the function called "patch_generation_fine_tunning"
-
### Describe the issue
Issue:
While running the fine-tuning script, it is not able to import the llava module when running the train_mem.py file.
Command:
```
deepspeed llava/train/train_mem.py \
-…
```
-
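A failure to import `llava` when launching `train_mem.py` is typically a path problem: the launcher's working directory does not put the repository root on Python's module search path. A minimal sketch of one common workaround (an assumption, not a fix confirmed in this thread) is to install the repo in editable mode (`pip install -e .` from the repo root) or prepend the root to `sys.path` manually:

```python
import sys
from pathlib import Path

# Hypothetical repo root; adjust to where the LLaVA repository is cloned.
repo_root = Path(".").resolve()

# Putting the repo root on sys.path lets `import llava` resolve even when
# the script is launched from a different working directory.
if str(repo_root) not in sys.path:
    sys.path.insert(0, str(repo_root))
```

Equivalently, setting `PYTHONPATH` to the repo root before invoking `deepspeed` has the same effect without touching the code.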
**CONFIGURATION FILE**
-
-
## Feature request
After semi-supervised pretraining, can we do lightweight fine-tuning or few-shot learning instead of classification?
**What is the expected behavior?**
Instead of fine-…
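One lightweight alternative this request may be pointing at is few-shot classification on top of frozen pretrained embeddings, e.g. a nearest-centroid ("prototype") classifier that needs no gradient updates at all. This is only an illustrative sketch; the function names and the idea of reusing the pretrained encoder's embeddings are assumptions, not this project's API:

```python
def fit_prototypes(embeddings, labels):
    """Average the support-set embeddings per class to get one prototype each."""
    grouped = {}
    for emb, lab in zip(embeddings, labels):
        grouped.setdefault(lab, []).append(emb)
    return {lab: [sum(dim) / len(dim) for dim in zip(*vecs)]
            for lab, vecs in grouped.items()}

def predict(prototypes, emb):
    """Assign the class whose prototype is nearest in squared Euclidean distance."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda lab: sq_dist(prototypes[lab], emb))
```

With only a handful of labeled examples per class, this kind of classifier sidesteps full fine-tuning entirely, which is the spirit of the feature request.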
-
-
The software package already has minimal cross-validation strategies implemented, but there are a few adjustments to make.
- There should be an option to split the data into train, validation, and t…
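A three-way split on top of the existing two-way machinery could look like the following minimal sketch (the function name `train_val_test_split` and the fraction defaults are hypothetical, not part of the package's API):

```python
import random

def train_val_test_split(data, val_frac=0.1, test_frac=0.1, seed=0):
    """Shuffle indices once, then slice them into test, validation, and train sets."""
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)  # seeded for reproducible splits
    n_test = int(len(data) * test_frac)
    n_val = int(len(data) * val_frac)
    test = [data[i] for i in idx[:n_test]]
    val = [data[i] for i in idx[n_test:n_test + n_val]]
    train = [data[i] for i in idx[n_test + n_val:]]
    return train, val, test
```

Slicing one shuffled index list guarantees the three sets are disjoint and together cover every sample exactly once.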
-
In the paper, not many details are given regarding the autoencoder training for text-to-image, and those would be very helpful! Can we get some answers?
- Which dataset is the autoencoder trained on?…