-
**Version**
PyABSA = 2.3.4rc0
Torch = 2.1.1
Transformers = 4.35.2
**Describe the bug**
I've fine-tuned a model with the FAST_LSA_S_V2 config on the same dataset using the APCTrainer. In one of …
-
First of all, thank you for sharing such excellent work!
The pretrained weights of PointNext have been released, trained with '--use_height'. However, during fine-tuning on ModelNet40 or ScanObjectNN, …
-
Hello. Your research is very intriguing and has been a great inspiration for me as a researcher in the field of human pose estimation. I'm writing because I encountered an issue when trying to run the…
-
- [ ] LoRA - Prince/Punith/Lavrenti
- [ ] Train-last layer - CLIPAdapter - Mihir
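For the CLIPAdapter item, a minimal sketch of the adapter head in the style of CLIP-Adapter (Gao et al.), assuming precomputed features from a frozen CLIP backbone; the feature dimension, bottleneck reduction, and residual ratio below are illustrative placeholders, not settings from this checklist:

```python
import torch
import torch.nn as nn


class CLIPAdapter(nn.Module):
    """Bottleneck MLP over frozen CLIP features, blended with the
    original feature through a residual ratio (ratio=0 -> identity)."""

    def __init__(self, dim=512, reduction=4, ratio=0.2):
        super().__init__()
        self.adapter = nn.Sequential(
            nn.Linear(dim, dim // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(dim // reduction, dim, bias=False),
            nn.ReLU(inplace=True),
        )
        self.ratio = ratio

    def forward(self, feats):
        # Only the adapter is trained; the CLIP backbone that produced
        # `feats` stays frozen upstream.
        return self.ratio * self.adapter(feats) + (1.0 - self.ratio) * feats
```

During training, only `adapter` parameters would be passed to the optimizer, so the cost is a fraction of full fine-tuning.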
-
For general corpora such as news articles or novels, the data is not in a QA format. How can such data be used for continued fine-tuning?
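One common answer is continued (domain-adaptive) pretraining: skip the QA template entirely and train on the raw text with the plain causal-LM (next-token) objective. A minimal sketch, assuming Hugging Face `transformers`/`datasets`; the model name, corpus path, and block size are placeholder assumptions:

```python
def group_into_blocks(token_id_seqs, block_size):
    """Concatenate tokenized documents and cut them into fixed-size
    training blocks, dropping the incomplete tail."""
    flat = [t for seq in token_id_seqs for t in seq]
    usable = (len(flat) // block_size) * block_size
    return [flat[i:i + block_size] for i in range(0, usable, block_size)]


def continued_pretraining(corpus_path, model_name="gpt2", block_size=512):
    """Sketch of causal-LM fine-tuning on plain text (news, novels, ...).
    `corpus_path` and `model_name` are illustrative placeholders."""
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    raw = load_dataset("text", data_files={"train": corpus_path})
    tokenized = raw.map(lambda ex: tokenizer(ex["text"]), batched=True,
                        remove_columns=["text"])
    blocks = tokenized.map(
        lambda ex: {"input_ids": group_into_blocks(ex["input_ids"], block_size)},
        batched=True, remove_columns=tokenized["train"].column_names)

    # mlm=False gives the causal objective: labels are the inputs
    # shifted by one inside the collator.
    collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
    Trainer(model=model,
            args=TrainingArguments(output_dir="out", num_train_epochs=1),
            train_dataset=blocks["train"],
            data_collator=collator).train()
```

If the goal is still QA behavior afterwards, this stage is usually followed by a separate instruction/QA fine-tuning pass on formatted pairs.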
-
First, regarding the training command for the standard Kinetics dataset:
`--cfg configs/Kinetics/c2/SLOWFAST_8x8_R50.yaml DATA.PATH_TO_DATA_DIR dataset/ TRAIN.CHECKPOINT_FILE_PATH ./checkpoints/SLOWFAST_8x8_R50.pkl TRAIN.ENABLE True …
-
### Describe the feature
Dear,
I assume many developers have the same request I had: to fine-tune Llama2 70b beyond a single node's capacity.
I wonder whether you have a full example …
-
Great work! I have some questions about fine-tuning on ImageNet-1K. In the paper, you state that the 384^2 input models are obtained by fine-tuning, as also pointed out in #24:
>For other resolutions such as…
-
Hi @gy20073 ,
I am trying to test the output of the compact bilinear layer but can't get the result right. I have used it for training and the results are very good, which means the layer should wor…
-
I want to use the class "ParallelSentencesDataset" to load my very large parallel corpus to fine-tune the pre-trained model "LaBSE". But when I used it, it seems that this class "ParallelSentencesDatase…