Hello, I wanted to work on the pretraining+fine-tuning part and had some questions along the way:
Running it produced a warning and an error like these:
I did not get any errors on the baseline datasets used for pretraining. Is there any way I could fix this?
When I perform just classification versus pretraining+fine-tuning, the dataset input sizes come out different. For instance, when I run just classification on SpokenArabicDigits, the shape looks like this:
The two pictures show different sample sizes being fed in for the different tasks. What is the logic behind this difference? I would greatly appreciate it if you could answer these questions. Thank you!
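For reference, this is roughly how I am comparing the batch shapes between the two tasks. It is only a minimal sketch assuming a standard PyTorch `DataLoader`; the tensor sizes and the crop length are placeholders, not the actual SpokenArabicDigits pipeline:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder tensor standing in for SpokenArabicDigits:
# (num_samples, seq_len, num_channels) -- sizes are made up for illustration.
data = torch.randn(64, 93, 13)
labels = torch.randint(0, 10, (64,))

# Classification: the full (padded) sequence is fed to the model.
clf_loader = DataLoader(TensorDataset(data, labels), batch_size=16)
x, y = next(iter(clf_loader))
print("classification batch:", tuple(x.shape))  # (16, 93, 13)

# Pretraining (e.g. masked reconstruction) may crop/subsample sequences,
# so the time dimension of a batch can differ from the classification case.
crop_len = 50  # placeholder crop length
pre_loader = DataLoader(TensorDataset(data[:, :crop_len, :]), batch_size=16)
(x_pre,) = next(iter(pre_loader))
print("pretraining batch:", tuple(x_pre.shape))  # (16, 50, 13)
```

The two `print` lines are what I compared against the screenshots above.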