1. Google Colab Pro
Colab Pro is the fastest VM I have.
Even so, training took 2 ~ 3 hours, and the performance was still not good with beomi/KcELECTRA-base and DistilBERT.
I think I should increase the number of epochs to improve performance, but I don't have enough time to run that experiment (the baseline code trains for 1000 epochs).
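As a rough sanity check, extrapolating the observed run time to the baseline's 1000 epochs shows why that experiment does not fit in the remaining time. The per-epoch figure below assumes the 2 ~ 3-hour runs covered about 5 epochs; that is a guess for illustration, not a measurement:

```python
# Back-of-the-envelope projection (illustrative numbers, not measured per-epoch).
hours_for_5_epochs = 2.5           # observed: roughly 2 ~ 3 hours per training run
hours_per_epoch = hours_for_5_epochs / 5
baseline_epochs = 1000             # epoch count used by the baseline code
projected_hours = hours_per_epoch * baseline_epochs
projected_days = projected_hours / 24

print(f"~{projected_hours:.0f} hours (~{projected_days:.1f} days) for {baseline_epochs} epochs")
# → ~500 hours (~20.8 days) for 1000 epochs
```

Even if the per-epoch guess is off by a factor of two, the baseline schedule is out of reach on any of these VMs.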
2. Ainize Preemptive-T4, 50GB
Common Computer provides a free GPU VM (I guess it's a little faster than the Colab free version and Backend.ai).
I can only use it for 24 hours at a time, and then the VM is locked for a day (unlike the Colab free version, though, everything on it is saved).
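Since the Ainize VM locks after 24 hours but keeps its disk, a checkpoint-and-resume pattern lets one training run continue across sessions. A minimal sketch assuming a plain PyTorch loop; the model, optimizer, epoch count, and file name here are placeholders, not the project's actual code:

```python
import os
import torch
import torch.nn as nn

CKPT = "checkpoint.pt"  # survives on disk between 24-hour sessions

# Placeholder model/optimizer standing in for the real fine-tuning setup.
model = nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
start_epoch = 0

# Resume from the checkpoint left by the previous session, if any.
if os.path.exists(CKPT):
    state = torch.load(CKPT)
    model.load_state_dict(state["model"])
    opt.load_state_dict(state["optimizer"])
    start_epoch = state["epoch"] + 1

for epoch in range(start_epoch, 5):
    # ... one epoch of actual training would go here ...
    # Save after every epoch so at most one epoch is lost when the VM locks.
    torch.save({"model": model.state_dict(),
                "optimizer": opt.state_dict(),
                "epoch": epoch}, CKPT)
```

Rerunning the same script in the next session picks up from the last saved epoch instead of starting over.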
3. Backend.ai GPU VM
Faster than Colab Free, but the slowest among Colab Pro and Ainize.
4. Model's performance issue
None of the VMs is fast enough to train the model quickly, and only 5 epochs is not enough for good performance.
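If more epochs are needed but GPU hours are scarce, patience-based early stopping is one standard way to avoid spending time on epochs that no longer improve validation loss. A pure-Python sketch of the idea; the loss curve below is hypothetical and none of the numbers come from the actual runs:

```python
# Stop when validation loss hasn't improved for `patience` consecutive epochs,
# so limited GPU hours aren't spent on epochs that no longer help.
def train_with_early_stopping(val_losses, patience=3):
    best = float("inf")
    bad_epochs = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, bad_epochs = loss, 0
        else:
            bad_epochs += 1
        if bad_epochs >= patience:
            return epoch, best   # stopped early
    return len(val_losses) - 1, best

# Hypothetical validation-loss curve: improves for a few epochs, then plateaus.
losses = [0.9, 0.7, 0.6, 0.55, 0.56, 0.57, 0.58, 0.59]
stop_epoch, best_loss = train_with_early_stopping(losses)
```

In this example the run stops at epoch 6 with the epoch-3 loss of 0.55, instead of burning through the full schedule.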
(Why am I not using my M1 Mac? The Apple Silicon chip is useless until the ML open-source ecosystem officially supports it.)
I still have time before the project deadline, so I will run the experiments until December 16th and then finish.
GPU VM performance is always a big issue when trying to build a good, time-efficient model...