**Open** — pranavguru opened this issue 1 month ago
This uses a similar mechanism to https://github.com/ManifoldRG/MultiNet/issues/71#issuecomment-2266368447, but since only the inference cost is published, I applied a multiplication factor (e.g. 4) to the inference time to approximate the training cost.
Here is the calculation algorithm for one training epoch per dataset:
Then we calculate the inference cost for one epoch on the validation set using the same algorithm.
Finally, we can get the total fine-tuning cost for one dataset: (number of epochs) × (training cost per epoch + inference cost per epoch)
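The estimate above can be sketched as a small helper. This is only an illustrative sketch, not the notebook's actual code: the function name, parameters, and the sample numbers below are all hypothetical, and the 4× training multiplier is the assumed rule of thumb being questioned.

```python
def estimate_finetune_cost(
    inference_sec_per_iter: float,  # measured inference time per iteration
    train_iters_per_epoch: int,     # iterations in one training epoch
    val_iters_per_epoch: int,       # iterations in one validation epoch
    num_epochs: int,
    train_multiplier: float = 4.0,  # assumed training-vs-inference factor
) -> float:
    """Return the total estimated fine-tuning time in seconds."""
    # Training cost per epoch: inference time scaled by the multiplier,
    # since only inference cost is published.
    train_cost = inference_sec_per_iter * train_multiplier * train_iters_per_epoch
    # Inference (validation) cost per epoch, measured directly.
    infer_cost = inference_sec_per_iter * val_iters_per_epoch
    # Total: (number of epochs) * (training cost + inference cost per epoch)
    return num_epochs * (train_cost + infer_cost)

# Hypothetical numbers: 0.5 s/iter inference, 1000 train iters,
# 100 val iters, 3 epochs.
print(estimate_finetune_cost(0.5, 1000, 100, 3))  # → 6150.0
```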
Here is the code: https://colab.research.google.com/drive/14j5DBPpgk9-Z-h8kQ6lMiiIyilxs1zko?usp=sharing
Is multiplying the inference time by a factor of 4 a common rule of thumb when estimating fine-tuning time (seconds per iteration)?
Look into: