Closed Mimocro closed 3 years ago
Hi, welcome!
Hi, as early as 2017 I participated in the open beta of the TPU, back when it was used only for quickly verifying model feasibility. Now, with support from ONNX, XLA, and PyTorch Lightning, cross-device training and inference have become much more convenient.
That said, I have to admit that this project's pipeline support for TPU hardware is incomplete. Since small-batch-size inference cannot exploit a TPU's real performance, we did not originally envision running the project on TPU hardware.
BTW, a TPU unit is a self-contained CPU/GPU/TPU core combination, so the output we want may not end up on the Colab VM's filesystem at all.
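As a quick sanity check for the empty-output symptom, you could list the output directory from inside the Colab notebook after the run finishes. This is only a sketch: `OUTPUT_DIR` below is a hypothetical placeholder, so substitute the path the notebook actually writes to.

```python
import os

# Hypothetical path - replace with the notebook's real output directory.
OUTPUT_DIR = "output"

os.makedirs(OUTPUT_DIR, exist_ok=True)
files = sorted(os.listdir(OUTPUT_DIR))
if files:
    print(f"{len(files)} file(s) in {OUTPUT_DIR}: {files[:5]}")
else:
    # An empty listing here suggests the results were written on the
    # TPU host (or the pipeline silently failed), not on this Colab VM.
    print(f"{OUTPUT_DIR} is empty on this VM")
```

If the directory is empty on the Colab VM even though the progress bar completed, that is consistent with the results living on the TPU side rather than the local filesystem.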
Thanks for your feedback :)
I tried to run MVIMP_Waifu2x-ncnn-Vulkan_Demo.ipynb on Google Colab with a TPU and it was much faster than on the GPU (5.00s/it vs 15.00it/s), but afterwards the output directory was empty. Is there a way to run it normally on a TPU?