Thank you for open-sourcing this work and giving us a glimpse of Google's achievements on the NAS platform. After going through the documentation and the existing issues, I have successfully run network search on feature data as well as on binary and multi-class image classification tasks.
The current problem: model_search trains on only a single GPU, and the computational efficiency is not satisfactory. Is there a way to modify model_search (via the distribution configs?) to support synchronous or asynchronous multi-GPU training?
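To make the question concrete, here is a sketch of the kind of change I have in mind. This is my assumption about how it might be wired up, not something verified against model_search's internals: since the trainer is Estimator-based, perhaps a `tf.distribute.MirroredStrategy` could be attached through the `RunConfig` (the helper name `make_distributed_run_config` is mine):

```python
import tensorflow as tf


def make_distributed_run_config(model_dir: str) -> tf.estimator.RunConfig:
    """Build a RunConfig that mirrors training across all visible GPUs.

    MirroredStrategy performs synchronous all-reduce training on every
    GPU visible to the process, and falls back to a single device
    (CPU or one GPU) when only one is available.
    """
    strategy = tf.distribute.MirroredStrategy()
    return tf.estimator.RunConfig(
        model_dir=model_dir,
        train_distribute=strategy,  # distribute the training step
        eval_distribute=strategy,   # and, optionally, evaluation too
    )
```

Would passing a `RunConfig` like this into the search trainer be enough for synchronous multi-GPU training, or does the candidate-evaluation loop assume a single device elsewhere?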