While looking through Nvidia's NGC catalog, where the ESS DNN resides, I started reading about the TAO Toolkit and its ability to take pre-trained models, fine-tune them with additional data, prune them, and perform other operations to fully optimize them.
In my use case, I'd like to train further on a custom dataset, one captured on the same stereo camera hardware I intend to later run inference on. In theory, this would allow the network to perform even better. The ability to prune the model and thus further reduce its computational cost is also appealing.
My problem is that I'm having a hard time finding out whether this is possible with .etlt files. There are Jupyter notebooks available for this kind of work on more common deep learning tasks (object detection, semantic segmentation), but there isn't one for the ESS DNN. So I'm wondering whether it's even possible with the resources currently available, and if not, whether support for it is planned in the future.
Thanks!