sacmehta / ESPNet

ESPNet: Efficient Spatial Pyramid of Dilated Convolutions for Semantic Segmentation
https://sacmehta.github.io/ESPNet/
MIT License

How to optimize the model and reduce the memory? #52

Closed: berylyellow closed this issue 5 years ago

berylyellow commented 5 years ago

Hi @sacmehta, I ran into a problem when running your code on a Jetson TX2. It uses about 1.5 GB of memory; how can I optimize the code to minimize memory use while keeping accuracy?

sacmehta commented 5 years ago

Could you please provide more details, e.g., the JetPack version?

berylyellow commented 5 years ago

@sacmehta I need to run a semantic segmentation model on the Jetson TX2 alongside other algorithms. The total global memory is only 7851 MB, so resources are very tight, and the TX2 cannot do anything else when all the algorithms are running. ESPNet takes up about 1.5-2 GB of memory. Are there any lighter models that run better on the TX2?

sacmehta commented 5 years ago

As far as I know, no. What you could do is use half-precision floating point instead of full precision; that will reduce memory use and increase speed too. Also, you can try ESPNetv2. We provide models at different complexities, so you can use a lower-complexity model (s=0.5).
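For reference, a minimal sketch of FP16 inference in PyTorch; the checkpoint path and input size below are placeholders, not the exact ESPNet loading code:

```python
import torch

# Placeholder: load your trained segmentation model however you normally do.
model = torch.load('espnet_model.pth')   # hypothetical checkpoint path
model = model.cuda().eval().half()       # cast weights to FP16

with torch.no_grad():
    # The input tensor (N, C, H, W) must be cast to FP16 as well.
    img = torch.randn(1, 3, 512, 1024).cuda().half()
    out = model(img)                     # FP16 inference roughly halves activation memory
```

Note that some layers (e.g., batch norm) can be numerically sensitive in FP16, so check that segmentation quality is unchanged before deploying.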

BTW, the TX2 is not designed for running multiple applications the way we do on a desktop.

sacmehta commented 5 years ago

Also, you could try binarizing it, with something like XNOR-Net. That will reduce memory requirements by a lot, but it will drop accuracy too, and you need to be careful while binarizing it.
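As a rough illustration (not part of this repo), XNOR-Net-style weight binarization keeps only the sign of each weight plus one per-filter scaling factor. A sketch in PyTorch:

```python
import torch

def binarize_conv_weight(weight):
    """XNOR-Net-style weight binarization (sketch, assumes a 4D conv weight).

    weight: (out_channels, in_channels, kH, kW) float tensor.
    Returns sign(weight) scaled by the per-filter mean absolute value,
    so each filter effectively takes values in {-alpha, +alpha}.
    """
    # Per-output-filter scaling factor: alpha = mean(|W|)
    alpha = weight.abs().mean(dim=(1, 2, 3), keepdim=True)
    return torch.sign(weight) * alpha
```

In practice you would also binarize activations and retrain the network, which is where the accuracy drop and the need for care come in.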

berylyellow commented 5 years ago

Thanks for your advice. I tested ESPNetv2 with both s=0.5 and s=1 on my desktop, but they take up the same amount of memory either way.