infinitered / nsfwjs

NSFW detection on the client-side via TensorFlow.js
https://nsfwjs.com/
MIT License

Compiling model #342

Open creativenoobie opened 4 years ago

creativenoobie commented 4 years ago

Hey,

Thank you for all the efforts on this library.

So I downloaded the models from https://github.com/GantMan/nsfw_model/releases/tag/1.1.0 and was able to use them with nsfwjs via the graph option.

I just wanted to know how I can convert these models from the graph format to the image (299x299) format. I know I can just download from your S3 bucket, but I don't want to increase your network costs, since I will be deploying the same model to a cluster of servers, so I was looking for an optimal solution.

Also, I just noticed that you've deployed a new model (mobilenetMid) with similar accuracy but a smaller size. Any tips on how to compile the new model for backend (Node.js) use?

GantMan commented 4 years ago

Hi! I'm not sure I understand the question.

> I just wanted to know how can I compile these models from graph to image (299x299)? I know I can just download from your S3 bucket but I don't want to increase your network costs since I will be deploying the same to cluster of servers so I was looking for an optimum solution that could help me.

Can you elaborate?

creativenoobie commented 4 years ago

Basically, I am using nsfwjs with type: graph: nsfwjs.load('/path/to/different/model/', { type: 'graph' })

This process takes up to 150 MB per worker in my Node.js application, which is a lot, to be honest.

I just want to know how I can load the model using the size: 299 (299x299) option. I am hoping this will significantly reduce the memory usage. I do understand that the size option won't work directly with the https://github.com/GantMan/nsfw_model/releases/tag/1.1.0 models and that I might need to convert them first, so I just wanted to know how I can convert these models from graph to image.
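For reference, loading a locally hosted graph model with nsfwjs in Node looks roughly like this. The directory layout is an assumption (the tfjs release files unpacked into ./models/graph/), and file:// URLs require @tensorflow/tfjs-node to be installed:

```javascript
// Sketch: loading a local graph model with nsfwjs (paths are assumptions).
const MODEL_URL = 'file://./models/graph/model.json';

async function loadGraphModel() {
  // require here so the sketch stays inert until actually called
  const nsfwjs = require('nsfwjs'); // assumes nsfwjs and @tensorflow/tfjs-node are installed
  // type: 'graph' makes nsfwjs use tf.loadGraphModel instead of tf.loadLayersModel
  return nsfwjs.load(MODEL_URL, { type: 'graph' });
}
```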

GantMan commented 4 years ago

The size option depends on the model you load. You won't be able to use the graph model with size 299, because it was trained on 224x224 data.
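In other words, the size option only tells nsfwjs what input resolution to feed the model; it must match the resolution the model was trained at. A hypothetical sketch (the URL is a placeholder, not a real hosted model):

```javascript
// Hypothetical sketch: `size` must match the model's training resolution.
// Passing 299 to a model trained on 224x224 inputs will fail at inference
// time; the option does not resample or retrain the weights.
async function loadWithSize(modelUrl, inputSize) {
  const nsfwjs = require('nsfwjs'); // assumes nsfwjs is installed
  return nsfwjs.load(modelUrl, { size: inputSize }); // e.g. 299 for an InceptionV3-based model
}
```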

That is a lot of space. I wonder what's really taking up that process memory. Have you experimented with loading smaller models?

You can create a 48 KB model using this page I made: https://rps-tfjs.netlify.com/ — I'd be interested in seeing the process memory for that.