GantMan / nsfw_model

Keras model of NSFW detector

The names of the requested model outputs. #47

Closed · ghost closed 4 years ago

ghost commented 4 years ago

Quick question for any ML expert out there,

I'm writing a C# program and would love to use this model in it. In order to bind the model to C#, I need the names of the requested model outputs (in Netron, when you click on softmax there is usually an ID for the output data). Unfortunately, outputs is empty in Netron. Is there a default output ID I can use?

(see outputColumnName at https://docs.microsoft.com/en-us/dotnet/api/microsoft.ml.transforms.tensorflowmodel.scoretensorflowmodel?view=ml-dotnet#Microsoft_ML_Transforms_TensorFlowModel_ScoreTensorFlowModel_System_String_System_String_System_Boolean_ )

TechnikEmpire commented 4 years ago

You don't want the outputs; you want the name of the final softmax layer. That should be it. That layer's output is an array of length 5 representing the 5 classes.
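(A minimal sketch of reading that length-5 array, assuming the alphabetical class order this repo uses: drawings, hentai, neutral, porn, sexy. The scores values below are placeholders, not real model output.)

```csharp
using System;
using System.Linq;

class SoftmaxReadout
{
    static void Main()
    {
        // Class order assumed alphabetical, matching this repo's categories.
        string[] labels = { "drawings", "hentai", "neutral", "porn", "sexy" };

        // Placeholder values standing in for the float[5] pulled from the
        // softmax node at inference time.
        float[] scores = { 0.01f, 0.02f, 0.90f, 0.05f, 0.02f };

        int best = Array.IndexOf(scores, scores.Max());
        Console.WriteLine($"{labels[best]}: {scores[best]:P1}");
    }
}
```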

ghost commented 4 years ago

Thanks @TechnikEmpire for your reply. I'm aware that I need the softmax layer's name, but unfortunately I can't seem to find it. Usually it appears under output, but for some reason output is completely empty. Should I be using a different value as its name? Thanks.

[screenshot]

ghost commented 4 years ago

Should I be using dense_3 as its name instead?

ghost commented 4 years ago

This is the value I usually take.

[screenshot]

TechnikEmpire commented 4 years ago

@Nicolas-Connor No, you use the "name" field here. What these frameworks are doing is parsing the graph, looking for nodes with the names you provide as "input" and "output", and then they know which nodes to feed data into and pull data out of.

In your case, "dense_3/Softmax" is what you'd take.

In the case of the newest model(s) (on the releases page here), the output would be "sequential/prediction/Softmax".

[screenshot]
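(A minimal ML.NET sketch of how that node name plugs into the ScoreTensorFlowModel API linked above. The node names "dense_3/Softmax" and "sequential/prediction/Softmax" come from this thread; the model file name and the input node name "input_1" are assumptions to be checked in Netron.)

```csharp
using Microsoft.ML;

class Program
{
    static void Main()
    {
        var mlContext = new MLContext();

        // Load the frozen TensorFlow graph; the file name is an assumption.
        var tfModel = mlContext.Model.LoadTensorFlowModel("nsfw_mobilenet.pb");

        // outputColumnName must match the graph node name found in Netron:
        // "dense_3/Softmax" for the older model, or
        // "sequential/prediction/Softmax" for the newest release.
        // The input node name "input_1" is an assumption; verify it in Netron.
        var pipeline = tfModel.ScoreTensorFlowModel(
            outputColumnName: "dense_3/Softmax",
            inputColumnName: "input_1",
            addBatchDimensionInput: true);
    }
}
```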

TechnikEmpire commented 4 years ago

@Nicolas-Connor Btw, you might want to edit the picture you posted; I can see the name and path of your repository. From one guy developing content filtering software to another, let me share this with you:

You can convert the latest model (on the releases page) to Intel DLDT (compile openCV with Inference Engine support) with this command:

python mo_tf.py --input_model frozen_graph.pb --model_name [NEW_MODEL_FILE_NAME] --data_type FP16 --mean_values=[0,0,0] --input_shape=[1,224,224,3] --scale_values=[255,255,255] --enable_concat_optimization --reverse_input_channels

And then you can run inference in milliseconds on average hardware. I scan every image that comes across the network wire, including GIFs, and you can't even observe any delay.
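(A hedged sketch of what the inference side could look like from C# through the OpenCvSharp wrapper, assuming OpenCV was built with Inference Engine support as described above. The .xml/.bin file names and image path are placeholders; none of this code comes from the thread itself.)

```csharp
using System;
using OpenCvSharp;
using OpenCvSharp.Dnn;

class Program
{
    static void Main()
    {
        // mo_tf.py emits an .xml/.bin IR pair; these names are assumptions.
        using var net = CvDnn.ReadNetFromModelOptimizer("nsfw.xml", "nsfw.bin");

        using var image = Cv2.ImRead("test.jpg");

        // Mean, scale, and channel reversal were baked into the IR by the
        // mo_tf.py flags above, so the blob needs no extra preprocessing.
        using var blob = CvDnn.BlobFromImage(image, 1.0, new Size(224, 224),
            new Scalar(), swapRB: false, crop: false);
        net.SetInput(blob);

        // Forward() returns a 1x5 Mat of class probabilities.
        using var prob = net.Forward();
        for (int i = 0; i < prob.Cols; i++)
            Console.WriteLine($"class {i}: {prob.At<float>(0, i):P1}");
    }
}
```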

ghost commented 4 years ago

Wow, thanks a million @TechnikEmpire, both for helping me get my content filtering going (it looks like you've been down this road already with Technik Empire) and for mentioning Intel DLDT; I'll be checking that out down the road.

I'm hitting another roadblock, unfortunately, but this seems to be an issue with the framework not supporting QuantizeV2 (whatever that is) rather than an issue with this model. I'll have to try to find a more robust framework for integrating newer TF models into WPF C# applications.

Microsoft.ML.Transforms.TensorFlow.TFException: 'No OpKernel was registered to support Op 'QuantizeV2' used by {{node block_4_expand/convolution_eightbit/block_4_expand/kernel/read/quantize}} with these attrs: [T=DT_QUINT8, round_mode="HALF_AWAY_FROM_ZERO", mode="MIN_FIRST"]