infinitered / nsfwjs

NSFW detection on the client-side via TensorFlow.js
https://nsfwjs.com/
MIT License

Different nsfw detection from nsfw_model #593

Open flashultra opened 2 years ago

flashultra commented 2 years ago

I ran nsfw_model as a separate module, using the latest model, and compared the results with nsfwjs.com, and there are some big differences. For example, from nsfw_model I got the following values:

"drawings": 0.041909363120794296,
"hentai": 0.36491256952285767,
"neutral": 0.41387316584587097,
"porn": 0.13507777452468872,
"sexy": 0.04422718659043312

but from nsfwjs.com I got:

Porn - 93.89%
Neutral - 3.93%
Drawing - 0.89%
Hentai - 0.83%
Sexy - 0.46%

As you can see, for the same image porn is 13% in one case and 93% in the other. Here is the link to the image: image
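For anyone reproducing this comparison, a small helper (hypothetical, not part of either library) can reshape the nsfwjs classify() output, which is an array of { className, probability } objects as used in the Node example later in this thread, into the same keyed format as the nsfw_model JSON above so the two runs can be diffed directly. A minimal sketch, using the numbers quoted above as sample data:

// Hypothetical helper, not part of nsfwjs or nsfw_model: reshape classify()
// output into an nsfw_model-style object. Note the label mismatch:
// nsfw_model reports "drawings" while nsfwjs reports "Drawing".
const LABEL_MAP = { drawing: 'drawings' }

function toScores(predictions) {
  const scores = {}
  for (const p of predictions) {
    const key = p.className.toLowerCase()
    scores[LABEL_MAP[key] || key] = p.probability
  }
  return scores
}

// Sample data (rounded) taken from the two listings above
const nsfwModelScores = {
  drawings: 0.0419, hentai: 0.3649, neutral: 0.4139, porn: 0.1351, sexy: 0.0442,
}
const nsfwjsComScores = {
  drawings: 0.0089, hentai: 0.0083, neutral: 0.0393, porn: 0.9389, sexy: 0.0046,
}

// Print the per-class difference in percentage points
for (const label of Object.keys(nsfwModelScores)) {
  const a = nsfwjsComScores[label] * 100
  const b = nsfwModelScores[label] * 100
  console.log(label + ': nsfwjs.com=' + a.toFixed(2) + '% nsfw_model=' + b.toFixed(2) + '% diff=' + (a - b).toFixed(2) + 'pp')
}

// With a live run, predictions from the Node example below could be fed in via:
//   const nsfwjsScores = toScores(await model.classify(image))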

Is this model [quant_mid](https://github.com/infinitered/nsfwjs/tree/master/example/nsfw_demo/public/quant_mid) different from nsfw_model? And if so, how can I load it in nsfw_model?

GantMan commented 2 years ago

Keeping the nsfw_model releases in sync with NSFWJS.com has been a bit murky, so I'm not surprised you're getting different results. I think the nsfw_model repo is a demo of HOW to build the model, and NSFWJS.com is how to wrap the model for consumption in JS.

If I circle back to creating a new model in Python, I'll be sure to keep them in lockstep going forward. But as a free open source exploratory software project, it's not as organized as I wish it were.

flashultra commented 2 years ago

I also get different results from nsfwjs when I create a simple test and compare against nsfwjs.com. Here is my example:

const axios = require('axios')
const tf = require('@tensorflow/tfjs-node')
const nsfw = require('nsfwjs')

async function fn() {
  // Download the test image as a raw buffer
  const pic = await axios.get('https://i.ibb.co/zJz0192/17771214.jpg', {
    responseType: 'arraybuffer',
  })
  // Load the local copy of the quant_mid graph model
  const model = await nsfw.load('file://C:/nsfw/model/', { type: 'graph' })
  // Decode the JPEG into a 3-channel tensor and classify it
  const image = await tf.node.decodeImage(pic.data, 3)
  const predictions = await model.classify(image)
  image.dispose()
  predictions.forEach(prediction =>
    console.log(prediction.className + ' ' + (prediction.probability * 100).toFixed(2))
  )
}

fn()

where the /model directory is the same as quant_mid. My results are: Porn - 92.78%, Neutral - 4.60%, Hentai - 1.01%, Drawing - 0.97%, Sexy - 0.64%.

Results from nsfwjs.com (for the same picture) are: Porn - 93.89%, Neutral - 3.93%, Drawing - 0.89%, Hentai - 0.83%, Sexy - 0.46%.

Any idea why the results are different? The image is this one: https://i.ibb.co/zJz0192/17771214.jpg
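One plausible source of this remaining gap (92.78% locally vs 93.89% on the site) is the image decode and resize path: nsfwjs.com runs the model client-side in the browser, while the script above decodes with tf.node.decodeImage in Node, so the pixels fed to the model can differ slightly. A rough way to test that is to resize the image once up front and feed the identical pre-resized file to both pipelines. The sketch below assumes tfjs-node and a 224x224 input size; the size is a guess and should be replaced with whatever input size quant_mid actually expects.

// Debugging sketch: pre-resize the image once so the Node script and
// nsfwjs.com both start from identical pixels. If the 92.78% vs 93.89% gap
// comes from different decode/resize paths, it should shrink.
// The 224x224 size is an assumption; use the model's real input size.
const fs = require('fs')
const tf = require('@tensorflow/tfjs-node')

async function saveResized(inputPath, outputPath, size = 224) {
  const raw = fs.readFileSync(inputPath)
  const decoded = tf.node.decodeImage(raw, 3)                      // HxWx3 int32 tensor
  const resized = tf.image.resizeBilinear(decoded, [size, size], true)
  const jpeg = await tf.node.encodeJpeg(tf.cast(resized, 'int32')) // encodeJpeg wants int32 pixel values
  fs.writeFileSync(outputPath, jpeg)
  tf.dispose([decoded, resized])
}

saveResized('17771214.jpg', '17771214-resized.jpg')

If the gap persists with an identical pre-resized input, the difference more likely comes from the model weights actually being served, which loops back to the release-sync point above.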