Trained on 60+ GB of data to identify:
- drawings - safe for work drawings (including anime)
- hentai - hentai and pornographic drawings
- neutral - safe for work neutral images
- porn - pornographic images, sexual acts
- sexy - sexually explicit images, not pornography

This model powers NSFW JS - More Info

93% accuracy with the accompanying confusion matrix, based on Inception V3.
Requirements: see requirements.txt.

For programmatic use of the library:
from nsfw_detector import predict
model = predict.load_model('./nsfw_mobilenet2.224x224.h5')
# Predict single image
predict.classify(model, '2.jpg')
# {'2.jpg': {'sexy': 4.3454722e-05, 'neutral': 0.00026579265, 'porn': 0.0007733492, 'hentai': 0.14751932, 'drawings': 0.85139805}}
# Predict multiple images at once
predict.classify(model, ['/Users/bedapudi/Desktop/2.jpg', '/Users/bedapudi/Desktop/6.jpg'])
# {'2.jpg': {'sexy': 4.3454795e-05, 'neutral': 0.00026579312, 'porn': 0.0007733498, 'hentai': 0.14751942, 'drawings': 0.8513979}, '6.jpg': {'drawings': 0.004214506, 'hentai': 0.013342537, 'neutral': 0.01834045, 'porn': 0.4431829, 'sexy': 0.5209196}}
# Predict for all images in a directory
predict.classify(model, '/Users/bedapudi/Desktop/')
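Each entry in the dict returned by classify is a map from class name to probability. A common next step is to pick the top label or flag an image as NSFW. The sketch below is my illustration, not part of nsfw_detector: the 0.5 threshold and the grouping of porn/hentai/sexy as "NSFW" are assumptions you should tune for your use case.

```python
# Helpers for interpreting predict.classify() output.
# NSFW_CLASSES and the threshold are assumptions for illustration,
# not part of the nsfw_detector library itself.
NSFW_CLASSES = {"porn", "hentai", "sexy"}

def top_label(scores):
    """Return the (label, probability) pair with the highest score."""
    return max(scores.items(), key=lambda kv: kv[1])

def is_nsfw(scores, threshold=0.5):
    """Flag an image if the combined NSFW probability meets the threshold."""
    return sum(scores[c] for c in NSFW_CLASSES if c in scores) >= threshold

# Example: the single-image output shown above
scores = {'sexy': 4.3454722e-05, 'neutral': 0.00026579265,
          'porn': 0.0007733492, 'hentai': 0.14751932, 'drawings': 0.85139805}
print(top_label(scores))  # ('drawings', 0.85139805)
print(is_nsfw(scores))    # False (combined NSFW mass is about 0.148)
```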
If you've installed the package or are using the command line, this should work too:
# a single image
nsfw-predict --saved_model_path mobilenet_v2_140_224 --image_source test.jpg
# an image directory
nsfw-predict --saved_model_path mobilenet_v2_140_224 --image_source images
# a single image (from code/CLI)
python3 nsfw_detector/predict.py --saved_model_path mobilenet_v2_140_224 --image_source test.jpg
Please feel free to use this model in your products! If you'd like to say thanks, I'll happily take a donation to cover hosting costs.
Latest model release: https://github.com/GantMan/nsfw_model/releases/tag/1.1.0
Kudos to the community for creating a PyTorch version with resnet! https://github.com/yangbisheng2009/nsfw-resnet
Simple description of the scripts used to create this model:

inceptionv3_transfer/
- Folder with all the code to train the Keras-based Inception v3 transfer learning model. Includes constants.py for configuration, and two scripts for actual training/refinement.

mobilenetv2_transfer/
- Folder with all the code to train the Keras-based MobileNet v2 transfer learning model.

visuals.py
- The code to create the confusion matrix graphic.

self_clense.py
- If the training data has significant inaccuracy, self_clense helps cross-validate errors in the training data in reasonable time. The better the model gets, the better you can use it to clean the training data manually.

For example:
cd training
# Start with all locked transfer of Inception v3
python inceptionv3_transfer/train_initialization.py
# Continue training on model with fine-tuning
python inceptionv3_transfer/train_fine_tune.py
# Create a confusion matrix of the model
python visuals.py
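The self-cleansing idea above, using the current model to cross-check folder labels, can be sketched as follows. This is my illustration, not the repo's self_clense.py: predict_label is a hypothetical stand-in for running predict.classify on an image and taking the top class, and the 0.9 confidence cutoff is an assumption.

```python
from pathlib import Path

def find_suspect_images(data_dir, predict_label, min_confidence=0.9):
    """Cross-validate training data against the current model.

    data_dir is laid out as <data_dir>/<class_name>/<image>, and
    predict_label is any callable returning (label, confidence) for an
    image path. Returns images whose confident prediction disagrees with
    the folder label, as candidates for manual review.
    """
    suspects = []
    for class_dir in sorted(Path(data_dir).iterdir()):
        if not class_dir.is_dir():
            continue
        for image in sorted(class_dir.iterdir()):
            label, confidence = predict_label(image)
            if label != class_dir.name and confidence >= min_confidence:
                suspects.append((image, class_dir.name, label, confidence))
    return suspects
```

Reviewing only the confident disagreements keeps the manual pass small, and each cleaning round should make the next model (and therefore the next cleaning round) better.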
There's no easy way to distribute the training data, but if you'd like to help with this model or train other models, get in touch with me and we can work together.
Advancements in this model power the quantized TFJS module on https://nsfwjs.com/
My Twitter is @GantLaborde - I'm a School of AI Wizard in New Orleans. I also run the Twitter account @FunMachineLearn.
Learn more about me and the company I work for.
Special thanks to the nsfw_data_scraper for the training data. If you're interested in a more detailed analysis of types of NSFW images, you could probably use this repo's code with that data.
If you need React Native, Elixir, AI, or Machine Learning work, check in with us at Infinite Red, who make all these experiments possible. We're an amazing software consultancy worldwide!
@misc{man,
  title={Deep NN for NSFW Detection},
  url={https://github.com/GantMan/nsfw_model},
  journal={GitHub},
  author={Laborde, Gant}
}
Thanks goes to these wonderful people (emoji key):
- Gant Laborde 💻 📖 🤔
- Bedapudi Praneeth 💻 🤔
This project follows the all-contributors specification. Contributions of any kind welcome!