Open yingshaoxo opened 3 years ago
It turns out to be a problem with the input.
Here are two functions; only the second one works well, which is strange:
The first method runs, but gets very bad accuracy.
import numpy as np
from tensorflow import keras
# IMAGE_DIM and classify_nd come from the repo's nsfw_detector/predict.py

def detect(self, numpyImage):
    # First method: start from an in-memory numpy array
    numpyImage = keras.preprocessing.image.smart_resize(
        numpyImage, size=(IMAGE_DIM, IMAGE_DIM), interpolation="nearest")
    numpyImage = numpyImage.astype(np.float32)  # np.float is deprecated in newer NumPy
    numpyImage /= 255
    probs = classify_nd(self.model, np.asarray([numpyImage]))
    return probs

The second method works as expected:

def classify(self, imagePath):
    # Second method: load the image from disk with keras
    image = keras.preprocessing.image.load_img(imagePath, target_size=(IMAGE_DIM, IMAGE_DIM))
    image = keras.preprocessing.image.img_to_array(image)
    image /= 255
    probs = classify_nd(self.model, np.asarray([image]))
    return probs
So is there a way to use the classify function without writing a .jpg file to disk first? Help, please.
You can, by replacing the file path with an io.BytesIO object.
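Something like this should work (a minimal sketch: it assumes load_img accepts a file-like io.BytesIO object, which recent keras_preprocessing versions do, that IMAGE_DIM and classify_nd come from the repo's predict.py, and classify_bytes is just an illustrative name):

import io
import numpy as np
from tensorflow import keras

def classify_bytes(self, image_bytes):
    # Wrap the raw encoded image bytes so load_img can read them without a temp file
    image = keras.preprocessing.image.load_img(
        io.BytesIO(image_bytes), target_size=(IMAGE_DIM, IMAGE_DIM))
    image = keras.preprocessing.image.img_to_array(image)
    image /= 255
    return classify_nd(self.model, np.asarray([image]))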
Same with me - trying to use the Keras Inception and MobileNet models provided by the repo author, I get an overwhelming number of "porn" classifications, even for random pictures like dogs, cars, etc. I tried "from tensorflow.keras.applications.inception_v3 import preprocess_input" and the same with MobileNet. I also tried manually dividing the arrays by 255. Still no luck. Help is much appreciated!
I tried resizing by different methods as well (the resize functions built into TensorFlow and PIL), and tried loading the images with cv2, PIL, and Keras.
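For what it's worth, those two preprocessing schemes produce different value ranges, so only one of them can match what the model was trained with: the repo's predict.py divides by 255 (values in [0, 1]), while inception_v3.preprocess_input rescales to [-1, 1]. A quick illustration:

import numpy as np
from tensorflow.keras.applications.inception_v3 import preprocess_input

x = np.array([[0.0, 127.5, 255.0]])
print(x / 255)              # [[0.  0.5 1. ]]  <- what the repo's predict.py does
print(preprocess_input(x))  # [[-1.  0.  1.]]  <- Inception-style scaling to [-1, 1]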
I'm also running into this problem: it works fine from a file on disk, but not when starting from byte arrays after resizing. My hypothesis is that something in what keras does when it loads an image is different: https://github.com/GantMan/nsfw_model/blob/master/nsfw_detector/predict.py#L42
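If that hypothesis is right, one way to test it is to reproduce the load_img path by hand starting from an in-memory array. A rough sketch (it assumes the array is already RGB uint8, noting that cv2.imread returns BGR; preprocess_like_keras is just an illustrative name, and load_img defaults to nearest-neighbour resizing):

import numpy as np
from PIL import Image

def preprocess_like_keras(numpy_image, image_dim=224):
    # Mimic predict.py's load_img / img_to_array / divide-by-255 pipeline
    pil_image = Image.fromarray(numpy_image.astype(np.uint8))  # expects RGB, not BGR
    pil_image = pil_image.resize((image_dim, image_dim), Image.NEAREST)
    array = np.asarray(pil_image, dtype=np.float32)
    return array / 255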
To be honest, the input should be a feature vector extracted from the last layer of InceptionV3.
Only that way can the model handle images of arbitrary size correctly.
In the current version, even a picture of a woman that is not porn gets tagged as porn with 0.9 confidence...
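If someone wants to try that route, here is a rough illustration (not the repo's actual setup) of getting a fixed-length feature vector out of InceptionV3 with global average pooling:

import numpy as np
from tensorflow import keras

# With include_top=False and pooling="avg", the network outputs a fixed
# 2048-dimensional feature vector per image.
feature_extractor = keras.applications.InceptionV3(
    include_top=False, weights="imagenet", pooling="avg")

image = np.random.rand(1, 299, 299, 3).astype(np.float32)  # placeholder input
features = feature_extractor.predict(image)
print(features.shape)  # (1, 2048)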