Thank you for bringing this issue to our attention. SafeVision is designed to detect nudity and inappropriate exposure in images; it uses a detection model to identify and blur specific exposed body parts.
However, the model's primary goal is not to distinguish between male and female, but to identify exposed body parts that may need to be blurred based on predefined labels. Mislabeling male body parts with female-specific labels points to a limitation in the model's classification accuracy.
The model does attempt to distinguish between genders when the features are clear, as seen in the attached pictures, but it can make mistakes, particularly with images of males, incorrectly identifying them as female.
[Attached image: the model correctly identifies the gender as male.]
[Attached image: the model fails to identify the gender as male.]
In the meantime, here are a few things you can check on your side:

1. Ensure Correct Exception Rules Setup: Verify that the exception rules file (`BlurException.rule`) is correctly configured. This file handles cases where certain labels should not be blurred or are detected incorrectly (see the filtering sketch after this list).
2. Adjust Detection Thresholds: You can adjust the detection threshold in the `_postprocess` function; fine-tuning this value can help improve the accuracy of the model:

```python
import cv2
import numpy as np

def _postprocess(output, resize_factor, pad_left, pad_top, score_threshold=0.5):
    # One row per candidate detection: [x, y, w, h, per-class scores...].
    outputs = np.transpose(np.squeeze(output[0]))
    rows = outputs.shape[0]
    boxes = []
    scores = []
    class_ids = []
    for i in range(rows):
        classes_scores = outputs[i][4:]
        max_score = np.amax(classes_scores)
        if max_score >= score_threshold:  # Use the score_threshold parameter here
            class_id = np.argmax(classes_scores)
            x, y, w, h = outputs[i][0], outputs[i][1], outputs[i][2], outputs[i][3]
            # Map the center-format box back to original image coordinates.
            left = int(round((x - w * 0.5 - pad_left) * resize_factor))
            top = int(round((y - h * 0.5 - pad_top) * resize_factor))
            width = int(round(w * resize_factor))
            height = int(round(h * resize_factor))
            class_ids.append(class_id)
            scores.append(max_score)
            boxes.append([left, top, width, height])
    # Non-maximum suppression drops overlapping duplicate boxes.
    indices = cv2.dnn.NMSBoxes(boxes, scores, score_threshold, 0.45)
    detections = []
    for i in indices:
        box = boxes[i]
        score = scores[i]
        class_id = class_ids[i]
        detections.append(
            {"class": __labels[class_id], "score": float(score), "box": box}
        )
    return detections
```
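For example, raising `score_threshold` filters out low-confidence detections: in the output posted at the end of this issue, only the `FACE_FEMALE` detection scores above 0.6, so a stricter threshold would suppress the borderline body-part labels. A minimal usage sketch, assuming `raw_output`, `resize_factor`, `pad_left`, and `pad_top` come from SafeVision's own inference and preprocessing steps (the variable names are illustrative):

```python
# Illustrative call: raw_output, resize_factor, pad_left, and pad_top are
# assumed to come from SafeVision's inference/preprocessing code.
detections = _postprocess(
    raw_output,
    resize_factor=resize_factor,
    pad_left=pad_left,
    pad_top=pad_top,
    score_threshold=0.6,  # stricter than the 0.5 default
)
for det in detections:
    print(det["class"], round(det["score"], 3), det["box"])
```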
3. Retrain or Fine-tune the Model: We are aware that the model can sometimes misclassify body parts, and this is an area we are continually working to improve. For now, please note that the current version of SafeVision may not perfectly distinguish between male and female body parts, especially in borderline cases or where inherent biases in the training data come into play.
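To illustrate the exception-rule idea from step 1: the sketch below is a hypothetical helper, not SafeVision's actual rule parser, that drops detections whose labels the user has exempted from blurring. The label names are taken from the detection output in this issue:

```python
# Hypothetical helper (not part of SafeVision): keep only the detections
# that should still be blurred, dropping user-exempted labels.
EXCEPTED_LABELS = {"FACE_FEMALE", "BELLY_EXPOSED"}  # example labels from this issue

def apply_exceptions(detections, excepted=EXCEPTED_LABELS):
    """Return only the detections that should still be blurred."""
    return [d for d in detections if d["class"] not in excepted]

# Detection dicts shaped like _postprocess's output:
sample = [
    {"class": "FACE_FEMALE", "score": 0.84, "box": [200, 91, 91, 96]},
    {"class": "FEMALE_BREAST_EXPOSED", "score": 0.55, "box": [126, 277, 114, 90]},
]
print(apply_exceptions(sample))  # keeps only the FEMALE_BREAST_EXPOSED entry
```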
We appreciate your feedback and are working on improving the model to better handle such cases. If you have any further questions or need assistance with configuring the exception rules, please let us know.
Thank you for helping us improve SafeVision.
For reference, the detection output from the original report:

```
[{'class': 'FACE_FEMALE', 'score': 0.8430059552192688, 'box': [200, 91, 91, 96]},
 {'class': 'FEMALE_BREAST_EXPOSED', 'score': 0.554330050945282, 'box': [126, 277, 114, 90]},
 {'class': 'BELLY_EXPOSED', 'score': 0.5395718216896057, 'box': [177, 388, 136, 124]},
 {'class': 'FEMALE_BREAST_COVERED', 'score': 0.5144925117492676, 'box': [249, 278, 115, 84]},
 {'class': 'ARMPITS_EXPOSED', 'score': 0.29887616634368896, 'box': [354, 283, 40, 53]}]
```

I have used a male photo.