Historically, we only reported NSFW status by sending the "this image was censored" image back when the user requested NSFW checks (or if the worker was `nsfw: false` but incidentally generated an NSFW image). This dates back to a time when the NSFW and CSAM checks were separate. That is no longer the case: the checks have been combined, and both now run for every generation.
This may assist us in the future when compiling shared images, and could also be useful to end users and/or service operators.
I propose:
A new gen metadata type/field is added, indicating that the generation was detected as being NSFW. This would be informative only and would not necessarily reflect that the image was censored.
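As a rough sketch of how a client might consume such a field (the `gen_metadata` list shape and the `"nsfw"` type value here are assumptions based on this proposal, not a confirmed API schema):

```python
# Hypothetical sketch only: the "gen_metadata" field name and the
# {"type": "nsfw"} entry are assumptions, not the finalized API shape.

def was_detected_nsfw(generation: dict) -> bool:
    """Return True if any metadata entry flags the image as NSFW.

    Informative only: a True result does not imply the image was censored.
    """
    return any(
        entry.get("type") == "nsfw"
        for entry in generation.get("gen_metadata", [])
    )

# Example payload with the proposed informative flag; note the image
# itself was not censored.
generation = {
    "id": "example-id",
    "censored": False,
    "gen_metadata": [{"type": "nsfw"}],
}
print(was_detected_nsfw(generation))  # prints True
```

This keeps the censorship decision and the detection signal decoupled, matching the proposal that the field is informative only.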
Todo: