Our research group studies the security implications of large-scale generative models and develops defenses to detect their outputs. We came across your paper and realized that such structural changes to neural architectures could make these defenses less effective. We would therefore like to study how to build more robust defenses, so that bad actors cannot use your methodology to spread misinformation online.
If you could provide an output dataset from your generative model trained/fine-tuned with the unlikelihood objective, that would help us greatly. We know you have released the script, but given the time constraints and resource limitations on our end, we are unable to fine-tune GPT-2 ourselves.
Thank you!