Closed 727yubin closed 1 year ago
Yubin, you indeed tried that. The output is interesting.
But this means that even such a naive encryption scheme is not fully breakable by ChatGPT, so GPT does not become a threat in this case (i.e., the human wins). Can you produce any harmful effect using this result? That would be very interesting.
Description
I don't know if this is HackGPT-worthy, but it seems interesting.
After today's lecture, I tried decrypting message #209. It failed, but in a quite interesting way: ChatGPT appears to recognize the strings "noM" and ".lla iH" and tries to fit them to something it has seen before.
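For context, a minimal sketch of what such a naive scheme might look like, assuming the cipher was plain string reversal (which is consistent with "noM" reversing to "Mon" and ".lla iH" reversing to "Hi all."); the function names here are illustrative, not from the original message:

```python
def encrypt(plaintext: str) -> str:
    """Naive 'cipher': reverse the string, e.g. 'Hi all.' -> '.lla iH'."""
    return plaintext[::-1]

def decrypt(ciphertext: str) -> str:
    """Reversal is its own inverse, so decryption is the same operation."""
    return ciphertext[::-1]

print(decrypt(".lla iH"))  # Hi all.
print(decrypt("noM"))      # Mon
```

A human can decode this at a glance, which is what makes ChatGPT's partial pattern-matching failure on it notable.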