The report is quite straightforward to follow and I enjoyed reading it.
The observation of the weird tiger-lion image is certainly interesting and leaves many questions open.
The way noise is added is very similar to how some adversarial attacks on vision models are executed, and it was paired with good explanations.
Areas of Improvement:
You mentioned that you did a manual search and multi-line editing. Python offers very simple commands for this kind of text parsing, so that may be worth looking into.
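For example, a multi-line search-and-edit pass can be done in a few lines (a minimal sketch; the filenames, keyword, and whitespace cleanup here are hypothetical placeholders for whatever your actual editing task was):

```python
import re

# Hypothetical input file: read all lines to be searched/edited.
with open("labels.txt") as f:
    lines = f.readlines()

# Keep only lines containing a keyword, and normalize whitespace
# (stand-ins for the manual search and multi-line edits).
edited = [re.sub(r"\s+", " ", line).strip()
          for line in lines if "tiger" in line]

# Write the edited lines back out to a new file.
with open("labels_edited.txt", "w") as f:
    f.write("\n".join(edited))
```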
Designing noise to fool the network certainly adds to the evaluation of one model's robustness, but I think it would have been more effective to also check whether the identified noise was general enough to transfer to other pretrained models, for example with something like the sketch below. Additionally, I wonder how this method performs against models trained with noise injection as one of their augmentations.
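A minimal transferability check, assuming a PyTorch/torchvision setup; `image`, `perturbation`, and `true_label` are placeholders for the report's actual data:

```python
import torch
from torchvision import models

# Hypothetical inputs: `image` is the preprocessed input batch the noise
# was crafted for, `perturbation` is the noise tensor, and `true_label`
# is the correct class index.
def check_transfer(image, perturbation, true_label):
    candidates = {
        "resnet50": models.resnet50,
        "vgg16": models.vgg16,
        "densenet121": models.densenet121,
    }
    results = {}
    for name, ctor in candidates.items():
        model = ctor(weights="DEFAULT").eval()
        with torch.no_grad():
            pred = model(image + perturbation).argmax(dim=1).item()
        # The noise "transfers" if it flips the prediction away from the
        # true label on a model it was not crafted for.
        results[name] = pred != true_label
    return results
```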
Thanks for the feedback! Yeah, I don't think the noise would work to fool other models, as I really exploited having information about this specific model in order to generate it.
Price: A