Open · ericksonalves opened this issue 4 years ago
Neurify can analyze images as inputs. To do so, they need to be transformed into text files containing their grayscale pixel values. For MNIST, this results in the files found here: https://github.com/tcwangshiqi-columbia/Neurify/tree/master/general/images
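For reference, here is a minimal sketch of how such a file could be written. It assumes the format is a single comma-separated line of all 784 grayscale values; that's an assumption on my part, so compare against the files in the linked directory for the exact layout (e.g. whether a label comes first):

```c
#include <stdio.h>

#define NUM_PIXELS 784  /* MNIST images are 28x28 grayscale */

int main(void) {
    /* Placeholder pixel buffer: fill with the real grayscale values (0-255)
       of the image you want to verify. */
    unsigned char pixels[NUM_PIXELS] = {0};

    FILE *out = fopen("my_input.txt", "w");
    if (!out) {
        perror("fopen");
        return 1;
    }
    /* Write one comma-separated line of pixel values. */
    for (int i = 0; i < NUM_PIXELS; i++) {
        fprintf(out, "%d%s", pixels[i], (i + 1 < NUM_PIXELS) ? "," : "\n");
    }
    fclose(out);
    return 0;
}
```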
Hi @ChristopherBrix,
Thank you for replying.
I see. Is it possible to run Neurify without giving it such image inputs? If so, how can I do that?
I'm not sure I understand. Neurify always needs a trained model and a concrete input for this model. You can substitute the image with basically anything else; you just have to convert it to the same text format. But you cannot run Neurify without a specific input. What would you expect the output to be?
From here, it seems that we can run Neurify with a trained model and input intervals for the model, rather than specific inputs. Is that correct?
Ok, I see what's confusing here. In addition to the concrete input, Neurify has an INF value (I think by default it's 10) that is used to define the maximal change to the input. So if the input value is 42, the resulting interval would be [32, 52].
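To illustrate (this is just a toy snippet, not Neurify's actual code):

```c
#include <stdio.h>

int main(void) {
    float input = 42.0f;  /* concrete input value from the image file */
    float inf   = 10.0f;  /* INF: maximal allowed change to the input */

    /* Neurify then reasons over the whole interval [input - INF, input + INF]. */
    printf("interval: [%.1f, %.1f]\n", input - inf, input + inf);  /* prints [32.0, 52.0] */
    return 0;
}
```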
If you don't want to use the same delta for all pixels, you have to adapt the code (at the beginning of the main function in network_test.c).
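A rough sketch of what such an adaptation could look like. The names used here (input, delta, lower, upper) are placeholders for illustration, not the actual identifiers in network_test.c:

```c
#include <stdio.h>

#define NUM_PIXELS 784  /* MNIST: 28x28 */

int main(void) {
    float input[NUM_PIXELS];  /* concrete pixel values loaded from the image file */
    float delta[NUM_PIXELS];  /* per-pixel perturbation radius instead of one global INF */
    float lower[NUM_PIXELS], upper[NUM_PIXELS];

    for (int i = 0; i < NUM_PIXELS; i++) {
        input[i] = 0.0f;                                 /* placeholder values */
        delta[i] = (i < NUM_PIXELS / 2) ? 5.0f : 10.0f;  /* e.g. tighter bound on the first half */
        lower[i] = input[i] - delta[i];                  /* per-pixel lower bound */
        upper[i] = input[i] + delta[i];                  /* per-pixel upper bound */
    }

    printf("pixel 0 interval: [%.1f, %.1f]\n", lower[0], upper[0]);
    return 0;
}
```

The important part is just that each pixel gets its own lower/upper bound instead of input ± INF everywhere.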
The architecture presented in "Efficient Formal Safety Analysis of Neural Networks" does not mention images as inputs, but test images are mentioned in the README.md as well as in the source code.