paulsushmita opened this issue 4 months ago (status: Open)
Hi @paulsushmita, thanks for your interest in the tool.
When I run `./build/Marabou --input-query failedMarabouQuery.ipq` with the latest master branch, I got UNSAT.
Could you answer the following questions so that I can better diagnose the issue? Also, could you add `network.saveQuery(filename="test.ipq")` right before where you call `network.solve()` and share the `test.ipq` with me? I think using Marabou non-incrementally is fine for the network size that you sent.
Hi @wu-haoze, thanks for your reply.
To answer your questions:
(I received an `ERROR` value and thus landed on non-incremental calls.) Additionally, although `failedMarabouQuery.ipq` returns unsat for you, I made an additional call to the Marabou solver after an `ERROR`, with the same queries (assuming it would return `ERROR` again). To my surprise, it returned unsat.
Thanks.
Update to 5:
When I receive an `ERROR`, I reuse the queries to call the solver non-incrementally again (100 times) and get the following values for each iteration:
```
unsat
unsat
unsat
ERROR
unsat
ERROR
ERROR
unsat
unsat
ERROR
...
```
So it is not hard to reproduce; I assume it can be reproduced whenever the solver is called multiple times.
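The reproduction procedure above (re-running the same query many times and recording each outcome) can be sketched as follows. This is a toy harness, not the real Marabou call: `flaky_solve` is a stand-in that merely mimics the non-deterministic `unsat`/`ERROR` mix reported here.

```python
import random
from collections import Counter

# Stand-in for a non-incremental Marabou call. It is NOT the real solver;
# it just returns "ERROR" some fraction of the time, like the report above.
def flaky_solve(rng):
    return "ERROR" if rng.random() < 0.3 else "unsat"

def tally_results(n_runs, seed=0):
    """Re-run the stubbed solver n_runs times and count each outcome."""
    rng = random.Random(seed)
    return Counter(flaky_solve(rng) for _ in range(n_runs))

counts = tally_results(100)
print(counts)  # a mix of "unsat" and "ERROR" outcomes over 100 runs
```

In the real setup, `flaky_solve` would be replaced by rebuilding the network and query from scratch and calling `network.solve()` each iteration.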
@paulsushmita thanks for the update, I'll take a look at the new query file you sent ASAP. And just to be clear: you called Marabou 100 times on the same query, and it non-deterministically returns `ERROR`? That would be very surprising.
The solver can return `ERROR` for different reasons, but generally it should be deterministic.
@wu-haoze yes, that is what I am trying to do.
[At each iteration below, Marabou is called non-incrementally: it is deleted, loaded with the network file, and populated with the input queries again.] Initially, all input values are fixed (added to a fixed-pixel set). At each iteration, I free one pixel (add it to a free-pixel set and remove it from the fixed-pixel set) within an epsilon-ball, while the other pixels take values according to the set they are in, and check its robustness. If it is robust (unsat), the solver returns and I continue with the next pixel; otherwise, I remove it from the free set and add it back to the fixed-pixel set, until all pixels have been iterated.
I added an else-if branch to the above: when an `ERROR` is encountered, I call the Marabou solver again, and it returns non-deterministic results (`unsat` and `ERROR`).
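The per-pixel loop described above can be sketched as follows. All helper names here are hypothetical, and the `solve` callback is a stand-in for the real code, which rebuilds a Marabou query non-incrementally at every iteration.

```python
# Sketch of the per-pixel robustness iteration (hypothetical names; `solve`
# stands in for a fresh, non-incremental Marabou call each time).

def iterate_pixels(pixels, solve):
    """Free one pixel at a time into an epsilon ball and check robustness.

    `solve(freed, free_set, fixed_set)` returns "unsat", "sat", or "ERROR".
    """
    fixed_set = set(range(len(pixels)))   # all pixels start out fixed
    free_set = set()
    robust = []
    for p in range(len(pixels)):
        fixed_set.discard(p)
        free_set.add(p)                   # pixel p now ranges in its epsilon ball
        result = solve(p, free_set, fixed_set)
        if result == "ERROR":
            # Retry once, as described above; results were non-deterministic.
            result = solve(p, free_set, fixed_set)
        if result == "unsat":             # robust: leave the pixel free
            robust.append(p)
        else:                             # sat or ERROR: re-fix the pixel
            free_set.discard(p)
            fixed_set.add(p)
    return robust, free_set, fixed_set
```

With a toy solver that declares odd-indexed pixels robust, `iterate_pixels([0.1, 0.2, 0.3, 0.4], lambda p, fr, fx: "unsat" if p % 2 else "sat")` returns `([1, 3], {1, 3}, {0, 2})`.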
Let me know if you want me to attach the code file, just to be sure I am using Marabou in the correct manner.
Thanks.
Hi @wu-haoze. Did you get a chance to look into the issue?
Hi @paulsushmita , I’ll get back to you in a day.
@paulsushmita, I'm able to reproduce the error on my end. The crashes stem from the implementation of the Max constraint. I think it will take some additional time for me to propose a fix, but there are two solutions for now:
When Gurobi is built, I would further try setting the options

```python
options = Marabou.createOptions(verbosity=0, solveWithMILP=True, numWorkers=[# of available processors])
network.solve(options=options)
```

to see if this improves runtime efficiency.
You're probably already using the `Marabou.createOptions` method? See https://github.com/NeuralNetworkVerification/Marabou/blob/master/maraboupy/examples/2_ONNXExample.py for an example.
Thanks for the suggestions @wu-haoze. I have modified my network and am not using Max pool currently. It works fine now without throwing `ERROR`.
Also, thanks for the suggestion of using `options` (I was using `Marabou.solve(verbose=False)` previously, and that did not really work).
Leaving this issue open for you to close:
> The crashes stem from the implementation of the Max constraint.
Hi,
I was trying to call the Marabou solver in my code to explain NNs. It returns sat/unsat values for smaller bounds on the input queries. However, it threw an error value at a later stage, where the bounds were increased a bit for many of the input queries.
The error reads:
Can you please help me understand the reason for it throwing an error (I attached the failed query file)? failedMarabouQuery.ipq.zip
I am trying to use Marabou non-incrementally, as it has that limitation as of now. Will it be possible to generate an explanation, or is there any scalability restriction that I must take care of which could resolve this?