I have found the symmetric counterpart of the bug previously reported in Issue #5: this time I have a satisfiable formula for which the MUS() query does not return an empty set.
Again, you can download the formula here: Download here.
Below are logs supporting my claim that the formula is indeed satisfiable, and thus that the MUS should be empty.
By definition, a satisfiable formula cannot have an unsatisfiable subset of clauses.
As in Issue #5, I ran state-of-the-art SAT solvers (Glucose 4.2.1, CaDiCaL 2019, and MiniSAT 2.2.0) on the formula; the CaDiCaL log is reproduced below:
#### cadical:2019 ####
c --- [ banner ] -------------------------------------------------------------
c
c CaDiCaL Radically Simplified CDCL SAT Solver
c Copyright (c) 2016-2019 Armin Biere, JKU Linz
c
c Version sr2019 9362f4982cb613599837677566b6c35c7ac9175e
c g++ (Debian 8.3.0-6) 8.3.0 -Wall -Wextra -O3 -DNDEBUG
c Sat Mar 7 17:16:00 UTC 2020 Linux abe47dad2b3d 5.0.0-37-generic x86_64
c
c --- [ config ] -------------------------------------------------------------
c
c using 'default' configuration (should work in most situations)
c
c --- [ parsing input ] ------------------------------------------------------
c
c reading DIMACS file from '/data/impossible.cnf'
c opening file to read '/data/impossible.cnf'
c found 'p cnf 8924 12908' header
c parsed 12908 clauses in 0.00 seconds process time
c
c --- [ options ] ------------------------------------------------------------
c
c all options are set to their default value
c
c --- [ solving ] ------------------------------------------------------------
c
c time measured in process time since initialization
c
c seconds reductions redundant irredundant
c MB restarts trail variables
c level conflicts glue remaining
c
c * 0.00 5 0 0 0 0 0 0% 0 11117 815 9%
c l 0.01 6 0 0 0 0 0 0% 0 11117 815 9%
c 1 0.01 6 0 0 0 0 0 0% 0 11117 815 9%
c
c --- [ result ] -------------------------------------------------------------
c
s SATISFIABLE
v 1 2 3 4 5 6 7 8 9 10 11 -12 13 -14 -15 -16 -17 -18 19 20 21 -22 23 -24 -25 [ ... ]
c
c --- [ run-time profiling ] -------------------------------------------------
c
c process time taken by individual solving procedures
c (percentage relative to process time for solving)
c
c 0.00 69.39% search
c 0.00 69.19% lucky
c 0.00 0.00% simplify
c =================================
c 0.00 16.42% solve
c
c last line shows process time for solving
c (percentage relative to total process time)
c
c --- [ statistics ] ---------------------------------------------------------
c
c fixed: 1071 12.00 % of all variables
c lucky: 1 100.00 % of tried
c propagations: 1071 1.04 M per second
c
c seconds are measured in process time for solving
c
c --- [ resources ] ----------------------------------------------------------
c
c total process time since initialization: 0.01 seconds
c total real time since initialization: 0.01 seconds
c maximum resident set size of process: 5.50 MB
c
c --- [ shutting down ] ------------------------------------------------------
c
c exit 10
The models appear to be genuine solutions to my input problem (although I did not fully verify them with a SAT checker).
To the best of my knowledge, the only incorrect result here is the MUS returned by GopherSAT, which should be empty but is not.
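As a sanity check, a reported model can be verified directly against the CNF file without trusting any solver. The sketch below (plain Python; file names and function names are hypothetical, and it assumes the common one-clause-per-line DIMACS layout) parses the formula and the solver's "v" lines, then checks that every clause contains at least one literal of the model. If the check passes, the same model also satisfies every subset of the clauses, so no unsatisfiable subset, and hence no non-empty MUS, can exist.

```python
def parse_dimacs(path):
    """Return the clauses of a DIMACS CNF file as lists of non-zero ints.

    Assumes one clause per line, terminated by 0 (the usual layout).
    """
    clauses = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line[0] in "cp":
                continue  # skip comments and the 'p cnf ...' header
            lits = [int(tok) for tok in line.split()]
            clauses.append([l for l in lits if l != 0])
    return clauses


def parse_model(path):
    """Collect the literals printed on the solver's 'v' lines into a set."""
    model = set()
    with open(path) as f:
        for line in f:
            if line.startswith("v"):
                model.update(int(tok) for tok in line.split()[1:] if tok != "0")
    return model


def check_model(clauses, model):
    """A clause is satisfied iff at least one of its literals is in the model."""
    return all(any(lit in model for lit in clause) for clause in clauses)
```

A possible invocation, with placeholder paths: `check_model(parse_dimacs("impossible.cnf"), parse_model("cadical.log"))` should return `True` if the formula is satisfiable and the log's model is genuine, which would confirm that the correct MUS is the empty set.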