potassco / clasp

⚙️ A conflict-driven nogood learning answer set solver
https://potassco.org/clasp/
MIT License

models during enumeration omitted #80

Closed rkaminsk closed 2 years ago

rkaminsk commented 2 years ago

The following program, run with queens.aspif.gz, sometimes omits models. Unfortunately, it's a bit hard to reproduce. It happens quite frequently in my Ubuntu 20.04 virtual machine with 2 cores, and I managed to trigger the problem using between 3 and 64 threads; the higher the number of threads, the higher the chance that solutions are omitted.

I'll try to dig a bit deeper. Just wanted to put the example somewhere safe.

#include <clasp/clasp_facade.h>
#include <clasp/solver.h>

#include <iostream>
#include <fstream>
#include <cassert>

int main() {
    while (true) {  // repeat until the assertion below fires
        Clasp::ClaspConfig config;
        config.solve.numModels = 0;  // enumerate all models
        config.solve.setSolvers(64); // use up to 64 solver threads
        Clasp::ClaspFacade libclasp;

        std::ifstream input("queens.aspif");
        libclasp.start(config, input);

        // expected number of models for each incremental solving step
        size_t num_models[] = { 1, 0, 0, 2, 10, 4, 40, 92, 352, 724 };
        for (size_t i = 0; libclasp.read(); ++i) {
            libclasp.prepare();
            libclasp.solve();
            size_t n = libclasp.summary().numEnum;
            if (num_models[i] != n) {
                std::cerr
                    << "error expected " << num_models[i]
                    << " models in call " << (i + 1)
                    << " but got " << n << std::endl;
            }
            assert(num_models[i] == n);
        }
    }
}
rkaminsk commented 2 years ago

The problem seems to be related to splitting. For example, competition + recording works, while splitting with either backtracking or recording fails randomly. This happens both in configs that use splitting right from the start and in ones that switch from competition to splitting for enumeration. One gets relatively short failing runs with the following program:

#include <clasp/clasp_facade.h>
#include <clasp/solver.h>

#include <iostream>
#include <fstream>
#include <cassert>

int main() {
    while (true) {
        Clasp::ClaspConfig config;
        config.solve.enumMode = Clasp::EnumOptions::enum_record;  // enumeration by solution recording
        config.solve.numModels = 0;                               // enumerate all models
        config.solve.setSolvers(4);                               // four solver threads
        config.solve.algorithm.mode = Clasp::mt::ParallelSolveOptions::Algorithm::mode_split;  // splitting-based parallel search
        Clasp::ClaspFacade libclasp;

        std::ifstream input("queens.aspif");
        libclasp.start(config, input);

        size_t num_models[] = { 1, 0, 0, 2, 10, 4, 40, 92, 352, 724 };
        // the first six incremental steps already suffice to trigger the bug
        for (size_t i = 0; libclasp.read() && i < 6; ++i) {
            libclasp.prepare();
            for (size_t j = 0; j < 1; ++j) {
                libclasp.solve();
                size_t n = libclasp.summary().numEnum;
                if (num_models[i] != n) {
                    std::cerr
                        << "error expected " << num_models[i]
                        << " models in call " << (i + 1)
                        << " but got " << n << std::endl;
                }
                assert(num_models[i] == n);
            }
        }
    }
}
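For contrast, the competition + recording setup mentioned above only differs in the parallel algorithm. A minimal sketch of that configuration, assuming mode_compete is the competition counterpart of mode_split (check Clasp::mt::ParallelSolveOptions in the clasp version at hand):

#include <clasp/clasp_facade.h>
#include <clasp/solver.h>

// Hypothetical helper, not part of clasp: builds the competition + recording
// configuration that, as noted above, did not drop models.
static void configureCompeteRecord(Clasp::ClaspConfig& config) {
    config.solve.enumMode  = Clasp::EnumOptions::enum_record; // same enumerator as above
    config.solve.numModels = 0;                               // enumerate all models
    config.solve.setSolvers(4);                               // four solver threads
    // Assumption: mode_compete selects competition-based search instead of splitting.
    config.solve.algorithm.mode = Clasp::mt::ParallelSolveOptions::Algorithm::mode_compete;
}

Plugging this into the reproducer above in place of the mode_split assignment corresponds to the "competition + recording works" case.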
rkaminsk commented 2 years ago

I added a few tracing commands, and it seems like a terminate event is sent even though there are still solvers that have not exhausted their search:

  thread[1]: basic solve!!!
  thread[1]: splitting work from solver!
  thread[1]: split off/continuing to work on:  123 87 ~5 ~38 ~18 ~4 6 39/~39
  thread[0]: got work of size:  123 87 ~5 ~38 ~18 ~4 6 39
  thread[1]: finished with result false
  thread[3]: no work left
  thread[1]: no work left
  thread[0]: got a terminate event
  thread[4]: stopping
  thread[2]: splitting work from solver!
  thread[2]: split off/continuing to work on:  123 87 ~5 ~38 ~18 ~58 ~4 ~6 ~2 ~17 20/~20
  thread[2]: got a terminate event
  thread[2]: finished with result false

Let's see if I can find the reason. :smile:

BenKaufmann commented 2 years ago

@rkaminsk Thanks for looking into this. Unfortunately, I don't have time this and next week to investigate further :( But I'll look into this asap.

rkaminsk commented 2 years ago

> @rkaminsk Thanks for looking into this. Unfortunately, I don't have time this and next week to investigate further :( But I'll look into this asap.

No worries. I'll continue looking a bit and post here if I find something.

rkaminsk commented 2 years ago

I dug a little further, and it seems that in ParallelSolve::SharedData::requestWork the counter of the BarrierSemaphore gets decremented by calling down more than once for the same solver within a single request. This results in a terminate event even though there is still a solver that is working.

Question: could spurious wake-ups in BarrierSemaphore::unsafe_wait be a problem?
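To make the suspicion concrete, here is a stripped-down sketch (not clasp's actual code) of the pattern in question: a wait without a predicate treats any return, including a spurious one, as a real notification, so a solver can decrement the shared counter again even though no work or terminate signal ever arrived.

#include <condition_variable>
#include <mutex>

// Simplified sketch of the suspected pattern; this is NOT clasp's code.
// 'counter' stands in for the BarrierSemaphore counter that tracks how many
// solvers still have work.
void unsafe_wait_sketch(std::unique_lock<std::mutex>& lock,
                        std::condition_variable& cv, int& counter) {
    --counter;     // this solver is about to block, so one fewer is active
    cv.wait(lock); // may return spuriously, without any notification
    // The caller assumes work or a terminate signal arrived. If it then calls
    // down again within the same request, the counter is decremented twice for
    // one solver and can reach zero while another solver is still busy,
    // producing the premature terminate event seen in the trace above.
}

// The usual remedy is a predicate-guarded wait that filters spurious wake-ups,
// which is essentially what the patched condition variable below does with its
// notify/wait bookkeeping.
void guarded_wait_sketch(std::unique_lock<std::mutex>& lock,
                         std::condition_variable& cv, bool& notified) {
    cv.wait(lock, [&] { return notified; }); // re-checked after every wake-up
    notified = false;
}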

It seems the problem is indeed caused by spurious wake-ups: with the patched condition variable below, all models are enumerated. I opened #82 to handle this directly in the BarrierSemaphore class.

  #include <chrono>
  #include <condition_variable>
  #include <cstddef>
  #include <mutex>

  // Drop-in replacement for std::condition_variable that counts waiters and
  // pending notifications, so that a spurious wake-up of the underlying
  // condition variable is not reported to the caller.
  class condition_variable {
  public:
      void wait(std::unique_lock<std::mutex> &lock) {
          {
              std::lock_guard<std::mutex> guard_{mut_};
              wait_ += 1;
          }
          // Only return once a notification has actually been recorded; on a
          // spurious wake-up, notify_ is still zero and we keep waiting.
          var_.wait(lock, [this]() {
              std::lock_guard<std::mutex> guard_{mut_};
              if (notify_ > 0) {
                  --notify_;
                  --wait_;
                  return true;
              }
              return false;
          });
      }
      bool wait_for(std::unique_lock<std::mutex>& lock, double timeInSecs) {
          // TODO: spurious wakeups should be suppressed here, too.
          return var_.wait_for(lock, std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::duration<double>(timeInSecs)))
              == std::cv_status::no_timeout;
      }
      void notify_one() noexcept {
          std::lock_guard<std::mutex> guard_{mut_};
          // Record the notification only if a thread is actually waiting.
          if (notify_ < wait_) {
              notify_ += 1;
          }
          var_.notify_one();
      }
      void notify_all() noexcept {
          std::lock_guard<std::mutex> guard_{mut_};
          notify_ = wait_;
          var_.notify_all();
      }
  private:
      std::mutex mut_;
      std::condition_variable var_;
      size_t notify_ = 0;  // recorded notifications not yet consumed by a waiter
      size_t wait_ = 0;    // number of threads currently blocked in wait()
  };
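For completeness, a minimal standalone usage sketch of the patched class (hypothetical test code, not part of clasp; it assumes the class above is in scope): the waiter only leaves wait() once a notification has been recorded, and the notify loop guards against the case where notify_one() runs before the waiter has registered itself.

#include <atomic>
#include <iostream>
#include <mutex>
#include <thread>

int main() {
    condition_variable cv;  // the patched class from the snippet above
    std::mutex m;
    std::atomic<bool> woken{false};

    std::thread waiter([&] {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock);  // returns only after a recorded notify_one/notify_all
        woken = true;
    });

    // Repeat the notification until the waiter reports success; a single
    // notify_one() issued before the waiter incremented wait_ would be ignored.
    while (!woken) {
        cv.notify_one();
        std::this_thread::yield();
    }
    waiter.join();
    std::cout << "waiter woke up on a recorded notification" << std::endl;
}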
BenKaufmann commented 2 years ago

Many thanks for your help. Hopefully fixed in https://github.com/potassco/clasp/tree/dev