coseal / aslib-spec


Feature step dependencies are a pain to work with #2

Open larskotthoff opened 9 years ago

larskotthoff commented 9 years ago

The way feature steps are currently implicitly encoded is a pain. First, you have to read the spec very carefully to understand the semantics (which are the opposite of what at least I would intuitively expect), and modifying the features/feature steps (e.g. for feature filtering) is a complex and error-prone operation.

In particular, to remove a feature step, you have to check all the other feature steps if they contain features that are also provided by the feature step that was removed and if so remove those.

Another, albeit minor, niggle is that the format of description.txt is unnecessarily hard to parse and write because the key-value convention is broken for the feature steps (the key is not a primitive value but is constructed from other things).

I propose two changes. First, use YAML for description.txt, which will introduce only minor changes but allow us to use off-the-shelf libraries for parsing and writing rather than having to write custom code. Second, encode feature step dependencies explicitly through requires and provides keys.

Example:

scenario_id: SAT11-HAND
performance_measures: runtime
maximize: false
performance_type: runtime
algorithm_cutoff_time: 5000
algorithm_cutoff_memory: ?
features_cutoff_time: 5000
features_cutoff_memory: ?
features_deterministic: nvarsOrig,nclausesOrig,nvars,nclauses,reducedVars,reducedClauses,vars_clauses_ratio,POSNEG_RATIO_CLAUSE_mean,POSNEG_RATIO_CLAUSE_coeff_variation,POSNEG_RATIO_CLAUSE_min,POSNEG_RATIO_CLAUSE_max,POSNEG_RATIO_CLAUSE_entropy,VCG_CLAUSE_mean,VCG_CLAUSE_coeff_variation,VCG_CLAUSE_min,VCG_CLAUSE_max,VCG_CLAUSE_entropy,UNARY,BINARYp,TRINARYp,VCG_VAR_mean,VCG_VAR_coeff_variation,VCG_VAR_min,VCG_VAR_max,VCG_VAR_entropy,POSNEG_RATIO_VAR_mean,POSNEG_RATIO_VAR_stdev,POSNEG_RATIO_VAR_min,POSNEG_RATIO_VAR_max,POSNEG_RATIO_VAR_entropy,HORNY_VAR_mean,HORNY_VAR_coeff_variation,HORNY_VAR_min,HORNY_VAR_max,HORNY_VAR_entropy,horn_clauses_fraction,VG_mean,VG_coeff_variation,VG_min,VG_max,CG_mean,CG_coeff_variation,CG_min,CG_max,CG_entropy,cluster_coeff_mean,cluster_coeff_coeff_variation,cluster_coeff_min,cluster_coeff_max,cluster_coeff_entropy,DIAMETER_mean,DIAMETER_coeff_variation,DIAMETER_min,DIAMETER_max,DIAMETER_entropy,cl_num_mean,cl_num_coeff_variation,cl_num_min,cl_num_max,cl_num_q90,cl_num_q10,cl_num_q75,cl_num_q25,cl_num_q50,cl_size_mean,cl_size_coeff_variation,cl_size_min,cl_size_max,cl_size_q90,cl_size_q10,cl_size_q75,cl_size_q25,cl_size_q50,SP_bias_mean,SP_bias_coeff_variation,SP_bias_min,SP_bias_max,SP_bias_q90,SP_bias_q10,SP_bias_q75,SP_bias_q25,SP_bias_q50,SP_unconstraint_mean,SP_unconstraint_coeff_variation,SP_unconstraint_min,SP_unconstraint_max,SP_unconstraint_q90,SP_unconstraint_q10,SP_unconstraint_q75,SP_unconstraint_q25,SP_unconstraint_q50,saps_BestSolution_Mean,saps_BestSolution_CoeffVariance,saps_FirstLocalMinStep_Mean,saps_FirstLocalMinStep_CoeffVariance,saps_FirstLocalMinStep_Median,saps_FirstLocalMinStep_Q10,saps_FirstLocalMinStep_Q90,saps_BestAvgImprovement_Mean,saps_BestAvgImprovement_CoeffVariance,saps_FirstLocalMinRatio_Mean,saps_FirstLocalMinRatio_CoeffVariance,gsat_BestSolution_Mean,gsat_BestSolution_CoeffVariance,gsat_FirstLocalMinStep_Mean,gsat_FirstLocalMinStep_CoeffVariance,gsat_FirstLocalMinStep_Median,gsat_FirstLocalMinStep_Q10,gsat_FirstLocalMinStep_Q90,gsat_BestAvgImprovement_Mean,gsat_BestAvgImprovement_CoeffVariance,gsat_FirstLocalMinRatio_Mean,gsat_FirstLocalMinRatio_CoeffVariance,lobjois_mean_depth_over_vars,lobjois_log_num_nodes_over_vars
features_stochastic: 
algorithms_deterministic: MPhaseSAT_2011-02-15,Sol_2011-04-04,QuteRSat_2011-05-12_fixed_,CryptoMiniSat_Strange-Night2-st_fixed_,PicoSAT_941,glucose_2,clasp_2.0-R4092-crafted,SAT07referencesolverminisat_SAT2007,jMiniSat_2011,RestartSAT_B95,SAT09referencesolverclasp_1.2.0-SAT09-32,sathys_2011-04-01,SApperloT2010_2011-05-15_fixed_,sattime+_2011-03-02,sattime_2011-03-02 
algorithms_stochastic: 
number_of_feature_steps: 10
default_steps: Pre, Basic, KLB, CG
feature_steps:
  - name: Pre

  - name: Basic
    provides: vars_clauses_ratio,POSNEG_RATIO_CLAUSE_mean,POSNEG_RATIO_CLAUSE_coeff_variation,POSNEG_RATIO_CLAUSE_min,POSNEG_RATIO_CLAUSE_max,POSNEG_RATIO_CLAUSE_entropy,VCG_CLAUSE_mean,VCG_CLAUSE_coeff_variation,VCG_CLAUSE_min,VCG_CLAUSE_max,VCG_CLAUSE_entropy,UNARY,BINARYp,TRINARYp
    requires: Pre

  - name: KLB
    provides: VCG_VAR_mean,VCG_VAR_coeff_variation,VCG_VAR_min,VCG_VAR_max,VCG_VAR_entropy,POSNEG_RATIO_VAR_mean,POSNEG_RATIO_VAR_stdev,POSNEG_RATIO_VAR_min,POSNEG_RATIO_VAR_max,POSNEG_RATIO_VAR_entropy,HORNY_VAR_mean,HORNY_VAR_coeff_variation,HORNY_VAR_min,HORNY_VAR_max,HORNY_VAR_entropy,horn_clauses_fraction,VG_mean,VG_coeff_variation,VG_min,VG_max
    requires: Pre

  - name: CG
    provides: CG_mean,CG_coeff_variation,CG_min,CG_max,CG_entropy,cluster_coeff_mean,cluster_coeff_coeff_variation,cluster_coeff_min,cluster_coeff_max,cluster_coeff_entropy
    requires: Pre

  - name: DIAMETER
    provides: DIAMETER_mean,DIAMETER_coeff_variation,DIAMETER_min,DIAMETER_max,DIAMETER_entropy
    requires: Pre

  - name: cl
    provides: cl_num_mean,cl_num_coeff_variation,cl_num_min,cl_num_max,cl_num_q90,cl_num_q10,cl_num_q75,cl_num_q25,cl_num_q50,cl_size_mean,cl_size_coeff_variation,cl_size_min,cl_size_max,cl_size_q90,cl_size_q10,cl_size_q75,cl_size_q25,cl_size_q50
    requires: Pre

  - name: sp
    provides: SP_bias_mean,SP_bias_coeff_variation,SP_bias_min,SP_bias_max,SP_bias_q90,SP_bias_q10,SP_bias_q75,SP_bias_q25,SP_bias_q50,SP_unconstraint_mean,SP_unconstraint_coeff_variation,SP_unconstraint_min,SP_unconstraint_max,SP_unconstraint_q90,SP_unconstraint_q10,SP_unconstraint_q75,SP_unconstraint_q25,SP_unconstraint_q50
    requires: Pre

  - name: ls_saps
    provides: saps_BestSolution_Mean,saps_BestSolution_CoeffVariance,saps_FirstLocalMinStep_Mean,saps_FirstLocalMinStep_CoeffVariance,saps_FirstLocalMinStep_Median,saps_FirstLocalMinStep_Q10,saps_FirstLocalMinStep_Q90,saps_BestAvgImprovement_Mean,saps_BestAvgImprovement_CoeffVariance,saps_FirstLocalMinRatio_Mean,saps_FirstLocalMinRatio_CoeffVariance
    requires: Pre

  - name: ls_gsat
    provides: gsat_BestSolution_Mean,gsat_BestSolution_CoeffVariance,gsat_FirstLocalMinStep_Mean,gsat_FirstLocalMinStep_CoeffVariance,gsat_FirstLocalMinStep_Median,gsat_FirstLocalMinStep_Q10,gsat_FirstLocalMinStep_Q90,gsat_BestAvgImprovement_Mean,gsat_BestAvgImprovement_CoeffVariance,gsat_FirstLocalMinRatio_Mean,gsat_FirstLocalMinRatio_CoeffVariance
    requires: Pre

  - name: lobjois
    provides: lobjois_mean_depth_over_vars,lobjois_log_num_nodes_over_vars
    requires: Pre

This makes it intuitively clear what Pre does and that it doesn't actually provide any features on its own. It also makes the number_of_feature_steps attribute redundant, so it could be removed.
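
As a rough illustration of how much simpler feature filtering becomes with explicit dependencies, here is a minimal Python sketch (assuming PyYAML and the comma-separated provides/requires strings from the example above; remove_feature_step is a hypothetical helper, not part of any existing tool):

    import yaml  # off-the-shelf parser, e.g. PyYAML

    with open("description.txt") as f:
        desc = yaml.safe_load(f)

    def split(value):
        """Turn a comma-separated string (or None) into a list of names."""
        return [v.strip() for v in str(value or "").split(",") if v.strip()]

    def remove_feature_step(desc, step_name):
        """Drop a feature step, every step that (transitively) requires it,
        and all features those steps provide."""
        steps = desc["feature_steps"]
        doomed, changed = {step_name}, True
        while changed:  # propagate through transitive 'requires' chains
            changed = False
            for s in steps:
                if s["name"] not in doomed and doomed & set(split(s.get("requires"))):
                    doomed.add(s["name"])
                    changed = True
        gone = set()
        for s in steps:
            if s["name"] in doomed:
                gone |= set(split(s.get("provides")))
        desc["feature_steps"] = [s for s in steps if s["name"] not in doomed]
        desc["features_deterministic"] = ",".join(
            f for f in split(desc["features_deterministic"]) if f not in gone
        )
        return desc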

@mlindauer @berndbischl

berndbischl commented 9 years ago

i guess i agree. how do we parse yaml in r?

larskotthoff commented 9 years ago

https://cran.r-project.org/web/packages/yaml/index.html

berndbischl commented 9 years ago

ok sorry :)

larskotthoff commented 9 years ago

No worries :) As far as I can see at the moment this would give us exactly the same data structure as we have at the moment (apart from the feature steps of course), so any changes should be minimal.

mlindauer commented 9 years ago

Hi Lars,

I agree that the current format of the feature groups is an issue. I also like the idea of "provide" and "requires".

However, please note, first of all, that your example is wrong. Pre provides the following features: "reducedVars,nvars,nclausesOrig,nvarsOrig,nclauses,reducedClauses".

Furthermore, I don't like YAML so much. We use it for one of our homepages and it is always a pain to edit the yaml files. Aren't there any better alternatives?

Cheers, Marius

larskotthoff commented 9 years ago

Ok, the fact that the example is wrong would have been much clearer in the new format :)

I don't see how editing YAML is more painful than editing a non-standard format.

mlindauer commented 9 years ago

In the end, I can live with YAML. However, there is no way to specify this with arff, is there? If possible, I would like to avoid using two different standard formats.

larskotthoff commented 9 years ago

Again, I don't see how using two different standard formats is worse than using a standard and a non-standard format. In principle I don't have a problem with using YAML for everything.

mlindauer commented 9 years ago

What would one of the other files look like in YAML? I read on Wikipedia that each JSON file is also a valid YAML (>=1.2) file. I like JSON, but I don't know whether this is really user-friendly.

larskotthoff commented 9 years ago

Hmm, I guess something like

- instance_id: bla
  repetition: 1
  feature_x: foo

I don't really see a problem with being user friendly -- you're not supposed to edit/write those files manually.

mlindauer commented 9 years ago

such a format would blow up our files by more than a factor of 2, I guess.

The description.txt is a file I always write manually.

berndbischl commented 9 years ago

you can forget arff for such files immediately

larskotthoff commented 9 years ago

Yes, everything would be much larger. But as I said, I'm not opposed to keeping everything but description.txt in arff. We also have citation.bib which is in yet another standard format.

mlindauer commented 9 years ago

OK.

I also asked Matthias whether he likes this new format, and he agreed. So please go ahead and make the changes.

Cheers, Marius

larskotthoff commented 9 years ago

Ok, what's your feeling on making the lists proper YAML lists as well? I.e. instead of comma-separated they would be

provides:
  - CG_mean
  - CG_coeff_variation
  - etc.
mlindauer commented 9 years ago

I like the comma-separated form more since I can look up the feature step corresponding to a feature by looking one line above (and not n lines up). To have proper YAML (1.2) that is similar to what we have right now, we could use

[CG_mean, CG_coeff_variation,...]

However, we should then change the entire file, so for example also algorithms_deterministic.

larskotthoff commented 9 years ago

Ok, but presumably you're not going to parse the YAML yourself but use a library? And yes, that would apply to everything -- if the data structure is serialized by a YAML library, we may not even be able to control which type of list we get (and don't need to care).

So I guess my real question is whether you're planning to use a library to parse/write the YAML.
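
(As a small illustration, assuming PyYAML: both list notations load to the identical Python data structure, so the choice of style is purely cosmetic for any program that uses a library.)

    import yaml

    # The same list written in block style and in flow style.
    block = "provides:\n  - CG_mean\n  - CG_coeff_variation\n"
    flow = "provides: [CG_mean, CG_coeff_variation]"

    assert yaml.safe_load(block) == yaml.safe_load(flow)
    assert yaml.safe_load(block)["provides"] == ["CG_mean", "CG_coeff_variation"]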

berndbischl commented 9 years ago

parsing: of course.

but i would prefer it if people could still manually write (smaller) files without programming.

can we do that?

mlindauer commented 9 years ago

I often have a look at the description.txt files to get a better feeling for the scenarios, e.g., which algorithms are used, how many features are used, and how the features are distributed across the feature groups. I could write scripts for such things, but looking at the files is often faster. So I would prefer that I can easily read the files.

berndbischl commented 9 years ago

well, that argument i find slightly strange. why not use the eda overview?

larskotthoff commented 9 years ago

Of course you can still read/write the files manually and that shouldn't even be much more difficult than it is now. But it would be much easier to parse/write programmatically because we can just use YAML libraries.

berndbischl commented 9 years ago

i mean we invested lots of time to write exactly such scripts for that purpose... web based...

larskotthoff commented 9 years ago

Which, come to think of it, we should rerun to update the web pages at some point.

berndbischl commented 9 years ago

Proposal: Use travis for that. People do PRs for a new scenario. Then travis builds all the EDA stuff. This even checks the validity of the scenario files. Only then do we merge. The only thing we then have to run manually might be the selector benchmarks.

larskotthoff commented 9 years ago

+1

mlindauer commented 9 years ago

i mean we invested lots of time to write exactly such scripts for that purpose... web based...

  1. I'm not always online.
  2. I'm faster with my local files than finding the URL and then clicking through the web interface.
larskotthoff commented 9 years ago

Ok, so you think that

- name: Basic
  provides:
    - vars_clauses_ratio
    - POSNEG_RATIO_CLAUSE_mean
    - POSNEG_RATIO_CLAUSE_coeff_variation
    - POSNEG_RATIO_CLAUSE_min
    - POSNEG_RATIO_CLAUSE_max
    - POSNEG_RATIO_CLAUSE_entropy
    - VCG_CLAUSE_mean
    - VCG_CLAUSE_coeff_variation
    - VCG_CLAUSE_min
    - VCG_CLAUSE_max
    - VCG_CLAUSE_entropy
    - UNARY
    - BINARYp
    - TRINARYp
  requires: Pre

is harder to read than

- name: Basic
  provides: vars_clauses_ratio,POSNEG_RATIO_CLAUSE_mean,POSNEG_RATIO_CLAUSE_coeff_variation,POSNEG_RATIO_CLAUSE_min,POSNEG_RATIO_CLAUSE_max,POSNEG_RATIO_CLAUSE_entropy,VCG_CLAUSE_mean,VCG_CLAUSE_coeff_variation,VCG_CLAUSE_min,VCG_CLAUSE_max,VCG_CLAUSE_entropy,UNARY,BINARYp,TRINARYp
  requires: Pre
mlindauer commented 9 years ago

Yes, but in the end, I don't feel strongly about this. So, I can also live with the first format if we don't have a nice way to automatically generate the second format.

larskotthoff commented 9 years ago

Ok, I've updated the spec, converted all the scenarios and updated the R code.

@mlindauer Could you please update the Python code/checker?

mlindauer commented 9 years ago

I'm on vacation for the next two weeks. I will do it afterwards.

larskotthoff commented 9 years ago

Ok, thanks. No rush :)

larskotthoff commented 9 years ago

It just occurred to me that we should also have a look at the feature_runstatus.arff files for instances that are presolved. The spec doesn't say what should happen to dependent feature steps in this case, and the data is inconsistent. For example, for ASP, feature steps that depend on one that presolved seem to be listed as "presolved" as well, but the costs aren't given, implying that they weren't actually run. For the SAT data sets, the runstatus of feature steps that depend on one that presolved is listed as "unknown" (which probably makes more sense in this case).

mlindauer commented 9 years ago

Hi,

I started to implement the new description.txt parser and I found an issue. According to the spec, "performance_measures" specifies a list. But looking at some of the description.txt files, e.g., ASP-POTASSCO, it is only a string: performance_measures: runtime

So, according to the spec, the YAML should be a list, e.g.:

performance_measures:
  - runtime

The same issue holds for "maximize" and "performance_type".

mlindauer commented 9 years ago

The same issue applies to feature_step -> "requires" in some scenarios. In ASP-POTASSCO it is fine:

    Dynamic-1:
        requires:
            - Static

In SAT11-HAND it is not OK:

    Basic:
      requires: Pre
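
A minimal sketch (assuming PyYAML; as_list is a hypothetical helper, not part of the checker) of how a parser could tolerate both the scalar and the list form while the files are being fixed:

    import yaml

    def as_list(value):
        """Accept both 'requires: Pre' (a scalar) and a proper YAML list."""
        if value is None:
            return []
        return value if isinstance(value, list) else [value]

    doc = yaml.safe_load("performance_measures: runtime\n"
                         "feature_steps:\n"
                         "  Basic:\n"
                         "    requires: Pre\n")
    assert as_list(doc["performance_measures"]) == ["runtime"]
    assert as_list(doc["feature_steps"]["Basic"]["requires"]) == ["Pre"]
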
mlindauer commented 9 years ago

I updated the checker tool (and flexfolio). Right now, the checker tool complains about the issues raised above.

larskotthoff commented 9 years ago

Thanks, good catch. Could you fix the files please?

mlindauer commented 9 years ago

Hi, I fixed it. All scenarios in the master branch are now compatible with the checker tool again.

However, I found another issue. At some point, we agreed that we need an order of the feature steps. This was implicitly given by the order of the feature steps in description.txt. Since we use YAML now, we encode "feature_steps" as a dictionary:

feature_steps:
    Pre:
      provides:
        - nvarsOrig
        [...]
    Basic:
      requires: 
        - Pre

Parsing this file (at least with Python) will give you a dictionary without a defined order of the feature steps. So, we either have to change "feature_steps" to a list (which would look unintuitive and ugly imho) or add another list, such as "feature_step_order". What do you think?
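
To make the second option concrete, a small sketch (assuming PyYAML; "feature_step_order" is just the key name proposed above) of how the extra list would restore the order without changing the dictionary layout:

    import yaml

    doc = yaml.safe_load("feature_step_order: [Pre, Basic]\n"
                         "feature_steps:\n"
                         "  Pre:\n"
                         "    provides: [nvarsOrig, nclausesOrig]\n"
                         "  Basic:\n"
                         "    requires: [Pre]\n")

    # A YAML mapping carries no guaranteed order, so the explicit list
    # states the intended order regardless of parser or language.
    for name in doc["feature_step_order"]:
        print(name, doc["feature_steps"][name])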

Cheers, Marius

larskotthoff commented 9 years ago

Just remind me what the order is needed for? You can derive any ordering constraints from the provides/requires, right?
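
(For reference, one admissible order can be derived from "requires" with a straightforward topological sort; a minimal sketch without cycle detection, not taken from any of the existing tools:)

    def topological_order(feature_steps):
        """One admissible order of feature steps, derived only from 'requires'.
        `feature_steps` maps a step name to its dictionary from description.txt."""
        order, seen = [], set()

        def visit(name):
            if name in seen:
                return
            seen.add(name)
            for dep in feature_steps[name].get("requires", []):
                visit(dep)  # a step's requirements come before the step itself
            order.append(name)

        for name in feature_steps:
            visit(name)
        return order

    steps = {"Pre": {}, "Basic": {"requires": ["Pre"]}, "KLB": {"requires": ["Pre"]}}
    print(topological_order(steps))  # e.g. ['Pre', 'Basic', 'KLB']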

mlindauer commented 9 years ago

If I remember correctly, the problem was the presolved feature steps.

  1. The features were computed in an (unknown) order, and if a feature step presolved an instance, the remaining feature steps were not computed (at least this is true for the ASP and SAT scenarios).
  2. We discussed the definition of the oracle at some point. If we want to include the feature steps in the oracle definition as a possible algorithm to solve an instance (important for some scenarios), we have to know the right order of the feature steps, or else we have to solve an NP-hard problem (i.e., consider all possible orders of feature steps) to find an optimal order.
  3. There is more than one possible order if we only consider "requires".
larskotthoff commented 9 years ago
  1. Sounds to me like we should have a feature runstatus "not computed" then -- using the order to derive this is quite similar to how the dependencies were encoded before: not at all obvious or intuitive, and bound to trip somebody up.
  2. I remember -- what conclusion did we come to? It seems fair enough to me that the oracle would be able to change the ordering of feature steps.
  3. I don't see that as a disadvantage.
mlindauer commented 9 years ago
  1. Saying some features are "not computed" sounds even more unintuitive. Without an order of the feature steps, there is no explanation of why they were not computed. And so far we have assumed that the data is complete as long as there is no good reason for data to be missing.
  2. I think we postponed that discussion and simply used the old definition of the oracle, without consideration of feature steps.
larskotthoff commented 9 years ago

Ok, so let's have a feature status "not computed because instance presolved by previous feature step". We don't need to know what that feature step was, do we?

mlindauer commented 9 years ago

OK, I agree that we should have something like "not computed because instance presolved by previous feature step". However, if we have such a status, I still think we should have some more information about the order of the feature steps - at least the order in which they were generated; the user can still decide to use another order. The arguments for such information are:

  1. We would know which step was responsible for this new status.
  2. The optimal order of the feature steps (with respect to the presolved status) is exactly the order in which the features were generated. I don't see why users should have to figure this out by themselves if we already know it. (In the same way, it is also important for a new oracle definition, as mentioned before.)
larskotthoff commented 9 years ago

Should the order of the feature steps used when generating the data for the scenarios be part of the metadata?

mlindauer commented 9 years ago

Yes?

larskotthoff commented 9 years ago

Ok, then let's do that.