larskotthoff opened this issue 9 years ago
I guess I agree. How do we parse YAML in R?
ok sorry :)
No worries :) As far as I can see, this would give us exactly the same data structure as we have at the moment (apart from the feature steps of course), so any changes should be minimal.
Hi Lars,
I agree that the current format of the feature groups is an issue. I also like the idea of "provide" and "requires".
However, please note, first of all, that your example is wrong. Pre provides the following features: "reducedVars,nvars,nclausesOrig,nvarsOrig,nclauses,reducedClauses".
Furthermore, I don't like YAML so much. We use it for one of our homepages and it is always a pain to edit the YAML files. Aren't there any better alternatives?
Cheers, Marius
Ok, the fact that the example is wrong would have been much clearer in the new format :)
I don't see how editing YAML is more painful than editing a non-standard format.
In the end, I can live with YAML. However, there is no way to specify this with arff, is there? If possible, I would like to avoid using two different standard formats.
Again, I don't see how using two different standard formats is worse than using a standard and a non-standard format. In principle I don't have a problem with using YAML for everything.
What would one of the other files look like in YAML? I read on Wikipedia that every JSON file is also a valid YAML (>= 1.2) file. I like JSON but I don't know whether this is really user-friendly.
Hmm, I guess something like
- instance_id: bla
  repetition: 1
  feature_x: foo
I don't really see a problem with user-friendliness -- you're not supposed to edit/write those files manually.
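Just to make the point concrete, a minimal sketch (using PyYAML; the instance ids and feature names are made up, like in the snippet above) of how such records would be written by a library rather than by hand:

import yaml  # PyYAML

# two made-up feature records, one per instance/repetition
records = [
    {"instance_id": "bla", "repetition": 1, "feature_x": "foo"},
    {"instance_id": "blub", "repetition": 1, "feature_x": "bar"},
]

# block style, roughly as sketched above
print(yaml.safe_dump(records, default_flow_style=False))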
Such a format would blow up our files by more than a factor of 2, I guess.
The description.txt is a file I always write manually.
You can forget arff for such files immediately.
Yes, everything would be much larger. But as I said, I'm not opposed to keeping everything but description.txt in arff. We also have citation.bib, which is in yet another standard format.
OK.
I also asked Matthias whether he likes this new format, and he agreed. So please go ahead and make the changes.
Cheers, Marius
Ok, what's your feeling on making the lists proper YAML lists as well? I.e. instead of comma-separated they would be
provides:
  - CG_mean
  - CG_coeff_variation
  - etc.
I like the comma-separated version more, since I can look up the feature step corresponding to a feature by looking one line above (and not n lines above). To have proper YAML (1.2) that is similar to what we have right now, we could use
[CG_mean, CG_coeff_variation,...]
However, we should then change the entire file consistently, so for example also algorithms_deterministic.
Ok, but presumably you're not going to parse the YAML yourself but use a library? And yes, that would apply for everything -- if the data structure is serialized by a YAML library we may not even be able to control which type of list we get (and don't need to care).
So I guess my real question is whether you're planning to use a library to parse/write the YAML.
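For what it's worth, a rough sketch of what that would look like with PyYAML (the file names are placeholders); both block-style and flow-style lists parse to the same Python lists, so the choice mostly matters for human readers:

import yaml  # PyYAML

with open("description.txt") as f:
    description = yaml.safe_load(f)   # parsing handles either list style

with open("description_out.txt", "w") as f:
    yaml.safe_dump(description, f, default_flow_style=False)  # writing: block style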
Parsing: of course.
But I would prefer it if people could still manually write (smaller) files without programming.
Can we do that?
I often have a look into the description.txt files to get a better feeling for the scenarios, e.g., which algorithms are used, how many features are used, and how the features are distributed over the feature groups. I could write scripts for such things, but looking into the files is often faster. So I would prefer that I can easily read the files.
Well, that argument I find slightly strange. Why not use the EDA overview?
Of course you can still read/write the files manually and that shouldn't even be much more difficult than it is now. But it would be much easier to parse/write programmatically because we can just use YAML libraries.
I mean, we invested lots of time to write scripts for exactly that purpose... web-based...
Which, come to think of it, we should rerun to update the web pages at some point.
Proposal: Use Travis for that. People do PRs for a new scenario. Then Travis builds all the EDA stuff. This even checks the validity of the scenario files. Only then do we merge. The only thing we would then have to run manually might be the selector benchmarks.
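A rough sketch of the kind of check such a Travis job could run on each scenario directory (the check_scenario helper and the particular required keys here are only illustrative):

import sys
import yaml

def check_scenario(path):
    # illustrative only: load the description and check that a few keys exist
    with open(path + "/description.txt") as f:
        description = yaml.safe_load(f)
    for key in ("performance_measures", "feature_steps"):
        if key not in description:
            raise ValueError("missing key: %s" % key)

if __name__ == "__main__":
    ok = True
    for scenario in sys.argv[1:]:
        try:
            check_scenario(scenario)
        except Exception as err:
            print("FAIL %s: %s" % (scenario, err))
            ok = False
    sys.exit(0 if ok else 1)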
+1
> I mean, we invested lots of time to write scripts for exactly that purpose... web-based...
- I'm not always online.
- I'm faster with my local files than finding the URL and then clicking through the web interface.
Ok, so you think that
- name: Basic
  provides:
    - vars_clauses_ratio
    - POSNEG_RATIO_CLAUSE_mean
    - POSNEG_RATIO_CLAUSE_coeff_variation
    - POSNEG_RATIO_CLAUSE_min
    - POSNEG_RATIO_CLAUSE_max
    - POSNEG_RATIO_CLAUSE_entropy
    - VCG_CLAUSE_mean
    - VCG_CLAUSE_coeff_variation
    - VCG_CLAUSE_min
    - VCG_CLAUSE_max
    - VCG_CLAUSE_entropy
    - UNARY
    - BINARYp
    - TRINARYp
  requires: Pre
is harder to read than
- name: Basic
  provides: vars_clauses_ratio,POSNEG_RATIO_CLAUSE_mean,POSNEG_RATIO_CLAUSE_coeff_variation,POSNEG_RATIO_CLAUSE_min,POSNEG_RATIO_CLAUSE_max,POSNEG_RATIO_CLAUSE_entropy,VCG_CLAUSE_mean,VCG_CLAUSE_coeff_variation,VCG_CLAUSE_min,VCG_CLAUSE_max,VCG_CLAUSE_entropy,UNARY,BINARYp,TRINARYp
  requires: Pre
Yes, but in the end, I don't feel strongly about this. So, I can also live with the first format if we don't have a nice way to automatically generate the second format.
Ok, I've updated the spec, converted all the scenarios and updated the R code.
@mlindauer Could you please update the Python code/checker?
I'm on vacation for the next two weeks. I will do it afterwards.
Ok, thanks. No rush :)
It just occurred to me that we should also have a look at the feature_runstatus.arff files for instances that are presolved. The spec doesn't say what should happen to dependent feature steps in this case, and the data is inconsistent. For example, for ASP, feature steps that depend on one that presolved the instance seem to be listed as "presolved" as well, but the costs aren't given, implying that they weren't actually run. For the SAT data sets, the runstatus of feature steps that depend on one that presolved is listed as "unknown" (which probably makes more sense in this case).
Hi,
I started to implement the new description.txt parser and I found an issue. According to the spec, "performance_measures" specifies a list. But looking at some of the description.txt files, e.g., ASP-POTASSCO, it is only a string:

performance_measures: runtime

So, the format according to the spec should be a YAML list:

performance_measures:
  - runtime

The same issue holds for "maximize" and "performance_type".
The same issue applies to feature_step -> "requires" in some scenarios. In ASP-POTASSCO it is fine:

Dynamic-1:
  requires:
    - Static

In SAT11-HAND it is not OK:

Basic:
  requires: Pre
I updated the checker tool (and flexfolio). Right now, the checker tool complains about the issues raised above.
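(As an aside, if we ever wanted the parsers to accept both variants instead of rejecting them, a small helper along these lines could normalize the values before validation; as_list is purely hypothetical:)

def as_list(value):
    # accept a single string ("requires: Pre") as well as a proper YAML list
    if value is None:
        return []
    if isinstance(value, str):
        return [v.strip() for v in value.split(",") if v.strip()]
    return list(value)

# as_list("Pre")      -> ["Pre"]
# as_list(["Static"]) -> ["Static"]
# as_list("runtime")  -> ["runtime"]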
Thanks, good catch. Could you fix the files please?
Hi, I fixed it. All scenarios in the master branch are now compatible with the checker tool again.
However, I found another issue. At some point we agreed that we need an order of the feature steps. This was implicitly given by the order of the feature steps in the description.txt. Since we use YAML now, we encode the "feature_steps" as dictionaries:
feature_steps:
  Pre:
    provides:
      - nvarsOrig
      [...]
  Basic:
    requires:
      - Pre
Parsing this file (at least with Python) will give you a dictionary without a defined order of the feature steps. So, we either have to change "feature_steps" to a list (which would look unintuitive and ugly imho) or we add another list, such as "feature_step_order". What do you think?
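(For completeness: there is also the usual PyYAML recipe of loading mappings into an OrderedDict, which would preserve the file order of the feature steps without changing the format; a sketch:)

import yaml
from collections import OrderedDict

class OrderedLoader(yaml.SafeLoader):
    pass

def _ordered_mapping(loader, node):
    loader.flatten_mapping(node)
    return OrderedDict(loader.construct_pairs(node))

OrderedLoader.add_constructor(
    yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG, _ordered_mapping)

with open("description.txt") as f:
    description = yaml.load(f, Loader=OrderedLoader)

print(list(description["feature_steps"]))  # feature step names in file order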
Cheers, Marius
Just remind me, what is the order needed for? You can derive any ordering constraints from the provides/requires, right?
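A minimal sketch of deriving an order from the requires entries alone (step names are borrowed from the examples in this thread; this assumes the requires values are lists):

def step_order(feature_steps):
    # order feature steps so that every step comes after the steps it requires
    order = []
    def visit(name, stack=()):
        if name in order:
            return
        if name in stack:
            raise ValueError("cyclic requires involving %s" % name)
        for dep in feature_steps[name].get("requires", []):
            visit(dep, stack + (name,))
        order.append(name)
    for name in feature_steps:
        visit(name)
    return order

steps = {
    "Pre": {"provides": ["nvarsOrig"]},
    "Basic": {"requires": ["Pre"]},
    "Dynamic-1": {"requires": ["Pre"]},
}
print(step_order(steps))  # ['Pre', 'Basic', 'Dynamic-1']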
If I correctly remember, the problem was the presolved feature steps.
Ok, so let's have a feature status "not computed because instance presolved by previous feature step". We don't need to know what that feature step was, do we?
OK, I agree that we should have something like "not computed because instance presolved by previous feature step". However, if we have such a status, I still think we should have some more information about the order of the feature steps -- at least how they were generated; the user can still decide to use a different order. The arguments for such information are:
Should the order of the feature steps used when generating the data for the scenarios be part of the metadata?
Yes?
Ok, then let's do that.
The way feature steps are currently implicitly encoded is a pain. First, you have to read the spec very carefully to understand the semantics (which are the opposite of what at least I would intuitively expect), and modifying the features/feature steps (e.g. for feature filtering) is a complex and error-prone operation.
In particular, to remove a feature step, you have to check all the other feature steps to see whether they contain features that are also provided by the feature step that was removed and, if so, remove those as well.
Another (albeit minor) niggle is that the format of description.txt is unnecessarily hard to parse and write because the key-value convention is broken for the feature steps (the key is not a primitive value but constructed from other things).
I propose two changes. First, use YAML for description.txt, which will introduce only minor changes but allow us to use off-the-shelf libraries for parsing and writing rather than having to write custom code. Second, encode feature step dependencies explicitly through requires and provides keys. Example:
This makes it intuitively clear what Pre does and that it doesn't actually provide any features on its own. It also makes the number_of_feature_steps attribute redundant, so it could be removed. @mlindauer @berndbischl