Closed: OliverMa1 closed this pull request 1 year ago.
@OliverMa1 I just enabled CI for this PR. Can you fix the reported errors?
Thank you for your benchmark submission. Before we can merge this pull request, some small edits should be performed:

- Move the `Target solver` line in the metadata to the line after `Application`. This ensures that `Description` and everything else is at the end.
- Remove `(get-model)` (but keep `(exit)`).

I am performing the edits I mention above on my side and will push them to the pull request.
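For reference, a minimal header following that ordering might look like the sketch below. All field values are placeholders, not taken from the actual benchmarks, and the exact set of `:source` fields is given by the SMT-LIB submission guidelines; the point is only that `Target solver` directly follows `Application`, `Description` comes last, and the script ends with `(exit)` but no `(get-model)`:

```smt2
(set-info :smt-lib-version 2.6)
(set-logic QF_SLIA)
(set-info :source |
Generated by: <person or organization>
Generated on: <date>
Generator: <tool>
Application: <application>
Target solver: <solver>
Description: <free text, kept at the end>
|)
(declare-const x String)
(assert (str.in_re x (re.* (str.to_re "a"))))
(check-sat)
(exit)
```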
We are a bit worried about the overall number of benchmarks in the benchmark set. Do you know if there are some benchmarks in there that are more "interesting" than others?
You also mention that the benchmarks fall into three categories. Right now they are all in one folder and the file names are not descriptive. It would be great if we could categorize the benchmarks into subfolders. How could we do that?
Thank you for doing the edits. There should now be subdirectories in the QF_SLIA folder. The benchmarks in [20230331-transducer-plus], [20230403-lan-rep-all], and [20230403-str-rep-all] are the more interesting ones. They contain replace-all operations (str-rep-all uses `str.replace_all` and lan-rep-all uses `str.replace_re_all`) and encodings of transducers.
The benchmarks in [20230329-denghang] consist of intersections of real-world and hand-crafted regexes with an additional length constraint. All the benchmarks in this folder are similar in nature.
Ahh, sorry, my fault. I only looked at the 20230329-denghang folder. I mistook this folder for the entire benchmark set, because it already contains almost 40,000 benchmarks.
As for the other benchmarks, I believe we should move at least the four folders with `*rep*` in their names into one common folder, since they are all about string replacement.
Another thing that I noticed: the `Generated by` field mentions either OSTRICH or Stranger. This field is intended for the person or organization that generated the benchmark. The tool should be given in `Generator`.
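To illustrate the distinction, a sketch of the relevant lines inside the `(set-info :source | ... |)` block (the concrete values here are illustrative, drawn from names mentioned elsewhere in this thread):

```smt2
; wrong: the tool appears in "Generated by"
;   Generated by: OSTRICH
; right: person/organization in "Generated by", tool in "Generator"
Generated by: Oliver Markgraf
Generator: OSTRICH
```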
We would like to reduce the number of benchmarks in the 20230329-denghang folder to fewer than 1000. Since you know the benchmarks better than we do, can you use your judgement to decide which benchmarks to retain?
I reduced the folder to fewer than 1000 benchmarks; let me know if there is anything else to be done.
Thank you.
We still have to know what we should insert for the `Generated by:` line, since the generating system can't stay there. This should be a person or organization.

Should I write "Oliver Markgraf"?
I added/modified the `Generated by` line. The benchmarks in webapp were generated by Oliver Markgraf and the ones in transducer-plus by Matthew Hague.
Thank you! I will merge the benchmarks now. We will run some more tests on our side, but if everything goes well, the benchmarks will be included in the next release.
This adds 3 different sets of QF_SLIA benchmarks.