Closed JanMerlinV closed 7 years ago
Short explanation: the problem was that 'benchmark.gurobi.out' does not appear in your data. It would also have sufficed to set defaultgroup accordingly. Fix: in such a case, an automatically generated defaultgroup is now set and used.
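As a rough sketch of what that fallback could look like (illustrative names only, not the actual ipet API): if the configured defaultgroup is missing from the grouped data, pick one of the groups that are actually present instead of failing.

```python
# Hedged sketch of the described fix: fall back to an automatically
# chosen group when the configured defaultgroup is not in the data.
# resolve_defaultgroup and the sample groups are hypothetical.
def resolve_defaultgroup(groups, configured):
    """Return the configured group if present, else a deterministic fallback."""
    if configured in groups:
        return configured
    # Automatic fallback: first group key in sorted order.
    return sorted(groups)[0]

groups = {"default": [1.0], "heuristics-off": [2.0]}
print(resolve_defaultgroup(groups, "benchmark.gurobi.out"))  # falls back to "default"
print(resolve_defaultgroup(groups, "heuristics-off"))        # configured group kept
```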
The problem is rather that "scripts/evaluation.xml" should represent an evaluation that works out of the box, which it doesn't.
Since that is not his fault, I thought I would at least explain to him what happened. I changed this file too...
The explanation did help with understanding what went wrong, thanks!
I think all three of my issues can be closed once the evaluation file is fixed. Turns out they were all very similar problems.
This should be resolved in PR #30.
Running
ipet-evaluate -t check.mipdev-complete.scip-4.0.0.2.linux.x86_64.gnu.opt.spx2.none.M610.default.trn -e evaluation.xml
leads to:
Traceback (most recent call last):
  File "/nfs/OPTI/bzfviern/workspace/ipet/venv/bin/ipet-evaluate", line 4, in <module>
    __import__('pkg_resources').run_script('ipet==0.0.9', 'ipet-evaluate')
[...]
  File "/nfs/OPTI/bzfviern/workspace/ipet/venv/lib/python3.5/site-packages/ipet-0.0.9-py3.5.egg/ipet/evaluation/IPETEvalTable.py", line 728, in addComparisonColumns
    compcol = dict(list(grouped))[self.defaultgroup]
KeyError: 'benchmark.gurobi.out'
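The failure mode is easy to reproduce in isolation: the comparison column is looked up by defaultgroup in a dict built from the grouped data, and a group name that never occurs in the data raises a KeyError. A minimal sketch with made-up group names (not the real ipet data structures):

```python
# Hypothetical reproduction of the failing lookup pattern from the traceback.
# Only groups actually present in the parsed .trn data end up as dict keys.
grouped = {
    "scip.out": [1.2, 3.4],
    "soplex.out": [5.6],
}
defaultgroup = "benchmark.gurobi.out"  # configured in evaluation.xml, absent from data

try:
    compcol = grouped[defaultgroup]  # same shape as dict(list(grouped))[self.defaultgroup]
except KeyError as err:
    compcol = None
    print("KeyError:", err)
```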
However, replacing the line

<Evaluation defaultgroup="benchmark.gurobi.out" index="ProblemName LogFileName">

in evaluation.xml with

<Evaluation comparecolformat="%.3f" defaultgroup="default" evaluateoptauto="True" groupkey="Status" index="ProblemName Settings" indexsplit="-1" sortlevel="0">

resolves the issue.
@GregorCH @fschloesser