profmadden opened 9 months ago
Hi,
I'd like to double-check whether you are using the latest version of the .cap files. We've increased the overflow weights defined in the .cap files, so with the latest .cap files I would expect the reference example outputs to have higher overflow costs.
Also, I would suggest using evaluator.cpp, which is the latest version, though in most cases evaluator_v2.cpp and evaluator.cpp should give similar scores.
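To illustrate why raising the overflow weights in the .cap files raises the reported scores, here is a minimal sketch of a weighted overflow cost. It is not the contest's exact formula (that lives in evaluator.cpp), and the names and the linear penalty here are assumptions for illustration only:

```cpp
// Minimal sketch (NOT the contest's exact formula -- see evaluator.cpp)
// of how a per-resource overflow weight from the .cap file scales the
// overflow cost. Function and variable names are hypothetical.
#include <vector>
#include <algorithm>
#include <cstdio>

double overflowCost(const std::vector<double>& demand,
                    const std::vector<double>& capacity,
                    double overflowWeight) {   // weight read from the .cap file
    double cost = 0.0;
    for (size_t i = 0; i < demand.size(); ++i) {
        // Only demand in excess of capacity counts as overflow.
        double ov = std::max(0.0, demand[i] - capacity[i]);
        cost += overflowWeight * ov;  // a larger weight inflates the cost directly
    }
    return cost;
}

int main() {
    std::vector<double> demand{12, 8, 15}, capacity{10, 10, 10};
    // Identical routing solution; only the weight changed between .cap versions.
    std::printf("old weight: %.0f\n", overflowCost(demand, capacity, 500.0));
    std::printf("new weight: %.0f\n", overflowCost(demand, capacity, 1000.0));
}
```

The same routing solution scores higher under the new weights, which is why outputs evaluated against an outdated .cap file look too cheap.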
Ahh -- I didn't realize that the cap files had changed as well. Here's what I'm getting for the reference designs, using what I think is the latest evaluator. Let me know if these look horribly wrong!
| design | WL cost | via cost | overflow cost | total cost |
|---|---|---|---|---|
| Ariane133_51 | 9296578 | 3358316 | 10826548 | 23481442 |
| Ariane133_68 | 9447807 | 3301284 | 17345476 | 30094568 |
| BlackParrot (AKA BSG_chip?) | 58096126 | 20876700 | 47862659041789100032 | 47862659041868070912 |
| Nvdla | 21278324 | 5049112 | 212639152264839296 | 212639152291166752 |
| MemPool-Tile | 8410248 | 3805264 | 42624780325 | 42636995837 |
| MemPool-Group | 262678902 | 88644880 | 524157262855231 | 524157614179013 |
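As a quick consistency check, total cost should equal WL cost + via cost + overflow cost. The sketch below verifies that against the rows above; assuming the evaluator accumulates in doubles, the largest rows should only agree to about 15-16 significant digits, and the smaller rows exactly:

```cpp
// Check that total = WL + via + overflow for each row of the table above.
// The BlackParrot and Nvdla totals differ from the component sums in the
// trailing digits, consistent with double-precision accumulation.
#include <cmath>
#include <cstdio>

int main() {
    struct Row { const char* name; long double wl, via, ovf, total; };
    const Row rows[] = {
        {"Ariane133_51",  9296578.0L, 3358316.0L, 10826548.0L, 23481442.0L},
        {"Ariane133_68",  9447807.0L, 3301284.0L, 17345476.0L, 30094568.0L},
        {"BlackParrot",  58096126.0L, 20876700.0L,
                         47862659041789100032.0L, 47862659041868070912.0L},
        {"Nvdla",        21278324.0L, 5049112.0L,
                         212639152264839296.0L, 212639152291166752.0L},
        {"MemPool-Tile",  8410248.0L, 3805264.0L, 42624780325.0L, 42636995837.0L},
        {"MemPool-Group", 262678902.0L, 88644880.0L,
                          524157262855231.0L, 524157614179013.0L},
    };
    for (const Row& r : rows) {
        long double sum = r.wl + r.via + r.ovf;
        long double relErr = std::fabs(sum - r.total) / r.total;
        std::printf("%-14s relative error %.2Le\n", r.name, relErr);
    }
}
```

The relative errors come out at or below roughly 1e-16, so the columns are internally consistent even where the printed totals don't sum exactly.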
Hi,
It looks correct to me. Thanks!
I'd like to confirm that I've got the scoring metrics right. We're using evaluator_v2.cpp. The posted beta submission results for some of the benchmarks are these:
The reference example outputs have these results:
If I'm understanding things correctly, the beta submissions for the Ariane benchmarks have high overflow compared to the reference examples, so overflow cost dominates their total cost, and the example outputs for the Ariane benchmarks score better. For the remaining benchmarks, the beta submissions have lower overflow and a better total cost than the reference examples. Wire length and via costs are similar in both. For the final evaluation, the run times of the various tools will be used to scale the scores.
Does this all look correct?
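The runtime scaling mentioned above isn't spelled out in this thread. Purely as a hypothetical illustration of the general shape such a factor can take (a logarithmic bonus or penalty around a median runtime), with made-up constants that are not the contest's formula:

```cpp
// Hypothetical runtime-scaling illustration ONLY -- the contest's actual
// formula is defined by the organizers, not in this thread. This shows
// the general shape: runs faster than the median earn a small discount,
// slower runs a small penalty, clamped to a fixed band.
#include <algorithm>
#include <cmath>
#include <cstdio>

double scaledScore(double rawCost, double runtimeSec, double medianSec) {
    // Made-up 2% swing per factor-of-two in runtime, clamped to +/-10%.
    double factor = 1.0 + 0.02 * std::log2(runtimeSec / medianSec);
    factor = std::min(1.10, std::max(0.90, factor));
    return rawCost * factor;
}

int main() {
    std::printf("fast run:   %.0f\n", scaledScore(23481442, 300, 600));
    std::printf("median run: %.0f\n", scaledScore(23481442, 600, 600));
    std::printf("slow run:   %.0f\n", scaledScore(23481442, 2400, 600));
}
```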