xThaid closed this pull request 1 month ago.
| aha-mont64 | crc32 | minver | nettle-sha256 | nsichneu | slre | statemate | ud |
|---|---|---|---|---|---|---|---|
| ▲ 0.426 (+0.019) | ▼ 0.513 (-0.014) | ▲ 0.351 (+0.030) | ▲ 0.655 (+0.004) | ▲ 0.364 (+0.019) | ▲ 0.293 (+0.010) | ▲ 0.329 (+0.012) | ▲ 0.435 (+0.030) |
You can view all the metrics here.
| Device utilisation (ECP5) | LUTs used as DFF (ECP5) | LUTs used as carry (ECP5) | LUTs used as RAM (ECP5) | Max clock frequency (Fmax) |
|---|---|---|---|---|
| ▼ 22370 (-408) | ▲ 5730 (+169) | ▲ 802 (+32) | ▼ 896 (-108) | ▼ 51 (-1) |
| Device utilisation (ECP5) | LUTs used as DFF (ECP5) | LUTs used as carry (ECP5) | LUTs used as RAM (ECP5) | Max clock frequency (Fmax) |
|---|---|---|---|---|
| ▲ 33515 (+3572) | ▲ 8983 (+180) | ▼ 1944 (-20) | ▼ 1076 (-108) | ▲ 40 (+0) |
The performance loss on crc32 is worrying. Did you investigate why that happened? Is this because of clear conflicts?
The performance loss is caused by the increased size of the fetch queue (the instruction buffer). I need to investigate further why that is.
It seems that the performance loss is not directly related to this change. I opened issue #702 for that.
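To make the suspected mechanism concrete, here is a rough back-of-the-envelope model (my own illustration, with made-up parameters, not the project's measurements): a larger fetch queue buffers more stale instructions past a mispredicted branch, so every redirect pays a longer drain, which drags down IPC on branchy workloads such as crc32.

```python
# Toy IPC model: cost of a deeper fetch queue on redirects.
# All numbers below are illustrative assumptions, not measurements.

def flush_penalty(queue_depth: int) -> int:
    """Cycles to discard stale fetch-queue entries after a redirect,
    assuming the queue is full and one entry is dropped per cycle."""
    return queue_depth

def ipc_estimate(base_ipc: float, mispredict_rate: float,
                 queue_depth: int) -> float:
    """Rough IPC: each mispredict adds a fixed redirect latency plus
    the flush penalty, amortised over all instructions."""
    redirect_latency = 3  # assumed constant, purely illustrative
    penalty = redirect_latency + flush_penalty(queue_depth)
    cpi = 1.0 / base_ipc + mispredict_rate * penalty
    return 1.0 / cpi

# A deeper fetch queue lowers estimated IPC on branchy code:
for depth in (2, 4, 8):
    print(depth, round(ipc_estimate(0.55, 0.05, depth), 3))
```

The model is crude (it assumes a full queue and a fixed mispredict rate), but it shows the direction of the effect: the penalty grows linearly with queue depth, which matches the observed regression appearing only after the instruction buffer was enlarged.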
`CoreCounter` is now behind the frontend, so in particular behind all of its FIFOs. As a result, the flushing process should take a few cycles less. `decode_fifo` is now a `Pipe`.
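A minimal sketch of the kind of single-entry stage implied by making `decode_fifo` a `Pipe`. This is a hypothetical Python model, not the project's actual Transactron code: the point it illustrates is that a one-entry stage holds at most one in-flight instruction, so a redirect discards at most one stale entry here instead of a whole queue's worth.

```python
# Hypothetical toy model of a one-entry "pipe" register stage.
# Not project code; illustrates why a Pipe drains faster on a
# flush than a multi-entry FIFO would.

class Pipe:
    def __init__(self):
        self.slot = None  # at most one buffered item

    def can_push(self) -> bool:
        return self.slot is None

    def push(self, item) -> None:
        # Backpressure: caller must check can_push() first.
        assert self.slot is None, "pipe full"
        self.slot = item

    def pop(self):
        item, self.slot = self.slot, None
        return item

    def flush(self) -> None:
        # A redirect clears at most one stale instruction here,
        # unlike a deeper FIFO holding several.
        self.slot = None
```

With a deeper FIFO in the same spot, every entry buffered at flush time is a stale instruction that was fetched and decoded for nothing.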
edit:
I also had to rename `debug_signals` in `CoreConfiguration`, as `auto_debug_signals` incorrectly assumed that this field is of protocol `HasDebugSignals`.
Metric `backend.retirement.trap_latency/sum`:

Depends on #698