Closed: jtuyls closed this 1 month ago
> Not sure whether this makes sense or there might be an underlying issue?
I'm trying to think through how these behave differently. Looking at the code closely, I don't actually see how this PR handles congestion differently? It's still basically:
double history = 1.0 + OVER_CAPACITY_COEFF * overCapacity[&ch];
double congestion = 1.0 + USED_CAPACITY_COEFF * usedCapacity[&ch];
demand[&ch] = history * congestion;
right?
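For concreteness, here is a minimal standalone sketch of that update. The coefficient values, the plain `Channel` struct, and the congestion counts are all made up for illustration; it is not the actual mlir-aie code, only the same history * congestion structure as the snippet above.

```cpp
// Minimal sketch of the demand update quoted above. Coefficients, the
// Channel struct, and the congestion counts are hypothetical; only the
// history * congestion structure mirrors the snippet under discussion.
#include <cstdio>
#include <map>
#include <vector>

struct Channel {
  const char *name;
};

int main() {
  const double OVER_CAPACITY_COEFF = 0.02; // hypothetical value
  const double USED_CAPACITY_COEFF = 0.02; // hypothetical value

  std::vector<Channel> channels = {{"chA"}, {"chB"}, {"chC"}};
  std::map<const Channel *, double> overCapacity, usedCapacity, demand;

  // Example congestion state: chB is both heavily used and over capacity.
  overCapacity[&channels[0]] = 0; usedCapacity[&channels[0]] = 1;
  overCapacity[&channels[1]] = 3; usedCapacity[&channels[1]] = 4;
  overCapacity[&channels[2]] = 0; usedCapacity[&channels[2]] = 2;

  for (const Channel &ch : channels) {
    // The history term grows with accumulated over-capacity, the congestion
    // term with current usage; their product is the channel's demand.
    double history = 1.0 + OVER_CAPACITY_COEFF * overCapacity[&ch];
    double congestion = 1.0 + USED_CAPACITY_COEFF * usedCapacity[&ch];
    demand[&ch] = history * congestion;
    std::printf("%s: demand = %.4f\n", ch.name, demand[&ch]);
  }
  return 0;
}
```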
It might just be the coefficients? I do see the load being spread over more connections.
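A toy comparison of varying just the usage coefficient (all numbers invented, not taken from the router): with a small coefficient an already-loaded channel still looks cheaper than a longer empty detour, while a larger coefficient makes the detour win, which is consistent with the load being spread over more connections.

```cpp
// Toy comparison of two coefficient values; all numbers are invented.
#include <cstdio>

int main() {
  // Channel A already carries 3 routes; channel B is an empty detour whose
  // base cost we take to be 1.5x that of A.
  const double usedOnA = 3.0, baseCostA = 1.0, baseCostB = 1.5;
  const double coeffs[] = {0.1, 0.5};
  for (double coeff : coeffs) {
    double costA = baseCostA * (1.0 + coeff * usedOnA); // congestion-scaled
    double costB = baseCostB;                           // nothing on B yet
    std::printf("coeff=%.1f: costA=%.2f costB=%.2f -> route via %s\n", coeff,
                costA, costB, costA <= costB ? "A (pile on)" : "B (spread)");
  }
  return 0;
}
```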
Pulls in changes from: https://github.com/Xilinx/mlir-aie/pull/1643
NOTE: I had to bump the DEMAND_COEFF from 1.1 to 1.5 to find a solution for the unit_vecmul_4x4.mlir testcase. Not sure whether this makes sense or there might be an underlying issue? cc @Yu-Zhewen