Closed: GoogleCodeExporter closed this issue 9 years ago
Why would the timing analysis in the packer be different than the final timing
analysis on these edges?
Original comment by Jonathan...@gmail.com
on 6 Jul 2012 at 2:26
The packer requires that the netlist atoms have their criticalities assigned
before general packing starts. At this stage, the physical primitive that a
netlist atom resides in has not yet been determined. For example, if an FPGA
has both low-power slow-speed LUTs and high-power fast-speed LUTs, which timing
number do we pick for the LUTs in the netlist? Because of this ambiguity, I
chose to stick with the old T-VPack timing engine (an architecture-independent,
logical depth-based timing engine) until we can think of a better solution.
Post-packing, all netlist atoms have been mapped to physical primitives so this
information is available and used by the timing engine.
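The architecture-independent approach can be illustrated with a small sketch. This is a hypothetical reconstruction of a logical depth-based estimate in the spirit of T-VPack, not VPR's actual code: every netlist edge gets unit delay, so an atom's "arrival time" is simply its depth in the netlist DAG, and its criticality is that depth normalized by the maximum depth.

```python
# Hypothetical sketch of an architecture-independent, logical depth-based
# criticality estimate (T-VPack style): unit delay on every edge, so an
# atom's arrival time is just its depth in the netlist DAG.
from collections import defaultdict

def depth_criticalities(edges, nodes):
    """edges: list of (src, dst) pairs; returns {node: criticality in [0, 1]}."""
    fanin = defaultdict(list)
    for src, dst in edges:
        fanin[dst].append(src)

    depth = {}
    def visit(n):
        # Depth = 1 + deepest fanin depth; primary inputs get depth 1.
        if n not in depth:
            depth[n] = 1 + max((visit(p) for p in fanin[n]), default=0)
        return depth[n]

    for n in nodes:
        visit(n)
    max_depth = max(depth.values())
    # No physical delays needed: criticality is depth relative to the
    # deepest path in the netlist.
    return {n: depth[n] / max_depth for n in nodes}

crits = depth_criticalities([("a", "b"), ("b", "c"), ("a", "c")],
                            ["a", "b", "c"])
# "c" lies at the end of the deepest path, so it gets criticality 1.0
```

Note that this ranks atoms purely by logical depth; a full timing analyzer would derive criticality from slack against a clock constraint, which is exactly the information that is unavailable before primitives are chosen.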
There are several options for packing here. A simple heuristic is to
assign each netlist atom to a "best-fit" primitive in the architecture, ignore
interconnect delay, and then run timing analysis on the resulting netlist.
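The best-fit heuristic could look something like the sketch below. The primitive names, delay values, and function signatures are all made up for illustration; the point is that each atom optimistically takes the fastest compatible primitive's intrinsic delay, interconnect delay is dropped, and a longest-path traversal gives arrival times.

```python
# Hypothetical sketch of the "best-fit" heuristic: assign each netlist atom
# the fastest physical primitive it could occupy, ignore interconnect delay,
# and run a longest-path arrival-time analysis. Delays (ns) are invented.
from collections import defaultdict

# Assumed architecture: two LUT flavors with different intrinsic delays.
PRIMITIVE_DELAYS = {"lut_fast": 0.2, "lut_slow": 0.4}

def best_fit_delay(atom_type, compatible):
    # Optimistic choice: the fastest primitive this atom type can map to.
    return min(PRIMITIVE_DELAYS[p] for p in compatible[atom_type])

def arrival_times(edges, atom_types, compatible):
    """edges: (src, dst) pairs; atom_types: {atom: type};
    compatible: {type: [primitive names]}. Returns {atom: arrival time}."""
    fanin = defaultdict(list)
    for src, dst in edges:
        fanin[dst].append(src)

    arrival = {}
    def visit(n):
        if n not in arrival:
            # No interconnect delay term: only the primitive's own delay
            # plus the latest-arriving fanin.
            arrival[n] = best_fit_delay(atom_types[n], compatible) + \
                max((visit(p) for p in fanin[n]), default=0.0)
        return arrival[n]

    for n in atom_types:
        visit(n)
    return arrival
```

For a chain a -> b -> c of LUT atoms that can map to either flavor, each atom is charged the 0.2 ns fast-LUT delay, so the chain's endpoint arrives at 0.6 ns. The estimate is deliberately optimistic; interconnect and slow-primitive assignments can only make the true delay larger.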
Original comment by JasonKai...@gmail.com
on 6 Jul 2012 at 2:48
This got fixed when the packer timing graph got modified to support Michael's
multi-clock work. We now model the right timing graph structure in the netlist
by mapping atoms in the blif netlist to a "best-fit" architecture primitive.
Now that the timing graph structure matches what a user would expect, we need
to have a discussion about how to model the actual delays pre-packing. We may
also want to talk about unifying the packer timing graph with the VPR timing
graph.
Original comment by JasonKai...@gmail.com
on 14 Aug 2012 at 8:52
Original issue reported on code.google.com by
JasonKai...@gmail.com
on 5 Jul 2012 at 6:49