Closed — daveshah1 closed this issue 4 years ago
Actually, I think this might be due to different device variants rather than Vivado versions (possibly pcie4c_uscale_plus is for the HBM devices?)
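If that hypothesis is right, the IP name could be selected from the FPGA part. A minimal sketch, assuming (this is a guess, not LitePCIe's actual code) that the HBM-enabled Virtex UltraScale+ parts such as the xcvu33p/xcvu35p/xcvu37p (as on the SQRL FK33) expose PCIE4C, while other parts such as the xcvu9p on the XCU1525 expose PCIE4:

```python
# Hypothetical sketch: pick the UltraScale+ PCIe hard-IP primitive from the
# FPGA part name. HBM_PARTS is an assumption about which parts carry PCIE4C.
HBM_PARTS = ("xcvu33p", "xcvu35p", "xcvu37p")

def pcie_ip_name(part: str) -> str:
    part = part.lower()
    if part.startswith(HBM_PARTS):  # str.startswith accepts a tuple of prefixes
        return "pcie4c_uscale_plus"
    return "pcie4_uscale_plus"
```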
Thanks, I indeed also saw this while trying to build the xcu1525 target yesterday. Just for info: for now the UltraScale+ port is a bit hacky and only the MMAP has been tested (I adapted it very quickly to do some tests on the FK33 to ease debugging), but we are working on this; DMA should be tested soon, and the speed increased to Gen3 X4 or even X8.
Excellent! Looking forward to trying it out.
With https://github.com/enjoy-digital/litepcie/commit/15868f071bb0ba58f87f6abb18a07018bdaed2e7, it should now build correctly.
@daveshah1: just for info, the Gen3 X4/X8/X16 work has been merged and tested on the SQRL FK33/BCU1525. There are still some bottlenecks to identify at X8/X16 speeds, since we don't get the maximum expected throughput (but that's probably down to the machine or software), and some false paths to apply on the clocks to speed up P&R, but the initial support is there and should already be usable at Gen3 X4 and below. To build the Gen3 X4 variant of the xcu1525 example:

`./xcu1525.py --speed=gen3 --nlanes=4 --build --driver --load`
Using Vivado 2019.2, it seems like the PCIe block name has changed from pcie4c_uscale_plus to pcie4_uscale_plus, so the build fails during IP generation until this is manually changed (still waiting for the build to finish, so I haven't tested functionality yet).