UCLA-VAST / tapa

TAPA is a dataflow HLS framework that features fast compilation and an expressive programming model, and generates high-frequency FPGA accelerators.
https://tapa.rtfd.io
MIT License

Implementation failed with vadd example in Vitis 2022.1 and latest platform #118

Open ueqri opened 1 year ago

ueqri commented 1 year ago

Environment

Vitis 2022.1 with the latest U250 platform (xilinx_u250_gen3x16_xdma_4_1_202210_1).

Situation

When generating the bitstream, we got the following error at the stage of implementing the dynamic region.

[14:07:56] Run vpl: Step synth: Completed
[14:07:56] Run vpl: Step impl: Started
[14:27:55] Run vpl: Step impl: Failed
[14:27:58] Run vpl: FINISHED. Run Status: impl ERROR

===>The following messages were generated while processing /path/to/tapa/apps/vadd/run/vitis_run_hw/VecAdd_xilinx_u250_gen3x16_xdma_4_1_202210_1.temp/link/vivado/vpl/prj/prj.runs/impl_1 :
ERROR: [VPL 101-2] ERROR: [Vivado 12-1433] Expecting a non-empty list of cells to be added to the pblock.  Please verify the correctness of the <cells> argument.
ERROR: [VPL 101-3] sourcing script /path/to/tapa/apps/vadd/run/vitis_run_hw/VecAdd_xilinx_u250_gen3x16_xdma_4_1_202210_1.temp/link/vivado/vpl/scripts/impl_1/_full_opt_pre.tcl failed
ERROR: [VPL 60-773] In '/path/to/tapa/apps/vadd/run/vitis_run_hw/VecAdd_xilinx_u250_gen3x16_xdma_4_1_202210_1.temp/link/vivado/vpl/runme.log', caught Tcl error:  problem implementing dynamic region, impl_1: opt_design ERROR, please look at the run log file '/path/to/tapa/apps/vadd/run/vitis_run_hw/VecAdd_xilinx_u250_gen3x16_xdma_4_1_202210_1.temp/link/vivado/vpl/prj/prj.runs/impl_1/runme.log' for more information
WARNING: [VPL 60-732] Link warning: No monitor points found for BD automation.
ERROR: [VPL 60-704] Integration error, problem implementing dynamic region, impl_1: opt_design ERROR, please look at the run log file '/path/to/tapa/apps/vadd/run/vitis_run_hw/VecAdd_xilinx_u250_gen3x16_xdma_4_1_202210_1.temp/link/vivado/vpl/prj/prj.runs/impl_1/runme.log' for more information
ERROR: [VPL 60-1328] Vpl run 'vpl' failed
ERROR: [VPL 60-806] Failed to finish platform linker
INFO: [v++ 60-1442] [14:28:08] Run run_link: Step vpl: Failed
Time (s): cpu = 00:00:58 ; elapsed = 00:58:07 . Memory (MB): peak = 2317.180 ; gain = 0.000 ; free physical = 121651 ; free virtual = 328819
ERROR: [v++ 60-661] v++ link run 'run_link' failed
ERROR: [v++ 60-626] Kernel link failed to complete
ERROR: [v++ 60-703] Failed to finish linking
INFO: [v++ 60-1653] Closing dispatch client.

And here is the detailed vpl runme.log. For the full log of the impl_1 part, please see here.

WARNING: [Vivado 12-180] No cells matched 'pfm_top_i/dynamic_region/.*/inst/.*/control_s_axi_U_slr_0'.
WARNING: [Vivado 12-180] No cells matched 'pfm_top_i/dynamic_region/.*/inst/.*/tapa_state.*'.
ERROR: [VPL_TCL 101-2] ERROR: [Vivado 12-1433] Expecting a non-empty list of cells to be added to the pblock.  Please verify the correctness of the <cells> argument.
ERROR: [VPL_TCL 101-3] sourcing script /path/to/tapa/apps/vadd/run/vitis_run_hw/VecAdd_xilinx_u250_gen3x16_xdma_4_1_202210_1.temp/link/vivado/vpl/scripts/impl_1/_full_opt_pre.tcl failed
INFO: [Common 17-206] Exiting Vivado at Wed Sep 21 14:26:38 2022...
[Wed Sep 21 14:26:39 2022] impl_1 finished
WARNING: [Vivado 12-8222] Failed run(s) : 'impl_1'
wait_on_runs: Time (s): cpu = 00:00:11 ; elapsed = 00:18:40 . Memory (MB): peak = 6322.973 ; gain = 0.000 ; free physical = 109716 ; free virtual = 316835
INFO: [OCL_UTIL] internal step: log_generated_reports for implementation '/path/to/tapa/apps/vadd/run/vitis_run_hw/VecAdd_xilinx_u250_gen3x16_xdma_4_1_202210_1.temp/link/vivado/vpl/output/generated_reports.log'
INFO: [OCL_UTIL] internal step: problem implementing dynamic region, impl_1: opt_design ERROR
INFO: [OCL_UTIL] status: fail (opt_design ERROR)
INFO: [OCL_UTIL] log: /path/to/tapa/apps/vadd/run/vitis_run_hw/VecAdd_xilinx_u250_gen3x16_xdma_4_1_202210_1.temp/link/vivado/vpl/prj/prj.runs/impl_1/runme.log
ERROR: caught error: problem implementing dynamic region, impl_1: opt_design ERROR, please look at the run log file '/path/to/tapa/apps/vadd/run/vitis_run_hw/VecAdd_xilinx_u250_gen3x16_xdma_4_1_202210_1.temp/link/vivado/vpl/prj/prj.runs/impl_1/runme.log' for more information
[14:27:55] Run vpl: Step impl: Failed
INFO: [OCL_UTIL] current step: vpl.impl failed. To rerun the existing project please use --from_step vpl.impl
problem implementing dynamic region, impl_1: opt_design ERROR, please look at the run log file '/path/to/tapa/apps/vadd/run/vitis_run_hw/VecAdd_xilinx_u250_gen3x16_xdma_4_1_202210_1.temp/link/vivado/vpl/prj/prj.runs/impl_1/runme.log' for more information
INFO: [Common 17-206] Exiting Vivado at Wed Sep 21 14:27:55 2022...
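
The two "No cells matched" warnings right before the error point at the failing step: the floorplan script queries cells with a hierarchical regexp, gets back an empty list, and add_cells_to_pblock then rejects it. A minimal Tcl sketch of that shape, using one of the patterns from the warnings above (the pblock name is hypothetical):

set cells [get_cells -hierarchical -regexp \
    {pfm_top_i/dynamic_region/.*/inst/.*/control_s_axi_U_slr_0}]
# On the 2022.x platforms this pattern matches nothing, so $cells is empty
# and the next call fails with "ERROR: [Vivado 12-1433] Expecting a
# non-empty list of cells to be added to the pblock."
add_cells_to_pblock [get_pblocks pblock_dynamic_SLR0] $cells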

Investigation

After comparing the impl configurations of a previous project under Vitis 2020.2 and 2022.2, I found that the bug above might result from the renaming of pfm_top_i/dynamic_region to level0_i/ulp in the latest tools and platforms. (The diff file is here.)
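
Concretely, the rename means each generated cell query would need its hierarchy prefix swapped, roughly like this (patterns illustrative, based on the warnings in the log):

# Pre-2022.x platforms: kernel cells live under pfm_top_i/dynamic_region
set cells [get_cells -hierarchical -regexp \
    {pfm_top_i/dynamic_region/.*/inst/.*/tapa_state.*}]
# 2022.x gen3x16 platforms: the same region is now level0_i/ulp
set cells [get_cells -hierarchical -regexp \
    {level0_i/ulp/.*/inst/.*/tapa_state.*}]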

But even after simply fixing the floorplanning Tcl script (i.e., changing that scope name as sketched above), it still doesn't work. I guess there must be more behavior changes in the latest Vivado.
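
As a debugging aid (my own sketch, not part of TAPA's generated script), one can wrap the pblock assignment so an empty match degrades to a warning instead of killing opt_design, which makes every pattern that no longer matches the new hierarchy show up in a single run:

proc safe_add_to_pblock {pblock pattern} {
    # -quiet suppresses Vivado's own "No cells matched" warning;
    # we report the miss ourselves and keep going.
    set cells [get_cells -hierarchical -regexp -quiet $pattern]
    if {[llength $cells] == 0} {
        puts "WARNING: pattern '$pattern' matched no cells; skipping $pblock"
        return
    }
    add_cells_to_pblock [get_pblocks $pblock] $cells
}

# Example with one of the failing patterns from the log:
safe_add_to_pblock pblock_dynamic_SLR0 \
    {pfm_top_i/dynamic_region/.*/inst/.*/control_s_axi_U_slr_0}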

I'm not very experienced with the TAPA codebase or its floorplanning workflow, but if you can give me more hints, I'd be really glad to help solve the problem! 😃

Licheng-Guo commented 1 year ago

Hello, the current AutoBridge only supports an older version of the U250 platform (xilinx_u250_xdma_201830_2). Can you try that version? Thanks!

ueqri commented 1 year ago

Thank you for your reply!

We've tested the old platform before, and it works perfectly with TAPA! But we have to stick with the newest platforms (both U250 and U280) for certain reasons.

So I was wondering if there is any plan (or timeline) to support the latest platforms (xilinx_u250_gen3x16_xdma_4_1_202210_1 and xilinx_u280_gen3x16_xdma_1_202211_1). Thank you!

Licheng-Guo commented 1 year ago

Sorry, we currently don't have the bandwidth to keep up with the latest platforms. If the only issue is that you don't have a physical board running the old platform, maybe we could share our boards.

ueqri commented 1 year ago

Sure, thank you! We will try to find a workaround to deal with that issue.