vaniaprkl opened 10 months ago
Hi @vaniaprkl
I have just built successfully using the command you provided with both vivado 22.1 and 23.1
A response to a similar post suggests it could be that you haven't installed the UltraScale+ MPSoC support: https://support.xilinx.com/s/question/0D52E00006xpnUeSAI/issue-in-adding-clocking-wizard-ip-in-design?language=en_US
This should be a checkbox option when you install Vivado.
Hi wnew, Thanks for running the build test and responding quickly.
I reinstalled Vivado ML Standard 2022.1 with all the checkboxes ticked and it still failed. Screenshot of the install options attached. Going to test out 2023.1 next.
Hi @wnew, 2023.1 works!! So it must be something to do with the 2022.1 environment.
I then loaded the .mcs that the build created via Hardware Manager ("Add Configuration Memory Device") and put the machine through a cold reboot cycle.
The hardware device 04:00.0 came back after the cold reboot as:
```
04:00.0 Network controller: Xilinx Corporation Device 903f
	Subsystem: Xilinx Corporation Device 0007
	Physical Slot: 3
	Flags: fast devsel, IRQ 5, NUMA node 0
	Memory at c5800000 (64-bit, non-prefetchable) [disabled] [size=256K]
	Memory at c5400000 (64-bit, non-prefetchable) [disabled] [size=4M]
	Capabilities: [40] Power Management version 3
	Capabilities: [60] MSI-X: Enable- Count=10 Masked-
	Capabilities: [70] Express Endpoint, MSI 00
	Capabilities: [100] Advanced Error Reporting
	Capabilities: [1c0] Secondary PCI Express
	Capabilities: [1f0] Virtual Channel
```
Does this look right? I'm hoping to get this working with the DPDK drivers, but the bringup guide there (https://github.com/Xilinx/open-nic-dpdk, Section 7) mentions that the device needs to show up as a memory controller.
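The `[disabled]` flags on both BARs mean memory decoding is not yet turned on in the device's PCI COMMAND register; normally a driver enables it when it binds. As a quick sanity check, here is a hedged shell sketch that scans `lspci -v`-style output for disabled BARs. The BDF and sample text are copied from the output above; on a real host you would capture live output instead, and the commented `setpci` line is one conventional way to enable memory space manually (use with care):

```shell
# BDF of the OpenNIC endpoint, taken from the lspci output above;
# adjust for your own host.
BDF="04:00.0"

# On the real host you would capture live output with:
#   LSPCI_OUT=$(lspci -vs "$BDF")
# Here we reuse the output pasted above as sample data.
LSPCI_OUT='04:00.0 Network controller: Xilinx Corporation Device 903f
	Memory at c5800000 (64-bit, non-prefetchable) [disabled] [size=256K]
	Memory at c5400000 (64-bit, non-prefetchable) [disabled] [size=4M]'

if printf '%s\n' "$LSPCI_OUT" | grep -q '\[disabled\]'; then
    echo "BARs disabled: memory decoding is off for $BDF"
    # Memory-space enable is bit 1 of the COMMAND register. A driver normally
    # sets it at bind time; setpci can set it manually if you need to poke the
    # BARs before binding:
    #   sudo setpci -s "$BDF" COMMAND=0x02
else
    echo "BARs enabled for $BDF"
fi
```

Whether the class shows as "Network controller" or "Memory controller" depends on the class code baked into the PCIe IP configuration, so the `[disabled]` state and the class code are separate things to check.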
Good news! It is strange to me that it would only work in 2023.1, but maybe someone else can provide insight into that.
The one issue you might have with a 2023.1 build is the change to the register mapping between the QDMA IP versions.
See @cneely-amd's answer to my question here: https://github.com/Xilinx/open-nic-shell/discussions/46
If you are interested in assisting with a port to 2023.1: I am currently trying to figure out the differences in the mapping, and we would also need to update the driver.
Re the DPDK drivers, I am only starting to look at those now.
Thanks @wnew for your feedback. I am trying to get OpenNIC running with 2022.1, since Xilinx recommended it in their response to you. I might decide to go 2023.1 only if they are not able to provide a fix... hoping they will.
Hi @vaniaprkl
Sorry for the delayed reply. As an experiment, I ran the build for the U55C version through Vivado this afternoon using Vivado 2021.2.1, with one modification to src/system_config/vivado_ip/clk_wiz_50Mhz.tcl.
For the modification, I edited the Tcl to remove `-version 6.0`, like the following:
```tcl
set clk_wiz_50Mhz clk_wiz_50Mhz
#create_ip -name clk_wiz -vendor xilinx.com -library ip -version 6.0 -module_name $clk_wiz_50Mhz -dir ${ip_build_dir}
create_ip -name clk_wiz -vendor xilinx.com -library ip -module_name $clk_wiz_50Mhz -dir ${ip_build_dir}
set_property -dict {
    CONFIG.PRIMITIVE {Auto}
```
I have only done one run to test this, and it looks like it is working: my Vivado run has finished place_design and is on route_design as I write this reply, without any errors so far.
Best regards, --Chris
My post above used the following to generate the project:

```shell
vivado -mode batch -source build.tcl -tclargs -board au55c
```
Hi, I was trying to build OpenNIC targeting the U55C board using Vivado 2022.1 and am seeing this strange issue. I was hoping to get both ports going with the DPDK driver, so I gave the build command the `-num_phys_func 2 -num_cmac_port 2` options stated in the DPDK build suggestion. Am I sending in the wrong parameters for a 2-port 100G OpenNIC pipeline?
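For reference, the full two-port invocation would combine the au55c command quoted earlier in this thread with the two DPDK flags above. This is a sketch assembled from those pieces, not a verified recipe:

```shell
# Hypothetical full build command: the base invocation is the one quoted
# earlier in this thread; the last two flags are the DPDK two-port options.
BOARD=au55c
BUILD_CMD="vivado -mode batch -source build.tcl -tclargs -board ${BOARD} -num_phys_func 2 -num_cmac_port 2"
echo "$BUILD_CMD"
```

With two physical functions, each CMAC port gets its own PF, which is what the open-nic-dpdk bringup expects when binding both ports.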