aws / aws-fpga

Official repository of the AWS EC2 FPGA Hardware and Software Development Kit

Debug embedded microblaze using XVC JTAG in AWS FPGA shell #641

Open augierg opened 7 months ago

augierg commented 7 months ago

Is there any undocumented flow for embedded FW development using a MicroBlaze inside the CL, with the AWS-FPGA HDK?

In my on-premise environment, using a U200 card and following the instructions from aws-fpga-f1-u200/Virtual_JTAG_XVC.md, I am able to

  1. launch the XVC PCIe driver on the host with the U200 card
    
    Description:
    Xilinx xvc_pcie v2018.3
    Build date : Apr 25 2024-12:18:59
    Copyright 1986-2018 Xilinx, Inc. All Rights Reserved.

    INFO: XVC PCIe Driver character file - /dev/xil_xvc/cfg_ioc0
    INFO: XVC PCIe Driver configured to communicate with Debug Bridge IP in AXI mode (PCIe BAR space).
    INFO: PCIe BAR index=0x0002 and PCIe BAR offset=0x0000
    INFO: XVC PCIE Driver Loopback test successful.

    INFO: xvc_pcie application started
    INFO: Use Ctrl-C to exit xvc_pcie application
    INFO: To connect to this xvc_pcie instance use url: TCP:fpga:10201

    INFO: xvcserver accepted connection from client 192.168.8.104:43220


2. then connect to the MDM over the XVC virtual cable and control the MicroBlaze from the XSDB console

xsdb% connect -xvc-url tcp:fpga:10201
tcfchan#1
xsdb% targets
  1  debug_bridge
  2  Legacy Debug Hub
  3  Legacy Debug Hub
  4  MicroBlaze Debug Module at USER1.2.2
  5  MicroBlaze #0 (Running)
xsdb% jtag servers
  digilent-ftdi cables 0
  xilinx-ftdi cables 0
  digilent-djtg cables 0
  bscan-jtag cables 0
  xilinx-xvc:fpga:10201 cables 1
xsdb% jtag targets
  1  Xilinx Virtual Cable fpga:10201
  2  debug_bridge (idcode 0a003093 irlen 6 fpga)
  3  bscan-switch (idcode 04900102 irlen 1 fpga)
  4  debug-hub (idcode 04900220 irlen 1 fpga)
  5  bscan-switch (idcode 04900102 irlen 1 fpga)
  6  debug-hub (idcode 04900220 irlen 1 fpga)
  7  mdm (idcode 04900500 irlen 1 fpga)
xsdb% targets
  1  debug_bridge
  2  Legacy Debug Hub
  3  Legacy Debug Hub
  4  MicroBlaze Debug Module at USER1.2.2
  5  MicroBlaze #0 (Running)
xsdb% target 5
xsdb% targets
  1  debug_bridge
  2  Legacy Debug Hub
  3  Legacy Debug Hub
  4  MicroBlaze Debug Module at USER1.2.2
  5* MicroBlaze #0 (Running)
xsdb% rst
xsdb% Info: MicroBlaze #0 (target 5) Stopped at 0x0 (External debug request)
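
For reference, the working on-premise session above boils down to roughly the following XSDB (Tcl) sequence. This is a minimal sketch: the XVC URL is the one printed by xvc_pcie, and the final download/run steps (with a hypothetical firmware.elf) are the typical next steps for FW bring-up rather than part of the transcript above.

```tcl
# XSDB session against the on-premise U200 XVC server
connect -xvc-url tcp:fpga:10201   ;# URL reported by xvc_pcie on the host
targets                           ;# lists debug_bridge, MDM, MicroBlaze #0
target 5                          ;# select "MicroBlaze #0"
rst                               ;# reset the core; it stops at 0x0

# Typical next steps for FW bring-up (not shown in the transcript above);
# firmware.elf is a placeholder for the actual application ELF.
dow firmware.elf                  ;# download the ELF to the MicroBlaze
con                               ;# resume execution
```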


However, when porting the exact same CL to the AWS F1, using the instructions from [aws-fpga/Virtual_JTAG_XVC.md](https://github.com/aws/aws-fpga/blob/master/hdk/docs/Virtual_JTAG_XVC.md):

ubuntu@ip-172-31-41-160:~$ sudo /usr/local/bin/fpga-start-virtual-jtag -S 0
Starting Virtual JTAG XVC Server for FPGA slot id 0, listening to TCP port 10201.
Press CTRL-C to stop the service.
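
For completeness, the step between starting this server and the XSDB output below is the same XVC connect as in the on-premise flow, just pointed at the instance's public DNS (a sketch; the hostname and port are the ones appearing in the transcript below):

```tcl
# XSDB on the development machine: attach to the F1 instance's virtual JTAG server
connect -xvc-url tcp:ec2-52-34-30-133.us-west-2.compute.amazonaws.com:10201
jtag targets                      ;# enumerate what the debug bridge exposes
targets                           ;# list the debuggable targets
```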

The XVC connection fails to identify the valid targets, including the MDM and the MicroBlaze, as shown below:

xsdb% jtag servers
  digilent-ftdi cables 0
  xilinx-ftdi cables 0
  digilent-djtg cables 0
  bscan-jtag cables 0
  xilinx-xvc:ec2-52-34-30-133.us-west-2.compute.amazonaws.com:10201 cables 1
xsdb% jtag targets
  8  Xilinx Virtual Cable ec2-52-34-30-133.us-west-2.compute.amazonaws.com:10201
  9  debug_bridge (idcode 0a003093 irlen 6 fpga)
 10  bscan-switch (idcode 04900102 irlen 1 fpga)
 11  unknown (idcode 09200204 irlen 1 fpga)
 12  unknown (idcode 09200440 irlen 1 fpga)
xsdb% targets
  1  debug_bridge
  2  09200204
  3  09200440
xsdb%


I even tried using the Xilinx XVC PCIe driver, as in the on-premise U200 flow, but that leads to errors indicating incompatibility with the PCIe BAR space:

ubuntu@ip-172-31-41-160:~$ sudo /home/ubuntu/xvc/xvcserver/bin/xvc_pcie -s TCP::10201

Description:
Xilinx xvc_pcie v2018.3
Build date : Apr 25 2024-12:18:59
Copyright 1986-2018 Xilinx, Inc. All Rights Reserved.

INFO: XVC PCIe Driver character file - /dev/xil_xvc/cfg_ioc0
INFO: XVC PCIe Driver configured to communicate with Debug Bridge IP in AXI mode (PCIe BAR space).
INFO: PCIe BAR index=0x0002 and PCIe BAR offset=0x0000
Loopback test length: 32, pattern abcdefgHIJKLMOP
FAILURE Byte 0 did not match (0x61 != 0x01 mask 0xFF), pattern abcdefgHIJKLMOP
ERROR: XVC PCIE Driver Loopback test failed. Error: Success
Exiting xvc_pcie application.



Help with the suggested flow for debugging embedded FW using the XVC virtual JTAG cable on the F1 instance would be appreciated at this point.

czfpga commented 6 months ago

Hi,

Thank you for reaching out. We're currently investigating this issue with AMD. We'll keep you updated.

s03311251 commented 6 months ago

I hope this is the right place to post, as I am experiencing a similar problem:

Right now I'm working on adding a MicroBlaze core with the Vivado IP Integrator flow; however, I can't connect to the MicroBlaze Debug Module (MDM) when I deploy the design on an EC2 F1 instance.

My design is as follows (block design screenshot "cl"):
design files: aws_mb_example.zip

I have followed https://github.com/aws/aws-fpga/issues/507, which mentions using "EXTERNAL HIDDEN" for the BSCAN setting in the MDM (screenshot "MDM settings").

However, it seems to fail to identify the MDM; when I connect with XSCT, I get the following (screenshot "2024-05-16 12_33_24-Window"):
Top left is the SSH session on the EC2 F1 instance, which was running hw_server and sudo fpga-start-virtual-jtag -P 10201 -S 0
Bottom left is the Vivado tcl shell, connected to the Virtual JTAG on F1
Bottom right is the XSCT, also connected to the Virtual JTAG on F1
The commands are run in this order: SSH > Vivado > XSCT
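
For reference, the client-side connections described above would look roughly like the following. This is a sketch: the placeholder hostname and the assumption that XSCT connects directly to the XVC port are inferred from the description, not taken from the screenshot.

```tcl
# Vivado Tcl shell: attach the hardware manager to the hw_server / XVC server on the F1 instance
open_hw_manager
connect_hw_server -url <f1-instance-public-dns>:3121      ;# 3121 is hw_server's default port
open_hw_target -xvc_url <f1-instance-public-dns>:10201    ;# port from fpga-start-virtual-jtag -P 10201

# XSCT: connect through the same virtual cable
connect -xvc-url tcp:<f1-instance-public-dns>:10201
targets
```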

The targets command in XSCT gives 8-digit numbers only, but it should supposedly give something like "MicroBlaze Debug Module".

May I ask if there is any problem with my MicroBlaze design? Is there an alternative method to connect to the MDM, or is there an example MicroBlaze design that works with AWS EC2?

Thank you.

jameslxilinx commented 6 months ago

I imagine the first use case is using the HDK flow and the second is the HLx flow. In both cases we need to make sure the debug_hub and MDM are connected properly.

Doing a first pass with 2021.2 using the HLx flow with an MDM/MicroBlaze similar to the test case, I noticed that the connections to the MDM were not correct. Can the post-opt checkpoint of both designs be opened and the following connections verified (first snapshot)? Please either post the DCPs or snapshots of the post-opt connections like the ones below (note that the no-connects make sense for the example below, and the debug icon is for an ILA test I was doing).

image

Below, make sure the debug bridge connections go to the shell (second snapshot).

image
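
If it helps, one way to check those connections without the GUI is to open the post-opt checkpoint in the Vivado Tcl console and query the relevant cells directly. This is only a sketch: the checkpoint path and the name patterns are assumptions and will differ per design.

```tcl
# Vivado Tcl: open the post-opt checkpoint and locate the debug plumbing
open_checkpoint ./build/checkpoints/SH_CL.post_opt.dcp     ;# example path only
# The name patterns below are guesses; adjust them to the actual hierarchy.
set mdm_cells    [get_cells -hierarchical -filter {NAME =~ "*mdm*"}]
set bridge_cells [get_cells -hierarchical -filter {NAME =~ "*SH_DEBUG_BRIDGE*"}]
puts "MDM instances:          $mdm_cells"
puts "Debug bridge instances: $bridge_cells"
# Print the nets on the MDM pins so the BSCAN connectivity to the shell can be traced.
puts [join [get_nets -of_objects [get_pins -of_objects $mdm_cells]] "\n"]
```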

augierg commented 6 months ago

@jameslxilinx

In response to your inquiry, I confirm that for my case (the first one), the mdm1 in the CL is connected to the boundary-scan MUX located in the STATIC_SH logic, per the attached snapshot:

image

jameslxilinx commented 6 months ago

@augierg, the above looks like the U200 shell. I would expect the name to say hidden, or something like the path below. Can you confirm this is the F1 design?

static_sh/SH_DEBUG_BRIDGE/inst/bsip/inst/USE_SOFTBSCAN.U_BSCAN_TAP
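
A quick way to confirm whether that TAP is present, once the implemented design or checkpoint is open in Vivado, is a hierarchical cell query along these lines (a sketch):

```tcl
# Vivado Tcl: look for the F1 shell's soft BSCAN TAP in the open design
get_cells -hierarchical -filter {NAME =~ "*USE_SOFTBSCAN.U_BSCAN_TAP*"}
```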

augierg commented 6 months ago

@jameslxilinx you are correct, this was generated from our on-prem aws-f1-u200 flow. I'm going to provide you with the same for the actual F1 shell.

augierg commented 6 months ago

@jameslxilinx I was able to take a screen capture. It wasn't really obvious, as everything in the hierarchy shows as hidden until you descend into the right subtree, where the CL instance is located:

image

jameslxilinx commented 6 months ago

While I look at the diagram: it looks like the lab tools are 2018.3. What version of the developer kit (Developer AMI) and which version of the Vivado tools are being used?

augierg commented 6 months ago

> While I look at the diagram: it looks like the lab tools are 2018.3. What version of the developer kit (Developer AMI) and which version of the Vivado tools are being used?

2021.2, the most recent supported version in current master, and the head of the master branch of the repo for the HDK.

The shell version downloaded in hdk/common is shell_v04261818

augierg commented 5 months ago

@jameslxilinx: is there anything else I can provide to help with this?

jameslxilinx commented 5 months ago

Working on reproducing the issue in F1.

augierg commented 2 months ago

@jameslxilinx: any luck reproducing this issue on your end?