Closed by DvdBerg 3 years ago
My code can be found here in case anyone wants to reproduce the issue: DvdBerg@11799a6a082788255ec76ef3e8cb0c341cfff4b2
My first assumption would be that you have a problem with your ID widths. The interconnect determines where to route the response based on the MSBs of the ID field. It does so by prepending $clog2(NrMasters) bits to the master's ID field: depending on which master performs the access, a static index gets prepended to the ID. In the example you posted, the index of the master would be 0, since the MSB is set to 0. Can you double-check that this is indeed the case and make sure that you have enough ID bits to accommodate all masters? Unfortunately, the version of the interconnect in this repo is an old one that does not have the proper assertions to catch these cases.
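The ID extension described above can be sketched as follows (a Python model of the hardware behavior, not the actual RTL; function names are illustrative):

```python
from math import ceil, log2

# Sketch of the ID extension performed by the crossbar: the index of the
# slave port a master is attached to is prepended above the master's own
# ID bits, so the master-side ID width must grow by clog2(NrMasters).

def clog2(n: int) -> int:
    """Number of bits needed to encode indices 0..n-1 (at least 1)."""
    return max(1, ceil(log2(n)))

def extended_id_width(master_id_width: int, nr_masters: int) -> int:
    """ID width required on the interconnect's master (DRAM-facing) side."""
    return master_id_width + clog2(nr_masters)

def extend_id(port_index: int, axi_id: int, master_id_width: int) -> int:
    """Prepend the port index above the original ID bits."""
    return (port_index << master_id_width) | axi_id
```

With the values that show up later in this thread (4-bit IDs, port index 2, AW_ID 0xC), `extend_id(2, 0xC, 4)` yields 0x2C, and `extended_id_width(4, 4)` yields 6.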
I have indeed incremented the number of masters (`NBSlave` inside `ariane_xilinx.sv`), after which the ID width is calculated as you describe: https://github.com/DvdBerg/cva6/blob/11799a6a082788255ec76ef3e8cb0c341cfff4b2/fpga/src/ariane_xilinx.sv#L161. But I'll try to double-check the extended ID values inside the interconnect to ensure they are different for the different masters.
In the meantime, I have also replaced the old interconnect with the latest version of the `axi_xbar` from the pulp-platform/axi project in an attempt to fix the issue. However, I run into a similar issue there, with both masters stalling once the first request from my test module is made (while everything works correctly with my test module removed). So it really seems to be an issue with my own module or configuration.
I'll post the results once I'm able to view the extended IDs inside the interconnect.
These are the waveforms at the AXI interface of the DRAM:
Originating from Ariane:
Originating from my test module:
As you can see, the IDs are correctly extended with different prefixes that equal the respective indices of the masters in the `slaves` array of `ariane_xilinx.sv`. So it seems something else is wrong instead.
Hello,
I am facing a similar issue on the B response channel of the old AXI crossbar (used in the cva6 repo) while connecting a master module to a slave port of the crossbar (let's say slave[2]). I have properly set the ID (aw_id) and traced it, and the DRAM responds to it correctly. The problem is that the B response from the DRAM (on the crossbar's master port) is not propagated back to the issuing master module (on the crossbar's slave port). Here are the captured signals (SLOT 10 is connected to the crossbar(master)/DRAM(slave) interface, and SLOT 9 to the MyModule(master)/crossbar(slave) ports):
It is worth mentioning that the data is correctly written into the DRAM. However, since my custom module (in which a Xilinx DMA IP manages the master port) never receives a response from the AXI crossbar, it reports an OVERFLOW state on its write transaction (I am not sure whether the OVERFLOW has this cause; it might also arise from the ILA buffers rather than the AXI crossbar):
And, after the next write transaction, the AXI crossbar freezes!
I hope I am just missing something here. I look forward to hearing from you. Thank you.
Hello Again,
I also tried to capture the Ariane RISC-V master port (slave[0] of the crossbar), and found that the unique AW_ID generated only by my custom module (sitting on slave[2] of the crossbar) is wrongly redirected to the Ariane core on the B response, as shown below (SLOT 8 is the ILA port connected to the slave[0] port of the crossbar). I am not sure why the response is redirected to the RISC-V core instead of to my custom IP, where the AW and W originated.
:(
With further investigation, I monitored the B_ID coming from the DRAM on the crossbar's master port. The AW_ID is correctly passed from slave[2] (my custom module) to master[0] (the DRAM): I changed the master's AW_ID to 0xC, and the crossbar extends it with two more bits encoding its origin (slave[2]), passing AW_ID = 0x2C on to master[0].
We expect the DRAM to reflect the same ID back as the B_ID for this write request, right? Here is the captured B response from the DRAM:
Apparently, it does not reflect 0x2C, but only the LSBs, 0xC! The width of B_ID is 6, which is correct (the slave-side ID width is 4, and the master-side ID width is AxiIdWidthMaster + $clog2(NBSlave) = 6 bits).
Did you re-generate the MIG to accommodate the wider ID field? I guess that because you added a master, the ID width changed, and this needs to be reflected by the MIG (and other generated IPs). Unfortunately, as things stand right now, there isn't a way to automatically adjust that :(
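The misrouting reported above can be reproduced with a small model (widths taken from this thread: 4 ID bits at the crossbar's slave ports plus 2 routing bits, i.e. 6 bits expected at the MIG; names are illustrative, not from the RTL):

```python
# Sketch of why a too-narrow ID port on the DRAM side misroutes the B response.

SLAVE_ID_WIDTH = 4   # ID width at the crossbar's slave ports (per the thread)
ROUTE_BITS = 2       # $clog2(NBSlave) routing bits prepended by the crossbar

def mig_b_id(aw_id: int, mig_id_width: int) -> int:
    """The MIG echoes only the low mig_id_width bits of AW_ID into B_ID."""
    return aw_id & ((1 << mig_id_width) - 1)

def b_response_target(b_id: int) -> int:
    """The crossbar routes B responses by the top ROUTE_BITS of the ID."""
    return (b_id >> SLAVE_ID_WIDTH) & ((1 << ROUTE_BITS) - 1)
```

With the MIG still configured for 4-bit IDs, `mig_b_id(0x2C, 4)` returns 0xC and `b_response_target(0xC)` is 0, so the response goes to slave[0] (Ariane) instead of slave[2]; after regenerating the MIG with a 6-bit ID, `b_response_target(mig_b_id(0x2C, 6))` is 2 as expected.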
Dear Florian
Many thanks for your precise solution. The problem was exactly what you mentioned. It is solved :) Here is the B_ID reflected by the DRAM on the crossbar:
And here is the correctly redirected AXI ID to my custom module (on DMA AXI master port).
:)
Awesome!
Your solution also solved it for me, thank you! For anyone who stumbles on the same problem in the future, you'll need to modify and regenerate the following IPs (in the `fpga/xilinx` folder):

- `xlnx_mig_7_ddr3`: change the `C0_S_AXI_ID_WIDTH` in the `mig_*.prj` file(s).
- `xlnx_axi_clock_converter`: change the `CONFIG.ID_WIDTH` in the `tcl/run.tcl` script.
- `xlnx_protocol_checker` (if the protocol checker is enabled with the `PROTOCOL_CHECKER` define): change the `CONFIG.ID_WIDTH` in the `tcl/run.tcl` script.
- `xlnx_axi_dwidth_converter`: change the `CONFIG.SI_ID_WIDTH` in the `tcl/run.tcl` script. This IP doesn't need to be changed for its current use cases, but it might be needed in the future.

Maybe it would be a good idea to add a comment near the `NBSlave` value in `fpga/src/ariane_xilinx.sv` mentioning that you'll need to modify and regenerate these IPs if you change that value? In hindsight this is very obvious, but it wasn't beforehand: the fact that it's parameterized makes it seem as if no additional changes are required.
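As a sketch, the parameter edits listed above could be automated with a small script. The file formats are assumed here (Tcl `CONFIG.X {N}` pairs and XML-style `<C0_S_AXI_ID_WIDTH>N</...>` tags); verify them against your actual generated files before using anything like this:

```python
import re

def patch_tcl_param(text: str, param: str, new_width: int) -> str:
    """Rewrite e.g. 'CONFIG.ID_WIDTH {4}' to 'CONFIG.ID_WIDTH {6}' in a run.tcl snippet."""
    pattern = rf"({re.escape(param)}\s+{{)\d+(}})"
    return re.sub(pattern, rf"\g<1>{new_width}\g<2>", text)

def patch_prj_param(text: str, tag: str, new_width: int) -> str:
    """Rewrite e.g. '<C0_S_AXI_ID_WIDTH>4</C0_S_AXI_ID_WIDTH>' in a mig_*.prj snippet."""
    pattern = rf"(<{re.escape(tag)}>)\d+(</{re.escape(tag)}>)"
    return re.sub(pattern, rf"\g<1>{new_width}\g<2>", text)
```

After patching, the IPs still need to be regenerated through Vivado; the script only keeps the width parameters consistent across the config files.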
Agree, please feel free to open a PR! That would be more than welcome :-)
Hello,
For the sake of achieving direct memory access, I have created a small test module that reads a single memory address (`0xBC000000`) every 100 ms. This module is connected as a master on the interconnect in the same way that the Ariane core is connected (using a `req_t` and `resp_t` connected to an `axi_master_connect`).

When trying to read a DRAM address, a response never arrives (`r_valid` never becomes high). I assumed the problem was in my test module, but when I change the address to one inside the bootrom instead of the DRAM, the response arrives correctly. To further inspect the issue, I have added debug probes to the AXI signals of both my own module and the DRAM. This results in the following waveforms:

Request in the test module:
Request to the DRAM:
Response from the DRAM:
So the request arrives correctly at the DRAM, and the response is correctly returned to the interconnect. However, this result never propagates back to the test module.