While working on a target using the Zynq7000 CPU I stumbled upon a problem with decoding the address space / selecting the suitable bus to perform operations on CSRs.
I’m trying to add a single non-LiteX peripheral to a Zynq7000 SoC, behind an AXI2Wishbone bridge with base address 0x43c0_0000. The peripheral is connected via a Wishbone bus that leads to the core’s decoder, which delegates control further. It also provides two AXI slaves, which are also connected in the target, but they’re not the focus of the problem.
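For context, the bridge is set up the usual LiteX way (a hedged sketch from memory; the exact PS7 hookup may differ depending on the LiteX version):

from litex.soc.interconnect import axi, wishbone

# PS7 GP0 master -> Wishbone, with the SoC bus window at 0x43c0_0000.
wb_gp0 = wishbone.Interface()
self.submodules += axi.AXI2Wishbone(
    axi          = self.cpu.add_axi_gp_master(),
    wishbone     = wb_gp0,
    base_address = 0x43c0_0000)
self.bus.add_master(master=wb_gp0)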
From what I’ve been able to trace back, some changes were made regarding the csr_decode property of Zynq7000 in commit f46d1c1. I do understand it was introduced so that addresses can be decoded correctly while using the bridge; however, disabling the decoding SoCRegion in Zynq7000 seems to cause an issue where the csr_bridge that LiteX places (while finalizing the SoC) is always selected, and is therefore always granted access to the data channel (i.e. rdata for Zynq7000). The generated decoder looks like this:
always @(*) begin
    slave_sel <= 2'd0;
    slave_sel[0] <= (shared_adr[29:15] == 1'd0);
    slave_sel[1] <= 1'd1;
end
slave_sel is then assigned to slave_sel_r. wb_control_dat_r is assigned to my core’s dat_o, and basesoc_wishbone_dat_r comes from the various data sources generated by the LiteX infrastructure. Since slave_sel[1] is constantly 1, the base SoC’s read data is always OR’ed with whatever my core returns.
When simply enabling the default decoding of the Zynq7000 CPU in the target:
self.cpu.csr_decode = True
The generated Verilog changes to:
always @(*) begin
    slave_sel <= 2'd0;
    slave_sel[0] <= (shared_adr[29:15] == 1'd0);
    slave_sel[1] <= (shared_adr[29:14] == 15'd17344);
end
(the rest stays the same)
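As a sanity check (my own arithmetic, not from the generated sources): shared_adr is a word address, so bits [29:14] correspond to byte-address bits [31:16], and the constant matches the bridge base:

>>> hex(17344)
'0x43c0'
>>> 0x43c00000 >> 16
17344

So slave_sel[1] now only fires for accesses inside the 0x43c0_xxxx window, as expected.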
In this case, the data passed by my core no longer gets OR’ed with the data passed from the base SoC.
Now, the way I constructed my target: I placed the bridge at 0x43c0_0000 and my core at a SoCRegion with origin 0x0; this way its address is properly decoded even with Zynq’s csr_decode = True.
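Concretely, the placement looks roughly like this (a hedged sketch; "my_core", the Wishbone interface, and the region size are placeholders for my actual setup):

from litex.soc.integration.soc import SoCRegion
from litex.soc.interconnect import wishbone

# The peripheral's Wishbone control bus, mapped at origin 0x0 of the SoC bus
# (which itself sits behind the 0x43c0_0000 bridge on the PS side).
wb_control = wishbone.Interface()
self.bus.add_slave("my_core", wb_control,
    SoCRegion(origin=0x0000_0000, size=0x1_0000, cached=False))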
I don’t know exactly why changing the decoding in Zynq impacts the selection this much; it looks like the decoder simply got optimised this way.