OVGN / OpenHBMC

Open-source high performance AXI4-based HyperRAM memory controller
Apache License 2.0

OpenHBMC #2

Closed LasseEriksson closed 2 years ago

LasseEriksson commented 3 years ago

Hi, I have downloaded your OpenHBMC IP.

I do not know what frequency and phase should be attached to clk_iserdes.

I am trying to run the HyperRAM at 100 MHz.

Best regards Lasse Eriksson

OVGN commented 3 years ago

Hello, Lasse!

I do not know what frequency and phase should be attached to clk_iserdes.

Concerning clocking, for 100 MHz operation you should have:

  * clk_hbmc_0: 100 MHz, 0° phase
  * clk_hbmc_90: 100 MHz, 90° phase
  * clk_iserdes: 300 MHz (3 x the memory clock)
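
For reference, here is a minimal clock-generation sketch matching those numbers (a hypothetical standalone MMCM instantiation, assuming a 100 MHz input clock; in a block design the Clocking Wizard would normally be configured to produce the same outputs):

```verilog
// Hypothetical sketch, not part of the OpenHBMC sources: generates the three
// controller clocks for 100 MHz HyperRAM operation (VCO = 100 MHz * 9 = 900 MHz).
module hbmc_clk_gen_sketch
(
    input  wire clk_100m_in,   // 100 MHz input clock (assumption)
    input  wire rst,
    output wire clk_hbmc_0,    // 100 MHz, 0 deg
    output wire clk_hbmc_90,   // 100 MHz, 90 deg
    output wire clk_iserdes,   // 300 MHz; buffering depends on the selected ISERDES clocking mode
    output wire locked
);

    wire clk_fb;
    wire clk_hbmc_0_unbuf;
    wire clk_hbmc_90_unbuf;

    MMCME2_BASE #(
        .CLKIN1_PERIOD    (10.0),
        .CLKFBOUT_MULT_F  (9.0),
        .DIVCLK_DIVIDE    (1),
        .CLKOUT0_DIVIDE_F (9.0),   // 900 MHz / 9 = 100 MHz
        .CLKOUT0_PHASE    (0.0),
        .CLKOUT1_DIVIDE   (9),     // 900 MHz / 9 = 100 MHz
        .CLKOUT1_PHASE    (90.0),
        .CLKOUT2_DIVIDE   (3)      // 900 MHz / 3 = 300 MHz
    )
    mmcm_inst
    (
        .CLKIN1   (clk_100m_in),
        .CLKFBIN  (clk_fb),
        .CLKFBOUT (clk_fb),
        .CLKOUT0  (clk_hbmc_0_unbuf),
        .CLKOUT1  (clk_hbmc_90_unbuf),
        .CLKOUT2  (clk_iserdes),
        .RST      (rst),
        .PWRDWN   (1'b0),
        .LOCKED   (locked)
    );

    BUFG bufg_0  (.I(clk_hbmc_0_unbuf),  .O(clk_hbmc_0));
    BUFG bufg_90 (.I(clk_hbmc_90_unbuf), .O(clk_hbmc_90));

endmodule
```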

I'm going to commit an example project very soon.

Feel free to ask questions, as IP documentation is not ready yet.

Regards, VGN

OVGN commented 3 years ago

Lasse, please update your IP core; I decided to revert some new features, as they are not tested enough yet.

LasseEriksson commented 3 years ago

Hi, thanks for the info.

I think you have done a great job with this IP.

Have you tested this IP with two cores in the same design?

I have a working design with two MicroBlazes and two IP cores from Synaptic Labs.

I have taken this design and put your IP in.

When I implement the design I get some weird errors: "Implementation Complete, Failed Nets", but with a green OK symbol.

It gives me a warning about one of the IP cores (openHBMC_Core), not the second one (openHBMC_Samp).

RTSTAT #1 Critical Warning 1 net(s) are partially routed. The problem bus(es) and/or net(s) are System_i/clk_wiz_1/inst/clk_out4. clk_out4 is 300 MHz going to iserdes_clk.

Best Regards Lasse

LasseEriksson commented 3 years ago

Hi, I have tried to open your IP core using Vivado's Edit IP flow.

All the FIFO IPs are locked (shown in red), and when I try to synthesize, it complains that it cannot find the FIFO IP cores.

Best Regards Lasse Eriksson

OVGN commented 3 years ago

I think you have done a great job with this IP.

Thanks a lot)

Have you tested this IP with two cores in the same design?

Yes, I have a custom board with a Spartan-7 (XC7S50) and two W956D8MBYA5I parts (200 MHz capable). I also have a working design with two memory controllers, a MicroBlaze, I/D caches, DMAs, etc.

I have taken this design and put your IP in. When I implement the design I get some weird errors: "Implementation Complete, Failed Nets", but with a green OK symbol. It gives me a warning about one of the IP cores (openHBMC_Core), not the second one (openHBMC_Samp). RTSTAT #1 Critical Warning 1 net(s) are partially routed. The problem bus(es) and/or net(s) are System_i/clk_wiz_1/inst/clk_out4. clk_out4 is 300 MHz going to iserdes_clk.

I think there are two possible reasons:

  1. Probably your clk_wiz is by default inferring a BUFG on clk_iserdes, which then drives the BUFIO inside the controller IP; this is a violation. You could set the "No buffer" option for the clk_iserdes clock and leave BUFGs for clk_hbmc_0 and clk_hbmc_90.
  2. Probably the error appears because of a clock region constraint violation. To achieve the highest performance I use BUFIO+BUFR to clock the ISERDES (see the sketch below). It looks like the I/Os of your two memory parts are not within a single clock region.
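
For illustration, here is a minimal sketch of the BUFIO+BUFR style of ISERDES clocking mentioned in point 2 (hypothetical signal names and divide factor, not the actual controller source):

```verilog
// Hypothetical sketch, not the OpenHBMC source: 7-series BUFIO + BUFR clocking
// for an ISERDES. clk_iserdes must be placeable in the same clock region as the
// memory I/Os for this scheme to route.
module iserdes_clk_sketch
(
    input  wire clk_iserdes,   // fast serial clock, e.g. 300 MHz for a 100 MHz HyperRAM
    output wire clk_io,        // BUFIO output: clocks the ISERDES CLK pin
    output wire clk_div        // BUFR output: clocks the ISERDES CLKDIV pin and fabric logic
);

    BUFIO bufio_inst
    (
        .I (clk_iserdes),
        .O (clk_io)
    );

    BUFR #(
        .BUFR_DIVIDE ("3")     // the serialization factor here is an assumption
    )
    bufr_inst
    (
        .I   (clk_iserdes),
        .O   (clk_div),
        .CE  (1'b1),
        .CLR (1'b0)
    );

endmodule
```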

Anyway, I have a solution. Today I tested a new clocking scheme with BUFG only, and in this mode a weak XC7S50 could run both memory parts at 150 MHz. The ISERDES clocking scheme is now selectable. Please update the IP core from my repo, select ISERDES Clocking Mode = "BUFG" in the IP block design configuration GUI and make another try.

BTW, the 150 MHz limitation is due to Spartan-7 BUFG switching capabilities: a BUFG simply cannot drive clk_iserdes faster than ~464 MHz, i.e. the memory clock is limited to about 464 MHz / 3 = ~150 MHz. If your FPGA is faster than a Spartan-7, you could probably push this limit even in BUFG clocking mode.

All the FIFO IPs are locked (shown in red), and when I try to synthesize, it complains that it cannot find the FIFO IP cores.

What Vivado version are you using? I have packed the IP in 2020.2 and can edit it without any errors. BTW, I'm going to replace the Xilinx FIFO IP cores with my custom FIFOs to have Verilog-only sources.

Regards, VGN

LasseEriksson commented 3 years ago

Hi, I figured out why the FIFO IP cores are locked (red) in Vivado. It is because we have different chips, so I had to upgrade the FIFO cores to match my chip.

Now when I synthesize your IP cores I get this message for all the cores.

[Designutils 20-1280] Could not find module 'fifo_72b_18b_512w'. The XDC file c:/projekt/olm3/breaker/vivado/2020.2/breaker.tmp/openhbmc_v2_0_project/OpenHBMC_v2_0_project.gen/sources_1/ip/fifo_72b_18b_512w/fifo_72b_18b_512w.xdc will not be read for any cell of this module.

Do you get these errors?

The design synthesizes and implements, and says all is good.

I have not been able to find time to try it on real hardware yet. Hopefully next week.

I am running Vivado 2020.2 and my project's target language is set to VHDL.

Best regards Lasse

OVGN commented 3 years ago

Hi!

Now when I synthesize your IP cores I get this message for all the cores. Do you get these errors?

Yep, I see the same warning. There are 3 different FIFO IPs, for 16-, 32- and 64-bit AXI bus widths. In fact, all data about a Xilinx IP core is in its .xci file, which is actually version controlled. In your block design the openHBMC core is configured with a certain AXI width parameter, and only the corresponding FIFO is instantiated via a generate block. But it looks like Vivado generated an .xdc for all FIFOs and now complains that there is no module for some of the .xdc files (the unused FIFOs). I don't know yet how to manage this issue, but you can ignore this warning. Moreover, I'm going to replace the Xilinx IPs with my custom FIFOs, and this issue will go away.
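
To make the "instantiated via generate" part concrete, here is a minimal self-contained illustration (stub modules and hypothetical names stand in for the real FIFO IPs and parameters):

```verilog
// Hypothetical illustration, not the OpenHBMC source. Only one FIFO branch is
// elaborated, so only one FIFO module exists in the netlist; the .xdc files of
// the other FIFO IPs then have no matching cell, which triggers the warning.
module fifo_stub_16 (input wire clk); endmodule
module fifo_stub_32 (input wire clk); endmodule
module fifo_stub_64 (input wire clk); endmodule

module axi_width_fifo_sel #
(
    parameter integer C_S_AXI_DATA_WIDTH = 32   // set from the block design GUI (assumption)
)
(
    input wire clk
);

    generate
        if (C_S_AXI_DATA_WIDTH == 64) begin : gen_fifo_64
            fifo_stub_64 fifo_inst (.clk(clk));
        end
        else if (C_S_AXI_DATA_WIDTH == 32) begin : gen_fifo_32
            fifo_stub_32 fifo_inst (.clk(clk));
        end
        else begin : gen_fifo_16
            fifo_stub_16 fifo_inst (.clk(clk));
        end
    endgenerate

endmodule
```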

I have not been able to find time to try it on real hardware yet. Hopefully next week.

Looking forward to your hardware test results! Thanks a lot for testing, I really need some external usage experience.

Regards, VGN

LasseEriksson commented 3 years ago

Hi!

It seems that the core is working, but the base address it responds to is 0 (up to the memory range), not what I have set in the Address Editor.

I have set one core to base address 0x10000000 and the second to 0x20000000.

How do I set the base address? The range in Vivado for s_axi_araddr should be [22:0]; when I expand your IP's AXI bus it reads range [31:0]. I looked at the working core and it has the range [22:0] for araddr.

Best Regards Lasse Eriksson

OVGN commented 3 years ago

It seems that the core is working, but the base address it responds to is 0 (up to the memory range), not what I have set in the Address Editor. How do I set the base address? The range in Vivado for s_axi_araddr should be [22:0]; when I expand your IP's AXI bus it reads range [31:0].

Fixed. The AXI address width is now calculated from the memory range value in the Address Editor; the bus width will be updated as soon as you click Validate Design. The memory size parameter in the GUI is not needed any more, everything is controlled by the Address Editor.

I looked at the working core and it has the range [22:0] for araddr.

Yep, now for an 8 MB space we have a [22:0] AXI address range.
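
As a quick sketch of the relation (hypothetical parameter names):

```verilog
// Hypothetical names: 8 MB of HyperRAM -> $clog2(8 * 1024 * 1024) = 23 address bits
module addr_width_sketch;
    localparam integer C_MEM_SIZE_BYTES   = 8 * 1024 * 1024;            // from the Address Editor range
    localparam integer C_S_AXI_ADDR_WIDTH = $clog2(C_MEM_SIZE_BYTES);   // = 23, i.e. s_axi_araddr[22:0]
endmodule
```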

Please update your IP core. Among other things, I fixed a critical bug in the DRU and a lot of other small bugs. I have also attached example projects with single and dual RAMs in BUFG and BUFIO clocking modes.

Regards, VGN

LasseEriksson commented 3 years ago

Hi, I have run some tests with your core.

When connected to the MicroBlaze peripheral bus it works, but it will not work if you connect it to the MB cache buses (AXI_DC and AXI_IC).

I have also compared the time of a full RAM test between the Synaptic core and yours. It seems your core is 10% faster. :) 👍

I got the IP core running one time with the AXI_DC bus, so it could be a timing error in the core. Most of the time the MicroBlaze stalls when connected to the AXI_DC bus.

Best Regards Lasse.

OVGN commented 3 years ago

Hello!

When connected to the MicroBlaze peripheral bus it works, but it will not work if you connect it to the MB cache buses (AXI_DC and AXI_IC).

Hmm, this is strange. I have integrated the latest version of the core (v2.0 rev.78) in my OpenIRV project, where I have an RTOS on MicroBlaze with DC, IC, 3 DMAs and a dual HBMC controller configuration, and... it is working. I saw cacheline AXI wrapped bursts from the CPU to the HyperRAM along with bursts from the DMAs.

I have also compared the time of a full RAM test between the Synaptic core and yours. It seems your core is 10% faster. :)

Glad to hear that :) I hope it will run even faster as soon as I add command queueing.

I got the IP core running one time with the AXI_DC bus, so it could be a timing error in the core. Most of the time the MicroBlaze stalls when connected to the AXI_DC bus.

Could you please share your AXI configuration? I mean the AXI bus width, the address size and the CPU cacheline size in beats. It looks like this issue is repeatable. Could you add a System ILA on the AXI bus right before the memory core in your block design and capture the first cacheline access from the CPU that fails?

Best Regards, VGN

OVGN commented 3 years ago

Hi,

After some experiments, I found out that there is one more bug in the DRU. This part of the memory controller is the most complex one. I'm going to make a testbench that covers all possible states to make sure that the DRU works properly under any conditions. I'll let you know as soon as the bug is fixed.

Regards, VGN

LasseEriksson commented 3 years ago

Hi, I am sorry, but I have to pause the testing. Too much work right now.

OVGN commented 3 years ago

Hi, I am sorry, but I have to pause the testing. Too much work right now.

Sure!

I have redesigned and verified the DRU. The previous design was really poor: there were several conditions where the data recovery module failed to detect some single data beats within a burst. Now it looks to be working well.

Regards, VGN