Hello Jim,
Thanks for testing the MCPHA application. I can reproduce the problem. Looks like it's a bug in Xilinx's device tree scripts:
https://github.com/Xilinx/device-tree-xlnx
The generated tmp/mcpha.tree/pl.dtsi contains two entries for the same module:
reader_0: axi_axis_reader@40008000 {
};
dtg_reader_0: axi_axis_reader@40008000 {
    /* This is a place holder node for a custom IP, user may need to update the entries */
    clock-names = "aclk";
    clocks = <&misc_clk_0>;
    compatible = "xlnx,axi-axis-reader-1.0";
    reg = <0x40008000 0x8000>;
};
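If you want to check whether other modules are duplicated as well, a quick script along these lines should flag them in the generated pl.dtsi (the path and the regular expression for the node headers are just my assumptions about the generated output, so adjust as needed):
import re
from collections import Counter
# Collect node headers of the form "label: name@address {" from the generated file
with open("tmp/mcpha.tree/pl.dtsi") as f:
    text = f.read()
nodes = re.findall(r"^\s*\w+:\s*([\w-]+@[0-9a-fA-F]+)\s*\{", text, re.MULTILINE)
# Report any "name@address" that appears more than once
for node, count in Counter(nodes).items():
    if count > 1:
        print(node, "appears", count, "times")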
Hopefully, it's fixed in 2019.1.
In my scripts, I'm using the 'all' make rule only with the led_blinker project. Maybe you could use a similar approach as a workaround. I think that the following commands should do the job:
make NAME=led_blinker all
rm boot.bin tmp/boot.bif
make NAME=mcpha boot.bin
Best regards,
Pavel
Thank you for the reply. I saw that 'make all' was only used on the led_blinker project and thought about giving that a try. But even though the make did not complete, I was still able to open the mcpha project and look at it. I am not sure which way we are going with the project I am working on. Using the standard RP project, I was able to make the FPGA bitstream file with 64k-sample buffers on the ADC by making the DAC buffer 1 (I was worried 0 might cause some problems). With 64k ADC buffers we may be able to capture enough data for our project. If not, I will look at the mcpha project some more.
I am wondering, when transferring the 8M-sample buffer over 1 Gb Ethernet, do you have any data on how quickly the sample data is transferred? It is 32 bits per set of samples over 1 Gb/s, but there is the overhead of TCP and the question of how fast the RP can handle the data.
Thanks, Jim
I used the adc_test project to test how fast the ADC samples can be transferred to a remote PC. Here is a link to a post on the Red Pitaya forum with the results of the tests:
http://forum.redpitaya.com/viewtopic.php?t=317&p=2118#p2118
Transferring two bytes per sample, the highest sample rate that I achieved was 31.25 MSPS. With the client program running under Windows 7, the network monitor showed 63-65 MB/s network usage.
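As a rough back-of-envelope check against your question about the 8M-sample buffer (the buffer size and the 32 bits per sample set are taken from your message; the arithmetic ignores TCP/IP overhead):
# Rough arithmetic, not a measurement
link_rate = 1_000_000_000 / 8    # raw 1 Gb/s Ethernet = 125 MB/s
measured_rate = 31.25e6 * 2      # 31.25 MSPS at 2 bytes per sample = 62.5 MB/s
buffer_size = 8_000_000 * 4      # 8M sample sets at 32 bits each = 32 MB
print("raw link capacity: %.1f MB/s" % (link_rate / 1e6))
print("measured transfer rate: %.1f MB/s" % (measured_rate / 1e6))
print("time to transfer the buffer: %.2f s" % (buffer_size / measured_rate))
The 62.5 MB/s implied by 31.25 MSPS matches the 63-65 MB/s reported by the network monitor, so transferring an 8M-sample buffer should take roughly half a second at that rate.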
I think that faster transfer rates can be achieved with zero-copy transmission of data.
The problem is fixed in Vitis 2019.2. Making the device tree for the mcpha project works without any error message. Closing this issue.
Hello,
I got an error when building mcpha. After cloning red-pitaya-notes, I ran "make NAME=mcpha all" to build the project. About an hour in, it failed with this error when making the device tree.
Any idea what may be going on here?
Thanks, Jim