awmlee / red_pitaya_streaming_client_python


streaming speed over ethernet #1

Closed SJLC closed 4 years ago

SJLC commented 4 years ago

README mentions 13 mega-samples/sec over wifi and says ethernet is much faster -- what have you seen for continuous streaming rate over ethernet?

The docs for the RP streaming server only promise 20 MB/s, and I am wondering if that's a real bottleneck (and if so, where is it?) https://redpitaya.readthedocs.io/en/latest/appsFeatures/apps-featured/streaming/appStreaming.html

awmlee commented 4 years ago

Hi! I now realize that mega-samples/sec is a bit confusing, because it's ambiguous whether that's one channel or two. I'll revise the README to reflect my new measurements and quote everything in MB/s. Using the Red Pitaya over ethernet, I've been able to get a maximum of 40 MB/s, beyond which data starts getting lost. I measured this two ways:

1) Using the code in this repository and watching the "Data Rate MB/s" readout in the Python GUI; the approximate average was 40 MB/s.
2) Using a Windows network tool to measure the amount of data received from the RP over the gigabit link during a 60-second window; the average was 41 MB/s. (A third, socket-level cross-check is sketched below.)
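For completeness, here is a minimal sketch of that socket-level cross-check: counting bytes straight off the TCP stream on the client for a fixed window. The host address and port below are placeholders, not this repo's actual configuration.

```python
# Minimal throughput check: count bytes received from the streaming socket
# over a fixed window. HOST/PORT are placeholders for the RP's address and
# the streaming server's TCP port.
import socket
import time

HOST, PORT = "192.168.1.100", 8900   # hypothetical values
WINDOW_S = 60

with socket.create_connection((HOST, PORT)) as sock:
    total = 0
    start = time.monotonic()
    while time.monotonic() - start < WINDOW_S:
        chunk = sock.recv(65536)
        if not chunk:
            break
        total += len(chunk)
    elapsed = time.monotonic() - start

print(f"received {total / 1e6:.1f} MB in {elapsed:.1f} s "
      f"-> {total / elapsed / 1e6:.1f} MB/s")
```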

Setup: the RP connected directly to a client machine over gigabit ethernet, observing the data rate transferred in dual-channel, 14-bit mode with various acquisition clock divisors (125 MSPS / divisor). I tried divisors of 40, 20, 15, 12, 10, and 1. I could not display the data past a divisor of 40, and data started getting lost beyond a divisor of 10.
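For context, the expected raw rate per divisor works out as below, assuming both channels are streamed and each 14-bit sample is carried as a 16-bit (2-byte) word; the 16-bit packing is my assumption, not something the docs state.

```python
# Expected raw data rate per acquisition clock divisor, assuming dual channel
# and 2 bytes per sample (14-bit sample padded to 16 bits -- an assumption).
BASE_RATE_SPS = 125_000_000          # ADC sample rate per channel
CHANNELS = 2
BYTES_PER_SAMPLE = 2

for divisor in (40, 20, 15, 12, 10, 1):
    mb_per_s = BASE_RATE_SPS / divisor * CHANNELS * BYTES_PER_SAMPLE / 1e6
    print(f"divisor {divisor:>2}: ~{mb_per_s:8.1f} MB/s")
```

Under those assumptions a divisor of 12 gives roughly 42 MB/s and a divisor of 10 gives 50 MB/s, which is consistent with losses appearing somewhere around the 40 MB/s mark.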

Limitation: I'm not completely sure where it is. I don't think it's the Python code running on the client machine: when data gets lost on the client side, I can see the queue depth increasing, and I didn't observe that in my testing. On the RP side there's a streaming manager that packetizes the data and streams it over TCP. Part of each packet is a lost counter, which starts increasing dramatically when I push the data rate too high. I think this lost counter is incremented when the streaming code cannot keep up with the FPGA writing into the buffer.
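To illustrate what I mean by watching the client-side queue depth, here's a rough sketch; the structure and names are illustrative, not this repo's actual classes. A reader thread pushes raw buffers into a queue that the GUI drains, and a slow client shows up as a steadily growing depth.

```python
# Illustrative only -- not this repo's actual code. A reader thread pushes raw
# buffers into a queue drained by the GUI/processing side; a slow client shows
# up as a steadily growing queue depth.
import queue
import socket
import threading
import time

rx_queue: "queue.Queue[bytes]" = queue.Queue()

def reader(sock: socket.socket) -> None:
    """Pull raw stream data off the socket and enqueue it.
    Run this in its own thread on the open streaming socket."""
    while True:
        chunk = sock.recv(65536)
        if not chunk:
            break
        rx_queue.put(chunk)

def report_depth(interval_s: float = 1.0) -> None:
    """Print the queue depth periodically (run in a second thread);
    a steadily rising number means the client side is the bottleneck."""
    while True:
        print(f"client queue depth: {rx_queue.qsize()}")
        time.sleep(interval_s)
```

In my testing the depth stayed flat, which is why I don't think the client is the limit.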

If we could run a loopback stream on the RP itself, I think we could get a more definitive answer about the capability of the streaming code running on the RP.
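As a crude stand-in for that, something like the following localhost throughput test could be run on the RP itself (Python 3 is available on its Linux image). This only bounds the CPU and TCP-stack side; it does not exercise the FPGA-to-DDR buffer path, and the port and chunk size are arbitrary choices.

```python
# Rough localhost TCP throughput test, intended to run on the RP itself.
# This bounds only the CPU/TCP side of the pipeline; it does not exercise
# the FPGA-to-DDR buffer writing.
import os
import socket
import threading
import time

PORT = 5001
CHUNK = os.urandom(65536)
DURATION_S = 10

def sender() -> None:
    # Blast fixed-size chunks over localhost for DURATION_S seconds.
    with socket.create_connection(("127.0.0.1", PORT)) as s:
        deadline = time.monotonic() + DURATION_S
        while time.monotonic() < deadline:
            s.sendall(CHUNK)

server = socket.socket()
server.bind(("127.0.0.1", PORT))
server.listen(1)
threading.Thread(target=sender, daemon=True).start()

conn, _ = server.accept()
total = 0
start = time.monotonic()
while True:
    data = conn.recv(65536)
    if not data:        # sender closed its socket after the test window
        break
    total += len(data)
elapsed = time.monotonic() - start
print(f"loopback throughput: {total / elapsed / 1e6:.1f} MB/s")
```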

SJLC commented 4 years ago

Thank you for the detailed summary of your observations. I'm asking around to see if I can find out where that bottleneck might be, since it seems like the Zynq 7010 should be capable of sustaining capture at higher rates based on my reading of the datasheet.