tknopp / RedPitayaDAQServer

Advanced DAQ Tools for the RedPitaya (STEMlab 125-14)
https://tknopp.github.io/RedPitayaDAQServer/dev/

Fast continuous sample read-out #52

Closed StackEnjoyer closed 1 year ago

StackEnjoyer commented 1 year ago

Hi,

first of all I would like to thank you for providing this API. For a project I am working on, I am interested in reading out the RedPitaya's internal buffer in small chunks at a high rate, i.e. 4000 sample pairs at 30 Hz. The API method I am currently using for this is readSamples.

However, I have noticed that receiving the samples via this method incurs a large computational overhead due to the many safety checks performed within readSamples. I would therefore like to implement a fastReadSamples function based on the SCPI commands provided by the API for faster (but less safe) small-chunk read-out, i.e. a few ms per read.

I have tried to use the RP:ADC:DATA? command as detailed in the Acquisition and Transmission section of the SCPI docs; however, the read operation tends to hang after sending the ASCII command. The (failing) implementation looks like this:

function fastReadSamples!(rp::RedPitaya, reqWP::Int64, b::AbstractArray)
    numSamples = Int((length(b)) ÷ 2)
    command = string("RP:ADC:DATA? ", reqWP, ",", numSamples, "\n")
    write(rp.socket, command)
    # Flush command socket
    @async readline(rp.socket)
    # Read ADC data
    read!(rp.dataSocket, b) # <- this part hangs up often
    return nothing
end

Currently, a workaround is to use the RP:ADC:DATA:PIPELINED? command; however, this also transmits the performance data, which causes additional overhead. The MWE is:

function fastReadSamples!(rp::RedPitaya, reqWP::Int64, b::AbstractArray)
    numSamples = Int((length(b)) ÷ 2)
    command = string("RP:ADC:DATA:PIPELINED? ", reqWP, ",", numSamples, ",", numSamples, "\n")
    write(rp.socket, command)
    # Flush command socket
    @async readline(rp.socket)
    # Read ADC data
    read!(rp.dataSocket, b)
    # Flush data socket (21 bytes of performance data)
    read(rp.dataSocket, 21)
    return nothing
end

## rp connected, acquisition and trigger set
buffer = zeros(Int16, 8000)
fastReadSamples!(rp, 0, buffer)
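
For completeness, "rp connected, acquisition and trigger set" refers to the usual client-side setup. A minimal sketch, assuming the standard RedPitayaDAQServer.jl helpers (RedPitaya, decimation!, serverMode!, masterTrigger!); the IP address, decimation value and exact function names are illustrative and may differ for your setup:

using RedPitayaDAQServer

# Connect to the RedPitaya (replace with your device's address)
rp = RedPitaya("192.168.1.100")

# Decimation 256 of the 125 MS/s base rate gives roughly 488 kHz per channel
decimation!(rp, 256)

# Switch the server into acquisition mode and start the master trigger
serverMode!(rp, ACQUISITION)
masterTrigger!(rp, true)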

During testing I have ensured that the internal write pointer has progressed further than the requested samples, i.e. the data exists. What am I missing to make RP:ADC:DATA? work? Any help would be greatly appreciated.

nHackel commented 1 year ago

Hello,

I have a few questions about the measurement setup. I am not sure if I understood the sampling rate correctly: are you sampling at 30 Hz, or do you want to retrieve 4000 samples every 1/30 s at some sampling rate x?

When the read operation hangs, is the RedPitaya still responsive to other commands, or does everything time out?

At the moment I am not exactly sure where the overhead is coming from; maybe with the answers I will have a better picture. For our setups the status information provided by the pipelined version never proved to be the bottleneck. We can usually read continuously (tested for up to an hour) at a decimation of 8, i.e. a sampling rate of around 15.6 MHz (125 MS/s ÷ 8).

nHackel commented 1 year ago

If you are calling readSamples for every 4000-sample buffer, then that could explain your overhead. In that case I'd recommend using the Channel version of the readSamples function.

There you can start an initial transmission of n × 4000 samples with a chunk size of 4000. You can then retrieve and process each chunk from the channel while the transmission is still ongoing. This way the server can just constantly transmit data and you don't have the additional back and forth of the query/response between client and server.

A small example of such a setup can be seen here.
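
To make the suggested pattern concrete, here is a rough sketch of the producer/consumer idea. It assumes a channel-accepting readSamples method, a SampleChunk chunk type and a currentWP helper roughly as in the Julia client; names and signatures may differ in your version, so treat this purely as an illustration:

# Start one long transmission and consume it chunk by chunk
chunkSize = 4000                      # samples per channel and per chunk
numChunks = 30 * 60                   # e.g. about one minute at ~30 chunks/s
startWP   = currentWP(rp)             # current write pointer (assumed helper)

channel = Channel{SampleChunk}(32)    # buffered channel of sample chunks (assumed type)

# Producer: keeps requesting data from the server in the background
@async readSamples(rp, startWP, chunkSize * numChunks, channel; chunkSize = chunkSize)

# Consumer: process each 4000-sample chunk while the transmission is still running
for i in 1:numChunks
    chunk = take!(channel)
    updateGUI!(chunk.samples)         # updateGUI! is a placeholder for your processing
end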

TacHawkes commented 1 year ago

> I have a few questions about the measurement setup. I am not sure if I understood the sampling rate correctly: are you sampling at 30 Hz, or do you want to retrieve 4000 samples every 1/30 s at some sampling rate x?

The goal is to call readSamples approx. 30 times per second (for a "real-time" GUI display). At this sampling rate (488.125 kHz in this case) that means about 8000 samples for each call. Calling readSamples without a channel has some initialization overhead which currently makes this update rate impossible.

The solution in your example might be a possibility; however, it would be more elegant to have this running in an infinite loop.

Is there some kind of bug with the RP:ADC:DATA? command? This command returns true on the command socket but a read afterwards on the dataSocket blocks indefinitely.

nHackel commented 1 year ago

There was indeed an issue in RP:ADC:DATA?, which I fixed in this commit. Depending on which version of the project you are using, it might be easier to just replicate the commit locally for now, though we will soon publish a new release.

For your plan you can use either the RP:ADC:DATA:PIPELINED? version or the (fixed) RP:ADC:DATA?; from the server's perspective they achieve the same transmission rate. The pipelined version is actually intended for this use case, as it allows the server to already send the next group of samples while the client is still processing the previous ones. The server is then also responsible for only transmitting "existing" data, saving further costly SCPI round-trips, which dominate for such small sample counts. At your sampling rate this might not matter though.
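
Regarding the wish to run this in an infinite loop: building on the pipelined fastReadSamples! MWE from above, a continuous ~30 Hz read-out could look roughly like the following sketch. displayChunk! is a placeholder for the GUI update, error handling and a clean shutdown are omitted, and the pipelined query is assumed to block until the requested samples exist:

function continuousReadout!(rp::RedPitaya, numSamplesPerChannel::Int = 4000)
    buffer = zeros(Int16, 2 * numSamplesPerChannel)  # interleaved sample pairs
    wp = 0                                           # next requested write pointer
    while true                                       # stop with Ctrl-C or your own flag
        fastReadSamples!(rp, wp, buffer)
        wp += numSamplesPerChannel                   # advance by the samples per channel
        displayChunk!(buffer)                        # placeholder for GUI / processing
    end
end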

StackEnjoyer commented 1 year ago

Hi,

thank you very much for the quick response. I have tested RP:ADC:DATA? with your fix and it works as expected. As @TacHawkes mentions, the goal is to retrieve 8000 samples per call at 30 Hz, which takes about 1.5 ms now.