phlash opened this issue 3 years ago
So I've now tried this (remote device: OrangePi Zero LTS with a FUNcube Dongle Pro+ and the latest SoapyFCDPP driver; local device: my Lenovo E590 laptop, using WiFi to introduce some network jitter!). Result: stable for a few seconds, then it begins emitting many XRUN recoveries ("readStream recovered from.."), eventually stalling (Gqrx sees no more input).
I suspect this is due to the still quite small transfer size / period selected (1006 frames), whereby any network jitter destabilises the flow control. My own solution has no flow control and uses a larger transfer size / period (default 24000 frames); that trades away latency, of course, but it does not have these problems.
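To put numbers on that latency trade-off, here is a minimal sketch, assuming the FCD Pro+'s native 192 kS/s sample rate (the rate is my assumption, not stated in this thread):

```cpp
// Minimal sketch: per-buffer latency for the two period sizes above,
// assuming a 192 kS/s sample rate (illustrative arithmetic only).
#include <cstdio>

int main() {
    const double sampleRate = 192e3; // assumed FUNcube Dongle Pro+ native rate
    for (const unsigned frames : {1006u, 24000u})
        std::printf("%5u frames -> %.1f ms of buffering\n",
                    frames, 1e3 * frames / sampleRate);
    return 0;
}
```

That works out to roughly 5 ms per buffer for 1006 frames versus 125 ms for 24000 frames.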
For comparison, omitting remote:prot=tcp (and thus using UDP) results in a transfer size of 357 frames and constant dropped-packet reports ('S' appears on the client) when testing with SoapySDRUtil.
@phlash Something has to be very broken with the flow control.
I pushed a branch to disable flow control, if that's worth trying.
What transfer size are you using? This is where the transfer size is defined: https://github.com/pothosware/SoapyRemote/blob/master/common/SoapyRemoteDefs.hpp#L91 It's currently 4096 because some platforms would bomb out on larger sizes, I think Apple and/or Windows. I think it could easily be increased on Linux, though.
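If I'm reading the wiki right, the transfer size can also be overridden per-stream with the remote:mtu stream argument rather than patching the define; a minimal sketch via the SoapySDR C++ API, assuming the arg isn't clamped by the compile-time default (the device address and MTU value here are made up):

```cpp
// Sketch: passing remote:prot / remote:mtu as stream args via the SoapySDR C++ API.
// Address and MTU value are illustrative, not from this thread.
#include <SoapySDR/Device.hpp>
#include <SoapySDR/Constants.h>
#include <SoapySDR/Formats.hpp>

int main() {
    auto *dev = SoapySDR::Device::make("driver=remote,remote=tcp://orangepi:55132");
    SoapySDR::Kwargs streamArgs;
    streamArgs["remote:prot"] = "tcp";   // stream over TCP instead of UDP
    streamArgs["remote:mtu"]  = "16384"; // request larger transfers (bytes)
    auto *stream = dev->setupStream(SOAPY_SDR_RX, SOAPY_SDR_CF32, {0}, streamArgs);
    dev->activateStream(stream);
    // ... readStream() loop would go here ...
    dev->deactivateStream(stream);
    dev->closeStream(stream);
    SoapySDR::Device::unmake(dev);
    return 0;
}
```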
The flow control window comes from the remote:window setting (https://github.com/pothosware/SoapyRemote/wiki#remotewindow), which supposedly resizes the socket buffer on the receive side so the kernel guarantees that much space. The window count is just that divided by the transfer size. It's currently set to 42 MiB by default, which should have allowed something like 10K of these transfers before needing a response from flow control.
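For the record, that estimate checks out if both quantities are byte counts (my arithmetic, not from the source):

```cpp
// Sketch: transfers-in-flight allowed by the default flow-control window,
// assuming both values are byte counts (variable names are illustrative).
#include <cstdio>

int main() {
    const size_t windowBytes   = 42 * 1024 * 1024; // remote:window default, 42 MiB
    const size_t transferBytes = 4096;             // default transfer size
    std::printf("transfers per window: %zu\n",     // prints 10752, i.e. ~10K
                windowBytes / transferBytes);
    return 0;
}
```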
@guruofquality I'll give that a try tomorrow. I'm only guessing that it's the flow control going wrong somewhere, as that seems to be the major difference between the approach taken here and my dumb 'let the kernel sort it out' approach (which is only possible when using TCP).
@guruofquality Flow control is exonerated: your test build behaves much like unmodified SoapyRemote, as does my own code when a small period is specified for the ALSA buffer in SoapyFCDPP. It looks like any overrun/overflow comes down to the ability of my small test CPU to keep up when there is more task switching in general, e.g. if I enable TRACE logging with small ALSA periods, I see overflows continuously, especially if that logging also goes over the network to the client.
I have made a couple of changes to my own solution that seem to help, and they may be worth considering for SoapyRemote.
Originally posted by @guruofquality in https://github.com/pothosware/SoapyFCDPP/issues/13#issuecomment-886273814
How does it (the remoting solution) compare when the protocol is set to tcp for SoapyRemote? https://github.com/pothosware/SoapyRemote/wiki#remoteprot
SoapyRemote is trying to have headers with metadata and some kind of flow control. But if plain TCP is useful, I don't see why that couldn't be a mode in SoapyStreamEndpoint.cpp.
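For what it's worth, a minimal sketch of what such a mode could look like: raw sample bytes over a blocking TCP socket, with the kernel's own TCP flow control providing the backpressure. The helper name and structure are hypothetical, not SoapyStreamEndpoint.cpp's actual layout:

```cpp
// Concept sketch of a header-less "plain TCP" stream path: a blocking send()
// stalls when the socket buffer fills, so kernel TCP backpressure paces the
// producer and no application-level flow control is needed.
#include <sys/socket.h>
#include <complex>
#include <vector>

// hypothetical helper: forward one buffer of CF32 samples over a connected socket
bool sendSamples(int sock, const std::vector<std::complex<float>> &buf) {
    const char *p = reinterpret_cast<const char *>(buf.data());
    size_t remaining = buf.size() * sizeof(buf[0]);
    while (remaining != 0) {
        const ssize_t n = ::send(sock, p, remaining, 0); // blocks when buffer is full
        if (n <= 0) return false; // error or peer closed: stop streaming
        p += n;
        remaining -= static_cast<size_t>(n);
    }
    return true;
}
```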