kpreid / shinysdr

Software-defined radio receiver application built on GNU Radio with a web-based UI and plugins. In development, usable but incomplete. Compatible with RTL-SDR.
https://shinysdr.switchb.org/
GNU General Public License v3.0
1.07k stars · 115 forks

Transmit support #124

Open quentinmit opened 5 years ago

quentinmit commented 5 years ago

I don't actually see an open issue about this. Feel free to close as a dupe if I missed it.

ShinySDR has a variety of bits of transmit support (Osmo TX driver, modulators, etc.) but none of it appears to be wired up into a functional UI yet.

I'd like to take a stab at this. Do you have any notes or thoughts about how it was intended to be implemented before I start working on it?

quentinmit commented 5 years ago

Here's what I'm thinking:

- Receiver should probably be split into a Transceiver parent object with common cells, a Receiver child object with a demodulator, and a Transmitter child object with a modulator; I think it makes sense for a transmitter to always be linked to a receiver.
- Top._do_connect needs to connect Transceivers that are valid to sources' get_tx_drivers.
- audiomux.AudioManager needs to track audio sources, ideally paired with audio sinks but possibly it would be easier to use two instances of AudioManager.

Then to support client-side audio,

kpreid commented 5 years ago

There's some overdue core refactoring #111 that I want to get in before adding any major features like transmit support that might lock in the current architecture more. (Basically, on the receive side, the hardcoded graph design that Top implements is going to go away in favor of receivers and similar declaring their requested inputs. Transmitting may or may not use similar structure.) I've got some prototype code going but it hasn't been put to use yet.
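The "receivers declaring their requested inputs" idea can be illustrated with a toy sketch. This is not ShinySDR's actual API (the #111 refactoring described above was only prototype code at the time); the class and function names here are purely illustrative:

```python
# Hypothetical sketch of a declarative wiring scheme: instead of Top
# hardcoding the flow graph, each receiver declares what inputs it
# needs and a generic dependency engine plans the connections.

class Receiver:
    def __init__(self, name, input_rate):
        self.name = name
        self.input_rate = input_rate

    def requested_inputs(self):
        # Declare the inputs this receiver wants; the engine satisfies them.
        return [('rf', self.input_rate)]

def build_connections(receivers):
    """Toy dependency engine: collect declared inputs into a wiring plan."""
    plan = []
    for rx in receivers:
        for kind, rate in rx.requested_inputs():
            plan.append((kind, rate, rx.name))
    return plan

plan = build_connections([Receiver('fm', 200_000), Receiver('am', 48_000)])
```

A transmitter could declare requested *outputs* the same way, which is why the structure "may or may not" carry over.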

That said, some responses:

> Receiver should probably be split into a Transceiver parent object with common cells, a Receiver child object with a demodulator, and a Transmitter child object with a modulator; I think it makes sense for a transmitter to always be linked to a receiver.

The picture that I had in mind was that there would be an object that is less "transceiver" and more "frequency of interest" with optionally attached receiver and transmitter. Unfortunately, I don't currently remember the rationale. It might have to do with the future of frequency database interaction (it needs to be more server-side integrated than it is).
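The "frequency of interest" shape might look something like the following. All names here are hypothetical stand-ins, since this design was never written down:

```python
# Sketch of a "frequency of interest" object: it owns the tuning
# information, and a receiver and/or transmitter are optionally
# attached to it, rather than a Transceiver owning both halves.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Receiver:
    mode: str

@dataclass
class Transmitter:
    mode: str

@dataclass
class FrequencyOfInterest:
    freq_hz: float
    receiver: Optional[Receiver] = None
    transmitter: Optional[Transmitter] = None

# A receive-only entry:
foi = FrequencyOfInterest(146.52e6, receiver=Receiver('NFM'))
# Transmit capability can be attached later without replacing the object:
foi.transmitter = Transmitter('NFM')
```

This shape would also fit frequency-database entries that have no hardware attached at all (both fields None).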

> Top._do_connect needs to connect Transceivers that are valid to sources' get_tx_drivers.

Because GR flow graph reconfiguration is disruptive, and transmit sample timing need not have anything to do with receive sample timing, I believe it will be best to have a flow graph for each transmitter, separate from the receivers'. (Syncing them would be relevant if one wants to, say, implement a repeater, but I'm going to call that out of scope.)
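The per-transmitter flow graph layout can be sketched as below. Real code would subclass `gr.top_block`; plain stand-in classes are used here so the structure is visible without a GNU Radio install:

```python
# Sketch: one flow graph per transmitter, separate from the receivers'
# shared graph, so reconfiguring a transmitter never disrupts receive.
# FlowGraph is a stand-in for gr.top_block.

class FlowGraph:
    def __init__(self, name):
        self.name = name
        self.running = False

    def start(self):
        self.running = True

    def stop(self):
        self.running = False

rx_graph = FlowGraph('receive')      # all receivers share this graph
tx_graph = FlowGraph('transmit-0')   # each transmitter gets its own

rx_graph.start()
tx_graph.start()

# Reconfigure only the transmit graph; receive is never interrupted:
tx_graph.stop()
# ... reconnect modulator blocks here ...
tx_graph.start()
```

The independent graphs also mean the two sides can run at unrelated sample rates and clocks, as noted above.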

> audiomux.AudioManager needs to track audio sources, ideally paired with audio sinks but possibly it would be easier to use two instances of AudioManager.

AudioManager has the very specific job of mixing and resampling audio to many destinations. Transmitting involves neither mixing nor multiple destinations, at least in straightforward cases. Furthermore, AudioManager is going to go away with the #111 refactoring because the dependency graph will make its job implicit.

quentinmit commented 5 years ago

> There's some overdue core refactoring #111 that I want to get in before adding any major features like transmit support that might lock in the current architecture more. (Basically, on the receive side, the hardcoded graph design that Top implements is going to go away in favor of receivers and similar declaring their requested inputs. Transmitting may or may not use similar structure.) I've got some prototype code going but it hasn't been put to use yet.

I think the changes I proposed here do not actually make the current architecture locked in more; it will add a bit of wiring to Top but that will be in the same place as all the wiring you're already going to have to touch around receivers.

That said, some responses:

> > Receiver should probably be split into a Transceiver parent object with common cells, a Receiver child object with a demodulator, and a Transmitter child object with a modulator; I think it makes sense for a transmitter to always be linked to a receiver.

> The picture that I had in mind was that there would be an object that is less "transceiver" and more "frequency of interest" with optionally attached receiver and transmitter. Unfortunately, I don't currently remember the rationale. It might have to do with the future of frequency database interaction (it needs to be more server-side integrated than it is).

That's sort of what I was thinking - the Transceiver object would track the frequency information and the selected mode, with optionally attached receiver and transmitter. It sounds like in your mind tuning would involve making a new Frequency object and reattaching the receiver and transmitter to it?

> > Top._do_connect needs to connect Transceivers that are valid to sources' get_tx_drivers.

> Because GR flow graph reconfiguration is disruptive, and transmit sample timing need not have anything to do with receive sample timing, I believe it will be best to have a flow graph for each transmitter, separate from the receivers'. (Syncing them would be relevant if one wants to, say, implement a repeater, but I'm going to call that out of scope.)

Oh, good point. I didn't realize Top is actually a gr.top_block. Yes, I think we would want separate flowgraphs to the extent possible. But can we assume that GR drivers can actually be used from two different flowgraphs at once? I don't know what the contract is on opening the transmit and receive halves of a device at the same time.

I thought about the repeater use case and it's certainly interesting, but I think it's fine to assume that connection would be made outside GR (e.g. with a PulseAudio loopback device).

> > audiomux.AudioManager needs to track audio sources, ideally paired with audio sinks but possibly it would be easier to use two instances of AudioManager.

> AudioManager has the very specific job of mixing and resampling audio to many destinations. Transmitting involves neither mixing nor multiple destinations, at least in straightforward cases. Furthermore, AudioManager is going to go away with the #111 refactoring because the dependency graph will make its job implicit.

Why does AudioManager go away with #111? You still need the moral equivalent to do mixing and resampling.

kpreid commented 5 years ago

> It sounds like in your mind tuning would involve making a new Frequency object and reattaching the receiver and transmitter to it?

No, that would be a poor model of possibly continuous change. Sorry, as I said I don't remember exactly what the rationale was. In practice I'll do whatever fits in well when the refactoring is in progress.

> But can we assume that GR drivers can actually be used from two different flowgraphs at once? I don't know what the contract is on opening the transmit and receive halves of a device at the same time.

This is one of those under-specified things. Audio devices that might be attached to a transceiver don't care. gr-osmosdr when used with the HackRF will fail to switch over unless you ensure the source block is destroyed before you open the sink and vice versa, which is why the osmosdr plugin has support for doing that. But this is independent of whether the blocks are in separate flow graphs.
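The half-duplex constraint described above (destroy the source block before opening the sink, and vice versa) amounts to a small state machine. The sketch below is a stand-in, not the actual osmosdr plugin code:

```python
# Sketch of the switchover sequencing for a half-duplex device like
# the HackRF: only one direction may be open at a time, so the old
# block must be torn down before the other direction is opened.
# HalfDuplexDevice is hypothetical, not gr-osmosdr's real API.

class HalfDuplexDevice:
    def __init__(self):
        self.open_direction = None  # None, 'rx', or 'tx'

    def open(self, direction):
        if self.open_direction is not None:
            raise RuntimeError(
                'device busy: close %s first' % self.open_direction)
        self.open_direction = direction

    def close(self):
        self.open_direction = None

dev = HalfDuplexDevice()
dev.open('rx')
# Correct switchover: tear down the source before creating the sink.
dev.close()
dev.open('tx')
```

Note that, as stated above, this ordering requirement is independent of whether the two blocks live in the same flow graph or separate ones.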

> Why does AudioManager go away with #111? You still need the moral equivalent to do mixing and resampling.

Because instead of having a thing dedicated to making audio resampling connections, each audio sink('s managing wrapper) will be able to specify "I want a sum of these receivers' audio outputs at this sample rate" and the dependency engine will construct the necessary intermediate blocks based on that specification. It's not that AudioManager's job will be replaced, but it will be distributed among generic algorithms and independent units of task-specific (audio) rules.
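The idea above can be illustrated with a toy planner: the sink declares "a sum of these receivers' outputs at this rate", and a generic engine decides which intermediate blocks to insert. All names here are hypothetical:

```python
# Toy sketch of the dependency engine's job: given receivers' output
# rates and one sink's requested rate, plan the resampler/mixer blocks
# that AudioManager currently creates by hand.

def plan_audio_chain(receiver_rates, requested_rate):
    """Return the blocks needed to feed one audio sink, ending in a sum."""
    blocks = []
    for name, rate in receiver_rates.items():
        if rate != requested_rate:
            blocks.append(('resample', name, rate, requested_rate))
        else:
            blocks.append(('passthrough', name))
    blocks.append(('sum',))
    return blocks

chain = plan_audio_chain({'fm': 48000, 'ssb': 8000}, requested_rate=48000)
```

The point of the refactoring, as described above, is that this planning logic becomes a generic algorithm plus a small set of audio-specific rules, rather than a dedicated AudioManager object.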