sm0svx / svxlink

Advanced repeater system software with EchoLink support for Linux including a GUI, Qtel - the Qt EchoLink client
http://svxlink.org/

Audio methods for en-/decoders #263

Open dl1hrc opened 7 years ago

dl1hrc commented 7 years ago

I need a hint on how to handle the audio devices if I want to use an encoder and a decoder in one application. Let's start with the new ReflectorLogic class as an example: m_logic_con_in is the audio handle for the incoming audio from a linked logic. I want an AudioDecimator that resamples the incoming audio frames from INTERNAL_SAMPLE_RATE (16k) down to 8k before passing them to an encoder instance.

The decoder works the other way around: the (DMR) encoded (UDP) samples have to be sent to the decoder, then interpolated from 8k up to INTERNAL_SAMPLE_RATE, and finally piped to m_logic_con_out. I'm a bit confused about the audio devices and registerSource/Sink, addSource/Sink and so on :) Could you please give me a small "schematic" for doing that? :) vy 73s de Adi / DL1HRC

sm0svx commented 7 years ago

The usual scheme I use is

AudioSource *prev_src = 0;
AudioXyz *xyz = new AudioXyz(arg1, arg2);
prev_src = xyz;

AudioAbc *abc = new AudioAbc(arg1, arg2, arg3);
prev_src->registerSink(abc, true);
prev_src = abc;

Then you have chained xyz together with abc. If you want the first audio component xyz to receive audio from the logic linking code, logic_con_in should be set to xyz. If you want the last audio component abc to send audio to the logic linking, logic_con_out should be set to abc. But don't try to do that in the master branch!

There is a problem, though. The Logic class in the master branch handles the logic_con_in and logic_con_out variables and there is no way to manipulate them from a derived class. In the svxreflector branch I have split the Logic class into LogicBase and Logic. Your new logic should probably derive from LogicBase, but the problem is that it has not yet been merged to master. Anyway, when using LogicBase you should reimplement the logicConIn() and logicConOut() functions. They should return a sink object (the decimator?) and a source object (the interpolator?) respectively, both of which must be created before calling LogicBase::initialize().

If you want to get going with the coding before I have merged the svxreflector branch into the master branch, one way would be to create your own branch using the svxreflector branch as the base. I would not accept a merge request to that branch though. After I have merged the svxreflector branch into the master branch, your code could also be merged. The Git way to do that would probably be to first rebase your branch onto the master branch and then merge it. If that seems daunting, maybe your patch will be easy to copy manually into a new branch derived from the master branch later.

dl1hrc commented 7 years ago

Thank you for the explanation. Maybe my question was somewhat misleading. The reference to the reflector was just an example. I'm working on a new logic; the only difference in audio handling between ReflectorLogic and mine is that I have to use the Decimator and Interpolator. Here is my idea, briefly:

Encoder: logic_con_in(16k audio stream from linked logic)->Decimator(8k)->(Fifo)->Encoder(Hardware||UDP||Software)->encoded data stream(UDP to network)

Decoder: encoded data stream(UDP from network)->Decoder(Hardware||UDP||Software)->(Fifo)->Interpolator(->16k)->logic_con_out(16k audio stream to linked logic)

The problem I have is as follows:

AudioSource *prev_src = m_logic_con_in; ???

if (INTERNAL_SAMPLE_RATE>8000) {
  AudioDecimator *xyz = new AudioDecimator(arg1, arg2);
  prev_src = xyz;
}

AudioEncoder *abc = new AudioEncoder(arg1, arg2, arg3);
prev_src->registerSink(abc, true);
prev_src = abc;

...

sm0svx commented 7 years ago

That was what I thought you were working on, BrandmeisterLogic or something like that?

Look at the ReflectorLogic. You should assign m_logic_con_in from either xyz or abc, depending on what sampling rate is used. Something like:

AudioEncoder *enc = new AudioEncoder(arg1, arg2, arg3);
m_logic_con_in = enc;

if (INTERNAL_SAMPLE_RATE>8000)
{
  AudioDecimator *dec = new AudioDecimator(arg1, arg2);
  dec->registerSink(enc, true);
  m_logic_con_in = dec;
}
AudioSource *prev_src = m_logic_con_in;

dl1hrc commented 7 years ago

One problem is that m_logic_con_in is declared as an AudioSink while prev_src is an AudioSource. You will get an error: cannot convert 'Async::AudioSink' to 'Async::AudioSource' in initialization ...

sm0svx commented 7 years ago

Ah, yes. You probably don't need it at all since nothing can be connected after the AudioEncoder.

dl1hrc commented 7 years ago

Working on it in my DmrLogic branch...


sm0svx commented 7 years ago

Ok. I saw the new RewindLogic in the DmrLogic branch. Looks like you have gotten quite far with the AMBE codec interface!

I now have some DMR gear here, both an AMBE-stick and a DMR handie transceiver. I've been looking a bit at implementing a SvxLink native 4FSK demodulator for the DMR air interface since that is what I'm most interested in. Have not implemented anything in C++ though. Just done some experimentation in MATLAB. However, I'll try to get the Reflector things finished first so that I can merge it to the master branch.

dl1hrc commented 7 years ago

The connection to the Brandmeister network in my RewindLogic is working well so far. I can receive AMBE-encoded streams over a UDP connection from the network. I'm working on the ThumbDV support at the moment. The parallel use of one hardware en-/decoder by multiple SvxLink encoder/decoder instances is not really clear to me. I guess I will face some problems when it is accessed in parallel :/ The 4FSK (de)modulator sounds good and would probably be a great step forward into the digital world. On the other hand, it would be good to have the raw 8k audio available in the logic as well, to connect both modes.

sm0svx commented 7 years ago

On the other hand it would be good to have the raw 8k audio as well in the logic to connect both modes.

Do you mean the normal analogue audio stream to be able to receive either standard FM or DMR? One way to do that is to set up a parallel receiver configuration with a suitable squelch using the same audio device. That will also work for a DDR but as it is implemented today it would mean that the demodulation is done twice, wasting CPU. Optimizations can be done later.

dl1hrc commented 7 years ago

I mean that, as a first step, we could connect a simple analogue F3E repeater logic with the DmrLogic via logic linking, using the DV3K stick (or AMBEServer, or dsd-lib) as the en-/decoder. DmrLogic handles the up/downstream to/from the DMR network, and it could later be extended with rx/tx classes with 4FSK en-/decoders to drive flat-audio radios or, better, SDR equipment. It would be nice (when you have finished the reflector) if you could take a deeper look into my sources, at where the dv3k instances are created and how the DV3K stick is accessed via USB in the en-/decoder classes. Maybe there are some errors, since I get read failures when I access the device from both in parallel. We discussed this before; my problem was accessing one stick independently from the encoder and decoder instances. I'm not satisfied with the class construction since, for example, the defines in the header are redundant. A structure like your TxFactory would probably be better.