Opendigitalradio / dabtools

DAB/DAB+ software for RTL-SDR dongles and the Psion Wavefinder including ETI stream recording. This project is currently unmaintained.
GNU General Public License v3.0

IEC 62105 (RDI) Support (or SYNC ETI support) #16

Closed lars18th closed 6 years ago

lars18th commented 6 years ago

Hi,

I suggest enhancing the "dabtools" package with support for the RDI protocol (Receiver Data Interface), aka IEC 62105, as it's more suitable for receiving DAB signals.

So, instead of dab2eti, a new tool called dab2rdi would be desirable.

Do you agree?

andimik commented 6 years ago

You are invited to send a pull request

basicmaster commented 6 years ago

@lars18th: There were just some consumer/professional receivers that had an RDI output - at the time when there was no DAB+ yet. There was also some software that supported it as an output format (e.g. IRT DAB scout 1). But today RDI doesn't play any role, though it was explicitly designed for the receiver side.

RDI is also a way too complex format, and its spec is not freely available. So it makes much more sense to stay with ETI, which quite easily allows access to the content of all the subchannels of a DAB ensemble.

mpbraendli commented 6 years ago

Sorry I brought up RDI in the other discussion. I propose to reject this proposal.

lars18th commented 6 years ago

Hi,

So, let me ask a question: if we want to share the bitstream between a DAB demodulator (RF to bitstream, aka tuner) and a DAB decoder (bitstream to output, aka player)... then what is the correct format?

mpbraendli commented 6 years ago

"correct" would be RDI. But we use ETI because it's more practical.

lars18th commented 6 years ago

"correct" would be RDI. But we use ETI because it's more practical.

OK. However, you pointed out that ETI can be impractical when the source has errors, that is, when the bitstream comes from a tuner (receiver). Is this true or not? (https://github.com/AlbrechtL/welle.io/issues/199#issuecomment-365670612)

As you can imagine, I'm not an expert on DAB. However, I'm reading a lot and I'd like to know the best format for sharing the DAB bitstream between a TUNER and a DECODER. Why? Because sharing IQ samples and doing SDR is impractical in several use cases.

I hope you can help me in this search. :smile:

mpbraendli commented 6 years ago

The cut you are doing between tuner and decoder is quite arbitrary, and there is no standardised interface today. ODR-DabMod cannot handle an ETI stream with errors, because in the normal use cases you can assume the ETI is error-free. But a decoder reading ETI must be able to cope with errors, and I'm quite sure dablin does.

lars18th commented 6 years ago

The cut you are doing between tuner and decoder is quite arbitrary,

It depends! That's my objective.

and there is no standardised interface today.

Are you sure that the RDI protocol doesn't target exactly this?

a decoder reading ETI must be able to cope with errors, and I'm quite sure dablin does.

OK. So as I understand it, it's possible to use ETI as a [demodulator]-->[decoder] protocol. Right? The only requirements are that the decoder handles errors in the bitstream and that the demodulator generates padding when insufficient data is available. Am I right?

And thank you! I'm learning a lot. :smile:

andimik commented 6 years ago

Yes, dablin_gtk can cope with errors in the ETI, because I am testing various ETI files from half of Europe and some are really weak and have errors in them.

mpbraendli commented 6 years ago

Are you sure that the RDI protocol doesn't target exactly this?

The key word I used was "today" :-) RDI is an old protocol we don't want to use.

For info, demodulators don't insert padding for insufficient data; they decode data wrongly (because e.g. signal quality is bad).

lars18th commented 6 years ago

For info, demodulators don't insert padding for insufficient data; they decode data wrongly (because e.g. signal quality is bad).

Sorry? That breaks the specification! ETI is a synchronous protocol with a fixed bitrate. So, if you use it to transport the DAB bitstream from the demodulator to the decoder, you need to maintain the bitrate. So, if the demodulator loses the signal lock or finds severe errors, then it needs to send ETI frames with "padding". It can't skip frames!

More or less it's the same as what a DVB demodulator/streamer does when it loses the signal while sending a CBR Transport Stream.

Remember, my objective is to do the same as the DAB-ETI-encapsulated-over-MPEG-TS-in-satellite broadcasts, but between a DAB demodulator and a DAB decoder. So, in this use case the bitstream can have errors.

mpbraendli commented 6 years ago

Sorry, you misunderstood me. I didn't say bits get omitted. Bits get decoded wrongly in a demodulator. That leads to bit errors.

lars18th commented 6 years ago

Sorry, you misunderstood me. I didn't say bits get omitted. Bits get decoded wrongly in a demodulator. That leads to bit errors.

Don't worry! :smile:

However, this can't be entirely true: when the demodulator loses the signal... then it doesn't decode wrongly; instead it doesn't decode anything. This is the case where it needs to generate padding.

After reviewing the code of the tool "dab2eti", I see that the two relevant functions are:

And in several cases the srd_demod() function can return an error (return 0) instead of OK (return 1). In this case the function dab_process_frame() isn't executed, and obviously no ETI frames are generated.

So, in my opinion the only required modification is to add support for SYNCHRONOUS ETI output. In this case the bitrate will be constant, and the only assumption that will be false for the decoder is that the data in the bitstream is clean. So, if the decoder can handle that, then the communication can be correct.

What do you think?

lars18th commented 6 years ago

Hi,

Only as a reference (for people who arrive at this issue):

So I hope that in the next release the SYNC problem between the source of the bitstream (the demodulator) and the destination (the decoder) will be improved.

Any comment about this? @mpbraendli

basicmaster commented 6 years ago

This is wrong. eti-cmdline was added as another live source; dab2eti can still be used.

And BTW it is no issue for DABlin if temporarily no ETI frames arrive (due to reception problems etc.); as long as this is the case, silence is output.

lars18th commented 6 years ago

This is wrong. eti-cmdline was added as another live source; dab2eti can still be used.

Thank you for pointing this out. And it's normal that dab2eti continues working (it does something similar). However, I feel "dabtools" is stalled in development, so I think eti-cmdline will finally replace it.

And BTW it is no issue for DABlin if temporarily no ETI frames arrive (due to reception problems etc.); as long as this is the case, silence is output.

And it's great to hear! However, this has two problems:

1) When reading from a PIPE, DABlin can sync... however, when reading from a file, how do you deal with the silence?

2) When using other "players" (aka DAB decoders), you need to send CBR to simplify the communication. Furthermore, ETI-NI is defined as CBR, so when "silence" appears, padding data is required.

Please take into account that my idea is to get reliable communication between different applications using the ETI-NI protocol. So, the padding for CBR is a requirement (in my opinion).

What do you think?

basicmaster commented 6 years ago

However, I feel "dabtools" is stalled in development. So I think eti-cmdline will finally replace it.

This is unfortunately the case, as currently no one has found the time to check improvements (see #14) - while Jan is actively working on his project(s). However, as I mention in the DABlin README, dab2eti takes way less CPU power.

When reading from a PIPE, DABlin can sync... however, when reading from a file, how do you deal with the silence?

What do you mean with "silence" in the file case?

When using other "players" (aka DAB decoders), you need to send CBR to simplify the communication. Furthermore, ETI-NI is defined as CBR, so when "silence" appears, padding data is required.

While such padding could rather easily be done at the ETI level (at least in theory, as you probably have to add additional timing measurement for insertion etc.), this does not consider the actual subchannels. And doing this padding at the subchannel level would be an absolute nightmare!

So just outputting nothing until reception has been restored is the better solution here IMO - instead of forwarding any faked, made-up ETI frames.

lars18th commented 6 years ago

However, I feel "dabtools" is stalled in development. So I think eti-cmdline will finally replace it.

This is unfortunately the case, as currently no one has found the time to check improvements (see #14) - while Jan is actively working on his project(s). However, as I mention in the DABlin README, dab2eti takes way less CPU power.

In any case it's good to have more than one tool for IQ-to-ETI! Multiple implementations are an advantage, not a problem. :smile:

When reading from a PIPE, DABlin can sync... however, when reading from a file, how do you deal with the silence?

What do you mean with "silence" in the file case?

When the demodulator loses the "lock" on the RF signal, and decoding it is impossible or the noise makes it impossible to understand anything. In this case, a realtime ETI generator needs to have deterministic behaviour - at best, maintaining the bitrate of the bitstream.

While such padding could rather easily be done at the ETI level (at least in theory, as you probably have to add additional timing measurement for insertion etc.), this does not consider the actual subchannels. And doing this padding at the subchannel level would be an absolute nightmare!

I need to read more about ETI-NI. However, I'm sure it needs to support PADDING, as all the protocols I know with similar objectives (carrying realtime A/V data) work this way.

So just outputting nothing until reception has been restored, is the better solution here IMO - instead of forwarding any faked, made-up ETI frames.

Why? If the protocol supports padding, you only need to add it. If not, then you need to send simple "silence" or "zero-data" packets. What is the minimal unit of a packet in ETI?

basicmaster commented 6 years ago

When the demodulator loses the "lock" on the RF signal, and decoding it is impossible or the noise makes it impossible to understand anything. In this case, a realtime ETI generator needs to have deterministic behaviour - at best, maintaining the bitrate of the bitstream.

The deterministic behaviour is to output nothing, because there is nothing to output. As ETI is a contribution format - which is used here at distribution side - this does not play a role.

I need to read more about ETI-NI. However, I'm sure it needs to support PADDING, as all protocols that I know with similar objectives (carry realtime A/V data) work in this way.

See ETSI ETS 300 799, which is publicly available. Padding is not possible, as the field NST must not be 0 (except during mux reconfiguration). So you always have to transmit at least one subchannel. But what will you transmit in that subchannel when no data is available...!?

Why? If the protocol supports padding, you only need to add it. If not, then you need to send simple "silcence" or "zero-data" packets. What it's the minimal unit of packet in ETI?

Just simple "silence" or "zero-data" is not possible with ETI, as mentioned before.

lars18th commented 6 years ago

Just simple "silence" or "zero-data" is not possible with ETI, as mentioned before.

I feel this is the reason for the existence of the RDI protocol, which no one wants to support (and which isn't publicly available). :smile:

In any case, it'll be possible to find a solution! For example, what's the minimum ETI packet? It's the Logical Frame, every 24ms, right? So, the documentation has the solution (as it handles errors from the network transport):

You can read all of this in the specification of the protocol: http://www.etsi.org/deliver/etsi_i_ets/300700_300799/300799/01_30_9733/ets_300799e01v.pdf pages 19-21.

But does this solve the problem? NO. As you can see on page 28, Fig. 6, the Logical Frame isn't the same as the network representation. So, you need to work with the ETI-NI-G.703 frame, which is the unit used by the tools.

So, regarding ETI-NI frames:

So, as a summary: when generating a "silence", which is an ETI-NI frame without any data, you need to carry:

I feel it's easy to do, right? :smile:
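To make the framing concrete, here is a minimal Python sketch of such a padding frame. The constants (6144-byte frame, the 0x55 padding byte, the alternating FSYNC word) come from ETS 300 799; tying the FSYNC phase to the FCT value and omitting the LIDATA header fields (FC, EOF, TIST) are simplifications of mine, so treat this as an illustration rather than a spec-compliant Null transmission:

```python
ETI_NI_FRAME_LEN = 6144  # fixed frame size; one frame per 24 ms -> 2.048 Mbit/s
PADDING_BYTE = 0x55      # padding byte defined by the spec for unused capacity
FSYNC = bytes([0x07, 0x3A, 0xB6])      # frame sync word
FSYNC_INV = bytes([0xF8, 0xC5, 0x49])  # bit-inverted sync, sent on alternate frames

def padding_frame(fct: int) -> bytes:
    """Build a fixed-size frame for frame count `fct` (0..249) with no payload.

    Note: a spec-compliant Null transmission (ETS 300 799, section 5.9) also
    needs valid LIDATA header fields; this sketch only fills the SYNC field
    and pads the rest, to illustrate the fixed-size, fixed-rate framing.
    """
    err = b'\xFF'  # ERR byte: 0xFF = no error detected
    # Assumption: derive the FSYNC alternation phase from FCT parity
    # (250 values per cycle, so the alternation stays consistent across wraps).
    fsync = FSYNC if fct % 2 == 0 else FSYNC_INV
    body = bytes([PADDING_BYTE]) * (ETI_NI_FRAME_LEN - 4)
    return err + fsync + body
```

Emitting one such frame every 24ms whenever demodulation fails would keep the output at the constant 2.048 Mbit/s rate the format defines.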

mpbraendli commented 6 years ago

Sounds about right. Keep in mind you still need to solve the 'when' question: dab2eti sends out ETI because it demodulates a signal, from which the 24ms rate for the ETI frames is naturally derived. With no signal, you need to have something that looks at the time and sends an empty ETI frame out after the correct delay.
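A hedged sketch of that "when" logic: pace output against absolute monotonic deadlines so sleep inaccuracy doesn't accumulate into drift, and substitute a stand-in padding frame on signal loss. The callback names (`demodulate`, `send`) are hypothetical, and the all-0x55 stand-in is not a spec-compliant Null transmission:

```python
import time

FRAME_PERIOD_S = 0.024  # one ETI frame every 24 ms
FRAME_LEN = 6144        # ETI-NI (G.703) frame size in bytes

def run_sender(demodulate, send, n_frames):
    """Emit exactly one frame per 24 ms period.

    `demodulate` returns a 6144-byte frame, or None on lock loss; in that
    case a placeholder padding frame is sent instead, keeping the output CBR.
    Deadlines are absolute, so a late frame shortens the next sleep rather
    than shifting all subsequent frames.
    """
    next_deadline = time.monotonic()
    for _ in range(n_frames):
        frame = demodulate()
        if frame is None:
            frame = b'\x55' * FRAME_LEN  # stand-in for a proper Null transmission
        send(frame)
        next_deadline += FRAME_PERIOD_S
        remaining = next_deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)
```

The absolute-deadline pattern is the standard way to get a drift-free fixed rate from an OS timer whose individual sleeps are inexact.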

basicmaster commented 6 years ago

@lars18th:

Furthermore, section "5.9 Null transmissions" describes what to do for "silence"

I indeed was wrong regarding this and therefore added support for it to DABlin, to prevent any issues in case it occurs.

However I still don't see the actual reason why such ETI frames should be generated here or by eti-cmdline.

@mpbraendli: Correct; this is what I meant earlier with "you probably have to add additional timing measurement for insertion etc.". I cannot see why such complexity would need to be added here. At least I myself will not add it ;-)

lars18th commented 6 years ago

@basicmaster ,

Furthermore, section "5.9 Null transmissions" describes what to do for "silence"

I indeed was wrong regarding this and therefore added support for it to DABlin, to prevent any issues in case it occurs.

Great point! https://github.com/Opendigitalradio/dablin/commit/9325e95debb5144e61763470a07e8e2b635c71dc And thank you! It's quite simple, but it works.

However I still don't see the actual reason why such ETI frames should be generated

Because if you decouple the demodulator from the decoder, this makes sense. Done this way, you can guarantee that the ETI-NI bitstream has a constant bitrate of exactly 2Mbps. If this is true, then the decoder can use the "clock" of the demodulator.

As an example - and I know that DAB radio people don't like to hear about DTV, but... any DTV demodulator will output an MPEG-TS with a fixed bitrate. And this bitstream can be decoded by any decoder or process, even if it has errors or incomplete data.

Why not go the same way with DAB? As I see it, in a lot of open DAB projects everything is completely coupled to the IQ samples, and the only way to work in a client-server model is to use the RAW data as an intermediate format. However, this format consumes a lot of bandwidth, and the SDR processing consumes a lot of CPU. Why not decouple the demodulator and the decoder? Using ETI-NI as an intermediate format, the demodulator focuses on signal processing and the decoder on bitstream processing. Then any client-server model will use only 2Mbps of bandwidth.

From my point of view, that's a lot of advantages. :smile:

lars18th commented 6 years ago

Hi @mpbraendli ,

dab2eti sends out ETI because it demodulates a signal, from which the 24ms rate for the ETI frames is naturally derived. With no signal, you need to have something that looks at the time and sends an empty ETI frame out after the correct delay.

Using an SDR input, for example with RTL_TCP... don't you lose sync sometimes? I see (hear) glitches in the player a lot of the time... and the reason is that the RTL dongle has trouble delivering the samples in time (mainly because of the USB protocol, not the internal hardware clock). So the net result is a jitter problem. The solution, then, is to use an internal clock that tries to sync with the dongle clock. And how do you do that if the communication is blocking? Impossible! You need to read asynchronously and use an internal clock... doing some resync when needed. This is more or less how an A/V MPEG decoder works.

So my proposal is to encapsulate all of this in the demodulator and generate a CBR bitstream in ETI-NI format. Then any DAB player can use this source... independently of whether the source is RF-demodulated, software-generated, from a network source, from a satellite downlink, etc.

And to achieve this, if you read the samples asynchronously and use an internal clock, then you can do it. Every 24ms you need to process a DAB frame and generate the corresponding ETI-NI frame, and you only need to take care of resyncing with the incoming samples from time to time.

I know this can sound "new" to DAB SDR people. But it isn't for DTV people.

basicmaster commented 6 years ago

@lars18th:

Because if you decouple the demodulator from the decoder, this makes sense.

It of course makes sense to separate demodulation and decoding, as already said.

Done this way, you can guarantee that the ETI-NI bitstream has a constant bitrate of exactly 2Mbps. If this is true, then the decoder can use the "clock" of the demodulator.

It is a bad idea to rely on the CBR here on the receiver side. How can you be sure that every ETI frame is received exactly when it is expected to be received (e.g. WiFi, as you said)? So you anyhow need your own clock on the receiver side. This is actually what DABlin does, and without it, it would also not be able to replay ETI dumps.

DAB radio people don't like to hear about DTV

Why not?

Any DTV demodulator will output an MPEG-TS with a fixed bitrate. And this bitstream can be decoded by any decoder or process. Even if it has errors or incomplete data.

When using hardware, this might be a requirement, but we are in the software domain here and, as said, the transmission path does not guarantee that fixed bitrate.

Furthermore you cannot seriously put DAB on a level with ~~DTV~~ DVB in terms of recovery from discontinuities:

lars18th commented 6 years ago

Hi @basicmaster ,

I'm glad to discuss this with you! I'm not a DAB professional, and your responses are very interesting. Let me raise some questions with you to see if I can learn something. :smile:

Done this way, you can guarantee that the ETI-NI bitstream has a constant bitrate of exactly 2Mbps. If this is true, then the decoder can use the "clock" of the demodulator.

It is a bad idea to rely on the CBR here on the receiver side. How can you be sure that every ETI frame is received exactly when it is expected to be received (e.g. WiFi, as you said)? So you anyhow need your own clock on the receiver side. This is actually what DABlin does, and without it, it would also not be able to replay ETI dumps.

As I understand it after reading the specification of the ETI protocol, the fixed bitrate is a requirement. Why then is it good to break this? Furthermore, the specification indicates that each frame corresponds to 24ms of sound; but it doesn't impose that the transport protocol delivers packets in sync. This is the reason for using the FCT and the FSYNC. I suggest re-reading section "8. Network Adaptation for G.704 networks". This explains how and why ETI bitstreams need to meet the CBR requirement.

As a simple explanation: if each ETI frame exists and has a duration of 24ms, then the decoder can decode using the clock of the sender, even if the frames are transmitted asynchronously and with jitter. You only need to use a buffer of N frames. No more, no less.
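That buffer-of-N-frames idea can be sketched like this (the class name and the prefill default are my own; it just illustrates trading prefill latency, N times 24ms, against jitter tolerance):

```python
from collections import deque

class JitterBuffer:
    """Prefill N frames before playback starts; afterwards the decoder pulls
    one frame per 24 ms tick of its own clock, and the buffer absorbs the
    arrival jitter of the (nominally CBR) ETI stream."""

    def __init__(self, prefill: int = 10):
        self.prefill = prefill   # frames to accumulate before playback
        self.frames = deque()
        self.playing = False

    def push(self, frame: bytes) -> None:
        """Called whenever a frame arrives from the network."""
        self.frames.append(frame)

    def pop(self):
        """Called once per 24 ms tick. Returns None while still buffering,
        or on underrun (the caller should then output silence)."""
        if not self.playing:
            if len(self.frames) < self.prefill:
                return None
            self.playing = True
        return self.frames.popleft() if self.frames else None
```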

Any DTV demodulator will output an MPEG-TS with a fixed bitrate. And this bitstream can be decoded by any decoder or process. Even if it has errors or incomplete data.

When using hardware, this might be a requirement, but we are in the software domain here and, as said, the transmission path does not guarantee that fixed bitrate.

Please don't forget that the transport can be done SYNC or ASYNC. Both scenarios are possible, and both are described in the standard. Assuming ASYNC for networks and/or software, and SYNC for hardware, is wrong.

If you do streaming, then you need to use a fixed bitrate or work with timestamps. So, if the objective is a tuner and a decoder decoupled in a client-server model... then you're doing realtime streaming. Right?

Furthermore you cannot seriously put DAB on a level with ~~DTV~~ DVB in terms of recovery from discontinuities:

In an MPEG-TS you have specific means for synchronisation (PCR and DTS/PTS) and an MPEG-TS packet itself does not correspond to a specific duration. So you have to wait until the mentioned fields are transmitted and for the respective PES packet to start again, to be able to resync and resume playback. In the case of an audio track this could e.g. take one second, which is definitely noticeable. In DAB, a DAB frame has the fixed length of 24ms. There are no timecodes and stuff; you can just resume decoding. Period.

Just as a comment: all DTV standards work the same way in this respect.

Regarding the timestamps in an MPEG-TS bitstream: they are required to "sync" between the multiple streams present in the stream. As digital radio DAB has only one stream per programme (the audio), this isn't required. However, the DAB specification has implicit timestamps. This is because ALL FRAMES HAVE A FIXED DURATION (in fact, 24ms), and for each such period only ONE FRAME can exist. And the FCT counter present in the ETI bitstream is this protocol's method of associating frames with time.

So, as a summary: ETI works because of the CBR. And if you meet this requirement, the decoupling of the demodulator and the decoder can be done without jitter troubles. Or at least this is my understanding. :smile:

So my suggestion, or idea, as you prefer, is to push in the direction of using ETI-NI as an intermediate interchange format between DAB tools. But to accomplish this, CBR is strictly required. Do you agree with this?
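As a small illustration of the "implicit timestamp" argument above: with a fixed 24ms frame duration and the FCT counting frames modulo 250, a receiver can place each frame in time and detect gaps from the counter alone (the function names are mine):

```python
FRAME_MS = 24     # fixed duration of one ETI logical frame
FCT_MODULO = 250  # FCT counts frames modulo 250, so the counter wraps every 6 s

def fct_to_offset_ms(fct: int) -> int:
    """Implicit timestamp: a frame's position within the 6-second FCT cycle."""
    return (fct % FCT_MODULO) * FRAME_MS

def frames_missing(prev_fct: int, cur_fct: int) -> int:
    """Number of frames lost between two consecutively received frames
    (0 when the stream is gapless), derived purely from the counter."""
    return (cur_fct - prev_fct - 1) % FCT_MODULO
```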

mpbraendli commented 6 years ago

You are correct in principle about the synchronous nature of an ETI stream. And ETI itself has been designed with synchronous networks in mind (G.703 is one example). We are however not using ETI on synchronous networks, and our usage of ETI falls outside of the specification for that reason. And also because we use it on the receive side.

I would like to bring the discussion away from considerations that are too theoretical, not because it's uninteresting, but rather because I'd like to reach a conclusion about what to do.

To summarise

I don't see what benefit we would get from enforcing a synchronous ETI stream.

Am I missing something?

lars18th commented 6 years ago

Hi @mpbraendli ,

Am I missing something?

Just one thing...

I don't see what benefit we would get from enforcing a synchronous ETI stream.

The benefit is that the DAB decoder (player) can maintain CLOCK SYNC with the ETI bitstream. So...

In case of lock loss, no ETI frames are generated by the demodulator

This needs to be reversed: "in case of signal loss or fatal errors, NULL ETI frames must be generated by the demodulator".

One example: the tool ODR-DabMod requires that no ETI-NI frames are missing. And regarding the NULL ETI frames, see section 5.9 on page 27 (Null transmissions): http://www.etsi.org/deliver/etsi_i_ets/300700_300799/300799/01_30_9733/ets_300799e01v.pdf

I hope this clarifies why such a configuration should be supported.

mpbraendli commented 6 years ago

Sorry @lars18th, but you are repeating yourself without giving me any convincing reason why it's necessary. As we said earlier, the decoder does not need to maintain clock sync. Our experience shows that the current ETI generation is usable for carrying an ensemble from a demodulator to a decoder, and enforcing synchronous generation makes the tools more complex for no benefit. I conclude that we don't need to implement a synchronous ETI output.

lars18th commented 6 years ago

I conclude that we don't need to implement a synchronous ETI output.

I'm sad I can't convince you. But maybe I'm completely wrong.

In my (short) experience, tools that use ETI-NI as input, like ODR-DabMod, will fail (exit or crash) if a frame is missing.

Our experience shows that the current ETI generation is usable for carrying an ensemble from a demodulator to a decoder

Can you please tell me what tools are doing this? This is just what I'm looking for.

In any case, thank you for your comments. I hope that in the future I'll be able to contribute a non-complex implementation that incorporates this SYNC output, if it has any benefits and is usable.

Regards. :smile:

mpbraendli commented 6 years ago

Can you please tell me what tools are doing this? This is just what I'm looking for.

dab2eti or eti-cmdline as demodulators, dablin as decoder.

In any case, thank you for your comments. I hope that in the future I'll be able to contribute a non-complex implementation that incorporates this SYNC output, if it has any benefits and is usable.

The important question is the benefit. We're all volunteers, and every feature adds complexity and incurs maintenance effort, so we don't want to add features for which the benefits are not clear.

lars18th commented 6 years ago

Hi @mpbraendli ,

I repeat, in case it was not clear: thank you very much for your comments! I just want to help and collaborate. :smile:

Can you please tell me what tools are doing this? This is just what I'm looking for.

dab2eti or eti-cmdline as demodulators, dablin as decoder.

These are the tools that I already know. And the main problem is that DABlin is the only current player that supports ETI-NI as input. My goal is to try to standardize this in some way (I mean, between open source applications).

andimik commented 6 years ago

You can always feed other tools with IQ samples

lars18th commented 6 years ago

Hi @andimik ,

With all due respect!

You can always feed other tools with IQ samples

And regarding ETI:

So, if I'd like to have a central DAB demodulator (like dab2eti or eti-cmdline) that does the SDR processing and generates the ETI bitstream...

I understand that many of you do not see the reason for this decoupling. But it makes a lot of sense to me. So I'll continue working in this direction.

Regards.

andimik commented 6 years ago

I guess you still don't understand the difference between consumer and broadcaster.

lars18th commented 6 years ago

Hi @andimik ,

I guess you still don't understand the difference between consumer and broadcaster.

Very possibly. Can you explain it to me, please? :smile: