Open firesign opened 6 years ago
Doesn’t a long press of F5 do this? That’s what I have been doing.
F5 only disables Transmit, but it doesn’t decode the operator’s own CW input—at least, that’s the way my mcHF is configured.
Hi, currently the operator's CW is only decoded if the internal keyer is being used, i.e. not when using straight key mode (or an external keyer). For straight key, the RX CW decoder could be used; however, right now it works only for received audio. It would be an extension to accept input from the TRX operator in straight key mode as well. Doable, but not implemented.
73 Danilo
Having gotten my first PR merged I might work on this next.
I might be wrong about this, but it feels like there's an issue with decoding using the internal keyer too if your iambic paddle technique is not perfect. E.g. two dots can decode as "E E" instead of "I", or two dashes decode as "T T" rather than "M", if you press the paddle twice rather than holding it - even if it seemed like you got the resulting timing correct. Mostly noticed with other characters where you have to go back and forth between paddles, e.g. "P" coming out as "W E" etc.
It would be nice if the decoding just went through the same code for both RX and operator input. Any pointers/thoughts on code structure affecting that, before I dive in, would be welcome.
Colour coding the decoded text separately for RX and TX, as mentioned in #1706 for RTTY, might be nice too.
I never have had decoding problems in TX. But as Danilo already stated: if internal decoder is used also input from straight key can be decoded... I give you a GO for implementing this.
Hi, the iambic keyer decoding is always correct by design (if implemented correctly, that is). If you generate dit-space-dit-long space it is "i"; if you generate dit-long space-dit-long space it is "e e".
It will show you what your input, as interpreted by the keyer code, is AND what the TRX is sending out, i.e. what someone else will decode. So I don't see a need to change the decoding approach. And I would even guess that our decoder for received signals would probably decode the same in most cases (at least I hope so).
What I think would make sense: we should connect the straight key to the internal decoder for received signals. Here we could leave out the actual decoding of the audio waveform into a Morse on/off signal, since we know perfectly well when there is a signal and when not - no need to get that from audio decoding. But on the other hand, it should be very straightforward to feed in the generated audio and see what happens...
And of course, then you could let both approaches run and see what happens. Nevertheless, I think the iambic keyer should use its internal knowledge about the signals generated to provide the morse character.
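As a sketch of that idea - the keyer building the character directly from the elements it knows it sent, rather than decoding audio - something like the following could work. All names here are illustrative, not from the UHSDR source, and the lookup table is truncated for the example:

```c
#include <string.h>

/* Hypothetical sketch: the iambic keyer already knows each element it
 * generates, so TX-side decoding can skip audio detection entirely.
 * Elements are accumulated into a pattern string and looked up once
 * the inter-character gap elapses. */

#define MAX_ELEMENTS 8

typedef struct {
    char pattern[MAX_ELEMENTS + 1];  /* e.g. ".-" for 'A' */
    int  len;
} KeyerDecoder;

/* Called by the keyer each time it finishes sending a '.' or '-'. */
static void keyer_element_sent(KeyerDecoder *d, char element)
{
    if (d->len < MAX_ELEMENTS) {
        d->pattern[d->len++] = element;
        d->pattern[d->len] = '\0';
    }
}

/* Called once both paddles have been idle longer than the character gap.
 * Returns the decoded character, or '?' if the pattern is unknown. */
static char keyer_character_done(KeyerDecoder *d)
{
    /* Tiny table for illustration; a real one covers all of Morse. */
    static const struct { const char *p; char c; } table[] = {
        { ".", 'E' }, { "..", 'I' }, { "-", 'T' }, { "--", 'M' },
        { ".--.", 'P' }, { ".--", 'W' },
    };
    char result = '?';
    for (unsigned i = 0; i < sizeof table / sizeof table[0]; i++) {
        if (strcmp(table[i].p, d->pattern) == 0) {
            result = table[i].c;
            break;
        }
    }
    d->len = 0;
    d->pattern[0] = '\0';
    return result;
}
```

With this structure, two dits reported before the gap elapses decode as "I", while a gap between them produces two separate "E" characters - exactly the distinction discussed above.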
@db4ple OK, but what threshold is used to distinguish short space from long space by the keyer? i.e. if you were to do a test where you pressed the dit paddle twice and varied the gap between presses, at what point would it switch from decoding "I" to "E E"? At exactly one dit time between presses?
Would you mind if I answer...
As far as I know, there are hardcoded thresholds (aka counters) in a state machine which is run by the audio IRQ - called every 66us, if I'm correct about the number. That's why you see this behaviour...
This state machine interprets your paddle manipulation into a "hash" in the same format the decoding algorithm uses.
I don't think this is actually a problem... it just shows the errors in your keying...
It is 660uS, but in general this is right. Essentially, if I understand the algorithm correctly, whenever both paddles are not pressed for longer than a space time (based on the WPM setting), the character is considered to be finished, and subsequently "decoded". So either you make your next "move" within a space time (which is always directly based on the WPM) after the last dit or dah stopped, or the character is "over". I would not consider this a general issue. But I am not at all into CW, so using the CW keyer for anything but "playing" a little with it is not my area of experience.
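The WPM-to-timing relationship behind all of this follows the standard PARIS convention: a dit lasts 1200 / WPM milliseconds. A minimal sketch of the "character over" test described here, with the idle threshold left as a parameter since its exact value is what the rest of the thread debates:

```c
/* Standard PARIS timing: a dit lasts 1200 / WPM milliseconds,
 * so e.g. 20 WPM gives a 60 ms dit. */
static unsigned dit_time_ms(unsigned wpm)
{
    return 1200u / wpm;
}

/* A character is considered finished once both paddles have been idle
 * longer than some threshold, expressed here in dit units so the same
 * test works whether that threshold turns out to be 1 dit or 3 dits. */
static int character_finished(unsigned idle_ms, unsigned wpm,
                              unsigned threshold_dits)
{
    return idle_ms > threshold_dits * dit_time_ms(wpm);
}
```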
So either you make your next "move" within a space time (which is always directly based on the wpm) after the last dit or dah stopped, or it is character "over".
So your answer to my question of what the threshold is, then, is a space time, which I understand to be three times the dit time.
It was feeling to me like the threshold being used by the keyer/decoder was a lot less than this and maybe closer to one dit time. But I will have to slow the wpm right down and test it properly.
Well, I was not describing it correctly: the time I was writing about is in fact called "pause_time" (which is equal to a dit minus a few milliseconds); the "space_time" is in fact six times as long. You can check this out for yourself:
https://github.com/df8oe/UHSDR/blob/855f00b20104ee5f17ac79f546054721caa8a5ce/mchf-eclipse/drivers/audio/cw/cw_gen.c#L389 sets the times.
But still, everything else is correct, you have to make your next move within one "pause_time".
The numbers in that function seem wrong to me. I thought the pause between characters was supposed to be 3 times the dit time and a space between words is 7 times the dit time.
But in that function the pause is set to the same as the dit time, and the space to 6 dit times. Which agrees with what I had "felt" to be the case - that you needed to make your next move within one dit time for the keyer to recognise a continuing character.
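To make the two timing schemes concrete, here is a hedged sketch (illustrative names, not UHSDR code) that classifies an inter-element gap, measured in dit units, under either scheme: the textbook 3/7 spacing, or the 1/6 values reported above from cw_gen.c:

```c
typedef enum { GAP_ELEMENT, GAP_CHARACTER, GAP_WORD } GapKind;

/* gap_dits: observed gap length in dit units.
 * char_gap / word_gap: thresholds in dit units.
 * Textbook Morse spacing uses 3 and 7; the thread reports that
 * cw_gen.c effectively uses 1 and 6. */
static GapKind classify_gap(unsigned gap_dits,
                            unsigned char_gap, unsigned word_gap)
{
    if (gap_dits < char_gap) return GAP_ELEMENT;   /* character continues */
    if (gap_dits < word_gap) return GAP_CHARACTER; /* character finished */
    return GAP_WORD;                               /* word finished */
}
```

With a 2-dit gap between two dits, the textbook thresholds keep the character going (decoding "I"), while the reported 1/6 thresholds end it (decoding "E E") - matching the behaviour described above.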
I have tested, but I am unable to release the paddle and press it again within one dit, so I was never able to key an "i" that way. It always results in "e e" - both by ear and by software decoding. So I myself do not see any problems when using the internal keyer. But using a straight key is quite a different matter.
But nevertheless I would like to test what is decoded when CW text is decoded "old style" versus using the RX CW decoder - for testing and debugging, at first only enabled in the debug menu.
I started looking at this again - work in progress here: https://github.com/martinling/UHSDR/pull/1
It's easy enough to send operator input through the RX CW decoder in straight key mode, by feeding the decoder the signal that's generated for the sidetone audio.
However, the decoder gets understandably confused, because in between dit/dahs the radio switches back to RX. So the decoding only works if you turn the CW TX->RX delay up high enough that the radio stays in TX mode throughout your message.
There needs to be some thought about what to do here. We do want to go back to decoding the RX audio at the end of the operator's transmission, but at any given instant when the key is released, we have no idea if the operator is finished yet. And the decoder isn't going to be happy about rapidly switching between two operators' signals with different characteristics.
So I think ideally we'd have two copies of the CW decoder state - one for TX and one for RX. When the radio is in RX, the TX decoder would effectively be fed with silence, and then when in TX, the RX decoder would be fed with silence. [Of course, ideally you don't actually waste cycles processing silence, and just update the decoder states accordingly at mode switch].
That would need some refactoring of the decoder to replace the global state, though.
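A rough sketch of that dual-state idea, under the stated assumptions (all names here are hypothetical, not the actual UHSDR decoder structures): keep one decoder state for RX and one for TX, and on each RX/TX switch credit the elapsed interval to the decoder that was inactive as silence, instead of feeding it silent samples:

```c
/* Hypothetical dual CW decoder state: one instance per direction. */
typedef struct {
    unsigned silence_ms;   /* accumulated key-up / no-signal time */
    int      key_down;     /* current on/off state seen by this decoder */
    /* ... real decoder state would follow (timing stats, pattern buffer) */
} CwDecoderState;

typedef struct {
    CwDecoderState rx;
    CwDecoderState tx;
    int transmitting;      /* which decoder is currently fed samples */
} CwDecoders;

/* Called on each RX<->TX switch. elapsed_ms is the time spent in the
 * mode we are leaving; the decoder that was idle during that interval
 * gets it credited as silence, so no cycles are wasted processing
 * silent samples. */
static void cw_decoders_switch(CwDecoders *d, int to_tx, unsigned elapsed_ms)
{
    CwDecoderState *idle = to_tx ? &d->tx : &d->rx;  /* idle until now */
    idle->silence_ms += elapsed_ms;
    idle->key_down = 0;
    d->transmitting = to_tx;
}
```

The point of the design is that each decoder only ever sees one operator's signal plus well-defined stretches of silence, so neither has to re-adapt to a different fist or speed mid-character.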
CW Code Practice Function
Since we have a CW decoder, can we add a function to enable the operator to key morse into the radio (with transmit turned off), and the radio then decodes and displays it?