This is quite interesting, as I've tested it with a digital logic probe (USB based) and it captures the high/low transitions as symmetrical.
I haven't seen issues with any decoders either, which is interesting considering the data you have above. I'll do some further testing on this and confirm from the development branch (which should have much tighter tolerances on the high/low transitions).
Also worth noting: with the RMT there is one sacrificial bit (a one bit) added to the end of each packet, since the signal is sent as HIGH then LOW and the RMT has been shown to corrupt the last half-wave at times without this extra bit (which is ignored by the decoder as noise between packets).
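Roughly, the idea looks like this when building the RMT symbol buffer (a simplified sketch of my own, assuming the legacy ESP-IDF RMT driver with the channel clocked so 1 tick = 1 usec; these names are not the project's actual code):

```cpp
#include "driver/rmt.h"
#include <cstddef>

static constexpr uint16_t DCC_ONE_HALF_US  = 58;  // half-wave of a one bit
static constexpr uint16_t DCC_ZERO_HALF_US = 96;  // half-wave of a zero bit

// One DCC bit = HIGH half-wave followed by LOW half-wave, which maps
// directly onto a single RMT item.
static inline rmt_item32_t dcc_bit(bool one)
{
  rmt_item32_t item;
  const uint16_t half = one ? DCC_ONE_HALF_US : DCC_ZERO_HALF_US;
  item.level0 = 1; item.duration0 = half;
  item.level1 = 0; item.duration1 = half;
  return item;
}

// Encode packet bits into `items`, then append one extra (sacrificial) one
// bit so that any corruption of the final half-wave lands on a bit the
// decoder discards as inter-packet noise. Returns the item count.
static size_t encode_packet(const bool *bits, size_t nbits, rmt_item32_t *items)
{
  size_t n = 0;
  for (size_t i = 0; i < nbits; i++)
    items[n++] = dcc_bit(bits[i]);
  items[n++] = dcc_bit(true);  // sacrificial trailing bit
  return n;
}
```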
The data shown is from yesterday's dev branch. I just looked through the issues in this repo, and I will test with version 1.1.1 as well tomorrow; maybe it's the RMT?
I have tested my tester with a 10 kHz square wave and confirmed that the tester is OK. Thank you for your quick response.
Check with release v1.2.3 (last stable), and if you can, capture the full packet stream (look for the 22 one-bit preamble) and check whether the "corrupt" bit is indeed between packets or in the packet itself.
I tested 1.1.1: the decoder works in both polarities, but I can't acquire a loco from the mobile interface, only from desktop. No corruption happens in packets with 22 preamble ones when testing the timing. Also, powering the districts on results in straight DC on the OPS and PROG tracks.
With 1.2.3: the decoder works in both polarities, but I can't set speed from the mobile interface, only from desktop. No corruption happens in the packets with 22 preamble ones, but the timing jitter is noticeably higher than with 1.1.1. This results in decoders responding correctly to a read of CV1 only 1 out of 3 times (I have a spare decoder with a dummy load of LEDs instead of a motor). Also, powering the districts on results in straight DC on the OPS and PROG tracks.
Image shows timings from 1.2.3:
I used an ESP8266 as the 10 kHz test generator and another ESP8266 for the timing analysis. 99.9% of the samples were within 5 microseconds, with 90% within 1 microsecond accuracy.
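For reference, the generator side can be as simple as this (ESP8266 Arduino core; the pin choice here is an assumption, not necessarily what I wired):

```cpp
#include <Arduino.h>

// Produce a 10 kHz reference square wave using the core's PWM.
void setup()
{
  pinMode(D5, OUTPUT);
  analogWriteFreq(10000);   // PWM base frequency: 10 kHz
  analogWriteRange(1023);
  analogWrite(D5, 512);     // ~50% duty cycle -> symmetric square wave
}

void loop() {}
```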
Shall I plot histograms of the durations of the high and low DCC signals coming from each version, with a large chunk of data off the PROG track?
At this point the only version that is critical is the one on the development branch. It is interesting that there was that much deviation from v1.1.1 to v1.2.3, since they use the same code for signal generation (a hardware timer); about the only thing I can think of is changes in the framework layer for digitalWrite, but I doubt that is significant for this test. On the development branch it has switched to the RMT, which has dedicated circuitry to generate square wave signals.
As for powering districts and seeing raw DC voltage, that is a sign of the timer not being triggered. Are you sure you are not using v1.3.0 (master), which has a known issue that is part of why I shifted to the RMT?
I downloaded the code from the GitHub website by selecting the 1.2.3 tag in the dropdown.
Ok, I will do some more in-depth timing testing on the develop branch, as there is no point in forking old things.
Thank you!
Thanks, I am curious about your findings, since this should be pretty consistent with the RMT generating a 96 usec half-wave for zero and 58 usec for one. Per spec the min/max for zero is 95/9900 usec (zero stretching) and for one is 52/64 usec. You can see the timing used on the development branch here.
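Those acceptance windows translate into something like this trivial classifier for captured half-waves (numbers taken from this thread; a sketch, not the project's code):

```cpp
#include <cstdint>

enum class DccHalfWave { One, Zero, Invalid };

// Classify a measured half-wave duration against the min/max windows
// quoted above: one = 52-64 usec, zero = 95-9900 usec (zero stretching).
DccHalfWave classify_half_wave(uint32_t us)
{
  if (us >= 52 && us <= 64)   return DccHalfWave::One;
  if (us >= 95 && us <= 9900) return DccHalfWave::Zero;
  return DccHalfWave::Invalid;
}
```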
Since I don't have an extra ESP8266 with more than a couple of pins free (ESP-01, only 2 GPIO pins IIRC), I'll be exploring setting up an ESP32 for signal capture. I will be using a slightly modified version of the OpenMRN DccDecoder code; it will need a few adjustments in DccDecodeFlow since it is currently set up for Tiva devices (line 390 as an example) but should work to decode the packets. Example usage: https://github.com/bakerstu/openmrn/blob/master/applications/dcc_decoder/main.cxx
I used this code on an Arduino Nano https://gist.github.com/Beherith/c26fc758f54f43f4029f0fa94655cb33 to monitor the DCC, using the DCC library https://github.com/MynaBay/DCC_Decoder/tree/master/examples/DCC_Monitor because it has a good monitor with error detection. I only tried it so far on the development branch, but without much success; I'll try it tomorrow on 1.1.1 and 1.2.3.
1.2.3 was the most stable in terms of timing:
Can you explain the graph? It is interesting that there is a noticeable dip in the middle of the v1.1.1 and master curves, but I'm not sure what they mean.
Comparing v1.1.1 to v1.2.3 signal generation code: v1.1.1 used two timers (one for the full wave and a second to flip polarity on the pin); v1.2.3 combined this into a single timer (more efficient usage of timers) tracking only the half wave. On "master" this was shifted to the RMT, which controls polarity and timing based on the input data; the downside for "master" is that it ran inside a FreeRTOS task that fed the RMT one packet at a time, waiting for the RMT to complete before pushing the next block to it. On the development branch the task is dropped in favor of an ISR hook on RMT TX complete (end of packet), which feeds the next packet to the RMT from within the ISR and should result in more accurate timing. The only downside I know of with the RMT is the need for a "corruption" bit at the end of the packet, which results in a stretched one bit (it could easily be switched to a zero, which should still be ignored by the decoders; change https://github.com/atanisoft/ESP32CommandStation/blob/master/src/DCC/DCCSignalGenerator_RMT.cpp#L78 from DCC_ONE_BIT to DCC_ZERO_BIT).
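A rough sketch of that ISR-driven refill, using the legacy ESP-IDF driver's rmt_register_tx_end_callback hook (the buffering and names here are mine, not the project's actual code):

```cpp
#include "driver/rmt.h"

// Next packet, already encoded into RMT items by the packet scheduler
// (illustrative; the real code dequeues the next scheduled packet here).
static rmt_item32_t next_items[64];
static uint16_t next_count = 0;

// Runs inside the RMT ISR at TX complete (end of packet): copy the next
// packet into the channel's RAM block and restart transmission, leaving
// no task-scheduling gap between packets.
static void IRAM_ATTR on_tx_end(rmt_channel_t channel, void *arg)
{
  rmt_fill_tx_items(channel, next_items, next_count, 0);
  rmt_tx_start(channel, true);
}

void install_refill_hook()
{
  rmt_register_tx_end_callback(on_tx_end, nullptr);
}
```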
The X axis is the microsecond count; the Y axis is the relative number of times a pulse of that length (high and low pulses pooled) was observed. E.g. the master line's 0.2 value at 57 usec means that of all the ~32k pulses, 20% of them were observed as 57 usec long.
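In other words, the plot is a normalized histogram with 1 usec bins, accumulated roughly like this (a sketch of the binning, not my actual analysis code):

```cpp
#include <cstdint>
#include <cstdio>

// Pulse widths 0..199 usec; anything longer is ignored for this plot.
static uint32_t bins[200] = {0};
static uint32_t total = 0;

void record_pulse(uint32_t width_us)
{
  if (width_us < 200) { bins[width_us]++; total++; }
}

// Print the relative frequency per 1 usec bin (the Y values in the plot).
void dump_relative()
{
  for (int us = 0; us < 200; us++)
    if (bins[us] > 0)
      std::printf("%d us: %.3f\n", us, static_cast<double>(bins[us]) / total);
}
```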
Should I separate high and low pulses to isolate the effects of the timers?
That is interesting. I don't think separating by timer will make much difference, since the timers are not used with the RMT (master code).
Ok, I will investigate the timings and the exact nature of the signal asymmetry on the develop branch, and whether they affect packets. Jitter shouldn't be an issue, as the DCC protocol should be robust enough.
Thank you!
On the dev branch I haven't fully tested the PROG track interface, but it should work. If you face issues, I would switch to the OPS output instead and activate a loco with a known/set speed so you can detect it in the packet stream intermixed with idle packets. I would recommend leaving RailCom disabled for your tests, as it is not stable and needs to be reworked a bit more.
@Beherith Can you retest this using the v1.5.0-alpha1 build?
I haven't been able to reproduce a bit-time spread greater than 2 usec in any recent version that uses the RMT to generate the DCC signal.
Dear @atanisoft, you have done a lovely job, but I have run into a strange issue. I started off on the master branch of your repo and noticed that my DCC locos only responded to DCC commands when put on the tracks in one direction. This led me to believe there is some asymmetry in the code.
I created a small testing board with an ESP8266 that counts the high and low times of DCC one and zero bits, and was surprised to see that the signal is indeed not symmetric. I then set up ESP-IDF and compiled your latest development branch, only to see the same thing happen.
Here is the tester code: https://gist.github.com/Beherith/9d090f64437d6ca721a76e70a097a664
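The gist does the real measurement; the core idea is roughly the following (a simplified Arduino-style version, with the pin choice assumed):

```cpp
#include <Arduino.h>

static const uint8_t DCC_PIN = D5;  // assumption, see gist for actual wiring
static volatile uint32_t last_edge_us = 0;
static volatile uint32_t high_us = 0, low_us = 0;

// Time each half-wave with micros() from a CHANGE interrupt.
void IRAM_ATTR onEdge()
{
  uint32_t now = micros();
  uint32_t width = now - last_edge_us;
  last_edge_us = now;
  // The pin just changed, so `width` is the duration of the level that
  // ended: pin now LOW means a HIGH half-wave just finished.
  if (digitalRead(DCC_PIN) == LOW) high_us = width;
  else                             low_us  = width;
}

void setup()
{
  Serial.begin(115200);
  pinMode(DCC_PIN, INPUT);
  attachInterrupt(digitalPinToInterrupt(DCC_PIN), onEdge, CHANGE);
}

void loop()
{
  // Periodically report the most recent half-wave durations.
  Serial.printf("high: %u us, low: %u us\n", high_us, low_us);
  delay(500);
}
```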
The output, after jamming it into Excel, looks like the following:
The above error happens on every DCC packet.
I have an ESP32 board with an Arduino motor shield doing the driving.