Closed laserir closed 6 years ago
Thanks for the report. You seem to indicate you are trying to capture a "RC5-like" protocol. I'm assuming that this is NOT an actual RC5 device etc, but something producing a modified but similar protocol. Correct?
Can you provide us with:
a) What the devices are exactly? Please include the device type, make and model numbers etc. (e.g. FooBar brand TV, model AB-C1234, circa 1984)
b) Some "raw" captures/output of the IRrecvDumpV2.ino program. e.g. messages that work, and ones that don't etc. This will enable us to see what is going on with the protocol and potentially develop a decoder/sender for it.
c) Any other possibly useful info.
Without that, we really can't help you well.
My initial guess, without seeing any data: if you've had to lower the interval length to 333 (from 889), then some of the matching routines will have very little to no error margin to match on, as the margin is calculated as a percentage of the expected pulse "length". You could try tuning the TOLERANCE value to correct for that. Note: another cause of unrepeatable/unreliable message captures is a transmission modulation frequency that doesn't match the receiving module used.
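To illustrate the point, here is a rough, hypothetical sketch (not the library's actual routine) of percentage-based pulse matching, showing why the absolute margin shrinks along with the nominal pulse length:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical simplified sketch of percentage-based pulse matching.
// The error margin is a percentage of the expected pulse length, so
// shorter pulses get a smaller absolute margin to absorb jitter.
bool matchTicks(uint32_t measured, uint32_t desired,
                uint32_t tolerance_percent = 25) {
  uint32_t margin = desired * tolerance_percent / 100;
  return measured >= desired - margin && measured <= desired + margin;
}
```

With RC5's 889 usec tick, 25% tolerance gives roughly a +/- 222 usec window; at 333 usec it shrinks to about +/- 83 usec, which the jittery captures in this thread can easily overrun.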
For some background, I think this issue distils what is going on to a degree. Correct?
I was just going to post this link ;) Yes, and on different hardware (Nano, A.IR Shield Nano) and the IRRemote-library it works perfectly. (The changes I made are in the last post.)
The frequency of the signal is 36 kHz, but the receiver is 100% working for this. (Same hardware used as on the working shield)
Raising TOLERANCE results in unreliable values. I have tried various combinations of TOLERANCE, correction, etc.
Again, all NEC commands work! For research I recorded a signal from the device (these are lasertag phasers emitting IR codes...) with a USB IR Toy and played back the exact same signal one after another. Sometimes it is recognized, sometimes not. Strange!
When you look at the raw data in the link above you'll see there is a relatively big variation in the signals.
@laserir I spent a good several hours playing with the protocol and using some of your data. If you try the lasertag-dev branch I just uploaded, it should probably work for you.
Oh boy, you sure are correct about the data being crap. There are timing issues galore. I spent way, way too much time tweaking parameters to try to get it to match everything consistently. e.g. your "Unit 13" data from the other issue, nope .. not a chance. But most of everything else is detected.
Let me know how it goes, and please update us with what the devices/model numbers are etc. So we can document what this crappy protocol is really for. I just called it Lasertag, because that's what it sounded like. ;-)
Wow you are super fast :-O
Unfortunately none of the signals get recognized. (Tried with recorded signals as well as real devices)
In DEBUG mode I get: DEBUG: getRClevel: Unexpected width. Exiting.
Adapted the parameters to those that work fine with the Nano/irremote:
#define LASERTAG_TICK 333U
#define LASERTAG_MIN_GAP 5000UL
#define LASERTAG_TOLERANCE 25U
#define LASERTAG_EXCESS 0U
With both your and "my" parameters... no luck.
The protocol is a common proprietary communication protocol from many commercial lasertag systems, such as laserchaser and various others.
Can you include the output from the DumpV2 program here please? (and please quote the output as code) i.e. https://help.github.com/articles/creating-and-highlighting-code-blocks/
That will help me work out what's going on.
The unexpected width is a common problem I had when using the data you supplied in the other issue. As I said, the timings are woeful. It's not made any better because they chose a short T1/TICK time etc. Anyway, it is what it is, and to be honest, the library's based on a system that really doesn't suit capturing then decoding this type of signal.
We'd almost be far better off decoding on the fly. e.g.
loop until we have 15 bits of data {
  is the signal from the module HIGH?
    wait 1/3 tick time.
    if module is HIGH
      wait 1/3 tick time.
      if module is HIGH
        wait 1/3 tick time.
        if module is LOW
          wait 1/3 tick time.
          if module is LOW
            wait 1/3 tick time.
            if module is LOW
              add a '0' bit to the data.
  is the signal from the module LOW?
    wait 1/3 tick time.
    if module is LOW
      wait 1/3 tick time.
      if module is LOW
        wait 1/3 tick time.
        if module is HIGH
          wait 1/3 tick time.
          if module is HIGH
            wait 1/3 tick time.
            if module is HIGH
              add a '1' bit to the data.
}
etc.
You will find attached 3 signals:
Signal 1: Sent with a Nano and your sendLasertag routine (ported over to the IRremote library). Value: 0x1, which translates to Unit "RED 1".
Signals 2 & 3: Two different actual captures from Unit "RED 2" (Value: 0x2), captured and played back with a USB IR Toy.
Strange thing is that the artificial signal (Signal 1) always gets decoded, but the playback of Signal 2 or 3 sometimes decodes fine (as I said about 30% of the cases) and sometimes doesn't even return raw output with dumpV2. So it's either a decoded signal or nothing...
Here is the dumpV2 output: Signal 1:
Encoding : LASERTAG
Code : 1 (13 bits)
Timing[25]:
+ 398, - 280, + 408, - 272, + 394, - 300, + 392, - 254,
+ 410, - 270, + 396, - 296, + 392, - 272, + 396, - 298,
+ 392, - 296, + 368, - 300, + 392, - 298, + 368, - 634,
+ 392
uint16_t rawData[25] = {398, 280, 408, 272, 394, 300, 392, 254, 410, 270, 396, 296, 392, 272, 396, 298, 392, 296, 368, 300, 392, 298, 368, 634, 392}; // LASERTAG 1
uint32_t address = 0x1;
uint32_t command = 0x0;
uint64_t data = 0x1;
Signal 2:
Encoding : LASERTAG
Code : 2 (13 bits)
Timing[23]:
+ 380, - 292, + 380, - 378, + 302, - 306, + 354, - 318,
+ 302, - 392, + 328, - 300, + 354, - 340, + 328, - 386,
+ 274, - 310, + 380, - 292, + 354, - 638, + 696
uint16_t rawData[23] = {380, 292, 380, 378, 302, 306, 354, 318, 302, 392, 328, 300, 354, 340, 328, 386, 274, 310, 380, 292, 354, 638, 696}; // LASERTAG 2
uint32_t address = 0x2;
uint32_t command = 0x0;
uint64_t data = 0x2;
Signal 3:
Encoding : LASERTAG
Code : 2 (13 bits)
Timing[23]:
+ 272, - 358, + 378, - 272, + 380, - 332, + 358, - 272,
+ 384, - 288, + 332, - 362, + 306, - 324, + 358, - 294,
+ 358, - 358, + 302, - 326, + 328, - 642, + 726
uint16_t rawData[23] = {272, 358, 378, 272, 380, 332, 358, 272, 384, 288, 332, 362, 306, 324, 358, 294, 358, 358, 302, 326, 328, 642, 726}; // LASERTAG 2
uint32_t address = 0x2;
uint32_t command = 0x0;
uint64_t data = 0x2;
EDIT: I just tried all the signals again with the Nano/IRremote... 100% success. Maybe it's something about the ESP8266 timing?
Can you please elaborate in more detail what you mean by "Different actual captures from Unit "RED 2" (Value: 0x2) Capture and playback with USB IR Toy.".
Are you saying "I'm pressing the trigger/button on the toy, pointed to the IR module on the ESP8266"? or are you saying "I'm playing back a (raw) capture from some other device to the IR module on the ESP8266"?
I'm going to assume the first case in this reply.
So, that being the case, it seems to be able to decode it fine ... when it actually sees the signal. Are there any "missed decodes", i.e. UNKNOWN messages reported? If something was transmitted and it couldn't be decoded, and it's over 6 on or off pulses (total .. e.g. 3 bits), it should display something. Have a look at the DumpV2 code, and if there is a threshold value, try lowering it. It probably isn't in that branch, so don't panic if it isn't. Given how poor some of the previous captures were, I find it really odd that there are no failed (UNKNOWN) messages.
I'm also going to assume that you're waiting a few seconds before pulling the trigger/pressing the button etc. i.e. the ESP isn't busy sending text to the screen while you are transmitting etc.
If that is all the case, you should be seeing something. Always. Successful decode or not. There is a small window while the library is trying to decode and print etc, that it could miss a subsequent message, but that's fairly slim.
Just checking, you are running at 115200 baud, right?
If all of that is okay, then something might be up with your hardware. i.e. the ESP8266 itself, or it might have a bad circuit/not trigger the GPIO properly. But, as you've said, NEC etc work fine. Hmmm, so the hardware should be working.
Are you using the exact same physical IR receiver module on every device? e.g. the Arduino & the ESP8266. You are using a 36kHz demodulator, right? Heck, while I'm asking, can you draw out the circuit diagram, with part numbers etc.?
Have you made any other changes to the library? Like disabling other protocols? (That could be hiding the UNKNOWN/failures etc too. e.g. decodeHash is what shows UNKNOWN (undecoded) messages.)
Oh, and if you have DEBUG turned on, that could be hindering capture in some cases.
EDIT: Oh, and what GPIO/Pin are you using, and what ESP8266 board is it?
What I meant was that I captured Signals 2 & 3 with a USB IR Toy, so I can be sure of having the exact same signal when changing parameters.
I wasn't aware that turning off DECODE_HASH also hides UNKNOWN messages. I turned it back on and fired the unit at the receiver. (Hardware data sheet: https://www.analysir.com/blog/wp-content/uploads/2017/08/Product-Sheet-A.IR-Shield-ESP8266_ESP32-TRx.pdf) But I'm pretty sure it's not a hardware issue (I tried different D1 minis and shields, and also have 100% success with the same components on the Nano and with various other protocols).
I let pass several seconds before the next "shot".
So here are the UNKNOWN captures (all of the same device, all should decode as 0x2):
Encoding : UNKNOWN
Code : 3F02BA8B (12 bits)
Timing[23]:
+ 406, - 262, + 384, - 374, + 256, - 354, + 306, - 366,
+ 252, - 442, + 256, - 374, + 358, - 336, + 278, - 438,
+ 246, - 340, + 380, - 292, + 304, - 688, + 746
uint16_t rawData[23] = {406, 262, 384, 374, 256, 354, 306, 366, 252, 442, 256, 374, 358, 336, 278, 438, 246, 340, 380, 292, 304, 688, 746}; // UNKNOWN 3F02BA8B
Encoding : UNKNOWN
Code : F7A10E98 (12 bits)
Timing[23]:
+ 302, - 306, + 302, - 392, + 196, - 476, + 278, - 352,
+ 304, - 348, + 278, - 438, + 226, - 382, + 328, - 366,
+ 252, - 458, + 196, - 392, + 302, - 688, + 644
uint16_t rawData[23] = {302, 306, 302, 392, 196, 476, 278, 352, 304, 348, 278, 438, 226, 382, 328, 366, 252, 458, 196, 392, 302, 688, 644}; // UNKNOWN F7A10E98
Encoding : UNKNOWN
Code : F3A73F23 (12 bits)
Timing[23]:
+ 196, - 432, + 304, - 348, + 328, - 386, + 304, - 326,
+ 302, - 370, + 252, - 442, + 272, - 356, + 278, - 374,
+ 276, - 438, + 274, - 352, + 302, - 668, + 622
uint16_t rawData[23] = {196, 432, 304, 348, 328, 386, 304, 326, 302, 370, 252, 442, 272, 356, 278, 374, 276, 438, 274, 352, 302, 668, 622}; // UNKNOWN F3A73F23
Encoding : UNKNOWN
Code : 66DDBEFA (12 bits)
Timing[23]:
+ 304, - 390, + 328, - 324, + 324, - 346, + 350, - 364,
+ 300, - 330, + 320, - 310, + 324, - 388, + 242, - 366,
+ 354, - 318, + 354, - 340, + 244, - 726, + 670
uint16_t rawData[23] = {304, 390, 328, 324, 324, 346, 350, 364, 300, 330, 320, 310, 324, 388, 242, 366, 354, 318, 354, 340, 244, 726, 670}; // UNKNOWN 66DDBEFA
All of them have 12 bits... Maybe it doesn't even try to decode, as LASERTAG_BITS is a constant defined as 13?
I've tried to receive the 0x2 code, send with your send routine:
irsend.sendLasertag(0x2, 13, 0);
Result: UNKNOWN, 12 bits
irsend.sendLasertag(0x1, 13, 0);
Result: LASERTAG, 13 bits, Value: 0x1
As you can see in the other issue, the number of bits seems to be different on some codes. (https://github.com/z3t0/Arduino-IRremote/issues/532#issuecomment-336461835)
Maybe it would be better to define a LASERTAG_MIN_BITS?! Just brainstorming...
The "bits" value, like the "code" value, in an UNKNOWN result is basically useless information. i.e. not to be trusted; it's a sort of made-up value for that "protocol". So, the 12 bits in that text is a complete red herring. Ignore it. (IIRC it is just the number of samples detected, divided by 2. Which is more useful than the old default, which was to say EVERY unknown message it found, regardless of size, was in fact 32 bits. ;-)
The library should be decoding irsend.sendLasertag(0x2, 13, 0);
correctly, as it is even in one of the unit tests I have: e.g.
https://github.com/markszabo/IRremoteESP8266/blob/lasertag-dev/test/ir_Lasertag_test.cpp#L110
It does produce 13 bits of data, when you take Manchester encoding into account.
A quick look over the UNKNOWN data you supplied shows me just how out of whack the timings are on those signals. In a perfect-ish world, they should all be values of ~333 or ~666, +/- 50 or so. Some are way too high, e.g. 438, or way too low, e.g. 196. The library tries to compensate for these things, but there are trade-offs. i.e. higher tolerance values cause bits in this and other protocols to be mistaken for each other. For instance, if a TOLERANCE of 50% is chosen, then 666 is matched by anything >= 333 and <= 999. That puts it overlapping with the 333 values (even before they get some tolerance fuzziness).
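To make the overlap concrete, here is a small illustration (hypothetical helpers, not library code) of how the two nominal Lasertag widths collide as percentage tolerance grows:

```cpp
#include <cassert>
#include <cstdint>

// Acceptance window for a nominal width under percentage tolerance.
struct Window { uint32_t lo, hi; };

Window window(uint32_t desired, uint32_t tolerance_percent) {
  uint32_t margin = desired * tolerance_percent / 100;
  return {desired - margin, desired + margin};
}

// Two windows overlap if each starts before the other ends.
bool overlaps(Window a, Window b) { return a.lo <= b.hi && b.lo <= a.hi; }
```

At 25% tolerance, 333 maps to roughly [250, 416] and 666 to [500, 832], which are safely apart; at 50% they become [167, 499] and [333, 999], so a pulse around 400 usec is ambiguous between the two.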
It's not the minimum size (bits) either, if you see here: https://github.com/markszabo/IRremoteESP8266/blob/lasertag-dev/test/ir_Lasertag_test.cpp#L83
The smallest size possible is 13 rawbuf slots. The first check in the decoder (here) is whether it is at least 13 in size. Note: the rawData samples you provided are 23 in size. 23 is larger than 13, i.e. that's not the issue.
Previously, I had a lot of debugging code in the decodeLasertag() routine, and I can honestly say it's failing to match due to poor input data. Getting it to work with that data as it is will take some dedicated/special handling, so we can't use a lot of the library's standard routines to do it.
You seem to indicate you've got Arduino code that matches it every time; can you point me to that code? It may have some ideas for matching it better than what I'm currently doing.
I made the following changes to the original IRremote library for 100% recognition of this protocol with an Arduino Nano & A.IR Nano Shield (same components (on the shield) as in this problem):
No correction factor:
int IRrecv::getRClevel (decode_results *results, int *offset, int *used, int t1)
{
(...)
//correction = (val == MARK) ? MARK_EXCESS : - MARK_EXCESS;
correction = 0;
(...)
}
Tick-Length:
#define MIN_RC5_SAMPLES 13
//#define RC5_T1 889
#define RC5_T1 333
SEND: No start bit, just data bits:
void IRsend::sendRC5 (unsigned long data, int nbits)
{
(...)
// Start
//mark(RC5_T1);
//space(RC5_T1);
//mark(RC5_T1);
(...)
}
DECODE: Get rid of checks:
bool IRrecv::decodeRC5 (decode_results *results)
{
(...)
//if (irparams.rawlen < MIN_RC5_SAMPLES + 2) return false ; //<<< accept any length
// Get start bits
//if (getRClevel(results, &offset, &used, RC5_T1) != MARK) return false ; //<<< no
//if (getRClevel(results, &offset, &used, RC5_T1) != SPACE) return false ; //<<< start
//if (getRClevel(results, &offset, &used, RC5_T1) != MARK) return false ; //<<< bit
(...)
}
All other data and parameters stay the same; no further changes to the code. Unlike what I described in the solution to the other issue, I found that TOLERANCE can stay at 25.
I've gone over the current Arduino library and ours line by line. They are functionally the same for the decoding part (except we have a lot more defensive code, none of which is triggering for this case etc., as far as I can tell). This leaves the quality of the data collected by the ESP8266 as the leading culprit.
Can you put the ESP and the Arduino side-by-side, and "record" (i.e. get the raw data) the exact same message simultaneously? i.e. So I can compare both platforms capture results to the same input. (Example: Put the ESP and Uno side-by-side, put the toy 1 meter away, press the trigger once. Cut and paste the raw results.)
That should help diagnose any potential hardware issues (which I doubt are there), and expose any timing differences between the platforms. e.g. It could be that the ESP8266's interrupts are lagging compared to the Arduino's and producing odd-sized timings because of that. Honestly, I expect them to be roughly the same, e.g. within a few tens of microseconds.
I can get my hands on the units in 2 days again and will capture the signal side-by-side!
I've got another suggestion for this (from Chris of AnalysIR.com), which I'd like to share and ask for your opinion on: the lengths (widths) of the MARKs and SPACEs vary a lot, but in theory they should match either 333 usec or 666 usec. The idea is to correct the length before calling the match() routine. This could prevent the overlapping tolerances...
So I could match... (+/- 150 usec)
uint16_t width = results->rawbuf[*offset];
if (width > 183 && width < 483) { width = 333; }
else if (width > 516 && width < 816) { width = 666; }
What do you think?
@laserir I had similar thoughts. I've re-tooled the matching routines to ignore excess and tolerances, and to use a fixed delta range (e.g. +/- value) only.
Care to try this branch? https://github.com/markszabo/IRremoteESP8266/tree/lasertag-experimental
I've added those four examples you listed, and it seems to match those, and the previous ones. So, with any luck that should solve your problem. However, I do really want to get to the bottom of why the timings may be different on the ESP vs the ATmega's, so if you do get the chance, a side-by-side test would be most helpful.
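For reference, the fixed-delta idea boils down to something like this (a sketch assuming the +/- 150 usec range discussed above, not the branch's exact code):

```cpp
#include <cassert>
#include <cstdint>

// Fixed absolute tolerance: accept a measured width if it is within
// +/- delta of the nominal width, regardless of how long the nominal
// width is. (Sketch only; the branch's actual routine may differ.)
bool matchDelta(uint32_t measured, uint32_t desired, uint32_t delta = 150) {
  return measured + delta >= desired && measured <= desired + delta;
}
```

Unlike percentage tolerance, a short 333 usec pulse and a long 666 usec pulse get the same +/- 150 usec margin, so their acceptance windows ([183, 483] and [516, 816]) never overlap.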
This seems to be the right direction!!
I had to comment out the following lines, as the signals tend to consist of different numbers of bits (as you can see here: https://github.com/z3t0/Arduino-IRremote/issues/532#issuecomment-336461835 , 13 or 12 bits):
bool IRrecv::decodeLasertag(decode_results *results, uint16_t nbits,
bool strict) {
(...)
// Compliance
//if (actual_bits < nbits) return false; // Less data than we expected.
//if (strict && actual_bits != LASERTAG_BITS) return false;
(...)
}
With these lines not commented out, only the library-sent signals get recognized for all values, as they have a stable 13 bits per signal. Probably it should be 12 bits for sending too, even though 13-bit messages also produce the desired result on the Units.
Also, the decoding seems to be shifted a bit due to this. E.g. Unit 2 (12 bits) gets decoded as Unit 1 (13 bits). When you look at the signals of Unit 1 & Unit 2 from the above link it is clear why: the last bit is 1 in both cases, so for Unit 2 the last 0 is missing, which would shift the 1 into the right position.
Maybe it is possible to "fill" the last position(s) with zeros if actual_bits < nbits ?
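The zero-filling idea can be sketched as code (a hypothetical helper; the parameter names merely mirror the decoder's variables):

```cpp
#include <cassert>
#include <cstdint>

// If the trailing '0' of the last Manchester bit is lost, the captured
// value comes out right-shifted. Appending the missing zeros restores
// the intended code. (Illustrative sketch only.)
uint64_t padMissingBits(uint64_t data, uint16_t actual_bits,
                        uint16_t expected_bits) {
  if (actual_bits < expected_bits)
    data <<= (expected_bits - actual_bits);  // Append '0' bits.
  return data;
}
```

e.g. Unit 2's 0x2 captured as only 12 bits reads as 0x1; padding one zero shifts it back to 0x2.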
~~Also default should be: #define LASERTAG_BITS 12U~~
It should remain as it is:
#define LASERTAG_BITS 13U
This seems to work for me: (Today I have only raw captured signals for testing, I can access the real Units in the next 1-2 days for final testing)
// Compliance
if (actual_bits < nbits) {
  for (int mbcount = 1; mbcount <= (nbits - actual_bits); mbcount++) {
    data <<= 1;  // Add a '0'
  }
}
//if (actual_bits < nbits) return false; // Less data than we expected.
//if (strict && actual_bits != LASERTAG_BITS) return false;
(Again... I'm always amazed how helpful and dedicated many developers are to help with such projects! Your effort is very much appreciated! In this case especially @crankyoldgit @AnalysIR )
I've updated that branch with more unit tests based on the data you indicated for Red 1 & 2 from that issue comment, and the library (as far as I can tell) seems to decode them as we would expect. i.e. as Unit 1 & 2 respectively, all with 13 bits. You can see what I did/added here.
Maybe there is something wrong in my unit test harness which is different from live testing, but that seems unlikely at this point.
As the DumpV2 program should be displaying what it decodes to, AND the raw data for what it saw, can you please add that output to your reply? That should help me pin-point wrong decodes better.
Also, as you seem to be simulating the device via sendRaw() at present, can you also include for each INCORRECT decode, what data and sendRaw() call you used to produce it, and if it was sent using this library, or a different one on say, an Uno etc.
i.e. I need more data as the examples you are pointing out to me seem to work fine with this experimental branch.
In the meantime, I'll look over the unit test harness code that simulates it and see if I can find anything odd/wrong etc.
I think I may have found the issue. In an earlier patch in this chain, I changed an end of buffer check in getRClevel(), I've reverted it back in this branch just now.
I think I've found a unit test bug where previous data was being accessed when decoding new messages if certain conditions were met. I think this was masking the decode issue for me.
So, please git pull on the experimental branch again. Hopefully we've nailed it this time.
friendly ping to see if you've tried it yet.
Sorry, I got sick a few days ago. :-( Yesterday I went to the arena that has the original Units for some quick testing. Due to Christmas parties they are almost fully booked, which unfortunately left very little time for me to capture signals.
I tried some configurations with the d1 mini. The original Units only got recognized all when I used the sketch with the changes described above (https://github.com/markszabo/IRremoteESP8266/issues/366#issuecomment-349326223)
Your suggestion here (https://github.com/markszabo/IRremoteESP8266/issues/366#issuecomment-349534679) didn't have any influence. (Tried both with and without my change.)
I'll try to capture some signals side-by-side d1/nano but also think they will be in the same range.
Did you try updating to the latest in the lasertag-experimental branch? i.e. after this commit.
Yes, I tried! This was what I meant. Were there any other alterations to the code?
I don't think so. Do a git pull to be sure. Based on all the data I have from you, and as far as I can tell, this should work.
Did some testing... Indeed this seems to solve the problem. The recognition rate is >90%, which is sufficient for my use. The signals that are still UNKNOWN all failed to match due to bigger variations (> +/- 150 usec). Do you still want me to take any data (side-by-side etc.)?
Okay. That's good news, because I was mostly out of ideas if it didn't work. ;-) You can tune the variation parameters if you need to suit your environment etc. 90+% is probably fine as you say, so I'll look at merging this into the library as is.
As for the side-by-side timings, yes, I think that would be helpful if you can, it may allow us to tune this decoder even further.
Also, any of the UNKNOWNs might be helpful too. I'd hate for a player to feel their shot hadn't registered. ;-)
FYI, I've created a new lasertag branch and cherry-picked the commits from the experimental branch into it. It's off for review in PR #374.
Version/revison of the library used
2.2.1
Expected behavior
Tried to recreate feature to decode a RC5-like protocol that works in the original IRRemote-Library with the following changes:
In ir_RC5RC6.cpp:
#define RC5_T1 333U // original: 889U
I also commented out some checks of whether the signal is valid, as it only consists of a raw bitstream in Manchester encoding.
Actual behavior
These changes work 100% for me with a Nano and the IRremote library. When I made the same changes to this library, it works for about 30-40% of the signals. The strange thing is: I also use the same sketch with NEC signals, and they work perfectly.
Circuit diagram and hardware used (if applicable)
I use A.IR TRx Shield with D1 mini from AnalysIR.com.
I have followed the steps in the Troubleshooting Guide & read the FAQ
Yes, I have!
Other useful information
The signal is raw manchester encoded. The interval length is 333 usec (RC5: 889); yes, I edited this value! Even if I don't move the receiver and sender, sometimes it gets the signal, sometimes it doesn't.