Closed acelletti closed 4 years ago
I don't think this is a bug in yasna, but things are working as expected.
INTEGER values are encoded as two's complement. The number you are encoding here is actually -1, which is out of range for u64.
See this answer on stack overflow. Also check out section 8.3 in this document.
In fact, this -1 is itself encoded illegally: DER requires the shortest possible representation, so one of the two 0xff bytes would have to be removed. See section 8.3.1 in the document linked above.
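To make the two's complement rule concrete, here is a minimal sketch; the `decode_twos_complement` helper is hypothetical (not part of yasna) and just interprets DER INTEGER content bytes as a big-endian two's complement value:

```rust
// Hypothetical helper: interpret DER INTEGER content bytes as a
// big-endian two's complement value.
fn decode_twos_complement(bytes: &[u8]) -> i64 {
    // Sign-extend from the top bit of the first byte.
    let mut v: i64 = if bytes[0] & 0x80 != 0 { -1 } else { 0 };
    for &b in bytes {
        v = (v << 8) | b as i64;
    }
    v
}

fn main() {
    // [0xff, 0xff] has its top bit set, so DER reads it as negative: -1.
    assert_eq!(decode_twos_complement(&[0xff, 0xff]), -1);
    // To encode 65535, a leading 0x00 is needed to keep the top bit clear.
    assert_eq!(decode_twos_complement(&[0x00, 0xff, 0xff]), 65535);
    println!("ok");
}
```

This is why the same content bytes mean different numbers depending on whether the first bit is treated as a sign bit.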
@est31 I understand that the standard might make this representation illegal; however, it was breaking my program when reading RDP protocol data encoded by one of the clients. When writing a parser, it is good practice to be as lenient as possible when reading and as strict as possible when writing. As far as I can tell, removing that check returns the correct values in both cases, and der-parser, the most widely used ASN.1 library, reads it without problems.
I would argue that unless it breaks the parser, it should accept both.
For now, I have forked this project and removed the check myself, but I would hope this is a temporary fix.
In general I agree that it's a good idea to make reading flexible. But in this instance, I prefer erroring, because it would confuse people if they tried reading negative values as unsigned and didn't get errors.
You can do this to get the desired result in both instances:
```rust
#[test]
fn yasna_read_integer() {
    // this one works fine for both
    const INTEGER_WITH_TRAILING_ZERO: &[u8] = &[2, 3, 0, 255, 255];
    // this one returns IntegerOverflow in yasna
    const INTEGER_NO_TRAILING_ZERO: &[u8] = &[2, 2, 255, 255];
    let tests: &[(u32, &[u8])] = &[
        (65535, INTEGER_NO_TRAILING_ZERO),
        (65535, INTEGER_WITH_TRAILING_ZERO),
    ];
    for &(evalue, data) in tests {
        let value = ::parse_ber(data, |reader| {
            Ok(reader
                .read_tagged_der()?
                .value()
                .iter()
                .fold(0u32, |v, w| (v << 8) | *w as u32))
        })
        .unwrap();
        assert_eq!(value, evalue);
    }
}
```
The MSDN protocol examples use `[0x02, 0x02, 0xff, 0xff]` to encode the value 65535, as can be seen here:

```
0x02 0x02 0xff 0xff -> DomainParameters::maxChannelIds = 65535
0x02 0x02 0xfc 0x17 -> DomainParameters::maxUserIds   = 64535
0x02 0x02 0xff 0xff -> DomainParameters::maxTokenIds  = 65535
```
The ASN.1 structure definition is detailed in T.125 section 7, part 2:

```
DomainParameters ::= SEQUENCE
{
    maxChannelIds   INTEGER (0..MAX),
    maxUserIds      INTEGER (0..MAX),
    maxTokenIds     INTEGER (0..MAX),
    numPriorities   INTEGER (0..MAX),
    minThroughput   INTEGER (0..MAX),
    maxHeight       INTEGER (0..MAX),
    maxMCSPDUsize   INTEGER (0..MAX),
    protocolVersion INTEGER (0..MAX)
}
```
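Since all of these fields are constrained to `(0..MAX)`, they are known to be non-negative, so a lenient reader can fold the content bytes as a plain unsigned big-endian value instead of applying the sign bit. A minimal sketch of that approach (the `decode_unsigned` helper is hypothetical, not from any of the libraries discussed here):

```rust
// Hypothetical helper: fold DER INTEGER content bytes as an unsigned
// big-endian value, ignoring the two's complement sign bit. This matches
// the values in the MSDN examples above.
fn decode_unsigned(bytes: &[u8]) -> u64 {
    bytes.iter().fold(0u64, |v, &b| (v << 8) | b as u64)
}

fn main() {
    assert_eq!(decode_unsigned(&[0xff, 0xff]), 65535); // maxChannelIds
    assert_eq!(decode_unsigned(&[0xfc, 0x17]), 64535); // maxUserIds
    println!("ok");
}
```

Note that this only gives the intended result because the schema guarantees the fields are non-negative; for a general INTEGER it would misread negative values.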
Any idea why Microsoft would intentionally use an incorrect encoding for a widely known protocol like ASN.1?
> Any idea why Microsoft would intentionally use an incorrect encoding for a widely known protocol like ASN.1?
I guess that's how Microsoft works :). They put bugs into their implementations of things and then don't fix them so that you have to use their products for stuff to work :).
I have found an issue with the library where it fails to read a valid integer during parsing. I have created a contrived test example below that shows this works using the der-parser library. I don't know enough about ASN.1 notation to know why there is an explicit check here that seems to be the cause of this, but as it seems that both should be supported, I am raising this as a bug so it can be fixed.