Closed monoxgas closed 2 years ago
Thanks. The code is in a transitional state (well, isn't it always?): it was originally written purely for read-only purposes and I'm having to retrofit writing. That's why there are inconsistencies, such as some objects being fully rebuildable at serialization while others construct the data and reparse it. I'd prefer to move to one approach or the other, but I'm also mindful that, at least for top-level authentication tokens, I'd rather keep the byte representation as originally received, so that the more common use case of client/server authentication doesn't break if my reserialization isn't correct or breaks a checksum somewhere.
I've tended to go for a builder paradigm where possible: you have a read-only object which can be converted into a builder to modify, then converted back into a read-only object. Of course, as the code has been written it isn't always 100% consistent. As the complexity of the object graph goes up, that approach gets harder to use, though if you have a builder you could convert the whole PAC data to a "builder", modify it, and reserialize in place.
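The read-only/builder round trip described above can be sketched roughly like this (hypothetical `Ticket` type and field names for illustration, not the library's actual API):

```java
// Minimal sketch of the read-only object <-> builder round trip.
// Names here are hypothetical, not taken from the real codebase.
public final class Ticket {
    private final String realm;
    private final int flags;

    public Ticket(String realm, int flags) {
        this.realm = realm;
        this.flags = flags;
    }

    public String getRealm() { return realm; }
    public int getFlags() { return flags; }

    // Convert the immutable object into a mutable builder.
    public Builder toBuilder() {
        return new Builder(realm, flags);
    }

    public static final class Builder {
        private String realm;
        private int flags;

        Builder(String realm, int flags) {
            this.realm = realm;
            this.flags = flags;
        }

        public Builder setRealm(String realm) { this.realm = realm; return this; }
        public Builder setFlags(int flags) { this.flags = flags; return this; }

        // Freeze the builder back into a read-only object.
        public Ticket build() { return new Ticket(realm, flags); }
    }
}
```

The original object stays untouched; each `build()` produces a fresh immutable instance, which keeps the common read-only path simple.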
As for ensuring correct reserialization: for things that need to be reserialized in exactly the same way, perhaps we should convert them to use a structure which guarantees a full round trip.
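One way such a structure could work is to keep the original encoding alongside the parsed view and only re-encode when a field has actually been changed, so untouched objects round-trip byte-for-byte. A minimal sketch with hypothetical names (the real types would wrap DER/NDR encodings rather than a plain string):

```java
// Hypothetical sketch: a value that remembers its original encoding and only
// re-encodes when modified, guaranteeing a byte-exact round trip otherwise.
public final class RoundTripValue {
    private final byte[] original;   // bytes exactly as received off the wire
    private String text;             // parsed view of those bytes
    private boolean modified;

    public RoundTripValue(byte[] original, String text) {
        this.original = original.clone();
        this.text = text;
    }

    public String getText() { return text; }

    public void setText(String text) {
        this.text = text;
        this.modified = true;
    }

    public byte[] encode() {
        // Untouched objects emit the original bytes verbatim, so any checksum
        // computed over the encoding still verifies.
        if (!modified) return original.clone();
        // Only a genuinely modified value pays the re-encoding cost (and the
        // risk of a non-identical encoding).
        return text.getBytes(java.nio.charset.StandardCharsets.UTF_8);
    }
}
```

This also matches the earlier point about keeping top-level authentication tokens in their originally received byte form.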
I assume this can be a longer-term PR as I work out the best ways to contribute. I've made some initial changes to help support updating and re-packing tickets, authenticators, and the PAC. Some initial questions and thoughts:

- Objects are generally built via `Create` functions, underneath which the DER bytes are manually built and the object is then parsed as normal. Is this the preferred strategy going forward? As objects get deeper in terms of nesting, it can be difficult to alter a single property and rebuild the entire object tree from there; you end up with lots of `Create` calls with copy/pasted params.
- Some objects implement `IDERObject` to export their own encoding method. I like this strategy, but I'm unsure where the boundaries for applying it are. It feels like the higher-level objects (`KerberosTicket`, etc.) primarily use the `_data` field to maintain their raw representation.
- Some code uses `seq.WriteContextSpecific(0, b => b.WriteInt32((int)obj));` as opposed to `seq.WriteContextSpecific(0, (int)obj);`. I assume the latter came with typed helper wrappers being added and the former is just leftover code, but I wanted to check that I understood their equivalency.
- The `KERB_VALIDATION_INFO` internal structure, combined with discrepancies in how time and string objects are handled, makes it difficult to "repack" the bytes without upsetting the checksums. For instance, the `MaximumLength` of some `RPC_UNICODE_STRING` objects varies even in the MS implementation, and when we cast to strings and back we lose that data. This is one area where I took the field-modification route, allowing fields to be updated and then modifying the `GetData` method to re-encode, but this will probably need to change.
- There's an `out_key` parameter for the decryption functions: after we successfully decode a ticket, we can then go looking for the embedded PAC and give it the key we used. A "cleaner" solution might be to expose the active key objects down the IfRelevant trees, but I wasn't a fan of this.

The current code is awesome, though; I do wonder what optimizations/APIs I'm missing, having to dissect the code without reference, so I'll message you privately about that. Let me know how you're feeling about handling these object trees, field updates, and repacking nested structures. It feels like `IDerObject`, `GetData()`, `_data`, etc. all have some overlap and I might not be understanding the larger picture.
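On the `RPC_UNICODE_STRING` point: one option is to carry `MaximumLength` through the parsed object instead of recomputing it from the string, so the padded wire value survives the cast-to-string-and-back round trip. A hypothetical sketch (field handling is my assumption, not the library's behavior):

```java
// Hypothetical sketch: preserve RPC_UNICODE_STRING's MaximumLength across a
// string round trip rather than recomputing it on re-encode.
public final class RpcUnicodeString {
    private String value;
    private final int maximumLength; // as read off the wire, in bytes

    public RpcUnicodeString(String value, int maximumLength) {
        this.value = value;
        this.maximumLength = maximumLength;
    }

    // Length field: UTF-16 byte count of the current value.
    public int getLength() { return value.length() * 2; }

    // Keep the original MaximumLength (MS implementations sometimes pad it
    // beyond Length) unless the updated value no longer fits; recomputing it
    // from the string would change the encoded bytes and upset PAC checksums.
    public int getMaximumLength() { return Math.max(maximumLength, getLength()); }

    public String getValue() { return value; }
    public void setValue(String value) { this.value = value; }
}
```

An unmodified string re-encodes with the same `Length`/`MaximumLength` pair it arrived with, which is the property the checksums depend on.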