The SMB script needs to report to clients when the server started. Honeyd does actually keep track of uptime, but doesn't report this information to the scripts.
NOTE: The system time that the SMB script needs to output is not a simple timestamp. It's in some arcane and insanely complex format that is only ever used here. For more detail, see:
http://www.ubiqx.org/cifs/SMB.html
Keeping byte-order in mind, the completed time value should be read as two little-endian 32-bit integers. The result, however, should be handled as a 64-bit signed value representing the number of tenths of a microsecond since January 1, 1601, 00:00:00.0 UTC.
WHAT?!?!
Yes, you read that right, folks. The time value is based on that unwieldy little formula. Read it again five times and see if you don't get a headache. Looks as though we need to get out the protractor, the astrolabe, and the didgeridoo and try a little calculating. Let's start with some complex scientific equations:
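    1/10 microsecond  =  0.1 × 10^-6 seconds  =  10^-7 seconds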
In other words, the server time is given in units of 10^-7 seconds. Many CIFS implementations handle these units by converting them into Unix-style measurements. Unix, of course, bases its time measurements on an equally obscure date: January 1, 1970, 00:00:00.0 UTC. Converting between the two schemes requires knowing the difference (in seconds) between the two base times.
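That difference works out to 369 years, or 134,774 days (counting 89 leap days), or 11,644,473,600 seconds.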
So, if you want to convert the SystemTime to a Unix time_t value, you need to do something like this:
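Here is a rough C sketch of that conversion (the function names are invented for illustration; 11644473600 is the epoch difference worked out above, and 10000000 is the number of tenths of a microsecond in one second):

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    /* Seconds between the SMB epoch (1601-01-01 00:00:00 UTC)
     * and the Unix epoch (1970-01-01 00:00:00 UTC).            */
    #define SMB_UNIX_EPOCH_DIFF 11644473600LL

    /* SystemTime -> Unix time_t: drop the tenths-of-microseconds
     * (divide by 10^7), then shift the epoch.                    */
    time_t smb_time_to_unix(int64_t smb_time)
    {
        return (time_t)((smb_time / 10000000LL) - SMB_UNIX_EPOCH_DIFF);
    }

    /* Unix time_t -> SystemTime: shift the epoch, then scale up. */
    int64_t unix_time_to_smb(time_t unix_time)
    {
        return ((int64_t)unix_time + SMB_UNIX_EPOCH_DIFF) * 10000000LL;
    }

    /* On the wire, the 64-bit value is written as two little-endian
     * 32-bit integers, low dword first.                             */
    void smb_time_split(int64_t smb_time, uint32_t *low, uint32_t *high)
    {
        *low  = (uint32_t)((uint64_t)smb_time & 0xFFFFFFFFu);
        *high = (uint32_t)((uint64_t)smb_time >> 32);
    }

    int main(void)
    {
        uint32_t low, high;
        int64_t now_smb = unix_time_to_smb(time(NULL));

        smb_time_split(now_smb, &low, &high);
        printf("SystemTime: %lld (low=0x%08x, high=0x%08x)\n",
               (long long)now_smb, low, high);
        return 0;
    }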