bsi-group / dumpntds

Speeds up the extraction of password hashes from ntds.dit files. For use with the ntdsxtract project or the dshashes script.

dsusers.py and dshashes.py throw "IndexError: list index out of range" #2

Open jskrivseth opened 6 years ago

jskrivseth commented 6 years ago

Taking the files that dumpntds produces and running them through

python ./dshashes.py datatable.csv linktable.csv . --passwordhashes SYSTEM

produces the following exception:

[+] Scanning database - 0% -> 6239 records processed
[!] Warning! Multiple records with PEK entry!
[+] Scanning database - 0% -> 6240 records processed
[!] Warning! Multiple records with PEK entry!
[+] Scanning database - 0% -> 6241 records processed
[!] Warning! Multiple records with PEK entry!
Error in sys.excepthook:
Traceback (most recent call last):
  File "/ntdsxtract/ntds/__init__.py", line 31, in simple_exception
    sys.stderr.write("[!] Error!", value, "\n")
TypeError: function takes exactly 1 argument (3 given)

Original exception was:
Traceback (most recent call last):
  File "./dshashes.py", line 90, in <module>
    db = dsInitDatabase(sys.argv[1], sys.argv[3])
  File "/ntdsxtract/ntds/dsdatabase.py", line 174, in dsInitDatabase
    dsCheckMaps(db, workdir)
  File "/ntdsxtract/ntds/dsdatabase.py", line 207, in dsCheckMaps
    dsBuildMaps(dsDatabase, workdir)
  File "/ntdsxtract/ntds/dsdatabase.py", line 263, in dsBuildMaps
    dsMapRecordIdByName[record[ntds.dsfielddictionary.dsObjectName2Index]] = int(record[ntds.dsfielddictionary.dsRecordIdIndex])
IndexError: list index out of range

This is consistent across several ntds.dit files from different environments, and the same error occurs in dsusers.py. It happens with an ntds.dit that is "known good", meaning we previously ran it through esedbexport without issue.

At first I thought this was a format issue with the CSV output files, but now I'm thinking it's an issue with a missing record from the linktable. I will trace the code at runtime to see what index element it is looking for.
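As a quick sanity check before tracing, something like the following sketch can flag short rows in the export. It assumes the export is tab-delimited with one record per line (the delimiter is an assumption; adjust it to match the actual file):

```python
from collections import Counter

def field_count_histogram(lines, delimiter="\t"):
    """Map number-of-fields -> number of lines with that many fields."""
    counts = Counter()
    for line in lines:
        counts[line.rstrip("\r\n").count(delimiter) + 1] += 1
    return counts

# Usage: with open("datatable.csv") as fh: print(field_count_histogram(fh))
```

Any bucket with fewer fields than the header row indicates broken records.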

jskrivseth commented 6 years ago

ntdsxtract dies here: https://github.com/csababarta/ntdsxtract/blob/master/ntds/dsdatabase.py#L263

record[ntds.dsfielddictionary.dsObjectName2Index] fails when ntds.dsfielddictionary.dsObjectName2Index = 6

len(record) == 6, so record[6] fails. This might mean that ATTm589825 is simply missing for this record, but perhaps something else is causing record[] to be incomplete. I'll keep digging.
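The failure mode can be reproduced in miniature. In this sketch (all field values are made up for illustration), one logical record has seven columns (indices 0..6), but the sixth value contains an embedded newline, so line-based parsing yields two short rows and index 6 no longer exists:

```python
import io

# One logical record with 7 columns; the 6th value has an embedded newline.
fields = ["4321", "objClass", "rdn", "dn", "guid", "Jo\nhn", "name2"]
dump = "\t".join(fields) + "\n"

# Line-based parsing, as ntdsxtract does, splits this into two broken rows.
records = [line.rstrip("\n").split("\t") for line in io.StringIO(dump)]
# records[0] now has only 6 fields, so records[0][6] raises IndexError,
# matching the crash at dsdatabase.py line 263.
```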

jskrivseth commented 6 years ago

I've found a solution. The problem appears to be that dumpntds doesn't strip newlines that can exist in column values. Whenever a column value contains a newline, datatable.csv ends up with rogue broken lines: the record comes up short and garbage records follow it. Unfortunately, in my case I can't share the raw data.

Stripping newlines from the values allows everything to work. I will open a pull request for this change.

woanware commented 6 years ago

That's great! Thanks for debugging.
