The NTFS file might be too small, or it is not a normal file (e.g., a spark file) and has no associated sectors.
Original comment by tinyb...@gmail.com
on 24 Jan 2013 at 9:11
One example was menu.lst. This was a small file (<1K). I added a load of
characters to it and then it started to give results such as:
(hd0,0)173746000+2
I chopped out the extra characters, and now it gives
(hd0,0)173746000+1
I created a new empty file, t.txt, on a 100GB volume with 19GB free space - it
gave no result from blocklist -> (hd0,0)
Then I made it 6K in size by adding junk - it gave
(hd0,0)0173759568+8,173657648+4
reducing the file to 2k gives
(hd0,0)0173759568+4
reducing to 12 bytes gives
(hd0,0)0173759568+1
blocklist /$mft gives
(hd0,0)6291456+39936
Copying t.txt, which has 20 characters in it, to a new file t1.txt, I get no
result:
blocklist /t1.txt
(hd0,0)
Original comment by Steve6375
on 24 Jan 2013 at 10:17
P.S. Sorry, the NTFS HDD I am using is a 100GB volume with 20GB of free space
(not a 20GB drive).
Original comment by Steve6375
on 24 Jan 2013 at 10:18
Sorry, I had a typo in my last post: spark should be sparse.
OK, I see. It is normal. By default, NTFS can store small files without
allocating any data sectors (the data stays resident in the MFT), in which case
we cannot list their blocks. It is not a problem; it is just a feature of NTFS.
Original comment by tinyb...@gmail.com
on 24 Jan 2013 at 3:53
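For reference, a minimal sketch (plain C, not GRUB4DOS code) of the distinction being discussed: a small file's $DATA attribute can be resident inside its MFT FILE record, and only a non-resident $DATA attribute has data runs mapping to sectors that a blocklist could report. The sketch assumes a raw 1 KB FILE record already read into memory and ignores the update sequence fixups.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Returns 1 if the $DATA attribute is resident, 0 if it is non-resident,
   or -1 if no $DATA attribute was found in the record. */
static int data_attr_is_resident(const uint8_t *rec, size_t rec_len)
{
    if (rec_len < 0x30 || memcmp(rec, "FILE", 4) != 0)
        return -1;                              /* not a FILE record */

    uint16_t first;
    memcpy(&first, rec + 0x14, 2);              /* offset of first attribute */
    size_t off = first;

    while (off + 0x18 <= rec_len) {
        uint32_t type, len;
        memcpy(&type, rec + off, 4);            /* attribute type */
        memcpy(&len,  rec + off + 4, 4);        /* attribute record length */
        if (type == 0xFFFFFFFF || len == 0)     /* end-of-attributes marker */
            break;
        if (type == 0x80)                       /* $DATA */
            return rec[off + 8] == 0;           /* non-resident flag: 0 = resident */
        off += len;
    }
    return -1;
}

A resident $DATA attribute is exactly the case where blocklist has no sectors to print, which matches the empty "(hd0,0)" results above.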
In that case all small files should return
(hd0,0)
but that is not the case - some do and some don't.
Original comment by Steve6375
on 24 Jan 2013 at 4:12
That is also normal. By default, no sectors are allocated for small files.
Original comment by tinyb...@gmail.com
on 24 Jan 2013 at 4:58
Sure - it uses the first cluster in the MFT, I know and understand that - so
why is it that an edited small file is listed by blocklist even though it only
has 12 bytes in it?
Original comment by Steve6375
on 24 Jan 2013 at 5:54
OK - I have done some more testing, and it seems that once the file goes over
about 680 bytes the data is moved to a new record. Then even if you edit the
file back down, the data stays in the new record and is not written back to the
$MFT entry.
There is not much you can do about returning the $MFT. The data could be in 1
or 2 sectors and would usually start at offset 148h.
I guess it would be nice to return something, though, rather than nothing, say...
(hd0,0)[$MFT]+2
though for a small file it might only be in 1 sector.
Original comment by Steve6375
on 24 Jan 2013 at 7:20
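One possible concrete form of that suggestion, as a rough sketch only (the function name and parameters are illustrative, not GRUB4DOS internals): for a resident file, report the sector(s) of the $MFT that contain its FILE record, since that is where the data actually lives.

#include <stdint.h>
#include <stdio.h>

static void print_mft_blocklist(uint64_t mft_start_sector,  /* e.g. 6291456 above */
                                uint64_t file_record_no,    /* index of the file's MFT record */
                                uint32_t record_size,       /* usually 1024 bytes */
                                uint32_t sector_size)       /* usually 512 bytes */
{
    uint32_t sectors_per_record = record_size / sector_size;           /* usually 2 */
    uint64_t first = mft_start_sector + file_record_no * sectors_per_record;

    /* Prints e.g. "(hd0,0)6291490+2"; the record spans two 512-byte
       sectors even though the resident data may fit inside one of them. */
    printf("(hd0,0)%llu+%u\n",
           (unsigned long long)first, (unsigned)sectors_per_record);
}

This would at least point at the right sectors, but it still could not express that the data begins partway into the first sector.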
See
http://computer-forensics.sans.org/blog/2012/10/15/resident-data-residue-in-ntfs-mft-entries
Original comment by Steve6375
on 24 Jan 2013 at 7:52
I am not really familiar with NTFS, so I cannot do this work. I will wait some
time and see if anyone provides a patch for it.
Original comment by tinyb...@gmail.com
on 25 Jan 2013 at 3:33
The internal block-list representation of files does not support a file that
starts at a non-zero offset within a sector.
Closing this issue now.
Original comment by tinyb...@gmail.com
on 27 Jan 2013 at 1:58
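To illustrate that limitation, here is a sketch of a blocklist-style run (illustrative only, not the actual GRUB4DOS data structure): it records a starting sector and a whole-sector count, so resident data that begins partway into an MFT sector (e.g. at offset 148h) has no valid encoding.

#include <stdint.h>

/* A run covers whole sectors only; there is no field for a byte offset
   into the first sector, which is why resident MFT data cannot be
   represented in this scheme. */
struct block_run {
    uint64_t start_sector;   /* first sector of the run, e.g. 173759568 */
    uint64_t sector_count;   /* number of whole sectors, e.g. the "+4" */
};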
Original issue reported on code.google.com by
Steve6375
on 23 Jan 2013 at 1:50