log2timeline / plaso

Super timeline all the things
https://plaso.readthedocs.io
Apache License 2.0

log2timeline.py: unable to read backup NTFS volume header #3592

Closed: MikeHofmann closed this issue 2 years ago

MikeHofmann commented 3 years ago

Description of problem:

We have two images (from different acquisition tools, different systems, different examiners) that fail to parse with log2timeline.py. Shortly after the process starts, the following error is given:

2021-05-14 08:53:31,742 [INFO] (MainProcess) PID:122 <data_location> Determined data location: /usr/share/plaso
2021-05-14 08:53:31,757 [INFO] (MainProcess) PID:122 <artifact_definitions> Determined artifact definitions path: /usr/share/artifacts
Checking availability and versions of dependencies.
[OK]

Unable to scan source with error: Unable to open file system with error: pyvshadow_volume_open_file_object: unable to open volume. libvshadow_ntfs_volume_header_read_data: invalid volume system signature. libvshadow_ntfs_volume_header_read_file_io_handle: unable to read NTFS volume header. libvshadow_volume_open_read_ntfs_volume_headers: unable to read backup NTFS volume header. libvshadow_volume_open_read: unable to read NTFS volume headers. libvshadow_volume_open_file_io_handle: unable to read from file IO handle..

We tried different tools to convert the image files (from EWF to EWF, from EWF to RAW) and retried parsing, without success. Both images open in X-Ways without trouble. We also tried --no_vss, without success.
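
To check whether an image actually carries both NTFS volume headers, one can look for the "NTFS    " signature in the first sector and at the computed backup position. The following is a minimal sketch, assuming a raw (non-EWF) copy of a single NTFS volume with no partition table; the function name and output are illustrative, not part of plaso:

import struct
import sys

SECTOR = 512  # assumed logical sector size for the initial read

def check_ntfs_headers(path):
    with open(path, 'rb') as image:
        primary = image.read(SECTOR)
        if primary[3:11] != b'NTFS    ':
            raise ValueError('no NTFS signature in the primary volume header')
        bytes_per_sector = struct.unpack('<H', primary[11:13])[0]
        total_sectors = struct.unpack('<Q', primary[40:48])[0]
        # The backup volume header is stored in the sector directly after
        # the total_sectors counted by the primary header.
        backup_offset = total_sectors * bytes_per_sector
        image.seek(0, 2)
        image_size = image.tell()
        print('expected size: {0:d} bytes'.format(backup_offset + bytes_per_sector))
        print('actual size: {0:d} bytes'.format(image_size))
        if image_size < backup_offset + bytes_per_sector:
            print('image truncated: backup NTFS volume header is missing')
            return
        image.seek(backup_offset)
        backup = image.read(bytes_per_sector)
        if backup[3:11] == b'NTFS    ':
            print('backup NTFS volume header looks intact')
        else:
            print('backup position is present but has no NTFS signature')

if __name__ == '__main__':
    check_ntfs_headers(sys.argv[1])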

Command line and arguments:

log2timeline.py --workers 1 --debug timeline.plaso /redactedt/62_redacted/redacted.E01

Source data:


Plaso version:

log2timeline.py --version
plaso - log2timeline version 20210412

Operating system Plaso is running on:

Installed using latest docker image

Installation method:

Installed using latest docker image

Debug output/tracebacks:

The log file contains just one line:

2021-05-14 08:53:34,195 [ERROR] (MainProcess) PID:122 <log2timeline> Unable to scan source with error: Unable to open file system with error: pyvshadow_volume_open_file_object: unable to open volume. libvshadow_ntfs_volume_header_read_data: invalid volume system signature. libvshadow_ntfs_volume_header_read_file_io_handle: unable to read NTFS volume header. libvshadow_volume_open_read_ntfs_volume_headers: unable to read backup NTFS volume header. libvshadow_volume_open_read: unable to read NTFS volume headers. libvshadow_volume_open_file_io_handle: unable to read from file IO handle..
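
Since the error chain originates in libvshadow, the failure can be reproduced outside of plaso via its Python bindings. A minimal sketch, assuming pyvshadow is installed and the image has been converted to raw; the attribute names follow the libyal binding conventions. If the backup NTFS volume header is missing, the same libvshadow_volume_open_read error chain should surface here:

import sys
import pyvshadow

vshadow_volume = pyvshadow.volume()
try:
    # open() expects the path of the NTFS volume itself, not a full disk image.
    vshadow_volume.open(sys.argv[1])
    print('volume opened: {0:d} store(s)'.format(vshadow_volume.number_of_stores))
    vshadow_volume.close()
except IOError as exception:
    print('unable to open volume: {0!s}'.format(exception))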
joachimmetz commented 3 years ago

It looks like your images are missing the NTFS backup volume header. Are these images of a volume created by a live imaging tool on Windows? Are you sure your imaging tool includes the full volume and does not silently skip the last sector?

Have a look at https://github.com/libyal/libbfoverlay/wiki/Examples#correcting-truncated-windows-live-volume-images to see if that can help work around the missing data
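
The linked page describes a non-destructive overlay; the same idea can be tried destructively on a working copy by writing a copy of the primary volume header into the backup position, since on a healthy NTFS volume the backup header is a verbatim copy of sector 0. A rough sketch, assuming a raw image of a single truncated NTFS volume; only ever run it against a copy:

import shutil
import struct
import sys

def add_backup_header(source_path, target_path):
    shutil.copyfile(source_path, target_path)
    with open(target_path, 'r+b') as image:
        boot_sector = image.read(512)
        if boot_sector[3:11] != b'NTFS    ':
            raise ValueError('no NTFS signature in the primary volume header')
        bytes_per_sector = struct.unpack('<H', boot_sector[11:13])[0]
        total_sectors = struct.unpack('<Q', boot_sector[40:48])[0]
        # Re-read the full primary header in case the sector size is not 512.
        image.seek(0)
        header = image.read(bytes_per_sector)
        # Writing past the end of a truncated image extends it, so this both
        # pads the volume to its expected size and restores the backup header.
        image.seek(total_sectors * bytes_per_sector)
        image.write(header)

if __name__ == '__main__':
    add_backup_header(sys.argv[1], sys.argv[2])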

joachimmetz commented 3 years ago

Possibly related https://github.com/log2timeline/dfvfs/issues/514

MikeHofmann commented 3 years ago

Are these images of a volume created by a live imaging tool on Windows?

One was done with AccessData® FTK® Imager 4.5.0.3, the other with a Logicube Falcon-Neo 3.1. I doubt that these two are affected; the Falcon-Neo image in particular was acquired offline, since it is a hardware imager.

I'll try some of the recovery tips from your link later in the week.

joachimmetz commented 3 years ago

I doubt that these two are affected; the Falcon-Neo image in particular was acquired offline, since it is a hardware imager.

Any other reasons why the backup volume header could be missing?