fkie-cad / FACT_docker

Dockerfile for building the FACT container
GNU General Public License v3.0

binwalk output missing #5

Closed: frakman1 closed this issue 2 years ago

frakman1 commented 2 years ago

Although the analysis completed successfully, cwe_checker displays results, and unpacking/extraction appears to have worked (I can browse the rootfs file tree), the binwalk tab is missing its contents:

[screenshot: binwalk analysis tab with no output]

File Tree: [screenshot: populated file tree]

I am using this OpenWrt firmware.

jstucke commented 2 years ago

Looking at the error you posted earlier:

[2021-10-21 15:32:03][fail_safe_file_operations][ERROR]: Could not read file: FileNotFoundError [Errno 2] No such file or directory: '/tmp/fact-docker-tmp/fact_analysis_binwalk_z6fyi8gn/1828f1cdceb0576a99be4818a302bb642f213ef68325bd2b136cb5f53bffd76f_55.png'

This also seems to be a path problem. I will look into it.

jstucke commented 2 years ago

It seems binwalk is not installed correctly. To fix this for your FACT container you could try to do the following:

  • start the container
  • exec into the container: docker exec -it fact /bin/bash
  • cd into the binwalk plugin folder: cd /opt/FACT_core/src/plugins/analysis/binwalk
  • rerun the install script: python3 install.py

This seemed to fix it at least for me. I still don't know why the initial installation fails, though. There seems to be no error.

frakman1 commented 2 years ago

Very strange. I was able to run all those steps and the installation completed. However, when I tried to re-do the analysis, it removed the firmware but never re-ran the analysis. Info->System shows: No analysis in progress. I did that again for the second firmware and the same thing happened. Now there is no firmware listed on the Home page.

UPDATE: This is probably my fault. I saw a permissions error message for the /tmp/fact-docker-tmp subfolders. I changed ownership to 999:999, chmod'd to 777, touched REINITIALIZE_DB, and restarted, but that didn't recreate the firmware database, unfortunately. The /media/data folders appear to be populated with a lot of data.

Is there a way to recover, or should I just start over, clear the db folders, and upload the firmware again?

frakman1 commented 2 years ago

OK, I started over. I deleted the contents of /media/data/fact* and /tmp/fact-docker-tmp. Restarting the Docker container now shows authorization errors:

$ docker stop fact; docker rm fact; docker run -it  --name fact --group-add $(getent group docker | cut -d: -f3)  -v /media/data:/media/data -v /var/run/docker.sock:/var/run/docker.sock -v /tmp/fact-docker-tmp:/tmp/fact-docker-tmp -p 0.0.0.0:5000:5000 frakman1/fact:latest start
fact
fact
[2021-10-22 19:39:31][start_all_installed_fact_components][INFO]: starting db
[2021-10-22 19:39:31][MongoMgr][INFO]: Starting local mongo database
[2021-10-22 19:39:31][MongoMgr][INFO]: Starting DB: mongod --auth --config /opt/FACT_core/src/config/mongod.conf --fork --logpath /var/log/fact/mongo.log
[2021-10-22 19:39:33][start_all_installed_fact_components][INFO]: starting frontend
[2021-10-22 19:39:33][start_all_installed_fact_components][INFO]: starting backend
[2021-10-22 19:39:33][process][WARNING]: Error: Authentication not successful: Authentication failed., full error: {'ok': 0.0, 'errmsg': 'Authentication failed.', 'code': 18, 'codeName': 'AuthenticationFailed'}
[2021-10-22 19:39:33][process][WARNING]: Error: Authentication not successful: Authentication failed., full error: {'ok': 0.0, 'errmsg': 'Authentication failed.', 'code': 18, 'codeName': 'AuthenticationFailed'}
[2021-10-22 19:39:33][process][CRITICAL]: SHUTTING DOWN SYSTEM
[2021-10-22 19:39:33][process][CRITICAL]: SHUTTING DOWN SYSTEM
[2021-10-22 19:39:33][process][WARNING]: Error: Authentication not successful: Authentication failed., full error: {'ok': 0.0, 'errmsg': 'Authentication failed.', 'code': 18, 'codeName': 'AuthenticationFailed'}
[2021-10-22 19:39:33][process][CRITICAL]: SHUTTING DOWN SYSTEM
[2021-10-22 19:39:33][process][WARNING]: Error: Authentication not successful: Authentication failed., full error: {'ok': 0.0, 'errmsg': 'Authentication failed.', 'code': 18, 'codeName': 'AuthenticationFailed'}
[2021-10-22 19:39:33][process][CRITICAL]: SHUTTING DOWN SYSTEM
[2021-10-22 19:39:35][start_all_installed_fact_components][CRITICAL]: Backend did not start. Shutting down...

Not sure why, as I have not changed anything in the config files and all other files have been cleared.

frakman1 commented 2 years ago

I copied the mongo.log file from the container, and it is complaining about the fact_admin@admin user:

$ docker cp fact:/var/log/fact/mongo.log ./
$ cat mongo.log 
2021-10-22T20:16:12.948+0000 I CONTROL  [initandlisten] MongoDB starting : pid=12 port=27018 dbpath=/media/data/fact_wt_mongodb 64-bit host=604d410987ed
2021-10-22T20:16:12.948+0000 I CONTROL  [initandlisten] db version v3.6.8
2021-10-22T20:16:12.948+0000 I CONTROL  [initandlisten] git version: 8e540c0b6db93ce994cc548f000900bdc740f80a
2021-10-22T20:16:12.948+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.1.1f  31 Mar 2020
2021-10-22T20:16:12.948+0000 I CONTROL  [initandlisten] allocator: tcmalloc
2021-10-22T20:16:12.948+0000 I CONTROL  [initandlisten] modules: none
2021-10-22T20:16:12.948+0000 I CONTROL  [initandlisten] build environment:
2021-10-22T20:16:12.948+0000 I CONTROL  [initandlisten]     distarch: x86_64
2021-10-22T20:16:12.948+0000 I CONTROL  [initandlisten]     target_arch: x86_64
2021-10-22T20:16:12.948+0000 I CONTROL  [initandlisten] options: { config: "/opt/FACT_core/src/config/mongod.conf", net: { bindIp: "127.0.0.1", port: 27018 }, processManagement: { fork: true }, security: { authorization: "enabled" }, storage: { dbPath: "/media/data/fact_wt_mongodb", engine: "wiredTiger", journal: { enabled: true } }, systemLog: { destination: "file", path: "/var/log/fact/mongo.log" } }
2021-10-22T20:16:12.949+0000 I STORAGE  [initandlisten] 
2021-10-22T20:16:12.949+0000 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2021-10-22T20:16:12.949+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
2021-10-22T20:16:12.949+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=7480M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),cache_cursors=false,compatibility=(release="3.0",require_max="3.0"),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2021-10-22T20:16:13.907+0000 I STORAGE  [initandlisten] WiredTiger message [1634933773:907263][12:0x7fe206e79ac0], txn-recover: Set global recovery timestamp: 0
2021-10-22T20:16:14.364+0000 I CONTROL  [initandlisten] 
2021-10-22T20:16:14.364+0000 I CONTROL  [initandlisten] ** WARNING: You are running on a NUMA machine.
2021-10-22T20:16:14.364+0000 I CONTROL  [initandlisten] **          We suggest launching mongod like this to avoid performance problems:
2021-10-22T20:16:14.364+0000 I CONTROL  [initandlisten] **              numactl --interleave=all mongod [other options]
2021-10-22T20:16:14.365+0000 I CONTROL  [initandlisten] 
2021-10-22T20:16:14.365+0000 I STORAGE  [initandlisten] createCollection: admin.system.version with provided UUID: b6593f71-bd07-4e52-81e8-00693e91d797
2021-10-22T20:16:14.574+0000 I COMMAND  [initandlisten] setting featureCompatibilityVersion to 3.6
2021-10-22T20:16:14.580+0000 I STORAGE  [initandlisten] createCollection: local.startup_log with generated UUID: 25b18049-64c9-41e5-b993-9f1d7d7ec871
2021-10-22T20:16:14.764+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/media/data/fact_wt_mongodb/diagnostic.data'
2021-10-22T20:16:14.765+0000 I NETWORK  [initandlisten] waiting for connections on port 27018
2021-10-22T20:16:14.765+0000 I STORAGE  [LogicalSessionCacheRefresh] createCollection: config.system.sessions with generated UUID: 32879836-df9b-445b-8dc0-a0bdc792f322
2021-10-22T20:16:14.776+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:44832 #1 (1 connection now open)
2021-10-22T20:16:14.776+0000 I ACCESS   [conn1] note: no users configured in admin.system.users, allowing localhost access
2021-10-22T20:16:14.776+0000 I NETWORK  [conn1] received client metadata from 127.0.0.1:44832 conn1: { driver: { name: "PyMongo", version: "3.12.1" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-89-generic" }, platform: "CPython 3.8.10.final.0" }
2021-10-22T20:16:14.777+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:44834 #2 (2 connections now open)
2021-10-22T20:16:14.778+0000 I NETWORK  [conn2] received client metadata from 127.0.0.1:44834 conn2: { driver: { name: "PyMongo", version: "3.12.1" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-89-generic" }, platform: "CPython 3.8.10.final.0" }
2021-10-22T20:16:14.778+0000 I ACCESS   [conn2] SCRAM-SHA-1 authentication failed for fact_admin on admin from client 127.0.0.1:44834 ; UserNotFound: Could not find user fact_admin@admin
2021-10-22T20:16:14.782+0000 I NETWORK  [conn2] end connection 127.0.0.1:44834 (1 connection now open)
2021-10-22T20:16:14.782+0000 I NETWORK  [conn1] end connection 127.0.0.1:44832 (0 connections now open)
2021-10-22T20:16:14.880+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:44836 #3 (1 connection now open)
2021-10-22T20:16:14.880+0000 I NETWORK  [conn3] received client metadata from 127.0.0.1:44836 conn3: { driver: { name: "PyMongo", version: "3.12.1" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-89-generic" }, platform: "CPython 3.8.10.final.0" }
2021-10-22T20:16:14.881+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:44838 #4 (2 connections now open)
2021-10-22T20:16:14.881+0000 I NETWORK  [conn4] received client metadata from 127.0.0.1:44838 conn4: { driver: { name: "PyMongo", version: "3.12.1" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-89-generic" }, platform: "CPython 3.8.10.final.0" }
2021-10-22T20:16:14.882+0000 I ACCESS   [conn4] SCRAM-SHA-1 authentication failed for fact_admin on admin from client 127.0.0.1:44838 ; UserNotFound: Could not find user fact_admin@admin
2021-10-22T20:16:14.885+0000 I NETWORK  [conn4] end connection 127.0.0.1:44838 (1 connection now open)
2021-10-22T20:16:14.885+0000 I NETWORK  [conn3] end connection 127.0.0.1:44836 (0 connections now open)
2021-10-22T20:16:15.123+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:44844 #5 (1 connection now open)
2021-10-22T20:16:15.124+0000 I NETWORK  [conn5] received client metadata from 127.0.0.1:44844 conn5: { driver: { name: "PyMongo", version: "3.12.1" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-89-generic" }, platform: "CPython 3.8.10.final.0" }
2021-10-22T20:16:15.126+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:44846 #6 (2 connections now open)
2021-10-22T20:16:15.126+0000 I NETWORK  [conn6] received client metadata from 127.0.0.1:44846 conn6: { driver: { name: "PyMongo", version: "3.12.1" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-89-generic" }, platform: "CPython 3.8.10.final.0" }
2021-10-22T20:16:15.127+0000 I ACCESS   [conn6] SCRAM-SHA-1 authentication failed for fact_admin on admin from client 127.0.0.1:44846 ; UserNotFound: Could not find user fact_admin@admin
2021-10-22T20:16:15.134+0000 I NETWORK  [conn6] end connection 127.0.0.1:44846 (1 connection now open)
2021-10-22T20:16:15.134+0000 I NETWORK  [conn5] end connection 127.0.0.1:44844 (0 connections now open)
2021-10-22T20:16:15.161+0000 I INDEX    [LogicalSessionCacheRefresh] build index on: config.system.sessions properties: { v: 2, key: { lastUse: 1 }, name: "lsidTTLIndex", ns: "config.system.sessions", expireAfterSeconds: 1800 }
2021-10-22T20:16:15.161+0000 I INDEX    [LogicalSessionCacheRefresh]     building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2021-10-22T20:16:15.164+0000 I INDEX    [LogicalSessionCacheRefresh] build index done.  scanned 0 total records. 0 secs
2021-10-22T20:16:15.164+0000 I COMMAND  [LogicalSessionCacheRefresh] command config.$cmd command: createIndexes { createIndexes: "system.sessions", indexes: [ { key: { lastUse: 1 }, name: "lsidTTLIndex", expireAfterSeconds: 1800 } ], $db: "config" } numYields:0 reslen:98 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2, W: 1 } }, Collection: { acquireCount: { w: 2 } } } protocol:op_msg 398ms

frakman1 commented 2 years ago

OK, I finally got it to work again. I thought I'd share my learnings here for posterity.

It turns out that 999 should be the user ID of the files and folders under /media/data/fact*, but not the group ID. I had mistakenly run chown -R 999:999 on them all. Instead, the group had to be docker, so chown -R 999:docker ... was what fixed it in the end.
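For anyone hitting the same problem, a minimal sketch of that repair (assuming the /media/data bind mount used in this thread and an existing docker group on the host):

sudo chown -R 999:docker /media/data/fact*    # UID 999 = FACT's in-container user; group stays the host's docker group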

frakman1 commented 2 years ago

It seems binwalk is not installed correctly. To fix this for your FACT container you could try to do the following:

  • start the container
  • exec into the container: docker exec -it fact /bin/bash
  • cd into the binwalk plugin folder: cd /opt/FACT_core/src/plugins/analysis/binwalk
  • rerun the install script: python3 install.py

This seemed to fix it at least for me. I still don't know why the initial installation fails, though. There seems to be no error.

So for the sake of completeness, what step should I take to redo the binwalk part? I don't get the difference between "Analysis->update analysis", "Admin->redo analysis", and "Run additional analysis->Add analysis to file". They all seem to show the same long list of plugin options to run.

frakman1 commented 2 years ago

The Python reinstallation appeared to work:

fact@f69b1ea30069:/opt/FACT_core/src/plugins/analysis/binwalk$ python3 install.py
fact@f69b1ea30069:/opt/FACT_core/src/plugins/analysis/binwalk$

I tried "Run additional analysis->Add analysis to file" and selected "force analysis update", but I still see the error:

[2021-10-23 00:50:35][fail_safe_file_operations][ERROR]: Could not read file: FileNotFoundError [Errno 2] No such file or directory: '/tmp/fact-docker-tmp/fact_analysis_binwalk_mh4immk6/1126460f3acf5b6b8e00dc6352051960da869943ec5fdbbf43b0a0ee9fa67cdf_33752533.png'

The contents of that folder:

fact@f69b1ea30069:/tmp/fact-docker-tmp$ tree
.
|-- fact_analysis_binwalk_1_0x7994
|-- fact_analysis_binwalk_pb5mj03g
|-- fact_unpack_e2rk9dd4
|   |-- files
|   |-- input
|   |   `-- 3c423b898fe58ebfc286a3bf78e9c6c409e9517ae5c968875005e7d730e5a561_5268
|   `-- reports
|       `-- meta.json
|-- fact_unpack_lf35kx4i
|   |-- files
|   |-- input
|   |   `-- 4cde10f86a58c57aa46c609053f0b981eef3078320d3a3d5bddb0597373631c7_14184
|   `-- reports
|       `-- meta.json
|-- input_vectors99tno6t9
|   `-- ubirename
|-- input_vectorsi4_pgjig
|   `-- flash_unlock
|-- input_vectorsk6fu8pea
|   `-- sgdisk
`-- input_vectorsq7dpq14y
    `-- vmlinux_XZ_24728.elf

14 directories, 8 files

jstucke commented 2 years ago

So for the sake of completeness, what step should I take to redo the binwalk part? I don't get the difference between "Analysis->update analysis", "Admin->redo analysis", and "Run additional analysis->Add analysis to file". They all seem to show the same long list of plugin options to run.

"update analysis" should work in this case (failed analyses should also get "updated")

jstucke commented 2 years ago

I still got a permission error with #7:

File "/opt/FACT_core/src/plugins/analysis/binwalk/code/binwalk.py", line 27, in process_object tmp_dir = TemporaryDirectory(prefix='fact_analysisbinwalk', dir=get_temp_dir_path(self.config)) File "/usr/lib/python3.8/tempfile.py", line 919, in init self.name = mkdtemp(suffix, prefix, dir) File "/usr/lib/python3.8/tempfile.py", line 497, in mkdtemp _os.mkdir(file, 0o700) PermissionError: [Errno 13] Permission denied: '/tmp/fact-docker-tmp/fact_analysis_binwalk_1cj39oam'

This error disappeared and the analysis completed when I manually added write permissions (on the host) to /tmp/fact-docker-tmp/, though.

sudo chmod a+w -R /tmp/fact-docker-tmp/

Edit: this should not occur when using start.py run, as the permissions should be set correctly by the script. I tested it with docker run, though.

jstucke commented 2 years ago

#7 should have fixed this. If it still doesn't work for you, feel free to reopen the issue.

frakman1 commented 2 years ago

I don't understand the reference to start.py and how I can use it. I saw it referenced in the readme as well, but I'm not sure where it comes into play. I usually just run docker ... run, and I only see three parameters: start, pull-containers, and start-branch. I used pull-containers the first time, but after that it's always start. Are you saying that I can pass parameters meant for start.py into the docker run's start entrypoint?

frakman1 commented 2 years ago

Regarding the fix itself:

RUN useradd -r --no-create-home -d /var/log/fact fact

Is there a way to incorporate that into the existing docker container/image, or must I rebuild everything from scratch?

jstucke commented 2 years ago

Is there a way to incorporate that into the existing docker container/image

All --no-create-home -d /var/log/fact probably does is skip creating the folder /home/fact and instead point to the folder /var/log/fact in /etc/passwd (where you would usually look for a user's home folder).

So using docker exec to get into the container, creating /var/log/fact if it does not exist, and changing the home path in /etc/passwd should probably have the same effect.
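An untested sketch of those manual steps (container name fact taken from this thread):

docker exec -it fact /bin/bash       # get a shell in the running container
mkdir -p /var/log/fact               # create the home folder if it does not exist
chown fact /var/log/fact
usermod -d /var/log/fact fact        # point the fact user's home at it in /etc/passwd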

I don't understand the reference to start.py and how I can use it

It is meant as an easy-to-use script so that you do not have to construct the docker run command by hand or manually set folder permissions. The readme should get updated with #8 to make this clearer. It already has options, e.g. for setting the DB and config paths or for using a different branch of FACT than master (see python3 start.py run --help).
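For example (repository URL assumed from this project; check the --help output for the exact flags of your version):

git clone https://github.com/fkie-cad/FACT_docker.git
cd FACT_docker
python3 start.py run --help    # lists the available options, e.g. DB/config paths and the FACT branch to use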

frakman1 commented 2 years ago

This worked, and now I see binwalk output and an entropy chart. Thank you.

frakman1 commented 2 years ago

I ran "Run additional analysis" on a single file from a previous analysis without binwalk output and although there were no errors in the docker logs I still don't see an image or output under the binwalk tab. I also tried "update analysis" for the whole firmware with the same results. In both cases, I only selected the binwalk checkmark from the list of options.

The logs for "update analysis" shows many pages of chunks like this:

Files included: set()
[2021-10-29 10:21:08][Analysis][INFO]: Analysis Completed:
UID: 6e1ba59cfef81b79c1db2a7ae13f1cd46e33aca53178203ff94772234b33b799_38
 Processed analysis: ['unpacker', 'file_type', 'malware_scanner', 'crypto_hints', 'software_components', 'binwalk', 'crypto_material', 'users_and_passwords', 'printable_strings', 'ip_and_uri_finder', 'interesting_uris', 'string_evaluator', 'cve_lookup', 'source_code_analysis', 'elf_analysis', 'kernel_config', 'exploit_mitigations', 'input_vectors', 'file_system_metadata', 'init_systems', 'cpu_architecture', 'file_hashes', 'qemu_exec', 'known_vulnerabilities', 'tlsh', 'cwe_checker']
 Files included: set()

A couple of entries had giant "Files included" sets:

[2021-10-29 10:21:42][Analysis][INFO]: Analysis Completed:
UID: e2a2bf456d247c2e3136af71eabbe20ca6311494571d1c25cf47103d6e92349a_47362048
 Processed analysis: ['unpacker', 'file_type', 'malware_scanner', 'crypto_material', 'binwalk', 'users_and_passwords', 'printable_strings', 'ip_and_uri_finder', 'software_components', 'crypto_hints', 'elf_analysis', 'file_system_metadata', 'init_systems', 'input_vectors', 'string_evaluator', 'file_hashes', 'qemu_exec', 'kernel_config', 'exploit_mitigations', 'source_code_analysis', 'interesting_uris', 'cpu_architecture', 'cve_lookup', 'known_vulnerabilities', 'tlsh', 'cwe_checker']
 Files included: {'577f17f7c8e711e66d1765a817362e8723ec4e26372c4adb0be6b4ecb55ea0e5_3780', '91dea00333d2613344ffdb0a42eedd0fe2c916726625665a3d59c3811be4621a_385', '3c808820dd9fac884294029097ce767634aac38ce4e0fff22e0793bb0566a634_16792', '65f1d4462307919e64022f40cd0a97f310c8bc9dcc9589492d3ed7316ab2a19b_54', '47918d882d569b600c527218575e4e536ebe1ea3f640f624b26a61e2fd499498_35', '9f4d64a9d55ae4eb8aa9e07a7364c2f334f602bf1a38dd5740cb6dea96e33e55_296', '67b756fd55dddacfc0795e 
...

But the end result was no binwalk output or entropy image.

jstucke commented 2 years ago

There is a functionality in the analysis scheduling of FACT that is meant to prevent running analyses over and over unnecessarily. It works by checking whether there is already a result for this file and analysis plugin in the database. It could be that before, when it was broken, some result was stored in the database, and now the analysis is not run again because of that. If that is the case, you could still do either:

  • "Admin->re-do analysis", which deletes all stored results and runs every analysis again, or
  • "Run additional analysis->Add analysis to file" on the affected file with "force analysis update" checked.

frakman1 commented 2 years ago

Yes, that worked for a single file. "re-do analysis" for the entire firmware is not really feasible, since it takes 7 hours to do everything. I wish there were a "force analysis update" option for the "update analysis" run of the entire firmware.

Is there any logic I can change in the container to bypass this duplicate check just this time?

jstucke commented 2 years ago

There is a silly workaround you could do: increase the version of the analysis plugin. The "smart" scheduling not only checks whether there are already results in the DB, it also checks whether the results were produced by the latest version of the plugin. So, e.g., for binwalk you could simply increase the VERSION in /opt/FACT_core/src/plugins/analysis/binwalk/code/binwalk.py.
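A hedged sketch of that edit from the host (assuming VERSION is defined as a single-quoted string in the plugin file; 0.5.5 is just an example value higher than the shipped one):

docker exec fact sed -i "s/VERSION = '.*'/VERSION = '0.5.5'/" /opt/FACT_core/src/plugins/analysis/binwalk/code/binwalk.py   # bump the plugin version
docker exec fact grep VERSION /opt/FACT_core/src/plugins/analysis/binwalk/code/binwalk.py                                   # verify the change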

Also feel free to open an issue with a feature request for a "force update" checkbox for "update analysis".

frakman1 commented 2 years ago

Unfortunately, that didn't seem to work. I went into that file, updated the VERSION field to 0.5.5, and then did a top-level "Update Analysis" from the Firmware page with only the binwalk option checked. The logs show streams of empty-file-set, analysis-complete blocks like this:

[2021-11-02 13:38:08][Analysis][INFO]: Analysis Completed:
UID: fd94daa04bafd792fa54798e09e6f14fe3991a8f6d0be981edbb7e567b8f9f99_96
 Processed analysis: ['unpacker', 'file_type', 'users_and_passwords', 'ip_and_uri_finder', 'printable_strings', 'malware_scanner', 'binwalk', 'crypto_material', 'crypto_hints', 'software_components', 'exploit_mitigations', 'cve_lookup', 'string_evaluator', 'interesting_uris', 'file_hashes', 'input_vectors', 'file_system_metadata', 'init_systems', 'elf_analysis', 'kernel_config', 'source_code_analysis', 'cpu_architecture', 'qemu_exec', 'known_vulnerabilities', 'cwe_checker', 'tlsh']
 Files included: set()

frakman1 commented 2 years ago

UPDATE:

I noticed that on the Info->System page the binwalk version didn't update, so I tried restarting the docker container. docker restart fact did the trick. Thanks