oguzhanozgur opened 1 month ago
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9900): Max retries exceeded with url: /status (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7dbd80424160>: Failed to establish a new connection: [Errno 111] Connection refused'))
This means that the extractor container could not be reached. The extractor is responsible for unpacking the firmware image and runs inside a docker container. Could you make sure the docker image is there?
docker image ls | grep -i fact_extractor
should give at least one result with tag "latest". Also, could you make sure there are no errors when you start the backend? It should look something like this when the unpacking scheduler starts without errors:
[2024-10-01 14:43:46][extraction_container][INFO]: Started unpack worker 0
[2024-10-01 14:43:47][extraction_container][INFO]: Started unpack worker 1
[2024-10-01 14:43:47][extraction_container][INFO]: Started unpack worker 2
[2024-10-01 14:43:47][extraction_container][INFO]: Started unpack worker 3
[2024-10-01 14:43:47][unpacking_scheduler][INFO]: Unpacking scheduler online
It could also be the case that extractor containers are still running from another time when you did not shut down FACT cleanly. Then you can run
docker ps | grep fact_extractor | cut --delimiter " " --fields 1 | xargs docker stop
to stop the containers before starting FACT again.
When I start the tool, I can see these logs (and I guess there is no error here):
oguzhanozgur@oguzhanozgur:~/FACT_core$ ./start_all_installed_fact_components
[2024-10-01 15:56:46][start_all_installed_fact_components][INFO]: starting database
[2024-10-01 15:56:46][start_all_installed_fact_components][INFO]: starting frontend
[2024-10-01 15:56:46][start_all_installed_fact_components][INFO]: starting backend
[2024-10-01 15:56:46][fact_base][INFO]: Starting FACT Frontend @ 4.3-dev (d96db1d1, Python 3.10.12)
[2024-10-01 15:56:46][__init__][INFO]: Alembic DB revision: head: 05d8effce8b3, current: 05d8effce8b3
[2024-10-01 15:56:46][fact_base][INFO]: Starting FACT DB-Service @ 4.3-dev (d96db1d1, Python 3.10.12)
[2024-10-01 15:56:46][__init__][INFO]: Alembic DB revision: head: 05d8effce8b3, current: 05d8effce8b3
[2024-10-01 15:56:46][fact_base][INFO]: Starting FACT Backend @ 4.3-dev (d96db1d1, Python 3.10.12)
[2024-10-01 15:56:46][fact_base][INFO]: Successfully started FACT DB-Service
[2024-10-01 15:56:46][__init__][INFO]: Alembic DB revision: head: 05d8effce8b3, current: 05d8effce8b3
[2024-10-01 15:56:47][fact_base][INFO]: Successfully started FACT Frontend
[uWSGI] getting INI configuration from /home/oguzhanozgur/FACT_core/src/config/uwsgi_config.ini
Starting uWSGI 2.0.25.1 (64bit) on [Tue Oct 1 15:56:47 2024]
compiled with version: 11.4.0 on 01 October 2024 07:03:53
os: Linux-6.5.0-1027-oem #28-Ubuntu SMP PREEMPT_DYNAMIC Thu Jul 25 13:32:46 UTC 2024
nodename: oguzhanozgur
machine: x86_64
clock source: unix
detected number of CPU cores: 12
current working directory: /home/oguzhanozgur/FACT_core/src
detected binary path: /home/oguzhanozgur/ozzy/bin/uwsgi
!!! no internal routing support, rebuild with pcre support !!!
your processes number limit is 125501
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: enabled
uwsgi socket 0 bound to TCP address 127.0.0.1:5000 fd 3
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0]
Python main interpreter initialized at 0x58d1df7f1fc0
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 500064 bytes (488 KB) for 10 cores
Operational MODE: preforking+threaded
[2024-10-01 15:56:47][ip_and_uri_finder_analysis][INFO]: ip signature path: /home/oguzhanozgur/ozzy/lib/python3.10/site-packages/common_analysis_ip_and_uri_finder/yara_rules/ip_rules.yara
[2024-10-01 15:56:47][ip_and_uri_finder_analysis][INFO]: ip signature path: /home/oguzhanozgur/ozzy/lib/python3.10/site-packages/common_analysis_ip_and_uri_finder/yara_rules/uri_rules.yara
[2024-10-01 15:56:47][frontend_main][INFO]: Web front end online
WSGI app 0 (mountpoint='') ready in 0 seconds on interpreter 0x58d1df7f1fc0 pid: 54743 (default app)
uWSGI is running in multiple interpreter mode
spawned uWSGI master process (pid: 54743)
spawned uWSGI worker 1 (pid: 54962, cores: 2)
spawned uWSGI worker 2 (pid: 54964, cores: 2)
spawned uWSGI worker 3 (pid: 54966, cores: 2)
spawned uWSGI worker 4 (pid: 54968, cores: 2)
spawned uWSGI worker 5 (pid: 54970, cores: 2)
Stats server enabled on 127.0.0.1:9191 fd: 19
[2024-10-01 15:56:47][compare][INFO]: Comparison plugins available: Software, File_Coverage, File_Header
[2024-10-01 15:56:48][scheduler][INFO]: Analysis scheduler online
[2024-10-01 15:56:48][scheduler][INFO]: Analysis plugins available: binwalk 1.0.0, cpu_architecture 0.4.0, crypto_hints 0.2.1, crypto_material 0.5.2, cve_lookup 0.1.0, cwe_checker 0.5.4, device_tree 2.0.0, elf_analysis 0.3.4, exploit_mitigations 0.2.0, file_hashes 1.2, file_system_metadata 1.0.0, file_type 1.0.0, hardware_analysis 0.2, hashlookup 0.1.4, information_leaks 0.2.0, init_systems 0.4.2, input_vectors 0.1.2, interesting_uris 0.1, ip_and_uri_finder 0.4.2, ipc_analyzer 0.1.1, kernel_config 0.3.1, known_vulnerabilities 0.2.1, printable_strings 0.3.4, qemu_exec 0.5.2, software_components 0.4.2, source_code_analysis 0.7.1, string_evaluator 0.2.1, tlsh 0.2, users_and_passwords 0.5.4
[2024-10-01 15:56:48][unpacking_scheduler][INFO]: Unpacking scheduler online
[2024-10-01 15:56:48][unpacking_scheduler][INFO]: Queue Length (Analysis/Unpack): 0 / 0
[2024-10-01 15:56:48][comparison_scheduler][INFO]: Comparison scheduler online
[2024-10-01 15:56:48][back_end_binding][INFO]: Intercom online
[2024-10-01 15:56:48][fact_base][INFO]: Successfully started FACT Backend
[2024-10-01 15:56:51][fact_base][INFO]: System memory usage: 21.3%; open file count: 6
[2024-10-01 15:56:52][fact_base][INFO]: System memory usage: 21.3%; open file count: 7
[2024-10-01 15:56:53][fact_base][INFO]: System memory usage: 21.3%; open file count: 542
And here is the docker output you asked for:
oguzhanozgur@oguzhanozgur:~$ docker image ls | grep -i fact_extractor
fkiecad/fact_extractor   latest   d128d1a4c51c   12 days ago   2.26GB
Also, there is an error about the command you provided:
oguzhanozgur@oguzhanozgur:~/FACT_core$ docker ps | grep fact_extractor | cut --delimiter " " --fields 1 | xargs docker stop
"docker stop" requires at least 1 argument.
See 'docker stop --help'.
Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...]
Stop one or more running containers
There is no log entry that suggests an extractor container was started. Did you change anything in the configuration file (src/config/fact-core-config.toml by default)? What happens if you try to start the extractor container manually with
docker run -it --rm --entrypoint bash fkiecad/fact_extractor:latest
(it should normally give you a shell, which you can exit with Ctrl+D)?
Also, there is an error about the command you provided:
oguzhanozgur@oguzhanozgur:~/FACT_core$ docker ps | grep fact_extractor | cut --delimiter " " --fields 1 | xargs docker stop
"docker stop" requires at least 1 argument.
That is actually good, since it means there are no orphaned extractor containers running. It does not explain what the problem is, though.
I didn't change anything in the config files. When I try to execute your command: What should I do now?
I'm still not sure what the underlying problem is. Everything looks fine apart from the extraction containers not starting in the unpacking scheduler. Could you try running the scheduler tests?
pytest src/test/integration/scheduler
Also, could you try starting only the backend with ./start_fact_backend.py from the src directory and, after it has started, check the output of docker ps?
Normally the extractor containers should show up in the output, e.g.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
734b1efc61aa fkiecad/fact_extractor "gunicorn --timeout …" 2 seconds ago Up 2 seconds 0.0.0.0:9903->5000/tcp, [::]:9903->5000/tcp relaxed_clarke
c3d3017ec49a fkiecad/fact_extractor "gunicorn --timeout …" 3 seconds ago Up 2 seconds 0.0.0.0:9902->5000/tcp, [::]:9902->5000/tcp nice_leakey
fcf139c31a01 fkiecad/fact_extractor "gunicorn --timeout …" 3 seconds ago Up 3 seconds 0.0.0.0:9901->5000/tcp, [::]:9901->5000/tcp goofy_chaplygin
74760d59043d fkiecad/fact_extractor "gunicorn --timeout …" 4 seconds ago Up 3 seconds 0.0.0.0:9900->5000/tcp, [::]:9900->5000/tcp adoring_turing
Here are all the results:
oguzhanozgur@oguzhanozgur:~/FACT_core/src$ ls
alembic.ini compile_yara_signatures.py flask_app_wrapper.py intercom plugins start_fact_frontend.py unpacker
analysis config helperFunctions manage_users.py __pycache__ start_fact.py update_statistic.py
bin config.py init_postgres.py migrate_database.py scheduler statistic version.py
check_signatures.py conftest.py install migrate_db_to_postgresql.py start_fact_backend.py storage web_interface
compare fact_base.py install.py objects start_fact_database.py test
oguzhanozgur@oguzhanozgur:~/FACT_core/src$ ./start_fact_backend.py
Traceback (most recent call last):
File "/home/oguzhanozgur/FACT_core/src/./start_fact_backend.py", line 35, in <module>
from intercom.back_end_binding import InterComBackEndBinding
File "/home/oguzhanozgur/FACT_core/src/intercom/back_end_binding.py", line 15, in <module>
from storage.binary_service import BinaryService
File "/home/oguzhanozgur/FACT_core/src/storage/binary_service.py", line 11, in <module>
from unpacker.tar_repack import TarRepack
File "/home/oguzhanozgur/FACT_core/src/unpacker/tar_repack.py", line 11, in <module>
from unpacker.unpack_base import UnpackBase
File "/home/oguzhanozgur/FACT_core/src/unpacker/unpack_base.py", line 11, in <module>
from docker.types import Mount
ModuleNotFoundError: No module named 'docker'
oguzhanozgur@oguzhanozgur:~/FACT_core/src$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
oguzhanozgur@oguzhanozgur:~/FACT_core/src$ pytest src/test/integration/scheduler
/usr/local/lib/python3.10/dist-packages/_pytest/config/__init__.py:329: PluggyTeardownRaisedWarning: A plugin raised an exception during an old-style hookwrapper teardown.
Plugin: helpconfig, Hook: pytest_cmdline_parse
ConftestImportFailure: ModuleNotFoundError: No module named 'semver' (from /home/oguzhanozgur/FACT_core/src/conftest.py)
For more information see https://pluggy.readthedocs.io/en/stable/api_reference.html#pluggy.PluggyTeardownRaisedWarning
config = pluginmanager.hook.pytest_cmdline_parse(
ImportError while loading conftest '/home/oguzhanozgur/FACT_core/src/conftest.py'.
conftest.py:15: in <module>
from analysis.plugin import AnalysisPluginV0
analysis/plugin/__init__.py:1: in <module>
from .plugin import AnalysisPluginV0, Tag # noqa: F401
analysis/plugin/plugin.py:8: in <module>
import semver
E ModuleNotFoundError: No module named 'semver'
oguzhanozgur@oguzhanozgur:~/FACT_core/src$
Did you maybe forget to activate your virtualenv? There are some import errors:
ModuleNotFoundError: No module named 'docker'
ModuleNotFoundError: No module named 'semver'
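A quick way to double-check whether a venv is actually active is to ask Python itself; a minimal standard-library sketch (the in_virtualenv helper is an illustration, not part of FACT):

```python
# In a venv, sys.prefix points at the venv directory, while
# sys.base_prefix still points at the system Python installation.
# Outside a venv, the two are identical.
import sys


def in_virtualenv() -> bool:
    return sys.prefix != sys.base_prefix


print(in_virtualenv())
```

If this prints False even though your prompt shows a venv name, a different interpreter is being picked up than the one you activated.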
Yes, I forgot :( I tried again and here is the result:
This is the docker ps result:
(ozzy) oguzhanozgur@oguzhanozgur:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
And lastly, this is the pytest result:
OSError: [Errno 24] Too many open files
This could be the root cause. FACT needs a lot of open files at once. What do you get when you run
ulimit -n
and
ulimit -n -H
? This gives you the allowed number of open files (soft limit and hard limit, respectively). If this is exceeded, you will get errors. The soft limit should be at least 600. You can raise the soft limit by running e.g.
ulimit -n 9999
Raising the hard limit is a lot trickier (see https://askubuntu.com/questions/162229/how-do-i-increase-the-open-files-limit-for-a-non-root-user). The limits only count for your current shell, so you may need to set them for each shell/tab independently (you can also add the command to your .bashrc, for example). Could you try raising the limit and running the tests again?
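The same check can also be done from inside Python with the standard library, which verifies the limit the FACT processes actually inherit; a sketch assuming Linux (the target of 9999 just mirrors the ulimit example above):

```python
# Read and raise the soft limit on open file descriptors (the same
# limit that `ulimit -n` shows). Without root, the soft limit can only
# be raised up to the hard limit.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f'soft limit: {soft}, hard limit: {hard}')

target = 9999  # FACT reportedly needs a soft limit of at least 600
new_soft = target if hard == resource.RLIM_INFINITY else min(target, hard)
if new_soft > soft:
    resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
    soft = resource.getrlimit(resource.RLIMIT_NOFILE)[0]
print(f'soft limit now: {soft}')
```

Like ulimit in a shell, this only affects the current process and its children.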
Hi again, here is the new result:
It seems an exception occurred in the extraction container, but we need to activate additional logging to see what it was. Could you try to run this command?
pytest -vvv --log-cli-level=DEBUG -s src/test/integration/scheduler/test_unpack_and_analyse.py
ERROR root:unpacking_scheduler.py:273 Could not fetch unpacking container logs
ERROR root:unpacking_scheduler.py:202 Exception happened during extraction of 418a54d78550e8584291c96e5d6168133621f352bfc1d43cf84e81187fef4962_787.: Extraction container could not be reached.
Sadly, this was not really helpful: It seems the container did not start at all and therefore did not produce any error log. This is really puzzling. What happens if you try to start the container manually with the parameters used in FACT:
docker run --rm -it -p "9990:5000/tcp" --entrypoint gunicorn fkiecad/fact_extractor:latest --timeout 600 -w 1 -b 0.0.0.0:5000 server:app
Does it start? Are you able to do a curl "http://localhost:9990/status" in another shell while it runs?
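For reference, the check behind that curl can be sketched with the standard library alone (FACT itself uses the requests package; the extractor_is_up helper and port 9990 follow the manual docker run above and are not FACT code):

```python
# Probe the extractor's /status endpoint the way a health check would:
# an HTTP GET that treats "connection refused" as "down".
import urllib.error
import urllib.request


def extractor_is_up(url: str = 'http://localhost:9990/status', timeout: float = 5.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status == 200
    except (urllib.error.URLError, OSError):
        # [Errno 111] Connection refused lands here when nothing listens
        return False


print(extractor_is_up())
```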
So the container starts fine, just not when started from FACT using the docker python API, it seems.
What is the version of the python docker package? (You can get it by running pip show docker | grep -i version with the venv activated.)
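The version can also be read programmatically, which avoids any ambiguity about which environment pip is inspecting; a standard-library sketch:

```python
# importlib.metadata reads the version of the `docker` distribution
# installed in the *current* interpreter's environment, i.e. exactly
# what FACT would import.
from importlib.metadata import PackageNotFoundError, version

try:
    print(version('docker'))
except PackageNotFoundError:
    print('docker package is not installed in this environment')
```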
Also, it does not look like you are really using a venv, judging by the package path /home/oguzhanozgur/ozzy/lib/python3.10/site-packages/urllib3/connection.py. Could you try creating a venv in your src folder by running
python3 -m venv venv
. venv/bin/activate
and then, in that shell, reinstall all python packages with
pip install -r install/requirements_pre_install.txt
python3 install.py
and then try starting FACT again (also from this shell)?
I think there is a problem with docker. Here are the new logs: (btw, I could start FACT successfully, but the same issue occurred again)
I think there is a problem with docker.
That's the only remaining source of this problem I can think of at this point. There are some problems that can occur if you install docker and then try to use it without restarting. Have you restarted the system since you installed docker (or FACT)? If not, that could also be worth a try.
And if that wasn't the issue, could you try running this script to try starting the container manually using the python docker API?
from time import sleep

import docker

DOCKER_CLIENT = docker.from_env()

container = DOCKER_CLIENT.containers.run(
    image='fkiecad/fact_extractor',
    ports={'5000/tcp': 9999},
    volumes={'/dev': {'bind': '/dev', 'mode': 'rw'}},
    privileged=True,
    detach=True,
    remove=True,
    entrypoint='gunicorn --timeout 600 -w 1 -b 0.0.0.0:5000 server:app',
)
sleep(3)
print(container.logs().decode())
container.stop()
Where should I write this script? I couldn't understand.
You could either write it to a file and run the file with python3 <file>, or simply start a python shell by running python3 in the terminal and paste the contents into the shell (with Ctrl+Shift+V).
Here is the script result:
That also looks normal. I'm really not sure what to make of this: Starting the containers doesn't work when you start FACT, but it works when you do it manually. There could in theory be a problem with permissions on the mounted folders. Could you try it like this (so that it works exactly like it is called in FACT):
from multiprocessing import Manager
from pathlib import Path
from tempfile import TemporaryDirectory
from time import sleep

from config import load, backend
from unpacker.extraction_container import ExtractionContainer

load()
Path(backend.docker_mount_base_dir).mkdir(exist_ok=True)
tmp_dir = TemporaryDirectory(dir=backend.docker_mount_base_dir)
try:
    with Manager() as manager:
        ec = ExtractionContainer(id_=1, tmp_dir=tmp_dir, value=manager.Value('i', 0))
        ec.start()
        sleep(3)
        container = ec._get_container()
        print(container.logs())
        ec.stop()
finally:
    tmp_dir.cleanup()
There is also one thing I did not ask about: How much RAM do you have in the system where FACT is running?
I think there is a problem here:
I think there is a problem here:
This script must be executed from the src directory for the import paths to line up. Sorry for the confusion.
Hi again, sorry for the late response.
Sorry, that was my fault: the directory does not exist yet, which causes the error. I updated the script. Also, you didn't answer this question:
How much RAM do you have in the system where FACT is running?
This can be a problem, because FACT is rather memory hungry. Normally it shouldn't be a problem during startup, though.
Sorry, I didn't see that one. I have 32 GB RAM total.
By the way, I don't know why this happened, but I cannot start FACT anymore. Here are the logs:
oguzhanozgur@oguzhanozgur:~$ source .venv/bin/activate
(.venv) oguzhanozgur@oguzhanozgur:~$ cd FACT_core/
(.venv) oguzhanozgur@oguzhanozgur:~/FACT_core$ ./start_all_installed_fact_components
[2024-10-09 17:01:06][start_all_installed_fact_components][INFO]: starting database
[2024-10-09 17:01:06][start_all_installed_fact_components][INFO]: starting frontend
[2024-10-09 17:01:06][start_all_installed_fact_components][INFO]: starting backend
[2024-10-09 17:01:06][fact_base][INFO]: Starting FACT DB-Service @ 4.3-dev (adfbfe8f, Python 3.12.3)
[2024-10-09 17:01:06][fact_base][INFO]: Starting FACT Frontend @ 4.3-dev (adfbfe8f, Python 3.12.3)
[2024-10-09 17:01:06][__init__][INFO]: Alembic DB revision: head: 05d8effce8b3, current: 05d8effce8b3
[2024-10-09 17:01:06][__init__][INFO]: Alembic DB revision: head: 05d8effce8b3, current: 05d8effce8b3
[2024-10-09 17:01:06][fact_base][INFO]: Successfully started FACT DB-Service
[2024-10-09 17:01:06][install][ERROR]: Failed to run docker compose -f /home/oguzhanozgur/FACT_core/src/install/radare/docker-compose.yml up -d:
time="2024-10-09T17:01:06+03:00" level=warning msg="/home/oguzhanozgur/FACT_core/src/install/radare/docker-compose.yml: the attribute `version` is obsolete, it will be ignored, please remove it to avoid potential confusion"
Network radare_default Creating
Network radare_default Error
failed to create network radare_default: Error response from daemon: could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network
Traceback (most recent call last):
File "/home/oguzhanozgur/FACT_core/src/../start_fact_frontend", line 92, in <module>
FactFrontend().main()
^^^^^^^^^^^^^^
File "/home/oguzhanozgur/FACT_core/src/../start_fact_frontend", line 72, in __init__
run_cmd_with_logging(f'docker compose -f {COMPOSE_YAML} up -d')
File "/home/oguzhanozgur/FACT_core/src/helperFunctions/install.py", line 221, in run_cmd_with_logging
raise err
File "/home/oguzhanozgur/FACT_core/src/helperFunctions/install.py", line 216, in run_cmd_with_logging
subprocess.run(cmd_, stdout=PIPE, stderr=STDOUT, encoding='UTF-8', shell=shell, check=True, **kwargs)
File "/usr/lib/python3.12/subprocess.py", line 571, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['docker', 'compose', '-f', '/home/oguzhanozgur/FACT_core/src/install/radare/docker-compose.yml', 'up', '-d']' returned non-zero exit status 1.
[2024-10-09 17:01:06][fact_base][INFO]: Starting FACT Backend @ 4.3-dev (adfbfe8f, Python 3.12.3)
[2024-10-09 17:01:07][__init__][INFO]: Alembic DB revision: head: 05d8effce8b3, current: 05d8effce8b3
[2024-10-09 17:01:07][ip_and_uri_finder_analysis][INFO]: ip signature path: /home/oguzhanozgur/.venv/lib/python3.12/site-packages/common_analysis_ip_and_uri_finder/yara_rules/ip_rules.yara
[2024-10-09 17:01:07][ip_and_uri_finder_analysis][INFO]: ip signature path: /home/oguzhanozgur/.venv/lib/python3.12/site-packages/common_analysis_ip_and_uri_finder/yara_rules/uri_rules.yara
[2024-10-09 17:01:08][compare][INFO]: Comparison plugins available: Software, File_Coverage, File_Header
[2024-10-09 17:01:08][scheduler][INFO]: Analysis scheduler online
[2024-10-09 17:01:08][scheduler][INFO]: Analysis plugins available: binwalk 1.0.0, cpu_architecture 0.4.0, crypto_hints 0.2.1, crypto_material 0.5.2, cve_lookup 0.1.0, cwe_checker 0.5.4, device_tree 2.0.0, elf_analysis 0.3.4, exploit_mitigations 0.2.0, file_hashes 1.2, file_system_metadata 1.0.0, file_type 1.0.0, hardware_analysis 0.2, hashlookup 0.1.4, information_leaks 0.2.0, init_systems 0.4.2, input_vectors 0.1.2, interesting_uris 0.1, ip_and_uri_finder 0.4.2, ipc_analyzer 0.1.1, kernel_config 0.3.1, known_vulnerabilities 0.3.0, printable_strings 0.3.4, qemu_exec 0.5.2, software_components 0.5.0, source_code_analysis 0.7.1, string_evaluator 0.2.1, tlsh 0.2, users_and_passwords 0.5.4
[2024-10-09 17:01:08][unpacking_scheduler][INFO]: Unpacking scheduler online
[2024-10-09 17:01:08][unpacking_scheduler][INFO]: Queue Length (Analysis/Unpack): 0 / 0
[2024-10-09 17:01:08][comparison_scheduler][INFO]: Comparison scheduler online
[2024-10-09 17:01:08][back_end_binding][INFO]: Intercom online
[2024-10-09 17:01:08][fact_base][INFO]: Successfully started FACT Backend
[2024-10-09 17:01:11][fact_base][INFO]: System memory usage: 23.0%; open file count: 7
[2024-10-09 17:01:13][fact_base][INFO]: System memory usage: 23.0%; open file count: 542
I have 32 GB RAM total.
That should be plenty.
failed to create network radare_default: Error response from daemon: could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network
This is not a problem I encountered before. Maybe it is also related to the problems with starting the extractor containers. According to https://stackoverflow.com/questions/43720339/docker-error-could-not-find-an-available-non-overlapping-ipv4-address-pool-am this may have something to do with a VPN running in the background. Could this be the problem in your case?
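If a VPN claims the address ranges docker allocates from by default, one common workaround is to give the docker daemon an explicit, non-conflicting pool in /etc/docker/daemon.json and then restart the docker service. A sketch; the 10.200.0.0/16 base is an assumption and must be chosen so it does not collide with your VPN or LAN:

```json
{
  "default-address-pools": [
    { "base": "10.200.0.0/16", "size": 24 }
  ]
}
```

Here "size": 24 means each docker network gets a /24 subnet carved out of the base range.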
Yes, you are right. I disconnected from the VPN and the tool worked again, but our problem is still continuing.
our problem is still continuing.
I'm sorry that the problem still persists. I'm still puzzled about what the underlying issue could be here. What you could also do is run FACT in a VM. There are also pre-built Vagrant VirtualBox images that you can download here: https://portal.cloud.hashicorp.com/vagrant/discover/fact-cad/FACT-master (you may need to add port forwarding to access the web interface from your system, as it uses NAT by default).
Hi again,
Vagrant also has the same issue. Also, I want to install the tool the normal way. I tried re-installation several times, but nothing changed. What should I do?
Vagrant also has the same issue
Are you sure that it is the same issue? Since it runs in a VM, this would not make a lot of sense. Maybe it has something to do with your hardware or your network.
FACT version
4.3-dev
Environment
Distribution : Ubuntu 22.04.5 LTS powered by FACT 4.3-dev © Fraunhofer FKIE 2015-2024
Steps to reproduce
Observed Behavior
The analysis ended immediately and nothing showed up on the result screen.
Expected Behavior
The analysis should start normally and the tool should analyze the fields that I marked.
Installation logs
No response
Backend logs
[2024-10-01 13:29:01][connectionpool][WARNING]: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7dbd805f7c70>: Failed to establish a new connection: [Errno 111] Connection refused')': /status
[2024-10-01 13:29:01][connectionpool][WARNING]: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7dbd805f7ee0>: Failed to establish a new connection: [Errno 111] Connection refused')': /status
[2024-10-01 13:29:02][connectionpool][WARNING]: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7dbd805f71c0>: Failed to establish a new connection: [Errno 111] Connection refused')': /status
[2024-10-01 13:29:02][unpacking_scheduler][ERROR]: Could not fetch unpacking container logs
[2024-10-01 13:29:02][unpacking_scheduler][ERROR]: Exception happened during extraction of 2bd7fcbb382db9223414bde8aefd4f7eab3299bc0084e43356e6c1ac26af3baf_4535.: Extraction container could not be reached.
Traceback (most recent call last):
  File "/home/oguzhanozgur/ozzy/lib/python3.10/site-packages/urllib3/connection.py", line 199, in _new_conn
    sock = connection.create_connection(
  File "/home/oguzhanozgur/ozzy/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
    raise err
  File "/home/oguzhanozgur/ozzy/lib/python3.10/site-packages/urllib3/util/connection.py", line 73, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/home/oguzhanozgur/ozzy/lib/python3.10/site-packages/urllib3/connectionpool.py", line 789, in urlopen
    response = self._make_request(
  File "/home/oguzhanozgur/ozzy/lib/python3.10/site-packages/urllib3/connectionpool.py", line 495, in _make_request
    conn.request(
  File "/home/oguzhanozgur/ozzy/lib/python3.10/site-packages/urllib3/connection.py", line 441, in request
    self.endheaders()
  File "/usr/lib/python3.10/http/client.py", line 1278, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib/python3.10/http/client.py", line 1038, in _send_output
    self.send(msg)
  File "/usr/lib/python3.10/http/client.py", line 976, in send
    self.connect()
  File "/home/oguzhanozgur/ozzy/lib/python3.10/site-packages/urllib3/connection.py", line 279, in connect
    self.sock = self._new_conn()
  File "/home/oguzhanozgur/ozzy/lib/python3.10/site-packages/urllib3/connection.py", line 214, in _new_conn
    raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7dbd80424160>: Failed to establish a new connection: [Errno 111] Connection refused
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/home/oguzhanozgur/ozzy/lib/python3.10/site-packages/requests/adapters.py", line 589, in send
    resp = conn.urlopen(
  File "/home/oguzhanozgur/ozzy/lib/python3.10/site-packages/urllib3/connectionpool.py", line 873, in urlopen
    return self.urlopen(
  File "/home/oguzhanozgur/ozzy/lib/python3.10/site-packages/urllib3/connectionpool.py", line 873, in urlopen
    return self.urlopen(
  File "/home/oguzhanozgur/ozzy/lib/python3.10/site-packages/urllib3/connectionpool.py", line 873, in urlopen
    return self.urlopen(
  File "/home/oguzhanozgur/ozzy/lib/python3.10/site-packages/urllib3/connectionpool.py", line 843, in urlopen
    retries = retries.increment(
  File "/home/oguzhanozgur/ozzy/lib/python3.10/site-packages/urllib3/util/retry.py", line 519, in increment
    raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9900): Max retries exceeded with url: /status (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7dbd80424160>: Failed to establish a new connection: [Errno 111] Connection refused'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/home/oguzhanozgur/FACT_core/src/unpacker/unpack_base.py", line 55, in _extract_with_worker
    response = container.start_unpacking(tmp_dir, timeout=WORKER_TIMEOUT)
  File "/home/oguzhanozgur/FACT_core/src/unpacker/extraction_container.py", line 118, in start_unpacking
    response = self._check_connection()
  File "/home/oguzhanozgur/FACT_core/src/unpacker/extraction_container.py", line 133, in _check_connection
    return session.get(url, timeout=5)
  File "/home/oguzhanozgur/ozzy/lib/python3.10/site-packages/requests/sessions.py", line 602, in get
    return self.request("GET", url, **kwargs)
  File "/home/oguzhanozgur/ozzy/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "/home/oguzhanozgur/ozzy/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "/home/oguzhanozgur/ozzy/lib/python3.10/site-packages/requests/adapters.py", line 622, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9900): Max retries exceeded with url: /status (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7dbd80424160>: Failed to establish a new connection: [Errno 111] Connection refused'))
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/home/oguzhanozgur/FACT_core/src/scheduler/unpacking_scheduler.py", line 199, in work_thread
    extracted_objects = self.unpacker.unpack(task, tmp_dir, container)
  File "/home/oguzhanozgur/FACT_core/src/unpacker/unpack.py", line 42, in unpack
    extracted_files = self.extract_files_from_file(current_fo.file_path, tmp_dir, container)
  File "/home/oguzhanozgur/FACT_core/src/unpacker/unpack_base.py", line 41, in extract_files_from_file
    self._extract_with_worker(file_path, container, tmp_dir)
  File "/home/oguzhanozgur/FACT_core/src/unpacker/unpack_base.py", line 59, in _extract_with_worker
    raise ExtractionError('Extraction container could not be reached.') from error
unpacker.unpack_base.ExtractionError: Extraction container could not be reached.
[2024-10-01 13:29:02][unpacking_scheduler][INFO]: Unpacking completed: 2bd7fcbb382db9223414bde8aefd4f7eab3299bc0084e43356e6c1ac26af3baf_4535 (extracted files: 0)
[2024-10-01 13:29:02][unpacking_scheduler][INFO]: Unpacking of firmware 2bd7fcbb382db9223414bde8aefd4f7eab3299bc0084e43356e6c1ac26af3baf_4535 completed.
/home/oguzhanozgur/FACT_core/src/bin/internal_symlink_magic, 7: Warning: using regular magic file `/home/oguzhanozgur/FACT_core/src/bin/firmware'
/home/oguzhanozgur/FACT_core/src/bin/internal_symlink_magic, 7: Warning: using regular magic file `/home/oguzhanozgur/FACT_core/src/bin/firmware'
Process ExceptionSafeProcess-109:
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/home/oguzhanozgur/FACT_core/src/helperFunctions/process.py", line 93, in run
    raise exception
  File "/home/oguzhanozgur/FACT_core/src/helperFunctions/process.py", line 87, in run
    Process.run(self)
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/oguzhanozgur/FACT_core/src/scheduler/unpacking_scheduler.py", line 149, in extraction_loop
    self.check_pending()
  File "/home/oguzhanozgur/FACT_core/src/scheduler/unpacking_scheduler.py", line 173, in check_pending
    container.restart()
  File "/home/oguzhanozgur/FACT_core/src/unpacker/extraction_container.py", line 91, in restart
    self.stop()
  File "/home/oguzhanozgur/FACT_core/src/unpacker/extraction_container.py", line 67, in stop
    raise RuntimeError('Container is not running.')
RuntimeError: Container is not running.
[2024-10-01 13:29:03][scheduler][INFO]: Analysis Completed: 2bd7fcbb382db9223414bde8aefd4f7eab3299bc0084e43356e6c1ac26af3baf_4535
[2024-10-01 13:29:03][analysis_status][INFO]: Analysis of firmware 2bd7fcbb382db9223414bde8aefd4f7eab3299bc0084e43356e6c1ac26af3baf_4535 completed

Frontend logs
No response
Other information
No response