kevoreilly / CAPEv2

Malware Configuration And Payload Extraction
https://capesandbox.com/analysis/

API for Distributed CAPE doesn't run due to "'yara' is not defined" error. #552

Closed ai-suzuki closed 3 years ago

ai-suzuki commented 3 years ago

Expected Behavior

Ultimately, I want to set up Distributed CAPE. I created dist.ini for this, and now I want to run the distributed API and register a node from a worker.

Current Behavior

I configured it by following this documentation: https://capev2.readthedocs.io/en/latest/usage/dist.html?highlight=dist

When I start the API with the following command, I get an error saying that yara is not defined, and as a result the worker node cannot be registered either:

uwsgi --ini /opt/CAPEv2/utils/dist.ini

yara-python is installed.

(venv-cape) root@cape-master:~# pip list |grep yara
yara-python               4.1.0
(venv-cape) root@cape-master:~#

yara can also be imported.

(venv-cape) root@cape-master:~# python
Python 3.8.5 (default, May 27 2021, 13:30:53)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import yara
>>> print(yara.__version__)
4.1.0
>>>

However, when I start the API, the following error output appears.

(venv-cape) root@cape-master:~# uwsgi --ini /opt/CAPEv2/utils/dist.ini
[uWSGI] getting INI configuration from /opt/CAPEv2/utils/dist.ini
*** Starting uWSGI 2.0.19.1 (64bit) on [Wed Jul 28 03:06:17 2021] ***
compiled with version: 9.3.0 on 26 July 2021 09:56:08
os: Linux-5.8.0-55-generic #62~20.04.1-Ubuntu SMP Wed Jun 2 08:55:04 UTC 2021
nodename: cape-master
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 4
current working directory: /root
detected binary path: /usr/local/bin/uwsgi
chdir() to /opt/CAPEv2/utils
your processes number limit is 31609
your memory page size is 4096 bytes
 *** WARNING: you have enabled harakiri without post buffering. Slow upload could be rejected on post-unbuffered webservers ***
detected max file descriptor number: 1048576
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to TCP address 0.0.0.0:9003 fd 3
setuid() to 1000
Python version: 3.8.5 (default, May 27 2021, 13:30:53)  [GCC 9.3.0]
Python main interpreter initialized at 0x562299f363c0
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 229488 bytes (224 KB) for 5 cores
*** Operational MODE: threaded ***
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 2916)
spawned uWSGI worker 1 (pid: 2917, cores: 5)
writing pidfile to /tmp/dist.pid
writing pidfile to /tmp/dist.pid
*** Stats server enabled on 127.0.0.1:9191 fd: 9 ***
mounting dist.py on /
Traceback (most recent call last):
  File "/opt/CAPEv2/utils/../lib/cuckoo/common/objects.py", line 814, in init_yara
    File.yara_rules[category] = yara.compile(filepaths=rules, externals=externals)
NameError: name 'yara' is not defined

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "dist.py", line 33, in <module>
    from lib.cuckoo.common.config import Config
  File "/opt/CAPEv2/utils/../lib/cuckoo/common/config.py", line 11, in <module>
    from lib.cuckoo.common.objects import Dictionary
  File "/opt/CAPEv2/utils/../lib/cuckoo/common/objects.py", line 834, in <module>
    init_yara()
  File "/opt/CAPEv2/utils/../lib/cuckoo/common/objects.py", line 815, in init_yara
    except yara.Error as e:
NameError: name 'yara' is not defined
OOPS ! failed loading app in worker 1 (pid 2917) :( trying again...
DAMN ! worker 1 (pid: 2917) died :( trying respawn ...
Respawned uWSGI worker 1 (new pid: 2919)
mounting dist.py on /

Because of this, I can't even try to add the worker node:

curl http://X.X.X.X:9003/node -F name=worker -F url=http://10.64.180.161:8000/apiv2/

dist.ini is configured like this:

[uwsgi]
    plugins = /usr/lib/uwsgi/plugins/python3_plugin.so
    callable = app
    ;change this patch if is different
    chdir = /opt/CAPEv2/utils
    master = true
    mount = /=dist.py
    threads = 5
    workers = 2
    manage-script-name = true
    ; if you will use with nginx, comment next line
    socket = 0.0.0.0:9003
    safe-pidfile = /tmp/dist.pid
    protocol=http
    enable-threads = true
    lazy = true
    timeout = 600
    chmod-socket = 664
    chown-socket = cape:cape
    gui = cape
    uid = cape
    harakiri = 30
    hunder-lock = True
    stats = 127.0.0.1:9191
doomedraven commented 3 years ago

Hello,

yara is there: https://github.com/kevoreilly/CAPEv2/blob/master/lib/cuckoo/common/objects.py#L37

Your problem is that you are running uwsgi without properly setting the venv. You need to add something like the following, fixing the path for your setup; Google and the uwsgi docs can provide more info about that:

venv = /var/www/venv
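
For reference, a minimal dist.ini excerpt with that line added; it assumes the venv shown earlier lives at /root/venv-cape, so adjust the path to your setup:

[uwsgi]
    ; ... keep the rest of dist.ini as it is ...
    chdir = /opt/CAPEv2/utils
    mount = /=dist.py
    ; point uwsgi at the virtualenv that has yara-python and the other CAPE deps installed
    venv = /root/venv-cape
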
doomedraven commented 3 years ago

https://github.com/kevoreilly/CAPEv2/commit/5e69cc42cda5219ba95a8b4d48111b7f5c293057 https://uwsgi-docs.readthedocs.io/en/latest/Python.html#virtualenv-support

ai-suzuki commented 3 years ago

Thank you. I tried modifying dist.ini, but it didn't work. It seems that in a virtual environment, uwsgi itself also needs to be installed with pip.
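
For what it's worth, a hedged sketch of what that looks like: install uwsgi inside the venv and start it from the venv's own binary (paths taken from the prompts above, adjust as needed):

(venv-cape) root@cape-master:~# pip install uwsgi
(venv-cape) root@cape-master:~# /root/venv-cape/bin/uwsgi --ini /opt/CAPEv2/utils/dist.ini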

doomedraven commented 3 years ago

Yes. To make sure everything works, run python3 manage.py runserver 0.0.0.0:8000 so you can see whether it is a Python problem or a CAPE problem. A venv is nice, but in my experience it is sometimes just a pain.

ai-suzuki commented 3 years ago

python3 manage.py runserver 0.0.0.0:8000 is working fine.

I was facing the same problem as in this issue: https://github.com/unbit/uwsgi/issues/1688

I can't fix it, so I may try again without a venv.

doomedraven commented 3 years ago

Ugh. Let us know how you solve it so I can update the docs. It is sometimes hard to get this working properly. I recently tried gunicorn; it didn't work better for me, but its configuration is simpler, so you may want to check it out.
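
For anyone who wants to try the gunicorn route mentioned above, a rough, untested sketch based on the chdir and callable = app values from the dist.ini earlier in this thread (worker/thread counts are arbitrary):

pip3 install gunicorn
cd /opt/CAPEv2/utils
gunicorn --workers 2 --threads 5 --bind 0.0.0.0:9003 dist:app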

ai-suzuki commented 3 years ago

OK, I'll let you know.

doomedraven commented 3 years ago

By the way, pay attention to the plugins; they might be in a different folder if you installed inside a venv:

plugins-dir = /usr/lib/uwsgi/plugins
plugins = python38

Check that the plugins dir is correct and try adding that to your config as well; uwsgi started working better for me after I set it.
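
A quick way to check what is actually in that directory (the grep pattern is just an example):

ls /usr/lib/uwsgi/plugins/ | grep -i python
# note: a uwsgi built by pip inside a venv usually has Python support compiled in
# and does not need the plugins/plugins-dir lines at all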

ai-suzuki commented 3 years ago

In my environment as well, the plugin was in /usr/lib/uwsgi/plugins. I changed it to /root/venv-cape/plugins, but it didn't work.

ai-suzuki commented 3 years ago

I hit the limits of the venv approach, so I reinstalled CAPE directly on the OS (under a cape user instead of root). I was able to start uwsgi without the fatal error that plagued the venv setup; however, it now complains about a missing module.

CAPEv2/modules/processing/parsers/CAPE/PredatorPain.py_disabled.py seems to be intentional, but is CAPEv2/cuckoo.pyproj relevant? Should I modify it?

cape@cape-master2:~$ sudo uwsgi --ini /opt/CAPEv2/utils/dist.ini
[uWSGI] getting INI configuration from /opt/CAPEv2/utils/dist.ini
*** Starting uWSGI 2.0.18-debian (64bit) on [Wed Aug  4 07:27:17 2021] ***
compiled with version: 10.0.1 20200405 (experimental) [master revision 0be9efad938:fcb98e4978a:705510a708d3642c9c962beb663c476167e4e8a4] on 11 April 2020 11:15:55
os: Linux-5.8.0-63-generic #71~20.04.1-Ubuntu SMP Thu Jul 15 17:46:08 UTC 2021
nodename: cape-master2
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 4
current working directory: /home/cape
detected binary path: /usr/bin/uwsgi-core
chdir() to /opt/CAPEv2/utils
your processes number limit is 15505
your memory page size is 4096 bytes
 *** WARNING: you have enabled harakiri without post buffering. Slow upload could be rejected on post-unbuffered webservers ***
detected max file descriptor number: 1048576
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to TCP address 0.0.0.0:9003 fd 3
setuid() to 1001
Python version: 3.8.10 (default, Jun  2 2021, 10:49:15)  [GCC 9.4.0]
Python main interpreter initialized at 0x55c643f9daf0
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 229488 bytes (224 KB) for 5 cores
*** Operational MODE: threaded ***
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 9353)
spawned uWSGI worker 1 (pid: 9354, cores: 5)
writing pidfile to /tmp/dist.pid
writing pidfile to /tmp/dist.pid
*** Stats server enabled on 127.0.0.1:9191 fd: 9 ***
mounting dist.py on /
CAPE parser: No module named PredatorPain.py_disabled - No module named 'modules.processing.parsers.CAPE.PredatorPain'
The flask-restful package is required: pip3 install flask-restful
DAMN ! worker 1 (pid: 9354) died :( trying respawn ...
Respawned uWSGI worker 1 (new pid: 9462)
mounting dist.py on /
doomedraven commented 3 years ago

Thanks for the feedback about PredatorPain. Don't worry; I renamed it to .py_disabled because I don't want extra dependencies on dead (outdated) extractors. As for /cuckoo.pyproj, that is for Visual Studio, so no, it is not relevant.

doomedraven commented 3 years ago

By the way, it looks like you are missing a dependency:

The flask-restful package is required: pip3 install flask-restful
ai-suzuki commented 3 years ago

flask-restful was installed, but I still get that message.

doomedraven commented 3 years ago

Hmm, then for some reason it can't import it, I guess.

TheMythologist commented 3 years ago

Could you check whether your Python packages are installed for your user or for root? You may have mistakenly installed the packages for the user but not for the root user. Try sudo pip3 install flask-restful and see if it installs for the root user.
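
A quick way to compare the two environments (a sketch; exact output depends on the system):

# package visible to your normal user
pip3 show flask-restful
# package visible to root, which is what sudo uwsgi runs as
sudo pip3 show flask-restful
# if the second command prints nothing, install it for root
sudo pip3 install flask-restful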

ai-suzuki commented 3 years ago

True! I had installed it for the cape user... I installed it for root and it works fine. Thank you very much!

Next, I will proceed with the MongoDB replication settings.

ai-suzuki commented 3 years ago

I tried to register masters and workers as below,

curl http://localhost:9003/node -F name=master -F url=http://<MASTER_IP>:8000/api/
curl http://localhost:9003/node -F name=worker -F url=http://<WORKER_IP>:8000/api/

I got an error like this, and I don't know what is wrong:

{"message": "Invalid CAPE node (http://<WORKER_IP>:8000/apiv2): 'data'"}

Does the -F name=master part need to match something in a conf file?

doomedraven commented 3 years ago

Have you enabled the machine list endpoint in the API? I think it is disabled by default; I need to add a note about this in the docs.

ai-suzuki commented 3 years ago

You were right. Once I enabled [list_exitnodes] and [machinelist] in api.conf, it worked. Thank you.
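
For reference, a minimal sketch of the api.conf sections mentioned above; the section names come from the comment, and the enabled switch is assumed to follow the usual CAPE conf pattern:

[machinelist]
enabled = yes

[list_exitnodes]
enabled = yes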

doomedraven commented 3 years ago

OK, thanks for the feedback, I will update the docs. By the way, bear in mind that if this will be a public-facing master, you should disable the machine list so nobody can register your master as their worker.

ai-suzuki commented 3 years ago

Thank you, got it. I will be careful when using it publicly.

doomedraven commented 3 years ago

Here is also some nice material on hardening the setup and getting rid of bots, in case you do make it public: https://capev2.readthedocs.io/en/latest/usage/web.html#some-extra-security-tip-s

ai-suzuki commented 3 years ago

Hello. I set up distributed CAPE as follows, based on the documentation and on past posts by others.

- MongoDB sharding/replication:

mongos console
  shards:
        {  "_id" : "rs0",  "host" : "rs0/10.64.180.154:27017,10.64.180.155:27017",  "state" : 1,  "topologyTime" : Timestamp(1628559258, 1) }
  active mongoses:
        "5.0.1" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled: yes
        Currently running: no

- Modified reporting.conf:

[mongodb] 
host = masterIP  (only in the worker's conf)
port = 27020

[distributed]
enabled = yes
master_storage_only = yes
remove_task_on_worker = yes

- Worker node registration:

curl http://localhost:9003/node -F name=worker -F url=http://<workerIP>:8000/apiv2/

However, even when I upload multiple files to the master, they are not distributed to the workers. Also, even when I specify the worker node via the master API as shown below, the file still seems to be analyzed on the master's VM.

curl -F file=@<pathtofile> -F options="node=worker" http://localhost:8000/apiv2/tasks/create/file/

So I have two questions: (1) Is there any other conf I should look at? (2) My understanding of distributed CAPE is that when multiple files are uploaded to the master, they are distributed to the workers. Is that correct?

doomedraven commented 3 years ago

1. The only conf for all of this is reporting.conf. I basically use the master as a web GUI to view the reports, and all the powerful servers are workers (I had totally forgotten about cases like yours or capesandbox.com where the master is also a worker).

2. Yes, that is correct.

As it works right now, if the master is running CAPE it will consume tasks and doesn't really check whether a task is meant for a worker (easy to fix, and I'm already looking at it). Give me a few minutes.

doomedraven commented 3 years ago

@ai-suzuki, do a git pull; it was done here. If node= is in the options, the master won't pick up that task anymore :) Thanks for reminding me to finish this: https://github.com/kevoreilly/CAPEv2/commit/4170f6ee29ab2201d114d389fc39b416b731c143
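
A rough sketch of pulling the fix and restarting; the systemd unit name is an assumption, so adjust it to however CAPE runs on your master:

cd /opt/CAPEv2 && git pull
# restart the main CAPE scheduler so the change takes effect (unit name assumed)
sudo systemctl restart cape.service
# the distributed API (uwsgi) should be restarted as well, e.g. by re-running:
# uwsgi --ini /opt/CAPEv2/utils/dist.ini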

ai-suzuki commented 3 years ago

thank you! I will pull

doomedraven commented 3 years ago

You are welcome. Let me know if anything else comes up.

ai-suzuki commented 3 years ago

I'm sorry, but tasks still seem to go to the master. Is there something wrong with my settings?

doomedraven commented 3 years ago

Did you restart the CAPE service after the git pull?

ai-suzuki commented 3 years ago

yes

doomedraven commented 3 years ago

Weird; for me the master shows no tasks when node=x is set in the options. I will try to check that, but it will have to be later, I have to do my $dayjob.

ai-suzuki commented 3 years ago

OK. It is not just when node=x is specified; when I uploaded multiple files to the master, they were not distributed either. So I will also review my settings... Thank you, I know you are busy.

doomedraven commented 3 years ago

By the way, when you set node = X (with spaces) it won't work, since the code checks for node=X. I guess that is the case here; can you try it?
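
In other words, the options string must not contain spaces around the equals sign. A hedged example based on the submission command used earlier in this thread (file name is a placeholder):

# picked up by the worker: the scheduler matches the literal "node=" key
curl http://localhost:8000/apiv2/tasks/create/file/ -F file=@sample.exe -F options="node=worker5"
# not recognized: "node = worker5" does not match
curl http://localhost:8000/apiv2/tasks/create/file/ -F file=@sample.exe -F options="node = worker5"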

ai-suzuki commented 3 years ago

That's right... but even when I set node=X (my node is worker5) on the master, it doesn't work; the task is still sent to the master's CAPE.

curl -F file=@test.txt -F node="worker5" http://localhost:8000/apiv2/tasks/create/file/

ai-suzuki commented 3 years ago

When node=X is set, the task stays pending on the master and is no longer analyzed there, but it does not go to the worker either.

By the way, is it necessary to use different names and IPs for the Windows virtual machines on the master and the worker?

doomedraven commented 3 years ago

Maybe you set node= incorrectly? It should use the node name. And no, there is no need for different names or IPs, since they are on different hosts anyway.

ai-suzuki commented 3 years ago

node= uses a unique node name. (There are several leftover entries, because registered nodes never disappear, but the node I want to use now is worker5.) Since it shows "enabled": false, is that why the analysis doesn't start?

cape@master2:/opt/CAPEv2/utils$ python3 dist.py --node worker5
You using old version of sflock! Upgrade: pip3 install -U SFlock2
2021-08-13 07:18:48,785 INFO:dist:MainThread - Available VM's on worker5:
2021-08-13 07:18:48,797 INFO:dist:MainThread - -->      win10
2021-08-13 07:18:48,801 INFO:dist:MainThread - Updated the machine table for node: worker5
cape@master2:/opt/CAPEv2/utils$ curl http://localhost:9003/node
{"nodes": {"master": {"name": "master", "url": "http://localhost:8000/apiv2/", "machines": [{"name": "win10", "platform": "windows", "tags": ["x64"]}], "enabled": false}, "worker": {"name": "worker", "url": "http://10.64.180.155:8000/apiv2/", "machines": [{"name": "win10-worker", "platform": "windows", "tags": ["x64"]}], "enabled": false}, "worker2": {"name": "worker2", "url": "http://10.64.180.155:8000/apiv2/", "machines": [{"name": "win10-worker", "platform": "windows", "tags": ["x64"]}], "enabled": false}, "worker3": {"name": "worker3", "url": "http://10.64.180.155:8000/apiv2/", "machines": [{"name": "win10-worker", "platform": "windows", "tags": ["x64"]}], "enabled": false}, "worker4": {"name": "worker4", "url": "http://10.64.180.155:8000/apiv2/", "machines": [{"name": "win10", "platform": "windows", "tags": ["x64"]}], "enabled": false}, "worker5": {"name": "worker5", "url": "http://10.64.180.155:8000/apiv2/", "machines": [{"name": "win10", "platform": "windows", "tags": ["x64"]}], "enabled": false}}}
doomedraven commented 3 years ago

Exactly. When you register a worker it is added in disabled mode, just in case, so you need to enable it. I guess that is why the analysis doesn't go to the worker.
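
A short example of enabling a registered node and verifying the change, based on the dist.py flags and the curl endpoint already shown in this thread:

cd /opt/CAPEv2/utils
sudo python3 dist.py --node worker5 --enable
# verify: "enabled" should now be true for worker5
curl http://localhost:9003/node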

ai-suzuki commented 3 years ago

I didn't realize I had to do that... I ran sudo python3 dist.py --node worker5 --enable and it is now set to true, but nothing changes... The task should stay pending on the master and then be sent to the worker, right?

doomedraven commented 3 years ago

If you set node=x it will be pending on the master. Well, I can't really help much since I don't have access to your system, but check here to see why it doesn't return those tasks: https://github.com/kevoreilly/CAPEv2/blob/master/utils/dist.py#L686

ai-suzuki commented 3 years ago

I looked at dist.log. Something seems wrong with the SQL layer?

2021-08-13 07:13:20,529 INFO:dist:Retriever - Thread: free_space_mon - Alive: True
2021-08-13 07:16:10,799 ERROR:base:fetcher - Exception during reset or similar
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 682, in _finalize_fairy
    fairy._reset(pool)
  File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 887, in _reset
    pool._dialect.do_rollback(self)
  File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/default.py", line 667, in do_rollback
    dbapi_connection.rollback()
sqlite3.ProgrammingError: SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 140303504033536 and this is thread id 140303117821696.
2021-08-13 07:16:10,802 ERROR:base:fetcher - Exception closing connection <sqlite3.Connection object at 0x7f9af68ab030>
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 682, in _finalize_fairy
    fairy._reset(pool)
  File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 887, in _reset
    pool._dialect.do_rollback(self)
  File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/default.py", line 667, in do_rollback
    dbapi_connection.rollback()
sqlite3.ProgrammingError: SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 140303504033536 and this is thread id 140303117821696.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 244, in _close_connection
    self._dialect.do_close(connection)
  File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/default.py", line 673, in do_close
    dbapi_connection.close()
sqlite3.ProgrammingError: SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 140303504033536 and this is thread id 140303117821696.
2021-08-13 07:16:20,838 ERROR:base:StatusThread - Exception during reset or similar
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 682, in _finalize_fairy
    fairy._reset(pool)
  File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 887, in _reset
    pool._dialect.do_rollback(self)
  File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/default.py", line 667, in do_rollback
    dbapi_connection.rollback()
sqlite3.ProgrammingError: SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 140303151392512 and this is thread id 140303529473792.
2021-08-13 07:16:20,839 ERROR:base:StatusThread - Exception closing connection <sqlite3.Connection object at 0x7f9af68ab210>
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 682, in _finalize_fairy
    fairy._reset(pool)
  File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 887, in _reset
    pool._dialect.do_rollback(self)
  File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/default.py", line 667, in do_rollback
    dbapi_connection.rollback()
sqlite3.ProgrammingError: SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 140303151392512 and this is thread id 140303529473792.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 244, in _close_connection
    self._dialect.do_close(connection)
  File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/default.py", line 673, in do_close
    dbapi_connection.close()
sqlite3.ProgrammingError: SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 140303151392512 and this is thread id 140303529473792.
2021-08-13 07:18:48,785 INFO:dist:MainThread - Available VM's on worker5:
2021-08-13 07:18:48,797 INFO:dist:MainThread - -->      win10
2021-08-13 07:18:48,801 INFO:dist:MainThread - Updated the machine table for node: worker5
2021-08-13 07:31:16,826 ERROR:base:fetcher - Exception during reset or similar
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 682, in _finalize_fairy
    fairy._reset(pool)
  File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 887, in _reset
    pool._dialect.do_rollback(self)
  File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/default.py", line 667, in do_rollback
    dbapi_connection.rollback()
sqlite3.ProgrammingError: SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 140303151392512 and this is thread id 140303117821696.
2021-08-13 07:31:16,827 ERROR:base:fetcher - Exception closing connection <sqlite3.Connection object at 0x7f9af68ab030>
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 682, in _finalize_fairy
    fairy._reset(pool)
  File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 887, in _reset
    pool._dialect.do_rollback(self)
  File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/default.py", line 667, in do_rollback
    dbapi_connection.rollback()
sqlite3.ProgrammingError: SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 140303151392512 and this is thread id 140303117821696.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 244, in _close_connection
    self._dialect.do_close(connection)
  File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/default.py", line 673, in do_close
    dbapi_connection.close()
sqlite3.ProgrammingError: SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 140303151392512 and this is thread id 140303117821696.
2021-08-13 07:32:51,947 INFO:dist:StatusThread - [-] worker5 dead
cape@master2:~$
doomedraven commented 3 years ago

NEVER use SQLite with CAPE; it is the worst choice you can make, and for a cluster use PostgreSQL. As you can see, SQLite is not thread safe, it doesn't perform well under load, and it doesn't support DB schema upgrades.

ai-suzuki commented 3 years ago

OK. I set connection = postgresql://cape:**@localhost:5432/cape in cuckoo.conf, but I get the same error.

doomedraven commented 3 years ago

cuckoo.conf has nothing to do with distributed mode. You need to create a database, let's say capedist, and set the connection string in reporting.conf under [distributed]: https://capev2.readthedocs.io/en/latest/usage/dist.html#conf-reporting-conf

But use PostgreSQL for both, because as I said SQLite doesn't support DB upgrades etc., so if we ever need to add a new column you will be doomed with SQLite.

ai-suzuki commented 3 years ago

I'm sorry, I didn't understand... Isn't it just the distributed MongoDB database that was set up here (https://capev2.readthedocs.io/en/latest/usage/dist.html#good-practice-for-production), or should it be PostgreSQL instead?

[distributed]
enabled = no
# save results on master, not analyze binaries
master_storage_only = no
remove_task_on_worker = no
failed_clean = no
# distributed cuckoo database, to store nodes and tasks info
db = sqlite:///dist.db
doomedraven commented 3 years ago

Mongo is only for displaying the data. Did you see the db= line? That is where you need to set it.

ai-suzuki commented 3 years ago

Understood: create a capedist database with PostgreSQL and set it in db =.
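
To close the loop, a minimal sketch of what that could look like, assuming a local PostgreSQL server, the cape DB user mentioned earlier, and a database named capedist (the password is a placeholder):

sudo -u postgres psql -c "CREATE DATABASE capedist OWNER cape;"

and in reporting.conf under [distributed]:

[distributed]
enabled = yes
master_storage_only = yes
remove_task_on_worker = yes
# distributed CAPE database, to store nodes and tasks info
db = postgresql://cape:<password>@localhost:5432/capedist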