hello
yara is imported there: https://github.com/kevoreilly/CAPEv2/blob/master/lib/cuckoo/common/objects.py#L37
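A quick sanity check, assuming you run it with the same interpreter (and venv, if any) that uwsgi uses; the printed path shows which site-packages the module loads from:
python3 -c "import yara; print(yara.__file__)"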
Your problem is that you run uwsgi without properly setting up the venv. You need to add something like the following (fixing the path for your install); Google and the uwsgi docs can provide more info about that:
venv = /var/www/venv
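For reference, a minimal sketch of how that line fits into the [uwsgi] section of dist.ini; the paths are placeholders to adjust to your install:
[uwsgi]
chdir = /opt/CAPEv2/utils
venv = /var/www/venv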
Thank you. I tried modifying dist.ini, but it didn't work. It also seems that uwsgi itself needs to be installed with pip inside the virtual environment.
Yes. To ensure that everything works, run with python3 manage.py runserver 0.0.0.0:8000 so you can see whether it is a Python problem or a CAPE problem. A venv is nice, but from my experience it is sometimes just a pain.
python3 manage.py runserver 0.0.0.0:8000
is working fine.
I was facing the same problem as in this issue: https://github.com/unbit/uwsgi/issues/1688
I can't fix it, so maybe I'll try again without a venv.
Uff, let us know how you solve it so I can update the docs. Yeah, it is hard to get it working properly sometimes. I recently tried gunicorn, but it didn't work better for me; it is simpler to configure though, so maybe you want to check it.
OK, I'll let you know.
Btw, pay attention to the plugins; they might be in a different folder if you install in a venv:
plugins-dir = /usr/lib/uwsgi/plugins
plugins = python38
Check that the plugins dir is correct and try adding that to your config as well; uwsgi started working better for me after I set it.
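A quick way to see which plugins are actually present (that directory is the stock Debian/Ubuntu location; a venv install may put them elsewhere):
ls /usr/lib/uwsgi/plugins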
In my environment too, the plugins were in /usr/lib/uwsgi/plugins. I changed the path to /root/venv-cape/plugins, but it didn't work.
I hit the limits of venv, so I rebuilt CAPE directly on the OS (under a cape user instead of root). I can now start uwsgi without the fatal error that plagued me under venv. However, it now says a module is missing.
CAPEv2/modules/processing/parsers/CAPE/PredatorPain.py_disabled.py seems to be present as expected, but is CAPEv2/cuckoo.pyproj relevant? Should I modify it?
cape@cape-master2:~$ sudo uwsgi --ini /opt/CAPEv2/utils/dist.ini
[uWSGI] getting INI configuration from /opt/CAPEv2/utils/dist.ini
*** Starting uWSGI 2.0.18-debian (64bit) on [Wed Aug 4 07:27:17 2021] ***
compiled with version: 10.0.1 20200405 (experimental) [master revision 0be9efad938:fcb98e4978a:705510a708d3642c9c962beb663c476167e4e8a4] on 11 April 2020 11:15:55
os: Linux-5.8.0-63-generic #71~20.04.1-Ubuntu SMP Thu Jul 15 17:46:08 UTC 2021
nodename: cape-master2
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 4
current working directory: /home/cape
detected binary path: /usr/bin/uwsgi-core
chdir() to /opt/CAPEv2/utils
your processes number limit is 15505
your memory page size is 4096 bytes
*** WARNING: you have enabled harakiri without post buffering. Slow upload could be rejected on post-unbuffered webservers ***
detected max file descriptor number: 1048576
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to TCP address 0.0.0.0:9003 fd 3
setuid() to 1001
Python version: 3.8.10 (default, Jun 2 2021, 10:49:15) [GCC 9.4.0]
Python main interpreter initialized at 0x55c643f9daf0
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 229488 bytes (224 KB) for 5 cores
*** Operational MODE: threaded ***
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 9353)
spawned uWSGI worker 1 (pid: 9354, cores: 5)
writing pidfile to /tmp/dist.pid
writing pidfile to /tmp/dist.pid
*** Stats server enabled on 127.0.0.1:9191 fd: 9 ***
mounting dist.py on /
CAPE parser: No module named PredatorPain.py_disabled - No module named 'modules.processing.parsers.CAPE.PredatorPain'
The flask-restful package is required: pip3 install flask-restful
DAMN ! worker 1 (pid: 9354) died :( trying respawn ...
Respawned uWSGI worker 1 (new pid: 9462)
mounting dist.py on /
thanks for feedback
About PredatorPain, don't worry: I renamed it to .py_disabled because I don't want extra dependencies for dead (outdated) extractors. About /cuckoo.pyproj: that is for Visual Studio, so no, not relevant.
Btw, it looks like you have missing dependencies:
The flask-restful package is required: pip3 install flask-restful
flask-restful is installed, but I still get that message.
Hm, then for some reason it can't import it, I guess.
Could you check whether your python packages are installed for the user or for root? You may have mistakenly installed the python packages for the user but not for the root user. Try sudo pip3 install flask-restful
and see if it installs for the root user
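One way to compare the two, assuming pip3 points at the interpreter uwsgi runs; the Location field shows whose site-packages the package landed in:
pip3 show flask-restful # as the current user
sudo pip3 show flask-restful # as root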
True! I had installed it for the cape user... I installed it for root and it works fine. Thank you very much!!!
After that I proceeded with the MongoDB replication settings and tried to register the master and worker as below:
curl http://localhost:9003/node -F name=master -F url=http://<MASTER_IP>:8000/api/
curl http://localhost:9003/node -F name=worker -F url=http://<WORKER_IP>:8000/api/
I got the error below and couldn't tell what was wrong:
{"message": "Invalid CAPE node (http://<WORKER_IP>:8000/apiv2): 'data'"}
Does the -F name=master part need to correspond to something in a conf file?
Have you enabled the machine list endpoint in the API? I think it is disabled by default, and I need to add a note about this in the docs.
You were right. Once I enabled [list_exitnodes] and [machinelist] in api.conf, it worked. Thank you.
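For reference, a minimal sketch of the change, assuming the stock enabled key used throughout api.conf (check your own api.conf for the exact option names):
[machinelist]
enabled = yes
[list_exitnodes]
enabled = yes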
OK, thanks for the feedback, I will update the docs. Btw, just bear in mind that if this will be a public-facing master, disable the machine list so nobody can register your master as their worker.
Thank you, got it. I'll be careful when using it publicly.
Here is also some nice stuff to get more secure and get rid of bots, just in case you make it public: https://capev2.readthedocs.io/en/latest/usage/web.html#some-extra-security-tip-s
Hello. I made the following settings for distributed CAPE, based on the documentation and past posts by others.
・MongoDB distributed setup
mongos console
shards:
{ "_id" : "rs0", "host" : "rs0/10.64.180.154:27017,10.64.180.155:27017", "state" : 1, "topologyTime" : Timestamp(1628559258, 1) }
active mongoses:
"5.0.1" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
・Modified reporting.conf
[mongodb]
host = <MASTER_IP> (worker conf only)
port = 27020
[distributed]
enabled = yes
master_storage_only = yes
remove_task_on_worker = yes
・ Worker node registration
curl http://localhost:9003/node -F name=worker -F url=http://<workerIP>:8000/apiv2/
However, even when I upload multiple files to the master, they are not distributed to the workers.
Also, even if I specify the worker node via the master API as below and upload a file, it still seems to be analyzed on the master VM.
curl -F file=@<pathtofile> -F options="node=worker" http://localhost:8000/apiv2/tasks/create/file/
So I have questions: ① Is there any other conf I should look at? ② My understanding of distributed CAPE is that files uploaded to the master get distributed to the workers. Is that correct?
① The one conf for everything is reporting.conf. I basically use the master as a web GUI to see reports, and all the powerful servers are workers (I totally forgot about a case like yours, or capesandbox.com, where the master is also a worker).
② As it is done right now, if the master is running CAPE it will consume tasks and doesn't really check whether a task is meant for a worker (easy to fix, already looking at this). Give me a few minutes.
@ai-suzuki do a git pull, it was done here: if node= is in the options, the master won't pick up that task anymore :) Thanks for reminding me to finish this. https://github.com/kevoreilly/CAPEv2/commit/4170f6ee29ab2201d114d389fc39b416b731c143
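Conceptually, the change boils down to a check like the hypothetical helper below; this is an illustration of the idea, not the literal code from the commit:
# hypothetical sketch: a task whose options pin it to a dist node
# should be skipped by the master's own scheduler and left for dist.py
def is_for_worker(options: str) -> bool:
    return "node=" in (options or "")

print(is_for_worker("node=worker5"))  # True  -> master leaves it pending
print(is_for_worker("procdump=1"))    # False -> master analyzes it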
thank you! I will pull
You are welcome, let me know if anything comes up.
I'm sorry, it seems tasks are still going to the master. Is there something wrong with my settings...?
Did you restart the CAPE service after the git pull?
yes
Weird, for me it shows no tasks for the master if node=x is set in the options. I will try to check that, but that will have to be later, I have my $dayjob to do.
OK. It's not just when node=x is specified; even when I uploaded multiple files to the master, they were not distributed. So I will also review my settings... Thank you, I know you're busy.
Btw, when you set node = X (with spaces) it won't work, as the code checks for node=X, so I guess that is the case; can you try it?
That's right... but even when I set node=X (my node is worker5) on the master, it doesn't work; the task is sent to the master's CAPE:
curl -F file=@test.txt -F node="worker5" http://localhost:8000/apiv2/tasks/create/file/
When node=X is set, the task stays pending on the master and is no longer analyzed there, but it does not go to the worker either.
By the way, is it necessary to use different names and IPs for the Windows virtual machines on the master and the workers?
Maybe you set the node= option incorrectly? It should use the node name. And no, no need for different names, as they are different machines anyway.
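For comparison, the submission form used earlier in this thread passes the node inside options rather than as its own form field, e.g.:
curl -F file=@test.txt -F options="node=worker5" http://localhost:8000/apiv2/tasks/create/file/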
node= uses a unique node name. (Nodes I registered along the way don't disappear, so there are several of them, but the node I want to use now is worker5.)
Since it shows "enabled": false, is that why the analysis does not start?
cape@master2:/opt/CAPEv2/utils$ python3 dist.py --node worker5
You using old version of sflock! Upgrade: pip3 install -U SFlock2
2021-08-13 07:18:48,785 INFO:dist:MainThread - Available VM's on worker5:
2021-08-13 07:18:48,797 INFO:dist:MainThread - --> win10
2021-08-13 07:18:48,801 INFO:dist:MainThread - Updated the machine table for node: worker5
cape@master2:/opt/CAPEv2/utils$ curl http://localhost:9003/node
{"nodes": {"master": {"name": "master", "url": "http://localhost:8000/apiv2/", "machines": [{"name": "win10", "platform": "windows", "tags": ["x64"]}], "enabled": false}, "worker": {"name": "worker", "url": "http://10.64.180.155:8000/apiv2/", "machines": [{"name": "win10-worker", "platform": "windows", "tags": ["x64"]}], "enabled": false}, "worker2": {"name": "worker2", "url": "http://10.64.180.155:8000/apiv2/", "machines": [{"name": "win10-worker", "platform": "windows", "tags": ["x64"]}], "enabled": false}, "worker3": {"name": "worker3", "url": "http://10.64.180.155:8000/apiv2/", "machines": [{"name": "win10-worker", "platform": "windows", "tags": ["x64"]}], "enabled": false}, "worker4": {"name": "worker4", "url": "http://10.64.180.155:8000/apiv2/", "machines": [{"name": "win10", "platform": "windows", "tags": ["x64"]}], "enabled": false}, "worker5": {"name": "worker5", "url": "http://10.64.180.155:8000/apiv2/", "machines": [{"name": "win10", "platform": "windows", "tags": ["x64"]}], "enabled": false}}}
Exactly. When you register a worker it is added in disabled mode just in case, so you need to enable it; I guess this is why the analysis doesn't go to the worker.
Oh, I had to do that... I ran
sudo python3 dist.py --node worker5 --enable
and enabled is now true, but nothing changes... It should stay pending on the master and then be sent to the worker, right?
If you set node=x it will be pending on the master. Well, I can't really help as I don't have access to your system, but check here to see why it doesn't return those tasks: https://github.com/kevoreilly/CAPEv2/blob/master/utils/dist.py#L686
I looked at dist.log. The SQL looks strange...?
2021-08-13 07:13:20,529 INFO:dist:Retriever - Thread: free_space_mon - Alive: True
2021-08-13 07:16:10,799 ERROR:base:fetcher - Exception during reset or similar
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 682, in _finalize_fairy
fairy._reset(pool)
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 887, in _reset
pool._dialect.do_rollback(self)
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/default.py", line 667, in do_rollback
dbapi_connection.rollback()
sqlite3.ProgrammingError: SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 140303504033536 and this is thread id 140303117821696.
2021-08-13 07:16:10,802 ERROR:base:fetcher - Exception closing connection <sqlite3.Connection object at 0x7f9af68ab030>
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 682, in _finalize_fairy
fairy._reset(pool)
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 887, in _reset
pool._dialect.do_rollback(self)
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/default.py", line 667, in do_rollback
dbapi_connection.rollback()
sqlite3.ProgrammingError: SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 140303504033536 and this is thread id 140303117821696.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 244, in _close_connection
self._dialect.do_close(connection)
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/default.py", line 673, in do_close
dbapi_connection.close()
sqlite3.ProgrammingError: SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 140303504033536 and this is thread id 140303117821696.
2021-08-13 07:16:20,838 ERROR:base:StatusThread - Exception during reset or similar
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 682, in _finalize_fairy
fairy._reset(pool)
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 887, in _reset
pool._dialect.do_rollback(self)
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/default.py", line 667, in do_rollback
dbapi_connection.rollback()
sqlite3.ProgrammingError: SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 140303151392512 and this is thread id 140303529473792.
2021-08-13 07:16:20,839 ERROR:base:StatusThread - Exception closing connection <sqlite3.Connection object at 0x7f9af68ab210>
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 682, in _finalize_fairy
fairy._reset(pool)
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 887, in _reset
pool._dialect.do_rollback(self)
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/default.py", line 667, in do_rollback
dbapi_connection.rollback()
sqlite3.ProgrammingError: SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 140303151392512 and this is thread id 140303529473792.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 244, in _close_connection
self._dialect.do_close(connection)
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/default.py", line 673, in do_close
dbapi_connection.close()
sqlite3.ProgrammingError: SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 140303151392512 and this is thread id 140303529473792.
2021-08-13 07:18:48,785 INFO:dist:MainThread - Available VM's on worker5:
2021-08-13 07:18:48,797 INFO:dist:MainThread - --> win10
2021-08-13 07:18:48,801 INFO:dist:MainThread - Updated the machine table for node: worker5
2021-08-13 07:31:16,826 ERROR:base:fetcher - Exception during reset or similar
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 682, in _finalize_fairy
fairy._reset(pool)
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 887, in _reset
pool._dialect.do_rollback(self)
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/default.py", line 667, in do_rollback
dbapi_connection.rollback()
sqlite3.ProgrammingError: SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 140303151392512 and this is thread id 140303117821696.
2021-08-13 07:31:16,827 ERROR:base:fetcher - Exception closing connection <sqlite3.Connection object at 0x7f9af68ab030>
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 682, in _finalize_fairy
fairy._reset(pool)
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 887, in _reset
pool._dialect.do_rollback(self)
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/default.py", line 667, in do_rollback
dbapi_connection.rollback()
sqlite3.ProgrammingError: SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 140303151392512 and this is thread id 140303117821696.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 244, in _close_connection
self._dialect.do_close(connection)
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/default.py", line 673, in do_close
dbapi_connection.close()
sqlite3.ProgrammingError: SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 140303151392512 and this is thread id 140303117821696.
2021-08-13 07:32:51,947 INFO:dist:StatusThread - [-] worker5 dead
cape@master2:~$
NEVER use SQLite in CAPE; it is the worst option you can pick, and for a cluster use PostgreSQL. As you can see, SQLite is not thread safe, doesn't perform well under load, and doesn't support db upgrades.
OK. I set cuckoo.conf to connection = postgresql://cape:**@localhost:5432/cape, but I get the same error.
cuckoo.conf has nothing to do with distributed mode. You need to create a db, let's say capedist, and set the connection in reporting.conf under [distributed]: https://capev2.readthedocs.io/en/latest/usage/dist.html#conf-reporting-conf
But use PostgreSQL for both; as I said, SQLite doesn't support db upgrades etc., so if we ever need to add a new column you will be doomed with SQLite.
I'm sorry, I didn't understand... Isn't this just the distributed MongoDB database that was configured here (https://capev2.readthedocs.io/en/latest/usage/dist.html#good-practice-for-production), or does this part need to be PostgreSQL?
[distributed]
enabled = no
# save results on master, not analyze binaries
master_storage_only = no
remove_task_on_worker = no
failed_clean = no
# distributed cuckoo database, to store nodes and tasks info
db = sqlite:///dist.db
Mongo is just to show the data. Did you see the db= line? That is where you need to set it: create a capedist db with PostgreSQL and point db= to it.
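A minimal sketch of those two steps, assuming a local PostgreSQL server and an existing cape role (user, password, and db name are placeholders):
sudo -u postgres psql -c "CREATE DATABASE capedist OWNER cape;"
Then in reporting.conf under [distributed]:
db = postgresql://cape:<PASSWORD>@localhost:5432/capedist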
Expected Behavior
Ultimately, I want to build a distributed CAPE. I created dist.ini for that, so I want to run the API and add a node from a worker.
Current Behavior
I set things up with reference to this manual: https://capev2.readthedocs.io/en/latest/usage/dist.html?highlight=dist
When I start the API with the following command, I get an error that yara is not defined. Of course, the worker isn't recognized either...
uwsgi --ini /opt/CAPEv2/utils/dist.ini
yara-python is installed.
yara can also be imported.
However, when I run the API, the following debug output appears, and I can't even try to add a node from a worker:
curl http://X.X.X.X:9003/node -F name=worker -F url=http://10.64.180.161:8000/apiv2/
dist.ini is set like this.