So why not just increase the timeout to match your hardware load?
It seems that only the docx and docm extensions take a long time to process. Other extensions finish processing in about 100 seconds, but docx and docm take more than 1000 seconds to complete. Checking the logs does not show any errors that could be the cause.
Any idea what's causing this?
This is the Statistics screen that I checked in the web UI.
Do you have a lot of different YARA rules? How many files are there in that analysis in general, e.g. dropped files, payloads, etc.?
The amount of time spent processing is directly proportional to the amount of data captured during the run, yet there is no mention of what is happening inside the sandbox, nor any example sample hash so we can attempt to reproduce it.
Take a look at the process tree: the number of processes spawned, the number of pages of behavior captured, how many dropped files and payloads, just as doomed says. This is where the answer lies.
The timeout is also very relevant for a sample that spawns a ton of activity. Try a timeout of say 30 seconds and you will see a big difference.
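For example, a quick way to compare is to resubmit the same sample with a short per-task timeout (a minimal sketch; it assumes the standard utils/submit.py --timeout option, which sets the analysis timeout in seconds, and the sample path is a placeholder):
$ cd /opt/CAPEv2
$ poetry run python utils/submit.py --timeout 30 /path/to/sample.docx
A short timeout can likewise be set when submitting through the web UI.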
Thank you both for your reply.
I checked the number of Yara rules.
$ pwd
/opt/CAPEv2/data/yara/CAPE
$ find . -type f | wc -l
487
I also checked the process tree. What information can I get from this screen?
28 files plus the initial one. That is a lot, but still not that much. I guess you just have a lot of random YARA rules put into CAPE? That is the most common problem.
Try scanning that analysis's dropped-files folder directly with yara; you will probably see which YARA rules it is time to remove.
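Something like this rough loop will show which rule files are the slow ones (a sketch; <task_id> is a placeholder for the analysis in question, it assumes the dropped files live under storage/analyses/<task_id>/files, and /usr/bin/time is GNU time):
$ cd /opt/CAPEv2/data/yara/CAPE
$ for rule in *.yar; do /usr/bin/time -f "%e  $rule" yara -r "$rule" /opt/CAPEv2/storage/analyses/<task_id>/files > /dev/null; done 2>&1 | sort -rn | head
The rule files with the largest elapsed times are the candidates to prune.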
The capture of the Edge Update & 'Security Health Host' services is definitely not desirable. These could be contributing to the oversized output. I recommend you disable these services aggressively in your snapshot.
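A minimal sketch of what I mean, run inside the Windows guest as Administrator before re-taking the snapshot (service names can differ per Windows build, and SecurityHealthService is protected on recent builds, so disabling it usually has to go through its registry Start value):
> sc stop edgeupdate
> sc config edgeupdate start= disabled
> sc stop edgeupdatem
> sc config edgeupdatem start= disabled
> reg add HKLM\SYSTEM\CurrentControlSet\Services\SecurityHealthService /v Start /t REG_DWORD /d 4 /f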
Thank you both for your reply.
I still need to investigate the YARA rules, but in the meantime I stopped EdgeUpdate on the VM and checked the docx processing time. As a result, the required time was roughly cut in half: 820 seconds with the service disabled versus 1561 seconds with it enabled.
I will check whether there are any other services that should be disabled.
Yes, you have to disable all the noise; as you can see, your Suricata processing time is 2.5 min.
I'm going to close this since, as you can see, this is a VM configuration problem more than a CAPE problem, but feel free to keep posting in this thread.
Prerequisites
Please answer the following questions for yourself before submitting an issue.
Current Behavior
I changed ram_boost to yes in processing.conf and changed the timeout to 1800 seconds in the ExecStart of cape-processor.service, referring to the following pages, but it did not solve the problem. cape-processor.service and cape.service are started in debug mode with the -d option to help understand the problem.
https://github.com/kevoreilly/CAPEv2/issues/1024
https://github.com/kevoreilly/CAPEv2/issues/331
https://capev2.readthedocs.io/en/latest/usage/performance.html#processing
/opt/CAPEv2/conf/processing.conf
[behavior]
ram_boost = yes

/lib/systemd/system/cape-processor.service
[Service]
ExecStart=/usr/bin/python3 -m poetry run python process.py -d -p20 auto -pt 1800

/lib/systemd/system/cape.service
[Service]
ExecStart=/usr/bin/python3 -m poetry run python cuckoo.py -d
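These unit file edits only take effect after systemd reloads them and the services are restarted, e.g.:
$ sudo systemctl daemon-reload
$ sudo systemctl restart cape-processor.service cape.service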
Failure Information (for bugs)
The following log is output during analysis.
/opt/CAPEv2/log/cuckoo.log
2023-07-04 16:44:25,917 [lib.cuckoo.core.resultserver] DEBUG: Task #149: Cancel <Context for b'BSON'>
2023-07-04 16:44:25,917 [lib.cuckoo.core.resultserver] DEBUG: Task #149: Cancel <Context for b'BSON'>
2023-07-04 16:44:25,917 [lib.cuckoo.core.resultserver] DEBUG: Task #149: Cancel <Context for b'BSON'>
2023-07-04 16:44:25,917 [lib.cuckoo.core.resultserver] DEBUG: Task #149: Cancel <Context for b'BSON'>
2023-07-04 16:44:25,917 [lib.cuckoo.core.resultserver] DEBUG: Task #149: Cancel <Context for b'BSON'>
2023-07-04 16:44:25,918 [lib.cuckoo.core.resultserver] DEBUG: Task #149: Cancel <Context for b'LOG'>
2023-07-04 16:44:25,918 [lib.cuckoo.core.resultserver] DEBUG: Task #149: Cancel <Context for b'BSON'>
2023-07-04 16:44:25,918 [lib.cuckoo.core.resultserver] DEBUG: Task #149: Cancel <Context for b'BSON'>
2023-07-04 16:44:25,918 [lib.cuckoo.core.resultserver] DEBUG: Task #149: Cancel <Context for b'BSON'>
2023-07-04 16:44:25,918 [lib.cuckoo.core.resultserver] DEBUG: Task #149: Cancel <Context for b'BSON'>
2023-07-04 16:44:25,918 [lib.cuckoo.core.resultserver] DEBUG: Task #149: Cancel <Context for b'BSON'>
/opt/CAPEv2/log/process.log
2023-07-04 17:14:22,946 [root] ERROR: Processing Timeout ('Task timeout', 1800). Function: 1800