Closed: marsomx closed this issue 6 months ago
Task #1: Analysis results folder already exists at path '/opt/CAPEv2/storage/analyses/1', analysis aborted
That's all, you didn't clean your CAPE properly.
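(For anyone landing here with the same abort: a minimal manual cleanup sketch. The `clean_task` helper name is mine, not part of CAPE, and the default install path `/opt/CAPEv2` is assumed; `utils/cleaners.py --clean` remains the supported way to also clear the database rows.)

```shell
# Sketch: remove a stale results folder so the scheduler stops aborting
# with "Analysis results folder already exists". Assumes the default
# storage layout; override CAPE_ROOT when testing elsewhere.
clean_task() {
    dir="${CAPE_ROOT:-/opt/CAPEv2}/storage/analyses/$1"
    if [ -d "$dir" ]; then
        rm -rf "$dir"
        echo "removed $dir"
    else
        echo "nothing to remove at $dir"
    fi
}
```

Run it as the cape user (e.g. `sudo -u cape sh -c '. ./clean_task.sh; clean_task 1'`) so file ownership stays consistent.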
On Sun, 21 Apr 2024 at 12:14, br0pi @.***> wrote:
Expected Behavior
Along with static analysis I also expected behavioral and network analysis, but got no results. I set up a Win10 VM on KVM as the guide suggested.
Current Behavior
I got only static analysis. It seems the analysis is not started in the Win10 virtual machine.
Steps to Reproduce
- I submitted a sample to the win10 VM with this command:
sudo -u cape poetry run python utils/submit.py --machine win10 /tmp/rfqmemdump.exe
- I got static analysis.
- No results for behavioral and network analysis, even with the win10 machine up and running, and even though I got this message: Success: File "/tmp/rfqmemdump.exe" added as task with ID 1
- I found the same error in the log file, as reported below.
Context
Sorry if I messed up the config files; I tried to follow the documentation in every part. I also tried changing the network configuration option from host-only to NAT, modifying the routing and cuckoo configurations, but no luck. The config files:
cuckoo
[cuckoo]
Which category of tasks do you want to analyze?
categories = static, pcap, url, file
If turned on, Cuckoo will delete the original file after its analysis
has been completed.
delete_original = off
Archives are not deleted by default, as the archive is extracted and the "original file" becomes the extracted file.
delete_archive = on
If turned on, Cuckoo will delete the copy of the original file in the
local binaries repository after the analysis has finished. (On *nix this
will also invalidate the file called "binary" in each analysis directory,
as this is a symlink.)
delete_bin_copy = off
Specify the name of the machinery module to use, this module will
define the interaction between Cuckoo and your virtualization software
of choice.
machinery = kvm
Enable screenshots of analysis machines while running.
machinery_screenshots = off
Specify if a scaling bounded semaphore should be used by the scheduler for tasking the VMs.
This is only applicable to auto-scaling machineries such as Azure and AWS.
There is a specific configuration key in each machinery that is used to initialize the semaphore.
For Azure, this configuration key is "total_machines_limit"
For AWS, this configuration key is "dynamic_machines_limit"
scaling_semaphore = off
A configurable wait time between updating the limit value of the scaling bounded semaphore
scaling_semaphore_update_timer = 10
Allow more than one task scheduled to be assigned at once for better scaling
A switch to allow batch task assignment, a method that can more efficiently assign tasks to available machines
batch_scheduling = off
The maximum number of tasks assigned to machines per batch, optimal value dependent on deployment
max_batch_count = 20
Enable creation of memory dump of the analysis machine before shutting
down. Even if turned off, this functionality can also be enabled at
submission. Currently available for: VirtualBox and libvirt modules (KVM).
memory_dump = off
When the timeout of an analysis is hit, the VM is just killed by default.
For some long-running setups it might be interesting to terminate the
monitored processes before killing the VM so that connections are closed.
terminate_processes = off
Enable automatically re-schedule of "broken" tasks each startup.
Each task found in status "processing" is re-queued for analysis.
reschedule = off
Fail "unserviceable" tasks as they are queued.
Any task found that will never be analyzed based on the available analysis machines
will have its status set to "failed".
fail_unserviceable = on
Limit the amount of analysis jobs a Cuckoo process goes through.
This can be used together with a watchdog to mitigate risk of memory leaks.
max_analysis_count = 0
Limit the number of concurrently executing analysis machines.
This may be useful on systems with limited resources.
Set to 0 to disable any limits.
max_machines_count = 10
Limit the amount of VMs that are allowed to start in parallel. Generally
speaking starting the VMs is one of the more CPU intensive parts of the
actual analysis. This option tries to avoid maxing out the CPU completely.
This configuration option is only relevant for machineries that have a set
amount of VMs and are restricted by CPU usage.
If you are using an auto-scaling machinery such as Azure or AWS,
set this value to 0.
max_vmstartup_count = 5
Minimum amount of free space (in MB) available before starting a new task.
This tries to avoid failing an analysis because the reports can't be written
due to out-of-diskspace errors. Setting this value to 0 disables the check.
(Note: this feature is currently not supported under Windows.)
freespace = 0
Keep processing tasks, but only while at least this much free space (in MB) remains.
freespace_processing = 15000
Temporary directory containing the files uploaded through Cuckoo interfaces
(web.py, api.py, Django web interface).
tmppath = /tmp
Delta in days from current time to set the guest clocks to for file analyses
A negative value sets the clock back, a positive value sets it forward.
The default of 0 disables this option
Note that this can still be overridden by the per-analysis clock setting
and it is not performed by default for URL analysis as it will generally
result in SSL errors
daydelta = 0
Path to the unix socket for running root commands.
rooter = /tmp/cuckoo-rooter
Enable if you want to see a DEBUG log periodically containing backlog of pending tasks, locked vs unlocked machines.
NOTE: Enabling this feature adds 4 database calls every 10 seconds.
periodic_log = off
Max filename length for submissions, before truncation. 196 is arbitrary.
max_len = 196
If it is greater than this, truncate the filename further for sanitizing purposes.
Length truncated to is controlled by sanitize_to_len.
#
This is to prevent long filenames such as files named by hash.
sanitize_len = 32
sanitize_to_len = 24
[resultserver]
The Result Server is used to receive in real time the behavioral logs
produced by the analyzer.
Specify the IP address of the host. The analysis machines should be able
to contact the host through such address, so make sure it's valid.
NOTE: if you set resultserver IP to 0.0.0.0 you have to set the option
resultserver_ip for all your virtual machines in machinery configuration.
ip = 192.168.1.39
Specify a port number to bind the result server on.
port = 2042
Force the port chosen above, don't try another one (we can select another
port dynamically if we can not bind this one, but that is not an option
in some setups)
force_port = yes
pool_size = 0
Should the server write the legacy CSV format?
(if you have any custom processing on those, switch this on)
store_csvs = off
Maximum size of uploaded files from VM (screenshots, dropped files, log)
The value is expressed in bytes, by default 100MB.
upload_max_size = 500000000
To enable trimming of huge binaries go to -> web.conf -> general -> enable_trim
Prevent upload of files that passes upload_max_size?
do_upload_max_size = no
[processing]
Set the maximum size of analyses generated files to process. This is used
to avoid the processing of big files which may take a lot of processing
time. The value is expressed in bytes, by default 200MB.
analysis_size_limit = 200000000
Enable or disable DNS lookups.
resolve_dns = on
Enable or disable reverse DNS lookups
This information currently is not displayed in the web interface
reverse_dns = off
Enable PCAP sorting, needed for the connection content view in the web interface.
sort_pcap = on
[database]
Specify the database connection string.
Examples, see documentation for more:
sqlite:///foo.db
@.***:5432/mydatabase
@.***/mydatabase
If empty, default is a SQLite in db/cuckoo.db.
SQLite doesn't support database upgrades!
For production we strongly suggest going with PostgreSQL.
connection = @.***:5432/cape
If you use PostgreSQL: SSL mode
https://www.postgresql.org/docs/current/libpq-ssl.html#LIBPQ-SSL-SSLMODE-STATEMENTS
psql_ssl_mode = disable
Database connection timeout in seconds.
If empty, default is set to 60 seconds.
timeout =
Log all SQL statements issued to the database.
log_statements = off
[timeouts]
Set the default analysis timeout expressed in seconds. This value will be
used to define after how many seconds the analysis will terminate unless
otherwise specified at submission.
default = 200
Set the critical timeout expressed in (relative!) seconds. It will be added
to the default timeout above and after this timeout is hit
Cuckoo will consider the analysis failed and it will shutdown the machine
no matter what. When this happens the analysis results will most likely
be lost.
critical = 60
Maximum time to wait for virtual machine status change. For example when
shutting down a vm. Default is 300 seconds.
vm_state = 300
[tmpfs]
Only if you are using volatility, to speed up IO:
mkdir -p /mnt/tmpfs
mount -t tmpfs -o size=50g ramfs /mnt/tmpfs
chown cape:cape /mnt/tmpfs
#
vim /etc/fstab
tmpfs /mnt/tmpfs tmpfs nodev,nosuid,noexec,nodiratime,size=50g 0 0
#
Add crontab with
@reboot chown cape:cape /mnt/tmpfs -R
enabled = off
path = /mnt/tmpfs/
in mb
freespace = 2000
[cleaner]
Invoke cleanup when detected free space is at or below the threshold; see/set freespace/freespace_processing.
enabled = no
Set any value to 0 to disable it. In days.
binaries_days = 5
tmp_days = 5
Remove analysis folder
analysis_days = 5
Delete mongo data
mongo = no
kvm
[kvm]
Specify a comma-separated list of available machines to be used. For each
specified ID you have to define a dedicated section containing the details
on the respective machine. (E.g. cuckoo1,cuckoo2,cuckoo3)
machines = cuckoo1
interface = virbr0
To connect to local or remote host
dsn = qemu:///system
To allow copy & paste. For details see example below
[cape1]
label = cape1
platform = windows
ip = 192.168.122.105
arch = x86
tags = winxp,acrobat_reader_6
snapshot = Snapshot1
resultserver_ip = 192.168.122.101
reserved = no
[cuckoo1]
Specify the label name of the current machine as specified in your
libvirt configuration.
label = win10
Specify the operating system platform used by current machine
[windows/darwin/linux].
platform = windows
Specify the IP address of the current virtual machine. Make sure that the
IP address is valid and that the host machine is able to reach it. If not,
the analysis will fail. You may want to configure your network settings in
/etc/libvirt/
/networks/
ip = 192.168.122.127
Specify tags to display
Tags may be used to specify on which guest machines a sample should be run
NOTE - One of the following OS version tags MUST be included for Windows VMs:
winxp,win7, win10, win11
Some samples will only detonate on specific versions of Windows (see web.conf packages for more info)
Example: MSIX - Windows >= 10
tags = winxp,acrobat_reader_6
(Optional) Specify the snapshot name to use. If you do not specify a snapshot
name, the KVM MachineManager will use the current snapshot.
Example (Snapshot1 is the snapshot name):
snapshot = Snapshot1
(Optional) Specify the name of the network interface that should be used
when dumping network traffic from this machine with tcpdump. If specified,
overrides the default interface specified in auxiliary.conf
Example (virbr0 is the interface name):
interface = virbr1
(Optional) Specify the IP of the Result Server, as your virtual machine sees it.
The Result Server will always bind to the address and port specified in cuckoo.conf,
however you could set up your virtual network to use NAT/PAT, so you can specify here
the IP address for the Result Server as your machine sees it. If you don't specify an
address here, the machine will use the default value from cuckoo.conf.
NOTE: if you set this option you have to set result server IP to 0.0.0.0 in cuckoo.conf.
Example:
resultserver_ip = 192.168.122.101
(Optional) Specify the port for the Result Server, as your virtual machine sees it.
The Result Server will always bind to the address and port specified in cuckoo.conf,
however you could set up your virtual network to use NAT/PAT, so you can specify here
the port for the Result Server as your machine sees it. If you don't specify a port
here, the machine will use the default value from cuckoo.conf.
Example:
resultserver_port = 2042
Set the machine architecture
Required to auto select proper machine architecture for sample
x64 or x86
arch = x64
(Optional) Specify whether or not the machine should be reserved, meaning that it will
only be used for a detonation if specifically requested by its label.
reserved = no
routing
[routing]
Enable pcap generation for non live connections?
If you have huge number of VMs, pcap generation can be a bottleneck
enable_pcap = no
Default network routing mode; "none", "internet", or "vpn_name".
In none mode we don't do any special routing - the VM doesn't have any
network access (this has been the default actually for quite a while).
In internet mode by default all the VMs will be routed through the network
interface configured below (the "dirty line").
And in VPN mode by default the VMs will be routed through the VPN identified
by the given name of the VPN.
Note that just like enabling VPN configuration setting this option to
anything other than "none" requires one to run utils/rooter.py as root next
to the CAPE instance (as it's required for setting up the routing).
route = internet
Network interface that allows a VM to connect to the entire internet, the
"dirty line" so to say. Note that, just like with the VPNs, this will allow
malicious traffic through your network. So think twice before enabling it.
(For example, to route all VMs through eth0 by default: "internet = eth0").
internet = none
Routing table name/id for "dirty line" interface. If "dirty line" is
also default gateway in the system you can leave "main" value. Otherwise add
new routing table by adding an "<id> <name>" line to /etc/iproute2/rt_tables (e.g., "200 eth0"). ID and name must be unique across the system (refer to
/etc/iproute2/rt_tables for existing names and IDs).
rt_table = main
When using "dirty line", you can reject forwarding to a certain network segment.
For example, a request targeting 192.168.12.1/24,172.16.22.1/24 will not be
forwarded, but will be rejected:
"reject_segments = 192.168.12.1/24,172.16.22.1/24"
reject_segments = none
When using "dirty line", you can reject guest access to certain ports.
For example, a request targeting host's port 8000 and 8080 will be rejected:
"reject_hostports = 8000,8080"
reject_hostports = none
To route traffic through multiple network interfaces CAPE uses
Policy Routing with separate routing table for each output interface
(VPN or "dirty line"). If this option is enabled CAPE on start will try
to automatically initialise routing tables by copying routing entries from
main routing table to the new routing tables. Depending on your network/vpn
configuration this might not be sufficient. In such case you would need to
initialise routing tables manually. Note that enabling this option won't
affect main routing table.
auto_rt = no
The drop route basically drops any outgoing network (except for CAPE
traffic) whereas the regular none route still allows a VM to access its own
subnet (e.g., 192.168.122.1/24). It is disabled by default as it does require
the optional rooter to run (unlike the none route, where literally nothing
happens). One can either explicitly enable the drop route or if the rooter
is enabled anyway, it is automatically enabled.
drop = no
Should check if the interface is up
verify_interface = yes
Should check if rt_table exists before initializing
verify_rt_table = yes
[inetsim]
InetSim quick deploy; choose your VM manager if it is not KVM
wget https://googledrive.com/host/0B6fULLT_NpxMQ1Rrb1drdW42SkE/remnux-6.0-ova-public.ova
tar xvf remnux-6.0-ova-public.ova
qemu-img convert -O qcow2 REMnuxV6-disk1.vmdk remnux.qcow2
enabled = no
server = 192.168.122.1
dnsport = 53
interface = virbr0
Redirect TCP ports (should we also support UDP?). If specified, this should
represent whitespace-separated src:dst pairs. E.g., "80:8080 443:8080" will
redirect all 80/443 traffic to 8080 on the specified InetSim host.
Source port range redirection is also supported. E.g., "996-2041:80" will
redirect all traffic directed at ports between 996 and 2041 inclusive to port 80
on the specified InetSim host.
ports =
[tor]
enabled = no
dnsport = 5353
proxyport = 9040
interface = virbr1
[vpn]
By default we disable VPN support as it requires running utils/rooter.py as
root next to cuckoo.py (which should run as regular user).
enabled = no
select one of the configured vpns randomly
random_vpn = no
Comma-separated list of the available VPNs.
vpns = vpn0
[vpn0]
Name of this VPN. The name is represented by the filepath to the
configuration file, e.g., cuckoo would represent /etc/openvpn/cuckoo.conf
Note that you can't assign the names "none" and "internet" as those would
conflict with the routing section in cuckoo.conf.
name = vpn0
The description of this VPN which will be displayed in the web interface.
Can be used to for example describe the country where this VPN ends up.
description = openvpn_tunnel
The tun device hardcoded for this VPN. Each VPN must be configured to use
a hardcoded/persistent tun device by explicitly adding the line "dev tunX"
to its configuration (e.g., /etc/openvpn/vpn1.conf) where X in tunX is a
unique number between 0 and your lucky number of choice.
interface = tun0
Routing table name/id for this VPN. If table name is used it must be
added to /etc/iproute2/rt_tables as an "<id> <name>" line (e.g., "201 tun0"). ID and name must be unique across the system (refer to /etc/iproute2/rt_tables
for existing names and IDs).
rt_table = tun0
[socks5]
By default we disable socks5 support as it requires running utils/rooter.py as
root next to cuckoo.py (which should run as regular user).
enabled = no
select one of the configured socks5 proxies randomly
random_socks5 = no
Comma-separated list of the available proxies.
proxies = socks_ch
[socks_ch]
name = ch_socks
description = ch_socks
proxyport = 5008
dnsport = 10053
Failure Logs
cuckoo log
2024-04-21 09:30:24,739 [lib.cuckoo.core.scheduler] INFO: Using "kvm" machine manager with max_analysis_count=0, max_machines_count=10, and max_vmstartup_count=5
2024-04-21 09:30:24,744 [lib.cuckoo.core.scheduler] INFO: Loaded 1 machine/s
2024-04-21 09:30:24,748 [lib.cuckoo.core.scheduler] INFO: Waiting for analysis tasks
2024-04-21 09:36:27,688 [lib.cuckoo.core.scheduler] ERROR: Task #1: Analysis results folder already exists at path '/opt/CAPEv2/storage/analyses/1', analysis aborted
2024-04-21 09:36:27,716 [lib.cuckoo.core.scheduler] INFO: Task #1: Starting analysis of FILE '/tmp/cuckoo-tmp/upload_51n1fona/rfqmemdump.exe'
2024-04-21 09:36:27,717 [lib.cuckoo.core.scheduler] INFO: Task #1: analysis procedure completed
process log
2024-04-21 09:36:46,886 [Task 1] [modules.processing.analysisinfo] CRITICAL: Failed to get start/end time from Task
2024-04-21 09:36:46,904 [Task 1] [modules.processing.behavior] WARNING: Analysis results folder does not exist at path "/opt/CAPEv2/storage/analyses/1/logs"
2024-04-21 09:36:46,907 [Task 1] [lib.cuckoo.core.plugins] INFO: Logs folder doesn't exist, maybe something with with analyzer folder, any change?
Can anyone help me?
@doomedraven Thanks for the reply... I had an error with poetry run python cleaners.py --clean
due to a permission-denied error when the script tried to remove some .pyc files in the __pycache__ folders. So I removed those files manually, and re-running the above command succeeded. I also checked Postgres and all tasks were deleted. But nevertheless I get some errors in the process log:
2024-04-22 06:04:08,889 [Task 1] [modules.processing.analysisinfo] CRITICAL: Failed to get start/end time from Task
2024-04-22 06:04:09,001 [Task 1] [modules.processing.behavior] WARNING: Analysis results folder does not exist at path "/opt/CAPEv2/storage/analyses/1/logs"
2024-04-22 06:04:09,005 [Task 1] [lib.cuckoo.core.plugins] INFO: Logs folder doesn't exist, maybe something with with analyzer folder, any change?
2024-04-22 06:04:09,228 [Task 1] [lib.cuckoo.core.plugins] ERROR: Failed to run the reporting module "CAPASummary": 'NoneType' object has no attribute 'enabled'
Traceback (most recent call last):
File "/opt/CAPEv2/utils/../lib/cuckoo/core/plugins.py", line 738, in process
current.run(self.results)
File "/opt/CAPEv2/utils/../modules/reporting/flare_capa_summary.py", line 26, in run
if HAVE_FLARE_CAPA and self.options.flare_capa_summary.enabled and not self.options.flare_capa_summary.on_demand:
AttributeError: 'NoneType' object has no attribute 'enabled'
and also the same error in the cuckoo log, even though no analysis had been performed yet:
2024-04-22 05:33:27,587 [lib.cuckoo.core.scheduler] INFO: Using "kvm" machine manager with max_analysis_count=0, max_machines_count=10, and max_vmstartup_count=5
2024-04-22 05:33:27,590 [lib.cuckoo.core.scheduler] INFO: Loaded 1 machine/s
2024-04-22 05:33:27,592 [lib.cuckoo.core.scheduler] INFO: Waiting for analysis tasks
2024-04-22 06:04:04,060 [lib.cuckoo.core.scheduler] ERROR: Task #1: Analysis results folder already exists at path '/opt/CAPEv2/storage/analyses/1', analysis aborted
2024-04-22 06:04:04,075 [lib.cuckoo.core.scheduler] INFO: Task #1: Starting analysis of FILE '/tmp/cuckoo-tmp/upload_8kpl83ir/rfqmemdump.exe'
2024-04-22 06:04:04,099 [lib.cuckoo.core.scheduler] INFO: Task #1: analysis procedure completed
Could you please tell me if my configuration files are set correctly, or is there something I missed? Thanks in advance.
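As an aside, the permission-denied `.pyc` cleanup described above can be scripted; a sketch (the `clear_pycache` name is mine, and the default repo root is only the path used in this thread):

```shell
# Sketch: clear __pycache__ directories that make cleaners.py fail with
# "permission denied" when the cached .pyc files are owned by another user.
clear_pycache() {
    root="${1:-/opt/CAPEv2}"
    # -prune stops find from descending into the directory it is about
    # to delete; -exec ... + batches the removals.
    find "$root" -type d -name __pycache__ -prune -exec rm -rf {} +
    echo "pycache cleared under $root"
}
```

Running it as root (or the owner of the stray files) avoids the permission errors; afterwards `cleaners.py --clean` should complete normally.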
do you have conf/processing.conf?
@doomedraven I didn't touch processing.conf (except for the VT key), so it is pretty much the same as the default file. Anyway, I tried to launch CAPE with the debug option and got two errors generated by scheduler.py:
init storage: Task #%s: Analysis results folder already exists at path '%s', analysis aborted", self.task.id, self.storage)
even though I cleaned CAPE as I said before, and
acquire_machine: Task #%s: no machine available yet for machine '%s', platform '%s' or tags '%s'.
I checked the kvm and cuckoo config files many times, but no luck.
No, does the file exist in the conf folder, or is it only in conf/default?
It is under /config: /opt/CAPEv2/config/processing.conf
config is the wrong name; rename the folder to conf. I guess that's part of the problem, but it should still load the original default config. I can't properly check this, as I'm on my phone these days.
Sorry, it was a typo... it is under the conf folder: conf/processing.conf... I reproduced the clean step again, but I got the same error:
Analysis results folder already exists at path '/opt/CAPEv2/storage/analyses/1', analysis aborted
but the static analysis completes anyway. Meanwhile it enters this loop:
# Starts a loop to acquire a machine on which to run the analysis.
while True:
    machine_lock.acquire()
which generates the "not machinery.availables" error, as I posted earlier.
You definitely have some problems with your setup; I would suggest reinstalling.
I'm having this same exact error on a clean reinstall of CAPE (with my custom config). There seems to be an issue installing the flare-capa module, but even after disabling CAPA everywhere in the config this still happens.
On my prod installation, which has an older version of CAPE, it still works fine with the same config. I couldn't pinpoint when exactly it broke; it might have to do with things outside the repo (packages or something like that).
In cuckoo.py: (screenshot omitted)
In process.py: (screenshot omitted)
The analysis goes into processing mode immediately after being launched and has no behavioral/network results.
The machine works and the agent is running, and the CAPE server has connectivity to it (checked with curl agent_ip:8000).
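That connectivity check can be wrapped in a small helper; a sketch (the `check_agent` name is mine; port 8000 is just the agent port mentioned in this thread, so pass whatever your kvm.conf uses):

```shell
# Sketch: confirm the in-guest agent answers on its port before digging
# into CAPE itself. -f fails on HTTP errors, -m bounds the wait.
check_agent() {
    ip="$1"
    port="${2:-8000}"
    if curl -fsS -m 5 "http://$ip:$port/" >/dev/null 2>&1; then
        echo "agent reachable at $ip:$port"
    else
        echo "agent NOT reachable at $ip:$port"
    fi
}
```

If this reports the agent as unreachable from the CAPE host while the VM is at its snapshot, the problem is networking (bridge, firewall, or guest agent not started), not the scheduler.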
That's interesting. Investigation will be delayed, as we are at a conference and after that I'm on vacation.
@dfr-fands Thanks for the reply... at least I know it is not a problem with my config files. @doomedraven I will be waiting for your check... thanks.
For me, the "no machine available yet for machine" error is caused by bfce3fdda4803605a8e65bdbbd88cb062e8655d2 and 8ecbf33d84aeb0b23a82017dce0215534b37bb54. Reverting them fixes it for me. The problem is that these commits skip the is_relevant_machine_available() call that should set the "scheduled" status in the database:
https://github.com/kevoreilly/CAPEv2/blob/d93258abd35f4dfec4b43a5b228695916615ea69/lib/cuckoo/core/database.py#L908
That status is checked during acquire(), and because the label is not set, it can't acquire the machine:
https://github.com/kevoreilly/CAPEv2/blob/d93258abd35f4dfec4b43a5b228695916615ea69/lib/cuckoo/core/database.py#L1179-L1180
Oh, thank you. There is a typo: "file" shouldn't be in that list. I have removed it from there; try to git pull and now it should be solved.
It works for me, but the last word is @br0pi 's
I confirm that it works as expected, thanks to everyone who contributed to this issue.
Before I close the case, I would like to ask if anyone can clarify how to get the 4 characters that correspond to real hardware, in order to replace the WOOT value in the KVM install file. I got the dsdt.dat file from acpidump, then I ran the iasl -d dsdt.dat command, but I am not so sure that I've identified the correct value. Thanks in advance.
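I can't say authoritatively which value is right for your hardware, but the header comment that iasl emits at the top of the decompiled dsdt.dsl usually carries the OEM ID / OEM Table ID, which is what people look at; a sketch (the `oem_ids` helper and the field labels are assumptions based on typical iasl output, so verify against your own .dsl):

```shell
# Sketch: list the OEM identifier lines from a decompiled DSDT, e.g.
#   acpidump -b       -> dsdt.dat (among other tables)
#   iasl -d dsdt.dat  -> dsdt.dsl
# Field labels follow typical iasl header comments; check your own file.
oem_ids() {
    grep -iE 'OEM (ID|Table ID)' "$1"
}
```

Usage: `oem_ids dsdt.dsl` on the file you decompiled should print the candidate strings.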