cuckoosandbox / cuckoo

Cuckoo Sandbox is an automated dynamic malware analysis system
http://www.cuckoosandbox.org

VMWare Fusion Machinery Error - VMX is already running #1191

Open keithjjones opened 7 years ago

keithjjones commented 7 years ago

I've been seeing these errors randomly when I analyze samples:

2016-11-29 14:34:51,938 [lib.cuckoo.core.scheduler] INFO: Task #88: acquired machine cuckoo2 (label=/Users/malwaredemo/Documents/VMWare Fusion Virtual Machines/Windows 7 x64 Plain.vmwarevm/Windows 7 x64 Plain.vmx)
2016-11-29 14:34:51,948 [modules.auxiliary.sniffer] INFO: Started sniffer with PID 81298 (interface=vmnet8, host=192.168.241.102, pcap=/Source/cuckoo-dev/storage/analyses/88/dump.pcap)
2016-11-29 14:34:52,191 [lib.cuckoo.core.scheduler] ERROR: Machinery error: Machine /Users/malwaredemo/Documents/VMWare Fusion Virtual Machines/Windows 7 x64 Plain.vmwarevm/Windows 7 x64 Plain.vmx is already running
2016-11-29 14:34:52,200 [lib.cuckoo.core.scheduler] CRITICAL: A critical error has occurred trying to use the machine with name cuckoo2 during an analysis due to which it is no longer in a working state, please report this issue and all of the related environment details to the developers so we can improve this situation. (Note that before we would simply remove this VM from doing any more analyses, but as all the VMs will eventually be depleted that way, hopefully we'll find a better solution now).

I am reporting this as the message above recommends. If you need any additional data, please let me know. I have the timeouts set pretty high (around 300 seconds), so I don't know why this error comes up. I haven't had the same problem with cuckoo-modified, so I don't think it is a Fusion-specific issue. Any recommendations are welcome.

This problem comes up with Cuckoo v2-RC2 and the current master branch.
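To check whether Fusion itself still considers the VM running at the moment the scheduler complains, I can ask vmrun for its running list. A quick sketch of that check (the vmrun path is the default Fusion install location on my host, and the .vmx path is the one from the log above, so adjust both for other setups):

import subprocess

# Default vmrun location for a VMware Fusion install; adjust if Fusion lives elsewhere.
VMRUN = "/Applications/VMware Fusion.app/Contents/Library/vmrun"

# The .vmx Cuckoo reported as "already running" (taken from the log above).
VMX = ("/Users/malwaredemo/Documents/VMWare Fusion Virtual Machines/"
       "Windows 7 x64 Plain.vmwarevm/Windows 7 x64 Plain.vmx")

# "vmrun list" prints the .vmx path of every VM Fusion considers running.
running = subprocess.check_output([VMRUN, "-T", "fusion", "list"]).decode()

if VMX in running:
    print("Fusion still reports the VM as running")
else:
    print("Fusion does not list the VM; the machinery state check may be stale")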

Thanks for a great project to work with!

keithjjones commented 7 years ago

From Cuckoo.conf:

[timeouts]
# Set the default analysis timeout expressed in seconds. This value will be
# used to define after how many seconds the analysis will terminate unless
# otherwise specified at submission.
default = 300

# Set the critical timeout expressed in (relative!) seconds. It will be added
# to the default timeout above and after this timeout is hit
# Cuckoo will consider the analysis failed and it will shutdown the machine
# no matter what. When this happens the analysis results will most likely
# be lost.
critical = 300

# Maximum time to wait for virtual machine status change. For example when
# shutting down a vm. Default is 60 seconds.
vm_state = 300
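For reference, with these values the analysis gets default + critical = 600 seconds before it is force-killed, and each VM state change gets up to 300 seconds, so the timeouts themselves look generous. A quick sketch that reads the effective limits back out of the config (assuming the conf/cuckoo.conf path of a standard checkout):

import configparser

cfg = configparser.ConfigParser()
cfg.read("conf/cuckoo.conf")  # relative to the Cuckoo checkout; adjust as needed

default = cfg.getint("timeouts", "default")
critical = cfg.getint("timeouts", "critical")
vm_state = cfg.getint("timeouts", "vm_state")

# The analysis is considered failed and the machine is shut down once
# default + critical seconds have elapsed.
print("hard analysis cutoff: %d seconds" % (default + critical))

# Each machine start/stop waits at most vm_state seconds for the state change.
print("max wait per VM state change: %d seconds" % vm_state)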

keithjjones commented 7 years ago

After the above error, I tend to see a lot of these errors as well:

2016-11-29 14:54:12,536 [lib.cuckoo.core.resultserver] CRITICAL: ResultServer unable to map ip to context: 192.168.241.101.
2016-11-29 14:54:12,536 [lib.cuckoo.core.resultserver] CRITICAL: ResultServer unable to map ip to context: 192.168.241.101.
2016-11-29 14:54:12,661 [lib.cuckoo.core.resultserver] CRITICAL: ResultServer unable to map ip to context: 192.168.241.102.
2016-11-29 14:54:12,661 [lib.cuckoo.core.resultserver] CRITICAL: ResultServer unable to map ip to context: 192.168.241.102.
2016-11-29 14:54:12,735 [lib.cuckoo.core.resultserver] CRITICAL: ResultServer unable to map ip to context: 192.168.241.101.
2016-11-29 14:54:12,735 [lib.cuckoo.core.resultserver] CRITICAL: ResultServer unable to map ip to context: 192.168.241.101.
2016-11-29 14:54:12,736 [lib.cuckoo.core.resultserver] CRITICAL: ResultServer unable to map ip to context: 192.168.241.101.
2016-11-29 14:54:12,736 [lib.cuckoo.core.resultserver] CRITICAL: ResultServer unable to map ip to context: 192.168.241.101.
2016-11-29 14:54:12,829 [lib.cuckoo.core.resultserver] CRITICAL: ResultServer unable to map ip to context: 192.168.241.101.
2016-11-29 14:54:12,830 [lib.cuckoo.core.resultserver] CRITICAL: ResultServer unable to map ip to context: 192.168.241.101.
2016-11-29 14:54:13,144 [lib.cuckoo.core.resultserver] CRITICAL: ResultServer unable to map ip to context: 192.168.241.101.
2016-11-29 14:54:13,145 [lib.cuckoo.core.resultserver] CRITICAL: ResultServer unable to map ip to context: 192.168.241.101.
2016-11-29 14:54:13,367 [lib.cuckoo.core.resultserver] CRITICAL: ResultServer unable to map ip to context: 192.168.241.101.
2016-11-29 14:54:13,367 [lib.cuckoo.core.resultserver] CRITICAL: ResultServer unable to map ip to context: 192.168.241.101.

192.168.241.101 and 102 are my cuckoo1 and cuckoo2 VMs in Fusion.
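Those ResultServer lines suggest the guests are still phoning home after the scheduler has already dropped their tasks. As far as I understand it, the ResultServer keeps a lookup from guest IP to the running task's context and logs that CRITICAL line whenever a connection arrives from an IP with no entry. A simplified model of that behaviour (not the actual implementation, just to illustrate why a VM that keeps running after its analysis ended produces this flood):

import logging

log = logging.getLogger(__name__)

# Simplified model: the real ResultServer registers each guest IP when its
# analysis starts and removes it when the analysis ends.
contexts = {}

def register(ip, task_id):
    contexts[ip] = task_id

def unregister(ip):
    contexts.pop(ip, None)

def handle_connection(ip):
    task_id = contexts.get(ip)
    if task_id is None:
        # A guest that was never cleanly stopped keeps connecting back after
        # its task ended, producing this message over and over.
        log.critical("ResultServer unable to map ip to context: %s.", ip)
        return
    print("routing results from %s to task #%d" % (ip, task_id))

register("192.168.241.101", 87)
unregister("192.168.241.101")         # analysis ends, mapping removed
handle_connection("192.168.241.101")  # the VM keeps talking -> CRITICAL line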

allthemalwarz commented 7 years ago

Judging by the notes in the vmware.conf file:

Specify which Vmware Workstation mode you want to run your machines on.

I'm assuming that Cuckoo does not support VMware Fusion, but rather VMware Workstation.

keithjjones commented 7 years ago

It runs well most of the time, and cuckoo-modified runs well on it too. I don't think the command-line tool for Fusion is much different from Workstation's, so I'm betting something small is being missed regarding the state of the machine. I'll get these reports even when the machines are shut down. Sometimes it will run through hundreds of samples just fine, and other times it will fail on 5/5 samples.

I'm running it as nogui.
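If the difference really is just state tracking, one pragmatic workaround when a machine gets stuck in the "already running" state is to force it down and back to its snapshot with the same vmrun binary before resubmitting. A rough sketch (the snapshot name below is illustrative; use whatever snapshot the guest actually has):

import subprocess

VMRUN = "/Applications/VMware Fusion.app/Contents/Library/vmrun"
VMX = ("/Users/malwaredemo/Documents/VMWare Fusion Virtual Machines/"
       "Windows 7 x64 Plain.vmwarevm/Windows 7 x64 Plain.vmx")
SNAPSHOT = "clean"  # illustrative snapshot name

# Force the guest off even if Fusion thinks it is still mid-shutdown.
subprocess.call([VMRUN, "-T", "fusion", "stop", VMX, "hard"])

# Roll back to the analysis snapshot so the next task starts from a known state.
subprocess.call([VMRUN, "-T", "fusion", "revertToSnapshot", VMX, SNAPSHOT])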

keithjjones commented 7 years ago

FWIW, I set it up with VirtualBox and I'm getting similar errors. The same message asking to report this issue is printed in red, so here is the data.

2016-11-30 20:16:03,454 [lib.cuckoo.core.scheduler] INFO: Task #2: acquired machine cuckoo1 (label=cuckoo1)
2016-11-30 20:16:03,463 [modules.auxiliary.sniffer] INFO: Started sniffer with PID 52281 (interface=vboxnet0, host=10.0.2.100, pcap=/Source/cuckoo-dev/storage/analyses/2/dump.pcap)
2016-11-30 20:16:03,643 [lib.cuckoo.core.scheduler] ERROR: Machinery error: Trying to start an already started vm cuckoo1
2016-11-30 20:16:03,646 [lib.cuckoo.core.scheduler] CRITICAL: A critical error has occurred trying to use the machine with name cuckoo1 during an analysis due to which it is no longer in a working state, please report this issue and all of the related environment details to the developers so we can improve this situation. (Note that before we would simply remove this VM from doing any more analyses, but as all the VMs will eventually be depleted that way, hopefully we'll find a better solution now).
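The VirtualBox flavour of the same manual reset, for anyone hitting the "already started" variant: check what VBoxManage thinks is running, power the guest off, and restore its current snapshot before resubmitting. A rough sketch (the VM name cuckoo1 comes from the log above):

import subprocess

VM = "cuckoo1"  # VM name from the log above

# List the VMs VirtualBox currently considers running.
running = subprocess.check_output(["VBoxManage", "list", "runningvms"]).decode()

if VM in running:
    # Power the guest off and roll back to its current snapshot so the
    # next analysis starts from a clean state.
    subprocess.call(["VBoxManage", "controlvm", VM, "poweroff"])
    subprocess.call(["VBoxManage", "snapshot", VM, "restorecurrent"])
else:
    print("%s is not listed as running; the machinery state check may be stale" % VM)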