nearform / autopsy

dissect your dead node services with mdb via a smart os vm

Unable to load big coredump file #7

Closed axot closed 8 years ago

axot commented 8 years ago

I want to use mdb to analyze a big core dump file (12GB), but autopsy hangs because of low disk space.

davidmarkclements commented 8 years ago

This is a common challenge with core files in general. You can modify the VM by hand (using VirtualBox), and if you have enough resources it should work, but it's quite likely that your laptop simply doesn't have the power to process this. In that case you might want to try Joyent's Triton (maybe via Manta), since they can supply the resources you need.
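For reference, resizing the VM from the command line might look like the following. This is a sketch: the VM name "autopsy" is an assumption, so check VBoxManage list vms for the actual name on your machine.

```shell
# Find the name VirtualBox registered for autopsy's VM ("autopsy" assumed below)
VBoxManage list vms

# The VM must be powered off before its settings can change
VBoxManage controlvm autopsy poweroff

# Raise the memory (in MB) and CPU count by hand
VBoxManage modifyvm autopsy --memory 8192 --cpus 4
```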

sharq88 commented 6 years ago

@davidmarkclements can you tell us how to do so? I tried to modify the VM settings by hand, but whenever I start it via the autopsy command it reverts to the saved state, effectively removing my changes. I'm trying to load a 1.5GB dump and I'm getting file system full errors all over the place. :(

sharq88 commented 6 years ago

Okay - got it... just had to remove the snapshot, update settings and try again. ;)

sharq88 commented 6 years ago

Added 8GB of RAM and 4 CPUs. Still getting file system full errors. :( Unfortunately the extra space only goes to swap, but the autopsy ./node ./core.22938 command copies the file to /devices/ramdisk, so adding more memory is pointless.

sharq88 commented 6 years ago

In case someone else needs to open big files, here is how I worked around it.

# start VM/SSH
autopsy start

# copy core file to VM
scp -P 2222 ./core.22938 root@localhost:/etc/svc/volatile

# copy node binary to VM
scp -P 2222 ./node root@localhost:/etc/svc/volatile

# SSH in to VM
ssh root@localhost -p 2222

# go to dir
cd /etc/svc/volatile

# run mdb
mdb ./node ./core.22938

# inside the mdb prompt, load the V8 debugger module
::load v8

(the SSH password is root)
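The steps above can be collected into a small script. This is a sketch under the thread's assumptions (autopsy's SSH forward on port 2222, root/root credentials, the writable /etc/svc/volatile directory); substitute your own core file and binary names.

```shell
#!/bin/sh
# Workaround for large core files: copy them into the VM over SSH
# instead of letting autopsy stage them on the /devices/ramdisk.
CORE=./core.22938
NODE=./node
DEST=/etc/svc/volatile   # writable location inside the VM

# start the VM and its SSH forward
autopsy start

# copy the core file and matching node binary into the VM (password: root)
scp -P 2222 "$CORE" "$NODE" root@localhost:"$DEST"

# open an interactive mdb session inside the VM (-t allocates a tty);
# once at the mdb prompt, run ::load v8 by hand
ssh -t -p 2222 root@localhost "cd $DEST && mdb ./node ./core.22938"
```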