rizkimunawir / volatility

Automatically exported from code.google.com/p/volatility
GNU General Public License v2.0

[vmware] vtop and 5GB 64bit memory dump problem #272

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
Reported by Sebastien Bourdon-Richard on Vol-dev: 

I'm playing with a 5GB Windows 7 SP0 64bit memory dump and I have some
problems with processes mapped over 4GB.

Pslist only shows the System process. Maybe that's because System is the
only process mapped under 4GB?

H:\Volatility>python vol.py -f Windows Seven.vmem --profile=Win7SP0x64 pslist
Volatile Systems Volatility Framework 2.1_alpha
Offset(V)          Name                    PID   PPID   Thds     Hnds   Sess  Wow64 Start                Exit
------------------ -------------------- ------ ------ ------ -------- ------ ------ -------------------- --------------------
0xfffffa8005355b30 System                    4      0     91 -------- ------      0 2012-06-14 19:42:15

H:\Volatility>python vol.py -f Windows Seven.vmem --profile=Win7SP0x64 psscan
Volatile Systems Volatility Framework 2.1_alpha
Offset(P)          Name                PID   PPID PDB                Time created         Time exited
------------------ ---------------- ------ ------ ------------------ -------------------- --------------------
0x0000000008755b30 System                4      0 0x0000000000187000 2012-06-14 19:42:15
0x0000000174a4a800 VMwareTray.exe     2028   1164 0x000000018c7c3000 2012-06-14 19:42:47
0x0000000174a55b30 VMwareUser.exe     1324   1164 0x000000018af49000 2012-06-14 19:42:47
0x0000000174a923f0 SearchIndexer.      240    552 0x000000018895a000 2012-06-14 19:42:53
0x0000000174aebb30 SearchFilterHo     2108    240 0x0000000188146000 2012-06-14 19:42:54
0x0000000174b00060 SearchProtocol     2076    240 0x00000001964b2000 2012-06-14 19:42:54
0x0000000174db7630 explorer.exe       1164   1976 0x000000019a477000 2012-06-14 19:42:46
0x0000000174f52b30 vmtoolsd.exe       1392    552 0x0000000190f88000 2012-06-14 19:42:39
0x0000000175015060 svchost.exe         316    552 0x000000019bdd9000 2012-06-14 19:42:33
0x0000000175021b30 spoolsv.exe        1088    552 0x00000001a2a65000 2012-06-14 19:42:38
0x00000001750fb060 svchost.exe         816    552 0x0000000196cde000 2012-06-14 19:42:36
0x0000000175241060 svchost.exe         900    552 0x00000001a020e000 2012-06-14 19:42:29
0x0000000175276060 svchost.exe         828    552 0x00000001a6621000 2012-06-14 19:42:29
0x000000017528c060 audiodg.exe         968    828 0x000000019ddc2000 2012-06-14 19:42:32
0x0000000175294630 VMUpgradeHelpe     1460    552 0x000000019f527000 2012-06-14 19:42:40
0x00000001752b8b30 svchost.exe        1284    552 0x0000000192402000 2012-06-14 19:42:39
0x000000017531b9b0 taskhost.exe       1936    552 0x000000019a163000 2012-06-14 19:42:45
0x000000017531c880 dwm.exe            2008    868 0x000000019b4fc000 2012-06-14 19:42:46
0x000000017538a5f0 svchost.exe        1136    552 0x00000001a3870000 2012-06-14 19:42:38
0x0000000175517b30 csrss.exe           364    356 0x000000000032c000 2012-06-14 19:42:22
0x0000000175521060 svchost.exe         668    552 0x00000001a2b4d000 2012-06-14 19:42:27
0x000000017554bb30 wininit.exe         436    356 0x00000000be0b2000 2012-06-14 19:42:24
0x0000000175553b30 csrss.exe           460    448 0x00000000be284000 2012-06-14 19:42:24
0x0000000175592960 winlogon.exe        504    448 0x00000000bff4a000 2012-06-14 19:42:24
0x00000001755bf060 svchost.exe         744    552 0x00000001a1251000 2012-06-14 19:42:28
0x00000001755d1b30 services.exe        552    436 0x00000001a7184000 2012-06-14 19:42:25
0x00000001755d8060 svchost.exe         868    552 0x00000001a0309000 2012-06-14 19:42:29
0x00000001755e2b30 lsass.exe           560    436 0x00000001a9128000 2012-06-14 19:42:25
0x00000001755e6910 lsm.exe             568    436 0x00000001a9470000 2012-06-14 19:42:25
0x0000000175deb060 userinit.exe       1976    504 0x000000019b3ee000 2012-06-14 19:42:45
0x0000000176263b30 WmiPrvSE.exe       1848    668 0x00000001a8947000 2012-06-14 19:42:45
0x00000001763ff950 smss.exe            264      4 0x0000000041470000 2012-06-14 19:42:15

Could the problem be related to the vtop function in amd64?

For a physical page size of 4KB, vtop() in amd64.py converts the address like this:
  1- get pml4e
  2- get pdpte
  3- get pde
  4- get pte
  5- return get_phys_addr

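For context, here is a rough, self-contained sketch of that 4-level walk for 4KB pages. It is illustrative only, not the actual amd64.py code: it skips present-bit and large-page handling, and phys_read is an assumed callback that returns raw bytes from the memory image.

    import struct

    def read_qword(phys_read, addr):
        # phys_read(offset, length) is assumed to return raw bytes from the image
        return struct.unpack("<Q", phys_read(addr, 8))[0]

    def vtop_4kb(phys_read, dtb, vaddr):
        '''Translate a virtual address, assuming a 4KB page at the end of the walk.'''
        pml4e = read_qword(phys_read, dtb + (((vaddr >> 39) & 0x1ff) * 8))
        pdpte = read_qword(phys_read, (pml4e & 0xffffffffff000) + (((vaddr >> 30) & 0x1ff) * 8))
        pde   = read_qword(phys_read, (pdpte & 0xffffffffff000) + (((vaddr >> 21) & 0x1ff) * 8))
        pte   = read_qword(phys_read, (pde   & 0xffffffffff000) + (((vaddr >> 12) & 0x1ff) * 8))
        # step 5: bits 51:12 come from the PTE, bits 11:0 from the original address
        return (pte & 0xffffffffff000) | (vaddr & 0xfff)
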
The function get_phys_addr() in step #5 is defined in intel.py and is 32-bit only:

   def get_phys_addr(self, vaddr, pte_value):
       return (pte_value & 0xfffff000) | (vaddr & 0xfff)

If the pte_value is 64-bit, does it get truncated in get_phys_addr()?

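To make the concern concrete, here is the arithmetic with illustrative values (the 0x18c7c3000 frame is simply borrowed from the psscan output above as an example of a physical address over 4GB):

    pte_value = 0x000000018c7c3000    # a physical frame above the 4 GB boundary
    vaddr     = 0xfffffa8005355b30

    (pte_value & 0xfffff000)      | (vaddr & 0xfff)   # 32-bit mask  ->  0x8c7c3b30 (high bits lost)
    (pte_value & 0xffffffffff000) | (vaddr & 0xfff)   # bits 51:12   -> 0x18c7c3b30 (intact)
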
Original issue reported on code.google.com by michael.hale@gmail.com on 15 Jun 2012 at 12:51

GoogleCodeExporter commented 9 years ago
Hiya,

So AMD64PagedMemory inherits from JKIA32PagedMemoryPae (defined in intel.py).  
JKIA32PagedMemoryPae inherits from JKIA32PagedMemory, but overrides the 
get_phys_addr function with the following:

    def get_phys_addr(self, vaddr, pte):
        '''
        Return the offset in a 4KB memory page from the given virtual
        address and Page Table Entry.

        Bits 51:12 are from the PTE
        Bits 11:0 are from the original linear address
        '''
        return (pte & 0xffffffffff000) | (vaddr & 0xfff)

So I don't believe that's the cause of your problem.  It is suspicious that 
it's only happening to processes mapped above 4 GB; perhaps you could examine the 
list pointers using volshell?  Could I ask what architecture the analysis box 
is?  If you run python, could you please tell us what the output of "import 
sys; hex(sys.maxint)" is?

@MHL, could you provide instructions for investigating the pslist and try to 
figure out where/why the list isn't full?  I'm thinking there may be something 
else that masks addresses to 32 bits, and that could be causing it, but I 
couldn't see it with a quick grep...

Original comment by mike.auty@gmail.com on 15 Jun 2012 at 3:02

GoogleCodeExporter commented 9 years ago
One thing that's suspicious besides the incomplete process list is that the 
System process in Sebastien's pslist has "-----" handles which means the 
_EPROCESS.ObjectTable pointer is invalid. I have never seen a System process 
with an invalid handle table. 

Sebastien, here are a few commands to run in volshell to help us investigate. 
I'm using an example Win7SP0x64 image of my own, but it's < 4 GB RAM. Note that 
volshell starts out in the System process space. 

$ python vol.py -f win7x64cmd.dd --profile=Win7SP0x64 volshell
Volatile Systems Volatility Framework 2.1_alpha
Current context: process System, pid=4, ppid=0 DTB=0x187000  <==== we're in the 
System process already
Welcome to volshell! Current memory image is:
file:///Users/Michael/Desktop/memory/win7x64cmd.dd
To get help, type 'hh()'

>>> hex(self.eproc.obj_offset) <==== This should be 0xfffffa8005355b30 in 
Sebastien's image
'0xfffffa80018ac040L'

### Display the System's _LIST_ENTRY

>>> hex(self.eproc.ActiveProcessLinks.Flink)
'0xfffffa8002929b18L'

>>> hex(self.eproc.ActiveProcessLinks.Blink)
'0xfffff8000286db30L'

### Dump the next Flink and Blink 

>>> dq(0xfffffa8002929b18L, length = 0x10)
0xfffffa8002929b18 0xfffffa8002330988
0xfffffa8002929b20 0xfffffa80018ac1c8

>>> dq(0xfffff8000286db30L, length = 0x10) 
0xfffff8000286db30 0xfffffa80018ac1c8
0xfffff8000286db38 0xfffffa8003d5d598

### If the Flink looks reasonable, try to follow it

>>> dt("_EPROCESS", self.eproc.obj_offset - 
self.addrspace.profile.get_obj_offset("_EPROCESS", "ActiveProcessLinks"))
[MalwareEPROCESS _EPROCESS] @ 0xFFFFFA8002929990
0x0   : Pcb                            18446738026438760848
0x160 : ProcessLock                    18446738026438761200
0x168 : CreateTime                     2011-12-30 08:25:29 
0x170 : ExitTime                       1970-01-01 00:00:00 
0x178 : RundownProtect                 18446738026438761224
0x180 : UniqueProcessId                224
0x188 : ActiveProcessLinks             18446738026438761240 
...
0x2d8 : Session                        0
0x2e0 : ImageFileName                  smss.exe
0x2ef : PriorityClass                  2
....

### Check out the handle table pointer 

>>> hex(self.eproc.ObjectTable)
'0xfffff8a0000018c0L'

### I'm guessing this hits some type of error on Sebastien's image

>>> dd(self.eproc.ObjectTable)
fffff8a0000018c0  01fcb001 fffff8a0 00000000 00000000
fffff8a0000018d0  00000004 00000000 00000000 00000000
fffff8a0000018e0  009e3f10 fffff8a0 028637f0 fffff800
fffff8a0000018f0  00000000 00000000 00000000 00000000

Let's try that for a start and see if it gives us enough information to figure 
out what's going on. Also just FYI we have an open issue for "pslist does not 
handle smeared images" (see Issue #198) which may be related (I've never 
personally seen that, but scudette has). So it may be a good troubleshooting 
step for Sebastien to try pslist in scudette's branch:

$ svn checkout https://volatility.googlecode.com/svn/branches/scudette 
volatility_scudette 

However, since the ObjectTable pointer is showing up invalid, I have a 
suspicion that it's *not* related to smeared _LIST_ENTRYs. 

Thanks Sebastien!

Original comment by michael.hale@gmail.com on 15 Jun 2012 at 3:52

GoogleCodeExporter commented 9 years ago
>Could I ask what architecture the analysis box is?

Win 7 SP1 x64

>could you please tell us what the output of "import sys; hex(sys.maxint)" is?

Using python 2.6.5 and the result is 0x7fffffff

>What tool was used to collect the sample?

VmWare 7 snapshot

>Was this a fresh checkout?

Yes, revision 1871

Here are my weird results:

C:\Volatility>python vol.py -f "C:\Windows Seven.vmem" --profile=Win7SP0x64 
volshell
Volatile Systems Volatility Framework 2.1_alpha
Current context: process System, pid=4, ppid=0 DTB=0x187000
Welcome to volshell! Current memory image is:
file:///C:/Windows%20Seven.vmem
To get help, type 'hh()'

>>> hex(self.eproc.obj_offset)
'0xfffffa8005355b30L'

>>> hex(self.eproc.ActiveProcessLinks.Flink)
'0xfffffa80061ffad8L'

>>> hex(self.eproc.ActiveProcessLinks.Blink)
'0xfffff80002828b30L'

>>> dq(0xfffffa80061ffad8L, length = 0x10)
Memory unreadable at fffffa80061ffad8

>>> dq(0xfffff80002828b30L, length = 0x10)
0xfffff80002828b30 0xfffffa8005355cb8
0xfffff80002828b38 0xfffffa80078ebcb8

>>> dt("_EPROCESS", 
self.eproc.ActiveProcessLinks.Blink-self.addrspace.profile.get_obj_offset("_EPRO
CESS", "ActiveProcessLinks"))
[MalwareEPROCESS _EPROCESS] @ 0xFFFFF800028289A8
0x0   : Pcb                            18446735277658638760
0x160 : ProcessLock                    18446735277658639112
0x168 : CreateTime                     1970-01-01 00:00:00
0x170 : ExitTime                       1970-01-01 00:00:00
0x178 : RundownProtect                 18446735277658639136
0x180 : UniqueProcessId                126975560
0x188 : ActiveProcessLinks             18446735277658639152
...
0x2d8 : Session                        393217
0x2e0 : ImageFileName                  ?C;??????C;?????
0x2ef : PriorityClass                  255
...

>>> cc(offset=self.eproc.ActiveProcessLinks.Blink-self.addrspace.profile.get_obj_offset("_EPROCESS", "ActiveProcessLinks"))
Current context: process ?C;??????C;?????, pid=126975560, ppid=0 DTB=0x64

>>> self.eproc.ActiveProcessLinks.Blink
<volatility.obj.NoneObject object at 0x044B1850>

>>> self.eproc.ActiveProcessLinks.Flink
<volatility.obj.NoneObject object at 0x044B1850>

Thank you all for your help ;)

Sébastien

Original comment by sebastie...@gmail.com on 15 Jun 2012 at 5:38

GoogleCodeExporter commented 9 years ago
Thank you Sébastien! 

So for whatever reason, the ActiveProcessLinks.Flink cannot be accessed and 
gives the "Memory unreadable" error. This usually means it's paged, but 
_EPROCESS objects should always be in the non-paged pool. Since by default our 
pslist command walks the _LIST_ENTRY forward, it halts after the System process 
because it can't go any further. 

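In rough pseudocode (not the actual pslist implementation), the forward walk stops as soon as one Flink cannot be translated; read_pointer is an assumed helper that returns the 64-bit Flink value or None when the memory is unreadable.

    def walk_forward(list_head, read_pointer):
        '''Follow Flink pointers until the list wraps around or an entry is unreadable.'''
        seen = set()
        entry = list_head
        while entry not in seen:
            seen.add(entry)
            flink = read_pointer(entry)   # read the 64-bit Flink stored at this address
            if flink is None:             # "Memory unreadable" -> the walk ends here
                break
            yield flink
            entry = flink
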
There are a few things you can try. 

1) It's a little strange that sys.maxint is 0x7fffffff on your x64 analysis 
system. You may have better luck with a 64-bit version of Python. I have a 
feeling it's *not* related to sys.maxint, however, since you can find the KDBG 
and get to PsActiveProcessHead, which requires dereferencing 64-bit pointers 
(otherwise you wouldn't even see the System process)

2) Can you try running the modules command which also walks a _LIST_ENTRY and 
let us know if that works?

3) Below are instructions on walking the process list backwards. To simulate 
your situation, I made a copy of my Win7SP0x64 memory dump and manually 
corrupted the System process's ActiveProcessLinks.Flink using volatility's 
write support. Then I tested walking the list forward (per normal) and made 
sure I only saw the single process. Then I applied a small patch to make pslist 
walk the _LIST_ENTRY backward and confirmed that the entire list of processes 
is shown. 

So what I'd like to do is quickly show the commands and then have you apply the 
same patch to walk the _LIST_ENTRY backward and see if it works. If it shows 
your entire process tree, then we know there's no issue with vtop/4GB. 

$ python vol.py -f win7x64cmd_copy.dd --profile=Win7SP0x64 volshell --write
Volatile Systems Volatility Framework 2.1_alpha
Write support requested.  Please type "Yes, I want to enable write support" 
below precisely (case-sensitive):
Yes, I want to enable write support
Current context: process System, pid=4, ppid=0 DTB=0x187000
Welcome to volshell! Current memory image is:
file:///Users/Michael/Desktop/win7x64cmd_copy.dd
To get help, type 'hh()'

### This uses the same API as pslist so we're currently walking forward:

>>> ps()
Name             PID    PPID   Offset  
System           4      0      0xfffffa80018ac040
smss.exe         224    4      0xfffffa8002929990
csrss.exe        316    308    0xfffffa8002330800
wininit.exe      352    308    0xfffffa80018b2220
csrss.exe        360    344    0xfffffa80018b2750
winlogon.exe     388    344    0xfffffa8002a00630
services.exe     448    352    0xfffffa800363f910
....

### Now let's simulate the memory unreadable error that Sébastien is seeing, 
basically just set Flink = NULL:

>>> self.addrspace.write(self.eproc.ActiveProcessLinks.Flink.obj_offset, '\0' * 8)
True

### Confirm we successfully broke the list entry:

>>> ps()
Name             PID    PPID   Offset  
System           4      0      0xfffffa80018ac040

### Test walking the list backwards now:

>>> kdbg = win32.tasks.get_kdbg(self.addrspace)
>>> list_head = kdbg.PsActiveProcessHead.dereference_as("_LIST_ENTRY")
>>> for proc in list_head.list_of_type("_EPROCESS", "ActiveProcessLinks", forward = False):
...   print hex(proc.obj_offset), proc.ImageFileName, proc.UniqueProcessId 
... 
0xfffffa8003d5d410L f-response-ent 2284
0xfffffa80027f9060L hh.exe 1952
0xfffffa8003c99060L conhost.exe 2348
0xfffffa8003a33b30L cmd.exe 2068
0xfffffa8003aa0b30L windbg.exe 1700
0xfffffa8001dd7060L livekd64.exe 1324
0xfffffa8001f59060L livekd.exe 1632
...
0xfffffa80018b2750L csrss.exe 360
0xfffffa80018b2220L wininit.exe 352
0xfffffa8002330800L csrss.exe 316
0xfffffa8002929990L smss.exe 224
0xfffffa80018ac040L System 4 <==== We end up at the start 

So, here is a temporary patch to test the same thing on Sébastien's image. 
Just make the following change and re-run pslist. 

$ svn diff volatility/plugins/overlays/windows/kdbg_vtypes.py
Index: volatility/plugins/overlays/windows/kdbg_vtypes.py
===================================================================
--- volatility/plugins/overlays/windows/kdbg_vtypes.py  (revision 1871)
+++ volatility/plugins/overlays/windows/kdbg_vtypes.py  (working copy)
@@ -39,7 +39,7 @@
         if not list_head:
             raise AttributeError("Could not list tasks, please verify your --profile with kdbgscan")

-        for l in list_head.list_of_type("_EPROCESS", "ActiveProcessLinks"):
+        for l in list_head.list_of_type("_EPROCESS", "ActiveProcessLinks", forward = False):
             yield l

4) As a last resort, you could convert your raw memory dump to a crash dump 
(use the command below) and then open the crash dump with Windbg. Then try to 
vtop 0xfffffa80061ffad8L. If Windbg also complains, then we know it's not a 
volatility-specific issue. Here are some instructions for you. 

  a) convert your raw dump to a crash 

  C:\Volatility>python vol.py -f "C:\Windows Seven.vmem" --profile=Win7SP0x64 raw2dmp --output-image=C:\Win7Crash.dmp

  b) open the crash in windbg 

  C:\PathToWindbg\windbg.exe -z C:\Win7Crash.dmp 

  c) switch to the System process's context so we use a kernel DTB and then try to vtop 

  kd> !process 0 0 

**** NT ACTIVE PROCESS DUMP ****
PROCESS fffffa8000c77040
    SessionId: none  Cid: 0004    Peb: 00000000  ParentCid: 0000
    DirBase: 00124000  ObjectTable: fffff88000001e70  HandleCount: 541.
    Image: System

  kd> .process /p /r fffffa8000c77040
Implicit process is now fffffa80`00c77040
Loading User Symbols

  kd> !vtop 0 fffffa8000c77040
Amd64VtoP: Virt fffffa80`00c77040, pagedir 124000
Amd64VtoP: PML4E 124fa8
Amd64VtoP: PDPE 3a00000
Amd64VtoP: PDE 3a01030
Amd64VtoP: Large page mapped phys 3877040
Virtual address fffffa8000c77040 translates to physical address 3877040.

If you get that far, the 3877040 physical address should match 
self.addrspace.vtop(0xfffffa8000c77040) in volshell. If windbg works and 
volshell doesn't then we know there's a clear issue with our address space. 

Thanks again for your testing Sébastien! 

Original comment by michael.hale@gmail.com on 15 Jun 2012 at 6:39

GoogleCodeExporter commented 9 years ago
Hey guys, 

I dropped this to low priority. While we haven't exactly figured out the issue, 
I'm relatively sure it's not a volatility problem. The unreadable 
ActiveProcessLinks.Flink vtops to 7352613592L, which is almost 1 GB higher than 
the size of the file on disk. When Sébastien is back from vacation (3 weeks), 
he's going to try acquiring memory using a different tool/technique. We're also 
going to look into a few more things, but this shouldn't be a 2.1 release 
blocker. 

Original comment by michael.hale@gmail.com on 19 Jun 2012 at 4:43

GoogleCodeExporter commented 9 years ago
Hey guys,

I'm back and I was able to do some testing. The bug is related to the memory 
acquisition process, because I didn't acquire the full physical address space 
including device memory 
(http://blogs.technet.com/b/markrussinovich/archive/2008/07/21/3092070.aspx,
ref: 32-bit Client Effective Memory Limits).

The vmem file only contains the physical memory, not the full physical address 
space. 

Here are the results of my tests:
(**These tests were made with a 6GB memory dump, not with the 5GB one posted 
June 15th**)

With the software meminfo.exe 
(http://www.winsiderss.com/tools/meminfo/meminfo.htm ) I can see that my 
physical address space is:

C:\>MemInfo.exe -r
MemInfo v2.10 - Show PFN database information
Copyright (C) 2007-2009 Alex Ionescu
www.alex-ionescu.com

Physical Memory Range: 0000000000001000 to 000000000009F000 (158 pages, 632 KB)
Physical Memory Range: 0000000000100000 to 00000000BFEF0000 (785904 pages, 3143616 KB)
Physical Memory Range: 00000000BFF00000 to 00000000C0000000 (256 pages, 1024 KB)
Physical Memory Range: 0000000100000000 to 00000001B7000000 (749568 pages, 2998272 KB)
MmHighestPhysicalPage: 1798144

We can see that the HighestPhysicalPage is 1798144 (i.e. 7024 MB).

The vmem file contains the full address space between 0x0 - 0xC0000000, and then 
there's a gap between 0xC0000000 - 0x100000000.
If I look in my device manager (view -> resources by connection), almost all my 
hardware is mapped in the 0xC0000000-0xFEBFFFFF range. 

I'm able to "recreate" the physical address space if I fill my dump with 0x00 
in the hardware range (i.e. C0001000 - 100000000).

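As a rough illustration of that padding step, here is a minimal sketch. It assumes the gap is exactly 0xC0000000 - 0x100000000 and that everything after the first 3 GB of the vmem really belongs above 4 GB; the file names are just examples, and a real tool would take the ranges from the vmss metadata instead of hard-coding them.

    # Sketch: rebuild the physical address space by zero-filling the device-memory hole.
    GAP_START = 0xC0000000
    GAP_END   = 0x100000000
    CHUNK     = 16 * 1024 * 1024

    def copy_bytes(src, dst, count):
        # stream `count` bytes from src to dst in chunks
        while count:
            buf = src.read(min(count, CHUNK))
            if not buf:
                break
            dst.write(buf)
            count -= len(buf)

    with open("Windows Seven.vmem", "rb") as src, open("padded.vmem", "wb") as dst:
        copy_bytes(src, dst, GAP_START)           # RAM below the PCI/device hole
        remaining = GAP_END - GAP_START
        while remaining:                          # zero-fill the device-memory hole
            n = min(remaining, CHUNK)
            dst.write(b"\x00" * n)
            remaining -= n
        while True:                               # RAM that lives above 4 GB
            buf = src.read(CHUNK)
            if not buf:
                break
            dst.write(buf)
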
When I recreate the physical address space, my dump can be analyzed with 
volatility because I can address all of the 1798144 pages:

H:\Volatility>python vol.py -f Windows Seven.vmem --profile=Win7SP0x64 pslist
Volatile Systems Volatility Framework 2.1_alpha
Offset(V)          Name                    PID   PPID   Thds     Hnds   Sess  Wow64 Start                Exit
------------------ -------------------- ------ ------ ------ -------- ------ ------ -------------------- --------------------
0xfffffa800533ab30 System                    4      0     75      335 ------      0 2012-06-15 18:26:29
0xfffffa8006649370 smss.exe                224      4      3       30 ------      0 2012-06-15 18:26:29
0xfffffa8006e21060 csrss.exe               308    300      8      294      0      0 2012-06-15 18:26:33
0xfffffa800533f2c0 wininit.exe             344    300      7       92      0      0 2012-06-15 18:26:34
0xfffffa80053467a0 csrss.exe               352    336      7       82      1      0 2012-06-15 18:26:34
0xfffffa8006d38060 winlogon.exe            384    336      6      101      1      0 2012-06-15 18:26:34
0xfffffa8006e5d2a0 services.exe            444    344     17      186      0      0 2012-06-15 18:26:36
0xfffffa8006e64910 lsass.exe               452    344      9      434      0      0 2012-06-15 18:26:37
0xfffffa8006e666e0 lsm.exe                 464    344     11      144      0      0 2012-06-15 18:26:37
0xfffffa8006eb4640 svchost.exe             552    444     14      346      0      0 2012-06-15 18:26:39
0xfffffa8006ee7290 svchost.exe             628    444      8      191      0      0 2012-06-15 18:26:40
0xfffffa8006eecb30 LogonUI.exe             712    384      8      190      1      0 2012-06-15 18:26:40
0xfffffa8006f1d500 svchost.exe             720    444     19      350      0      0 2012-06-15 18:26:40
0xfffffa8006f2ab30 svchost.exe             756    444     20      299      0      0 2012-06-15 18:26:40
0xfffffa8006f4bb30 svchost.exe             780    444     40      683      0      0 2012-06-15 18:26:40
0xfffffa8006f9a9e0 svchost.exe             904    444     14      257      0      0 2012-06-15 18:26:41
0xfffffa8006fc4060 svchost.exe             992    444     18      363      0      0 2012-06-15 18:26:42
0xfffffa800702a060 spoolsv.exe             252    444      6       79      0      0 2012-06-15 18:26:43
0xfffffa80070572d0 svchost.exe             776    444     27      319      0      0 2012-06-15 18:26:43
0xfffffa80070c5060 svchost.exe            1108    444     13      186      0      0 2012-06-15 18:26:44

Sorry about that, the bug was not related to volatility but only to the memory 
acquisition process! 

Thanks again for help!

Sebastien

Original comment by sebastie...@gmail.com on 9 Jul 2012 at 10:02

GoogleCodeExporter commented 9 years ago
Sorry, I just saw an error in my comment: the hardware range filled with 0x00 
is 0xC0000000 - 0x100000000.

I added 0x00 to the 0x40000000 gap, so I added 1 073 741 824 bytes to my dump.

  6 291 456 000 bytes (6GB)
+ 1 073 741 824 bytes
-----------------------
  7 365 197 824 bytes (7024 MB or 1 798 144 4K pages)

Original comment by sebastie...@gmail.com on 9 Jul 2012 at 11:12

GoogleCodeExporter commented 9 years ago
Sebastien,
   Can you please confirm whether this is a raw image or a vmss image? Can you please look at the first 0x1000 bytes and confirm they are completely NULL? It looks to me as though this image is sparse - i.e. vmware only wrote the addressable memory ranges back to back without filling in the holes with zeros, using a sparse dump format (the ranges reported by meminfo approximately match the file size you report). Without using a special address space that handles sparse files (e.g. vmss), volatility will try to use the raw format, which requires explicit null padding - this means your image will be a total of 7.3 GB.

If you could paste the hexdump of the first say 512 bytes this will confirm 
what we are seeing.

Thanks
Michael.

Original comment by scude...@gmail.com on 9 Jul 2012 at 11:26

GoogleCodeExporter commented 9 years ago
Michael,

It is a raw image (vmem) created with VmWare Workstation 7. I don't have the 
image with me at the moment, but if I remember correctly, the first 0x1000 
bytes were not null (I can paste the first 512 bytes tomorrow).

My guess is as good as yours, but I don't think vmem files are sparse. I think 
it's just the way Windows allocates memory. If you have a PCI hardware memory 
imager, or if you are able to acquire more than 4GB of memory with firewire, I 
guess we would be able to reproduce the same "bug".

Yes, the ranges reported by meminfo approximately add up to the size of 
physical memory:

632 KB + 3 143 616 KB + 1024 KB + 2 998 272 KB = 6 143 544 KB (5999.55 MB)

If I remember correctly, the missing part is used by Windows as Loader Reserved 
or Reserved (i.e. HKLM\HARDWARE\RESOURCEMAP\System Resources\[Loader 
]Reserved\.raw). 

Meminfo seems to only display the ranges from HKLM\HARDWARE\RESOURCEMAP\System 
Resources\Physical Memory\.translated 

Will look at that tomorrow!

Thanks

Sebastien

Original comment by sebastie...@gmail.com on 10 Jul 2012 at 12:12

GoogleCodeExporter commented 9 years ago
Michael,

Here's the first 512 bytes of my vmem file:

000000000  53 FF 00 F0 53 FF 00 F0 C3 E2 00 F0 53 FF 00 F0  Sÿ.ðSÿ.ðÃâ.ðSÿ.ð
000000010  53 FF 00 F0 54 FF 00 F0 4A 82 00 F0 53 FF 00 F0  Sÿ.ðTÿ.ðJ‚.ðSÿ.ð
000000020  A5 FE 00 F0 87 E9 00 F0 01 0B 00 F0 01 0B 00 F0  ¥þ.ð‡é.ð...ð...ð
000000030  01 0B 00 F0 01 0B 00 F0 57 EF 00 F0 50 F5 00 F0  ...ð...ðWï.ðPõ.ð
000000040  00 0B 00 C0 4D F8 00 F0 41 F8 00 F0 2E 01 00 C8  ...ÀMø.ðAø.ð...È
000000050  39 E7 00 F0 59 F8 00 F0 5B 22 30 EA D2 EF 00 F0  9ç.ðYø.ð["0êÒï.ð
000000060  59 FF 00 F0 F2 E6 00 F0 6E FE 00 F0 53 FF 00 F0  Yÿ.ðòæ.ðnþ.ðSÿ.ð
000000070  53 FF 00 F0 A4 F0 00 F0 57 80 00 F0 24 13 00 C0  Sÿ.ð¤ð.ðW€.ð$..À
000000080  01 0B 00 F0 01 0B 00 F0 01 0B 00 F0 01 0B 00 F0  ...ð...ð...ð...ð
000000090  01 0B 00 F0 01 0B 00 F0 01 0B 00 F0 01 0B 00 F0  ...ð...ð...ð...ð
0000000A0  01 0B 00 F0 01 0B 00 F0 01 0B 00 F0 01 0B 00 F0  ...ð...ð...ð...ð
0000000B0  01 0B 00 F0 01 0B 00 F0 01 0B 00 F0 01 0B 00 F0  ...ð...ð...ð...ð
0000000C0  01 0B 00 F0 01 0B 00 F0 01 0B 00 F0 01 0B 00 F0  ...ð...ð...ð...ð
0000000D0  01 0B 00 F0 01 0B 00 F0 01 0B 00 F0 01 0B 00 F0  ...ð...ð...ð...ð
0000000E0  01 0B 00 F0 01 0B 00 F0 01 0B 00 F0 01 0B 00 F0  ...ð...ð...ð...ð
0000000F0  01 0B 00 F0 01 0B 00 F0 01 0B 00 F0 01 0B 00 F0  ...ð...ð...ð...ð
000000100  59 EC 00 F0 01 0B 00 F0 65 F0 00 F0 51 26 00 C0  Yì.ð...ðeð.ðQ&.À
000000110  01 0B 00 F0 01 0B 00 F0 01 0B 00 F0 01 0B 00 F0  ...ð...ð...ð...ð
000000120  01 0B 00 F0 01 0B 00 F0 01 0B 00 F0 01 0B 00 F0  ...ð...ð...ð...ð
000000130  01 0B 00 F0 01 0B 00 F0 01 0B 00 F0 01 0B 00 F0  ...ð...ð...ð...ð
000000140  01 0B 00 F0 01 0B 00 F0 01 0B 00 F0 01 0B 00 F0  ...ð...ð...ð...ð
000000150  01 0B 00 F0 01 0B 00 F0 01 0B 00 F0 01 0B 00 F0  ...ð...ð...ð...ð
000000160  01 0B 00 F0 01 0B 00 F0 01 0B 00 F0 01 0B 00 F0  ...ð...ð...ð...ð
000000170  01 0B 00 F0 01 0B 00 F0 01 0B 00 F0 01 0B 00 F0  ...ð...ð...ð...ð
000000180  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
000000190  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
0000001A0  01 0B 00 F0 01 0B 00 F0 01 0B 00 F0 01 0B 00 F0  ...ð...ð...ð...ð
0000001B0  01 0B 00 F0 30 0B 00 C0 01 0B 00 F0 01 0B 00 F0  ...ð0..À...ð...ð
0000001C0  32 F2 00 F0 32 F6 00 F0 01 0B 00 F0 01 0B 00 F0  2ò.ð2ö.ð...ð...ð
0000001D0  9E 12 30 EA 37 82 00 F0 01 0B 00 F0 3B 2E 30 EA  ž.0ê7‚.ð...ð;.0ê
0000001E0  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
0000001F0  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................

Also, here's what my physical address space looks like (data comes from 
HKLM\HARDWARE\RESOURCEMAP\System Resources and the device manager). There are 3 
gaps that I could not identify.

Loader Reserved     0           A0000
PCI Bus             A0000       BFFFF
???                 C0000       CA000
Loader Reserved     CA000       CC000
PCI Bus             CC000       CFFFF
PCI Bus             D0000       D3FFF
PCI Bus             D4000       D7FFF
PCI Bus             D8000       DBFFF
Loader Reserved     DC000       E4000
PCI Bus             E4000       E7FFF
Loader Reserved     E8000       100000
Physical Memory     100000      BFEF0000
Loader Reserved     BFEF0000    BFF00000
Physical Memory     BFF00000    C0000000
PCI Bus             C0000000    FEBFFFFF
Loader Reserved     FEC00000    FEC10000
ACPI x64 Isa        FEC00000    FEC00400
???                 FEC00400    FEE00000
Loader Reserved     FEE00000    FEE01000
???                 FEE01000    FFFE0000
Loader Reserved     FFFE0000    100000000
Physical Memory     100000000   1B7000000

In the book Windows Internals, 5th edition (p. 820), Mark says:
" [...] physical address map includes not only RAM but device memory, and x86 
and x64 systems typically map all device memory below the 4GB address boundary 
to remain compatible with 32-bit operating system [...] If a system has 4 GB of 
RAM and devices such as video, audio, and network adapters that implement 
windows into their device memory that sum to 500 MB, 500 MB of the 4GB of RAM 
will reside above the 4 GB address boundary [...]"

I think the vmem file contains the RAM but doesn't include device memory. 
Virtual-to-physical address translation done by Windows and Volatility is 
performed on the physical address space (including device memory), not only on 
RAM addresses. 

When I acquire memory with FastDump Pro or win64dd, my memory dump is 7024MB 
(even if I only have 6GB of RAM installed) and I am able to analyze it with 
Volatility. 

Cheers,

Sebastien

Original comment by sebastie...@gmail.com on 10 Jul 2012 at 4:40

GoogleCodeExporter commented 9 years ago
Thanks Sebastien. I'm just popping my head in to update the description now 
that we know it's vmware-specific. 

Original comment by michael.hale@gmail.com on 11 Jul 2012 at 1:54

GoogleCodeExporter commented 9 years ago
Hey guys, 

Here's new information about the problem.

I suspect the address 0xC0000000 is hardcoded in Vmware for 64-bit VMs. Maybe 
this behavior is related to the memoryHotplug feature in Vmware (a feature to 
add new memory and CPUs while the VM is running). 

In the vmware.log file (located in the virtual machine folder), Vmware seems to 
know there's a gap in memory:

[…]
Aug 06 14:29:29.877: vmx| memoryHotplug: Current size = 4500MB, Minimum size = 4500MB, Maximum size = 4500MB
Aug 06 14:29:29.877: vmx| memoryHotplug: Entry[0]: 00000000000000A0-00000000000A0000
Aug 06 14:29:29.877: vmx| memoryHotplug: Entry[1]: 00000000001000A0-00000000C0000000
Aug 06 14:29:29.877: vmx| memoryHotplug: Entry[2]: 00000001000000A0-0000000159400000
[…]

(Note: This log comes from another VM, not the 5GB or 6GB one. This time, it's 
a Vista 64bit VM with 4.5GB.)

In vmware-tools for Linux (C:\Program Files (x86)\VMware\VMware 
Workstation\linux.iso), the script 
vmware-tools-distrib\bin\vmware-config-tools.pl contains these lines:

    if (is64BitKernel()) {
      $gSystem{'page_offset'} = '0000010000000000';
    } else {
      $gSystem{'page_offset'} = 'C0000000';
    }

There's also this KB from Vmware that talks about it:
http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1898&sliceId=1&docTypeID=DT_KB_1_1&dialogID=193274562&stateId=0%200%20193276701

    "Note: PageOffset is C0000000 for 32-bit kernels, or 0000010000000000 for 64-bit kernels."

Maybe this explains why I need to pad my vmem file between 0xC0000000 - 
0x100000000?

Sebastien

Original comment by sebastie...@gmail.com on 9 Aug 2012 at 3:04

GoogleCodeExporter commented 9 years ago
Hey Sebastien, 

Great detective work. It's really useful that you've found this information, 
especially for when we integrate the vmss/vmsn address spaces from issue #288. 
I suppose if vmware hard-codes the ranges in their own tools, we can feel 
relatively comfortable hard-coding them in an address space (at least until 
there's a reason not to). 

So hmm, with the vmss/vmsn address spaces from issue #288 it would be pretty 
simple to configure the address space to skip/pad the ranges. However, with the 
vmem of yours (which looks just like any other raw memory dump and so won't end 
up using the vmss/vmsn AS), it may be a little tricky. I wonder how we'll 
configure the AS to pad the range for vmem but not for other raw memory dumps. 

Anybody have thoughts on that?

Original comment by michael.hale@gmail.com on 9 Aug 2012 at 6:43

GoogleCodeExporter commented 9 years ago
Hey guys, 

I think I have found a way to deal with >4GB vmem files. Thanks to Andrew from 
Pikewerks for putting me on the right track.

With VmWare Workstation, I use two methods to create my vmem files:
  1- Suspend my VM
  2- Take a snapshot

In the first case, a vmss file is created with a raw vmem file. In the second 
case, a vmsn file is created. 

Vmss/vmsn files contain information about the vmem regions. If the vmem file 
needs to be padded before analysis, RegionsCount in the vmsn/vmss file will not 
be 0.

Here's an example with a 5GB Virtual Machine. The vmss file contains:

0001D420  72 65 67 69 6F 6E 73 43 6F 75 6E 74 02 00 00 00  regionsCount....
0001D430  44 0D 72 65 67 69 6F 6E 50 61 67 65 4E 75 6D 00  D.regionPageNum.
0001D440  00 00 00 00 00 00 00 44 09 72 65 67 69 6F 6E 50  .......D.regionP
0001D450  50 4E 00 00 00 00 00 00 00 00 44 0A 72 65 67 69  PN........D.regi
0001D460  6F 6E 53 69 7A 65 00 00 00 00 00 00 0C 00 44 0D  onSize........D.
0001D470  72 65 67 69 6F 6E 50 61 67 65 4E 75 6D 01 00 00  regionPageNum...
0001D480  00 00 00 0C 00 44 09 72 65 67 69 6F 6E 50 50 4E  .....D.regionPPN
0001D490  01 00 00 00 00 00 10 00 44 0A 72 65 67 69 6F 6E  ........D.region
0001D4A0  53 69 7A 65 01 00 00 00 00 88 07 00 04 18 4D 61  Size.....ˆ....Ma

When we extract the data from those _VMWARE_TAG structures, we have:

RegionsCount: 02 00 00 00 
regionPageNum: 00 00 00 00 00 00 00 00 
regionPPN: 00 00 00 00 00 00 00 00 
regionSize: 00 00 00 00 00 00 0C 00

regionPageNum: 01 00 00 00 00 00 0C 00
regionPPN: 01 00 00 00 00 00 10 00 
regionSize:  01 00 00 00 00 88 07 00

That means:

The RegionsCount is 2. 

The first region starts at 0 (regionPPN) and the size of this region is 
0xC0000000 (regionSize). 
The second region starts at 0x100000000 (second regionPPN) and the size is 
0x78800000 (second regionSize).

So in that case, the total physical address space of my VM is 0x178800000 (6 
316 621 824 bytes). The regions are

region[0]: start=0 end=c0000000.
region[1]: start=100000000 end=178800000.

In this example, my Vmem file size is only 5 242 880 000 bytes. With the 
information contained in the vmss file, we can pad the vmem file (or maybe use 
a volatility address space) from range 0xC0000000 to 0x100000000 (i.e: 1 073 
741 824 bytes). 
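
Based on those tag values, here is a minimal sketch of the run translation such an address space would have to do. The decoding (PPN, page number in the file, and size are all page counts) follows the interpretation above; names and structure are purely illustrative, not the actual vmware.py address space.

    PAGE = 0x1000

    # (physical start, offset in the vmem file, size) in bytes, decoded from the vmss tags:
    #   region 0: regionPPN 0x0,      regionPageNum 0x0,     regionSize 0xC0000 pages
    #   region 1: regionPPN 0x100000, regionPageNum 0xC0000, regionSize 0x78800 pages
    regions = [
        (0x0      * PAGE, 0x0     * PAGE, 0xC0000 * PAGE),
        (0x100000 * PAGE, 0xC0000 * PAGE, 0x78800 * PAGE),
    ]

    def phys_to_file_offset(paddr):
        '''Map a physical address to an offset in the raw vmem, or None if it falls in a hole.'''
        for phys_start, file_offset, size in regions:
            if phys_start <= paddr < phys_start + size:
                return file_offset + (paddr - phys_start)
        return None   # e.g. anywhere in the 0xC0000000 - 0x100000000 device-memory gap

    # e.g. phys_to_file_offset(0x174a4a800) -> 0x134a4a800: still inside the 5 GB vmem
    # file, even though the physical address itself is above 4 GB.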

The _VMWARE_TAG structure related to this problem can already be parsed by 
Volatility:

https://code.google.com/p/volatility/source/browse/trunk/volatility/plugins/addrspaces/vmware.py#183

Is there a way to use the vmware address space to also analyze vmem files over 
4GB?

Sebastien

Original comment by sebastie...@gmail.com on 11 Apr 2013 at 9:20

GoogleCodeExporter commented 9 years ago
Hey Sebastien, 

Thanks for the extra info. So in the first case when you Suspend the VM and it 
creates both a vmss and vmem file, how big is the vmss? Does the vmss *only* 
contain metadata for the vmem file or does the vmss file also contain the raw 
memory runs? The vmss files I've seen contain both the metadata and the raw 
memory runs (i.e. same content as the vmem) so you can just analyze the vmss in 
volatility. 

If your vmss *only* contains metadata, can you tell me what version of 
Workstation you're using? Host OS is Windows? 

Original comment by michael.hale@gmail.com on 12 Apr 2013 at 2:31

GoogleCodeExporter commented 9 years ago
Michael,

My vmss file was approximately 170 MB (I'm not at my office today, so I don't 
have the exact size with me).

The vmss file does not contain the raw memory dump. However, because it's 
170MB, I don't think it contains only metadata. Maybe it also contains 
something related to hardware memory state? Not sure about this... 

Yesterday I looked at different vmss/vmsn files that were created by VmWare 
Workstation 7, 8 and 9 (under Windows). 
Most of my virtual machines were built with VmWare Workstation 7, and 
snapshots/suspended states were taken with Workstation 7, 8 or 9. However, I'm 
not sure if the virtual hardware was updated to Workstation 8 and 9 for all my 
VMs...   

Sebastien

P.S: I have also seen vmss files that contain both the metadata and the raw 
memory, but in my case, those were created with ESX. 

Original comment by sebastie...@gmail.com on 12 Apr 2013 at 2:47

GoogleCodeExporter commented 9 years ago
Yes it's me again with that topic :)

This time, I would like to submit a solution to the problem. Here's a vmem 
address space for Volatility.

I'm not a python guru, so comments are welcome!

Example on how to use it:

python vol.py -f memory.vmem --vm_metadata=metadata.vmss pslist

Also attached are some tests I did with Volatility SVN 3460 and multiple 
versions of VmWare Workstation. 

The tests are basic: I checked whether pslist works with and without the 
address space.  

Regards,

Sebastien

Original comment by sebastie...@gmail.com on 1 Aug 2013 at 4:15

Attachments:

GoogleCodeExporter commented 9 years ago
All set with this...great work Sebastien. We will speak further via email and 
this will be implemented in the next release! 

Original comment by michael.hale@gmail.com on 14 Mar 2014 at 4:02