ppd6016 / volatility

Automatically exported from code.google.com/p/volatility
GNU General Public License v2.0

vad tree parsing on x64 wow64 processes [use get_available_ranges?] #306

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
While using the yarascan plugin on a Win7 x64 image, Scudette noticed the vol 
process consuming 5.5 GB of memory (which led to it being killed for running 
out of memory). He narrowed it down to this code:

"""
http://code.google.com/p/volatility/source/browse/trunk/volatility/plugins/malware/malfind.py#343

for vad, data in task.get_vads():
   for hit in rules.match(data = data):

Which calls:
   for vad in self.VadRoot.traverse():
...
      data = process_space.zread(vad.Start, vad.End - vad.Start + 1)

i.e. there is no bound on the size of this buffer, and if the vad covers a 
large range, this reads it all in at once.
"""

So there is a vad node in a process claiming to be so large that it consumes 
(at least) 5.5 gb of memory during the read. I added a debug statement in 
vaddump and traced the problem back to the following vad node in process 1892 
(iexplore.exe) of the suspect memory dump:

$ python vol.py -f ~/Desktop/win7_trial_64bit.raw --profile=Win7SP0x64 vaddump -D out/ -p 1892
[snip]
0x7efd8000L 0x7efdafffL 0x3000L
0x7efd5000L 0x7efd7fffL 0x3000L
0x7efdb000L 0x7efddfffL 0x3000L
0x7efdf000L 0x7efdffffL 0x1000L
0x7efe0000L 0x7f0dffffL 0x100000L
0x7ffe0000L 0x7ffeffffL 0x10000L
0x7fff0000L 0x7fffffeffffL 0x7ff80000000L <=== This is where it "hangs"

The vad node claims to be 0x7ff80000000L bytes (about 8 TB).
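A quick sketch of why this blows up: zread is asked for End - Start + 1 bytes in one shot, so this single node requests an ~8 TB buffer. Reading in bounded chunks keeps memory flat. This is a hedged sketch, not Volatility's actual API: `zread` here is any callable taking (offset, length) and returning bytes, standing in for the address-space method.

```python
# Sketch of a bounded alternative to the one-shot read. "zread" is a
# stand-in for the address space's read method: (offset, length) -> bytes.
CHUNK_SIZE = 1024 * 1024  # read 1 MB at a time so memory use stays flat

def read_vad_chunked(zread, start, end, chunk_size=CHUNK_SIZE):
    """Yield the inclusive range [start, end] in bounded chunks."""
    offset = start
    while offset <= end:
        length = min(chunk_size, end - offset + 1)
        yield zread(offset, length)
        offset += length

# The suspect node's size, computed the way vaddump does:
size = 0x7fffffeffff - 0x7fff0000 + 1
assert size == 0x7ff80000000       # matches the value printed above
assert size // (1 << 40) == 7      # i.e. just under 8 TB
```

Chunking does not help vaddump's output-file size, of course; it only bounds the process's own memory use during the read.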

Other processes in this memory dump that have the same issue are 2820 (also 
iexplore.exe) and 2860 (DumpIt.exe).

$ python vol.py -f ~/Desktop/win7_trial_64bit.raw --profile=Win7SP0x64 vaddump -D out/ -p 2820
[snip]
0x7efd8000L 0x7efdafffL 0x3000L
0x7efd5000L 0x7efd7fffL 0x3000L
0x7efdb000L 0x7efddfffL 0x3000L
0x7efdf000L 0x7efdffffL 0x1000L
0x7efe0000L 0x7f0dffffL 0x100000L
0x7ffe0000L 0x7ffeffffL 0x10000L
0x7fff0000L 0x7fffffeffffL 0x7ff80000000L

$ python vol.py -f ~/Desktop/win7_trial_64bit.raw --profile=Win7SP0x64 vaddump -D out/ -p 2860
[snip]
0x7efde000L 0x7efdefffL 0x1000L
0x7efd8000L 0x7efdafffL 0x3000L
0x7efdb000L 0x7efddfffL 0x3000L
0x7efdf000L 0x7efdffffL 0x1000L
0x7efe0000L 0x7f0dffffL 0x100000L
0x7ffe0000L 0x7ffeffffL 0x10000L
0x7fff0000L 0x7fffffeffffL 0x7ff80000000L

The one thing that all 3 of these processes have in common is they're Wow64:

$ python vol.py -f ~/Desktop/win7_trial_64bit.raw --profile=Win7SP0x64 pslist
Offset(V)          Name                    PID   PPID   Thds     Hnds   Sess Wow64
------------------ -------------------- ------ ------ ------ -------- ------ -----
0xfffffa8000a03b30 rundll32.exe           2016    568      3       67      1     0
0xfffffa8000a4f630 svchost.exe            1432    428     12      350      0     0
0xfffffa8000999780 iexplore.exe           1892   1652     19      688      1     1  <=== WOW64
0xfffffa80010c9060 iexplore.exe           2820   1892     23      733      1     1  <=== WOW64
0xfffffa8001016060 DumpIt.exe             2860   1652      2       42      1     1  <=== WOW64
0xfffffa8000acab30 conhost.exe            2236    344      2       51      1     0

Just to eliminate this being a "bad" memory dump, I checked one of my own Win7 
x64 systems and went directly to a WOW64 process:

$ python vol.py -f ~/Desktop/memory/win7x64cmd.dd --profile=Win7SP1x64 vaddump -D out -p 1632
[snip]
0x7efdb000L 0x7efddfffL 0x3000L
0x7efdf000L 0x7efdffffL 0x1000L
0x7efe0000L 0x7f0dffffL 0x100000L
0x7ffe0000L 0x7ffeffffL 0x10000L
0x7fff0000L 0x7fffffeffffL 0x7ff80000000L <=== ACK!

OK, so it's more than just a problem with Scudette's image. The next thing I 
wanted to see is how WinDbg parses the VAD tree. On a Vista x64 system I started 
iexplore.exe and froze it in a debugger, so it cannot allocate/free memory 
while I'm working.

kd> !process 0 0 iexplore.exe
PROCESS fffffa80016bf040
    SessionId: 1  Cid: 0bc8    Peb: 7efdf000  ParentCid: 05b4
    DirBase: 189b3000  ObjectTable: fffff880077d8630  HandleCount: 389.
    Image: iexplore.exe

kd> !vad fffffa80016bf040 + 380
VAD             level      start      end    commit
[snip]
fffffa8002990be0 ( 8)      7efdf    7efdf         1 Private      READWRITE
fffffa80016221e0 ( 9)      7efe0    7f0df         0 Mapped       READONLY
    Pagefile-backed section
fffffa8001667b10 ( 6)      7f0e0    7ffdf         0 Private      READONLY
fffffa80016d4430 ( 7)      7ffe0    7ffef        -1 Private      READONLY
fffffa800482a920 ( 8)      7fff0 7fffffef        -1 Private      READONLY   <==== HERE it is

As you can see, there's an MMVAD at fffffa80016d4430 which is for the region 
7ffe0 to 7ffef. That sounds OK. Let's take a look at its full structure:

kd> dt _MMVAD fffffa80016d4430
nt!_MMVAD
   +0x000 u1               : <unnamed-tag>
   +0x008 LeftChild        : (null)
   +0x010 RightChild       : 0xfffffa80`0482a920 _MMVAD
   +0x018 StartingVpn      : 0x7ffe0
   +0x020 EndingVpn        : 0x7ffef
[snip]
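Note that StartingVpn and EndingVpn are virtual page numbers, not byte addresses; the byte ranges vaddump prints come from shifting by the 4 KB page size. A small sketch of the conversion (function name is mine, not Volatility's):

```python
PAGE_SHIFT = 12  # 4 KB pages on x86/x64

def vpn_to_byte_range(starting_vpn, ending_vpn):
    """Turn a VAD's VPN pair into an inclusive (start, end) byte range."""
    start = starting_vpn << PAGE_SHIFT
    end = ((ending_vpn + 1) << PAGE_SHIFT) - 1
    return start, end

# The sane node at fffffa80016d4430:
assert vpn_to_byte_range(0x7ffe0, 0x7ffef) == (0x7ffe0000, 0x7ffeffff)
# The suspect node maps to the ~8 TB range seen in vaddump:
assert vpn_to_byte_range(0x7fff0, 0x7fffffef) == (0x7fff0000, 0x7fffffeffff)
```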

The first thing to verify is whether or not there is really an allocation at 
7ffe0 in the process. It can be confirmed with SysInternals VMMap. Logs are 
shown below. NOTE: 7ffe0 is the highest allocation shown by VMMap.

[snip]
"7FFE0000" "Private Data" "64" "4" "4" "4" "" "4" "4" "" "2" "Read" ""
"  7FFE0000" "Private Data" "4" "4" "4" "4" "" "4" "4" "" "" "Read" ""
"  7FFE1000" "Private Data" "60" "" "" "" "" "" "" "" "" "Reserved" ""
[END]

The other thing is the RightChild member points to 0xfffffa80`0482a920 - and 
that is the one whose addresses seem off:

kd> dt _MMVAD fffffa800482a920
nt!_MMVAD
   +0x000 u1               : <unnamed-tag>
   +0x008 LeftChild        : (null)
   +0x010 RightChild       : (null)
   +0x018 StartingVpn      : 0x7fff0
   +0x020 EndingVpn        : 0x7fffffef  <===== ACK!!
   +0x028 u                : <unnamed-tag>
   +0x030 PushLock         : _EX_PUSH_LOCK
   +0x038 u5               : <unnamed-tag>
   +0x040 u2               : <unnamed-tag>
   +0x048 Subsection       : 0xfffffa80`010997c0 _SUBSECTION
   +0x048 MappedSubsection : 0xfffffa80`010997c0 _MSUBSECTION
   +0x050 FirstPrototypePte : (null)
   +0x058 LastContiguousPte : 0xffffffff`ffffffff _MMPTE

So now we know the parsing problem is not necessarily with volatility. WinDbg 
shows the same nodes in the tree. The last "valid" MMVAD does in fact have a 
RightChild that points to the invalid MMVAD. However, VMMap is smart enough to 
know that 7ffe0 is the highest allocation, and despite 0x7fff0 being that 
node's RightChild, it doesn't display the 0x7fff0 one.

Here are the two entries in volatility's vadinfo:

VAD node @ 0xfffffa80016d4430 Start 0x000000007ffe0000 End 0x000000007ffeffff 
Tag Vadl
Flags: CommitCharge: 2251799813685247, NoChange: 1, PrivateMemory: 1, 
Protection: 1
Protection: PAGE_READONLY
Vad Type: VadNone
First prototype PTE: fffffa80028a13b0 Last contiguous PTE: fffff800019e22c8
Flags2: LongVad: 1, OneSecured: 1

VAD node @ 0xfffffa800482a920 Start 0x000000007fff0000 End 0x000007fffffeffff 
Tag Vadl
Flags: CommitCharge: 2251799813685247, NoChange: 1, PrivateMemory: 1, 
Protection: 1
Protection: PAGE_READONLY
Vad Type: VadNone
First prototype PTE: 00000000 Last contiguous PTE: ffffffffffffffff
Flags2: LongVad: 1, OneSecured: 1

So I'm wondering if we need to have a constraint that checks whether the ending 
vad address is greater than 0xFFFFFFFF (32 bits) on WOW64 processes. Actually, 
since vads describe process memory, and 32-bit process memory stops at 
0x7FFFFFFF, we might use that as the constraint instead? 

Any other ideas on how to address this problem?

Original issue reported on code.google.com by michael.hale@gmail.com on 18 Jul 2012 at 5:18

GoogleCodeExporter commented 9 years ago
Hmm, maybe it's not even "invalid" per se. It seems quite ironic that the very 
last vad entry in all wow64 processes is in the range 0x7fff0 - 0x7fffffef. 
So it can't be an accident, can it? 

Original comment by michael.hale@gmail.com on 18 Jul 2012 at 5:24

GoogleCodeExporter commented 9 years ago
Marked this as release blocking because it will prevent vaddump, malfind, and 
yarascan from working on x64 dumps. 

Original comment by michael.hale@gmail.com on 18 Jul 2012 at 5:25

GoogleCodeExporter commented 9 years ago
From http://www.osronline.com/showthread.cfm?link=162289

"""
Today, there is a single VAD above the highest 32-bit application address for a 
given process that helps to prevent stray allocations from going into that 
space:

fffffa80029a9010 ( 5)      fffe0 7fffffef        -1 Private      READONLY 

How this works is subject to change and it's possible that there may one day be 
other reservations or allocations to support Wow64.  Unless you're 
manually grunging around in the VAD tree or doing other undocumented and unsafe 
operations which you really shouldn't be doing, you should be insulated from 
that by the above contract.
"""

Original comment by michael.hale@gmail.com on 18 Jul 2012 at 5:32

GoogleCodeExporter commented 9 years ago
OK, getting a little closer. From http://www.mista.nu/research/nullpage.pdf

"""
Fortunately, the memory manager has a special flag that prevents memory from 
being committed. If VadFlags.CommitCharge is set to MM_MAX_COMMIT (0x7ffff on 
x86 or 0x7ffffffffffff on x64), any attempt at committing memory in the range 
will result in a STATUS_CONFLICTING_ADDRESSES error.
"""

OK so considering that, it would be easy enough to check VadFlags.CommitCharge 
== MM_MAX_COMMIT and then make vaddump not attempt to extract that region. But 
the part I don't get is why *both* of these regions have MM_MAX_COMMIT (-1 in 
the commit column):

(pasted from the output above)

kd> !vad fffffa80016bf040 + 380
VAD             level      start      end    commit
[snip]
fffffa8002990be0 ( 8)      7efdf    7efdf         1 Private      READWRITE
fffffa80016221e0 ( 9)      7efe0    7f0df         0 Mapped       READONLY
    Pagefile-backed section
fffffa8001667b10 ( 6)      7f0e0    7ffdf         0 Private      READONLY
fffffa80016d4430 ( 7)      7ffe0    7ffef        -1 Private      READONLY   <--- this is MM_MAX_COMMIT
fffffa800482a920 ( 8)      7fff0 7fffffef        -1 Private      READONLY   <--- this is MM_MAX_COMMIT

And sure enough, in 7ffe0 there is accessible memory, so it must have been 
committed at some point, no?

kd> dd 0x000000007ffe0000
00000000`7ffe0000  00000000 0fa00000 8d99c789 000034f0
00000000`7ffe0010  000034f0 3c9fca2c 01cd0202 01cd0202
00000000`7ffe0020  ac5ed800 0000003a 0000003a 86648664
00000000`7ffe0030  003a0043 0057005c 006e0069 006f0064
00000000`7ffe0040  00730077 00000000 00000000 00000000
00000000`7ffe0050  00000000 00000000 00000000 00000000
00000000`7ffe0060  00000000 00000000 00000000 00000000

Original comment by michael.hale@gmail.com on 18 Jul 2012 at 5:47

GoogleCodeExporter commented 9 years ago
So I think this can be fixed by using the discontiguous scanner on vad nodes as 
well - although the vad range will be huge, the scanner will only read the 
allocated ranges in the page table.

It seems that the purpose of this last allocation is to make sure the wow64 
process does not write or read outside this range (e.g. by running 64-bit mov 
instructions directly). I suspect that range is not mapped into the 
address space, thus forcing a page fault on invalid access. If this is the case, 
a discontiguous scanner will never see this range anyway.

There are a number of cases where we read the entire vad range into memory at 
once - this is bad since it means we use a lot of memory in our 
own process. Notably the get_vad() method does this.
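The discontiguous idea can be sketched like this. Hedged: `available_ranges` stands in for whatever the AS exposes (e.g. the get_available_ranges work mentioned in the issue title), and the exact API may differ.

```python
def read_mapped_only(read, available_ranges, vad_start, vad_end):
    """Yield (offset, data) only for ranges actually present in the
    address space, clipped to the VAD bounds. `available_ranges` is an
    iterable of (offset, length) pairs of mapped memory and `read` is a
    callable (offset, length) -> bytes; both are stand-ins for the
    address-space interface."""
    for offset, length in available_ranges:
        lo = max(offset, vad_start)
        hi = min(offset + length - 1, vad_end)
        if lo <= hi:
            yield lo, read(lo, hi - lo + 1)

# For the 8 TB guard VAD nothing is mapped inside it, so nothing is read:
mapped = [(0x7ffe0000, 0x1000)]  # say only KUSER_SHARED_DATA is present
hits = list(read_mapped_only(lambda o, n: b"\x00" * n, mapped, 0x7fff0000, 0x7fffffeffff))
assert hits == []
```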

I am currently refactoring this code a little and will have the patch ready 
later today with the discontiguous scanner included for issue 304.

Original comment by scude...@gmail.com on 18 Jul 2012 at 6:05

GoogleCodeExporter commented 9 years ago
Yeah there are no pages in the 7fff0 - 7fffffef range in the process's address 
space. You can see with memmap it skips right over that region:

0x000000007efe3000 0x000000000592d000             0x1000          0x202a000
0x000000007efe4000 0x0000000004be9000             0x1000          0x202b000
0x000000007ffe0000 0x00000000001e3000             0x1000          0x202c000 
0x0000f68000000000 0x000000000f566000             0x1000          0x202d000
0x0000f68000001000 0x000000000e38a000             0x1000          0x202e000
0x0000f68000002000 0x000000000e245000             0x1000          0x202f000
0x0000f68000003000 0x0000000016b2c000             0x1000          0x2030000

The scanner enhancement sounds great. We'll still need some way to fix vaddump 
though, because it uses zread and so the output file will be 8 TB whether the 
pages are there or not. 

Up until now, I thought process_space.zread(vad.Start, vad.End - vad.Start + 1) 
was the best way to acquire a vad region (I actually copied that from your 
branch ;-)) but yeah, I suppose we didn't account for possibly huge vads. The 
old way was something like this:

http://code.google.com/p/volatility/source/browse/branches/Volatility-2.0.1/volatility/plugins/vadinfo.py#197

The placement of that code wasn't the best since it couldn't be used outside of 
the vaddump plugin. Plus it appears to do everything zread does. The 
interesting thing is that, although both methods would result in an 8 TB output 
file, the old method doesn't consume 5.5 GB of memory (it stays around 130-200 MB 
on my system). 

Thanks for discussing and helping with the refactors - they are much needed and 
appreciated ;-)

Original comment by michael.hale@gmail.com on 18 Jul 2012 at 6:30

GoogleCodeExporter commented 9 years ago
So toward solving the vaddump issue (at least temporarily), here is a potential 
patch (all criticism welcome ;-))

Index: volatility/plugins/vadinfo.py
===================================================================
--- volatility/plugins/vadinfo.py   (revision 2070)
+++ volatility/plugins/vadinfo.py   (working copy)
@@ -281,6 +281,11 @@
                     self._config.DUMP_DIR, "{0}.{1:x}.{2}-{3}.dmp".format(
                     name, offset, vad_start, vad_end))
 
+                if task.IsWow64:
+                    if vad.u.VadFlags.CommitCharge == 0x7ffffffffffff:
+                        outfd.write("Skipping {0} - {1}\n".format(vad_start, vad_end))
+                        continue
+
                 f = open(path, 'wb')
                 if f:
                     range_data = task_space.zread(vad.Start, vad.End - vad.Start + 1)

When run, it displays something like this:

$ python vol.py -f win7_trial_64bit.raw --profile=Win7SP0x64 vaddump -D out -p 1892
Volatile Systems Volatility Framework 2.1_rc1
Pid:   1892
************************************************************************
Skipping 0x000000007ffe0000 - 0x000000007ffeffff 
Skipping 0x000000007fff0000 - 0x000007fffffeffff  <=== The "bad" one

So the big disadvantage is it skips that other vad range starting at 
0x000000007ffe0000. Here's another potential patch that checks the vad ending 
address: 

Index: volatility/plugins/vadinfo.py
===================================================================
--- volatility/plugins/vadinfo.py   (revision 2070)
+++ volatility/plugins/vadinfo.py   (working copy)
@@ -281,6 +281,12 @@
                     self._config.DUMP_DIR, "{0}.{1:x}.{2}-{3}.dmp".format(
                     name, offset, vad_start, vad_end))
 
+                if task.IsWow64:
+                    if (vad.u.VadFlags.CommitCharge == 0x7ffffffffffff and
+                            vad.End > 0x7fffffff):
+                        outfd.write("Skipping {0} - {1}\n".format(vad_start, vad_end))
+                        continue
+
                 f = open(path, 'wb')
                 if f:
                     range_data = task_space.zread(vad.Start, vad.End - vad.Start + 1)

When run this one shows:

$ python vol.py -f win7_trial_64bit.raw --profile=Win7SP0x64 vaddump -D out -p 1892
Volatile Systems Volatility Framework 2.1_rc1
Pid:   1892
************************************************************************
Skipping 0x000000007fff0000 - 0x000007fffffeffff

In that case, it only skips the one bad range, and we can still gather the one 
at 0x000000007ffe0000 despite it being marked as MM_MAX_COMMIT. 

$ xxd out/iexplore.exe.17b99780.0x000000007ffe0000-0x000000007ffeffff.dmp
0000000: 0000 0000 27a0 990f 9006 974b 0400 0000  ....'......K....
0000010: 0400 0000 a0a1 8a2d 55f1 cc01 55f1 cc01  .......-U...U...
0000020: 0040 230e 4300 0000 4300 0000 6486 6486  .@#.C...C...d.d.
0000030: 4300 3a00 5c00 5700 6900 6e00 6400 6f00  C.:.\.W.i.n.d.o.
0000040: 7700 7300 0000 0000 0000 0000 0000 0000  w.s.............
0000050: 0000 0000 0000 0000 0000 0000 0000 0000  ................
[snip]

Likes/dislikes about either patch? 

Original comment by michael.hale@gmail.com on 18 Jul 2012 at 6:55

GoogleCodeExporter commented 9 years ago
Hey guys, so after discussing this with a friend, I think the 2nd patch is the 
best way to go so far. In particular, we'll be doing something like this 
(pseudo code):

if process.is_wow_64:
    if vad.commit_charge == mm_max_commit and vad.end > 0x7FFFFFFF:
        continue_but_report
dump_vad_node
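That pseudo code boils down to a one-line predicate. A sketch (names are illustrative, mirroring the second patch rather than Volatility's actual attributes):

```python
MM_MAX_COMMIT = 0x7ffffffffffff   # x64 value discussed above
MAX_32BIT_END = 0x7FFFFFFF        # where 32-bit user mode normally stops

def should_skip_vad(is_wow64, commit_charge, vad_end):
    """Skip only the no-commit guard region above the 32-bit boundary
    in a Wow64 process; everything else gets dumped."""
    return (is_wow64
            and commit_charge == MM_MAX_COMMIT
            and vad_end > MAX_32BIT_END)

# KUSER_SHARED_DATA (ends at 0x7ffeffff) is still dumped:
assert not should_skip_vad(True, MM_MAX_COMMIT, 0x7ffeffff)
# The 8 TB guard region is skipped:
assert should_skip_vad(True, MM_MAX_COMMIT, 0x7fffffeffff)
# Native x64 processes are never affected:
assert not should_skip_vad(False, MM_MAX_COMMIT, 0x7fffffeffff)
```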

I would classify our possible scenarios into 4 classes:

1) processes on x86 systems
2) 64bit processes on 64bit systems
3) 32bit processes on 64bit systems (wow64)
4) 32bit processes on 64bit systems (wow64) but large address aware (4GT) [1]

And now a quick discussion of each case. I'll speak strictly in terms of how 
the patch will affect each case. 

1). processes on x86 systems

This patch will not affect x86 at all due to the "process.is_wow_64" check 

2) 64bit processes on 64bit systems

This patch will not affect x64 processes on x64 kernels, due to the 
"process.is_wow_64" check

3) 32bit processes on 64bit systems (wow64)

OK I've looked through *all* wow64 processes on *all* x64 memory dumps in my 
collection and the following layout applies:

kd> !vad fffffa80016bf040 + 380
VAD             level      start      end    commit
[snip]
fffffa8002990be0 ( 8)      7efdf    7efdf         1 Private      READWRITE
fffffa80016221e0 ( 9)      7efe0    7f0df         0 Mapped       READONLY
    Pagefile-backed section
fffffa8001667b10 ( 6)      7f0e0    7ffdf         0 Private      READONLY
fffffa80016d4430 ( 7)      7ffe0    7ffef        -1 Private      READONLY   <--- this is MM_MAX_COMMIT
fffffa800482a920 ( 8)      7fff0 7fffffef        -1 Private      READONLY   <--- this is MM_MAX_COMMIT

So there is one MM_MAX_COMMIT allocation at 7ffe0 (7ffe0 << 12) which is 
KUSER_SHARED_DATA, and then there's a 2nd MM_MAX_COMMIT at roughly 
7fff0000-7fffffef000. The 2nd range is what vaddump chokes on. If you consider 
the patch, the process being analyzed is wow64 and MM_MAX_COMMIT is set (-1 in 
the column), but only the 2nd range satisfies vad.End > 0x7FFFFFFF. Thus we'll 
end up dumping the KUSER_SHARED_DATA range, which is good; we want to acquire 
that range since it's accessible to the process. But the patch will prevent us 
from dumping the 2nd range since vad.End == 7fffffef000, which is > 0x7FFFFFFF. 

This memory layout is consistent in all of my samples, however my friend who 
was simply helping out showed me his layout is slightly different:

fffffa801c5fccb0 ( 5)      7e8bb    7e8bb         1 Private      READWRITE
fffffa801c579cb0 ( 4)      7e8be    7e8be         1 Private      READWRITE
fffffa801a494180 ( 5)      7ffe0    7ffef        -1 Private      READONLY
fffffa801adf05d0 ( 3)      7fff0 7fed2f7f        -1 Private      READONLY
fffffa801b4aed80 ( 5)   7fed2f80 7fed313e        11 Mapped  Exe  EXECUTE_WRITECOPY \Windows\System32\ntdll.dll
fffffa801c246530 ( 4)   7fed3140 7fffffef        -1 Private      READONLY

So in his case, the 7ffe0 allocation for KUSER_SHARED_DATA is there, all good; 
it'll be acquired by vaddump both with and without the patch. The strange thing 
is there's an allocation for the x64 ntdll.dll that breaks up the normally 
contiguous MM_MAX_COMMIT range. I don't fully understand how memory at 
7fed2f80000 is "visible" to a wow64 process (since they can only see either 2 
GB or 4 GB), but that's beside the point. What matters is that our patch will 
acquire the KUSER_SHARED_DATA range (good), it will skip the 7fff0-7fed2f7f 
range (good), it will acquire the 7fed2f80-7fed313e range for ntdll.dll (good), 
and it will skip the 7fed3140-7fffffef range (good). In summary, all good ;-)

4) 32bit processes on 64bit systems (wow64) but large address aware (4GT)

Special case here regarding 32bit processes linked with /LARGEADDRESSAWARE that 
run on x64 kernels. This is equivalent to the /3GB boot switch in that it 
allows 32bit processes to "see" 4GB of usermode. This of course changes the 
usermode memory layout a bit, but should not be affected by the patch (which 
was the whole reason for testing it in the first place). So I wrote a tiny exe 
that takes a number on the command line and allocates that many 1GB regions 
with VirtualAlloc. 

C:\Users\Jimmy\Desktop>bigalloc.exe 3
Alloc: 0xec0000, LastError: 0
Alloc: 0x7fff0000, LastError: 0
Alloc: 0x0, LastError: 8 <=== ERROR_NOT_ENOUGH_MEMORY

So the first two allocations succeeded. Here are the important details:

[snip]
fffffa800519cbe0 ( 5)        ec0    40ebf      262144 Private READWRITE
[snip]
fffffa8004a39130 ( 5)      7ffe0    7ffef        -1 Private READONLY
fffffa800519cb90 ( 6)      7fff0    bffef      262144 Private READWRITE
fffffa8004703910 ( 4)      fffb0    fffd2         0 Mapped READONLY           
Pagefile-backed section
fffffa80042fa860 ( 5)      fffdb    fffdd         3 Private READWRITE
fffffa800491d730 ( 3)      fffde    fffde         1 Private READWRITE
fffffa80042e96f0 ( 4)      fffdf    fffdf         1 Private READWRITE
fffffa8006bfab90 ( 5)      fffe0 7fffffef        -1 Private READONLY
[end]

The first 1GB alloc at ec0 is OK. The KUSER_SHARED_DATA mapping at 7ffe0 is 
normal. The second 1GB alloc is OK also, and you can see it goes above the 2GB 
limit as expected: 7fff0 - bffef. Then there are a few small allocs at fffb0, 
fffdb, fffde, and fffdf, which I assume are also OK since the process can see 
up to 4GB. Then of course the last fffe0-7fffffef MM_MAX_COMMIT is the expected 
"protected/no-commit" region. Of these regions, our patch will acquire 
everything except the last one, which is what I believe we want it to do. 

If you believe there are other cases we should be catching or other ways to 
deal with the problem, please drop me a note! 

[1]. 
http://msdn.microsoft.com/en-us/library/windows/desktop/bb613473(v=vs.85).aspx

Original comment by michael.hale@gmail.com on 21 Jul 2012 at 11:01

GoogleCodeExporter commented 9 years ago
Looks good to me.

Original comment by mike.auty@gmail.com on 22 Jul 2012 at 2:10

GoogleCodeExporter commented 9 years ago
Hi Michael,
  This patch looks good. It's nice to understand why these ranges appear.

As another data point, the weird region does not have any valid pages.

I wonder if in addition we should have a check against the address space's 
vtop() function. We could do this during the dump step itself, i.e. go through 
all the pages, and when a page is valid, seek in the output file (thus 
null-padding all invalid pages beforehand) and write it. This has no additional 
overhead. So if a region has no mapped pages, we just do not dump it.

Original comment by scude...@gmail.com on 23 Jul 2012 at 1:53

GoogleCodeExporter commented 9 years ago
Hey Scudette, that's a good idea, I see what you did with r2075. I think what 
I'll do is commit the patch above to fix the major bug at hand, and I'll drop 
the priority from high to medium so it's not release blocking anymore, but 
we'll keep it open as a reminder to check out your changes to the AS (like 
AS.get_available_ranges and AS._get_address_ranges) and the age-based cache 
from r2075. Sound OK?

Original comment by michael.hale@gmail.com on 24 Jul 2012 at 2:44

GoogleCodeExporter commented 9 years ago
Yeah, sounds good. I agree that we don't want too much feature creep at this 
point.

Original comment by scude...@gmail.com on 24 Jul 2012 at 3:38

GoogleCodeExporter commented 9 years ago
Based on the current state of accessing wow64 process memory, no further 
changes are required at this time. 

Original comment by michael.hale@gmail.com on 7 Mar 2014 at 6:13