krw54 / volatility

Automatically exported from code.google.com/p/volatility

Severe performance hit caused by is_valid_kernelAS #95

Closed: GoogleCodeExporter closed this issue 9 years ago

GoogleCodeExporter commented 9 years ago
Hi everybody,

Just thought I'd file an actual issue for this, so we can keep a record of the 
discussion.

So the problem is that is_valid_kernelAS [1] currently carries out a large 
number of reads from the underlying AS to determine whether it is valid.  It 
does this because the check calls get_available_addresses, and only returns 
when it finds an offset > 0x80000000.
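
For concreteness, here's a hedged paraphrase of that check (the real code is at 
[1]); it assumes get_available_addresses yields (offset, length) runs in 
ascending order:

```python
def is_valid_kernelAS(addr_space):
    # Walk the mapped runs until one turns up in kernel territory.  In the
    # worst case this translates the entire page directory, which is where
    # the repeated reads of the underlying AS come from.
    for offset, _length in addr_space.get_available_addresses():
        if offset > 0x80000000:
            return True
    return False
```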

This is particularly a problem when accessing a process's _Peb member, because 
it instantiates a process address space [2] as a validity test.  Therefore 
plugins [4] which ask for the _Peb of many processes will repeatedly carry out 
large address-space scans over the same space.

Some points to note: currently in the non-legacy intel address space, the PDEs 
are cached, but the PTEs are not [3].  This means the underlying file is 
re-read for every page table entry.

Also, there's some confusion over whether this test is necessary to 
differentiate between PAE and non-PAE mode.  The test is present and run in 
both modes, so the only differentiation is in the results from 
get_available_addresses (which will differ due to the different paging 
methods).

So the discussion on channel so far has identified the following options:

1) Remove the test without a replacement
2) Improve the test by sampling PDEs, e.g. for pde in range(0x3FF, 0, -5)
3) Improve the test by asserting that each PDE is <= sizeof(base) (see the 
sketch after this list)
4) Improve the test by checking the references of a known process (EPROCESS 
Physical Offset -> Thread -> EPROCESS Virtual Offset -> EPROCESS Physical 
Offset)
5) Accept the performance hit
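
As a concrete illustration of options 2 and 3 combined (a rough sketch, not the 
agreed design): sample every fifth PDE downwards from the top of the page 
directory and reject the space if a present entry points past the end of the 
image.  read_long_phys mirrors the private helper in intel.py; get_size() on 
the base AS is an assumption about how the file size is exposed here.

```python
def quick_valid_kernelAS(addr_space):
    image_size = addr_space.base.get_size()  # assumed accessor
    for pde_index in range(0x3FF, 0, -5):
        entry = addr_space.read_long_phys(addr_space.dtb + pde_index * 4)
        # Present bit set but the frame lies outside the file: reject.
        if entry & 1 and (entry & 0xFFFFF000) >= image_size:
            return False
    return True
```

This touches at most ~205 entries in a single table rather than translating 
the whole space, which is where the saving would come from.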

On top of one of these options, there has also been the suggestion to move the 
check into a VolatilityMagic object which verifies whether the address space is 
valid.  This allows the check to be profile specific, and also allows the check 
to be overridden by the use of a dummy profile if necessary (note that such 
overriding would be ugly).

So that's the situation; please do correct me if anything above is inaccurate, 
and once we've come to a good solution I'll be happy to code it up and post it 
here for review before checking it in.  I've set this as blocking 1.4.x, but 
I've set it to low priority, and as such suggest that it should not be seen as 
vital to a 1.4.x release...

[1] http://code.google.com/p/volatility/source/browse/branches/Volatility-1.4_rc1/volatility/plugins/addrspaces/intel.py#109
[2] http://code.google.com/p/volatility/source/browse/branches/Volatility-1.4_rc1/volatility/plugins/overlays/windows/windows.py#168
[3] http://code.google.com/p/volatility/source/browse/branches/Volatility-1.4_rc1/volatility/plugins/addrspaces/intel.py#181
[4] http://code.google.com/p/volatility/source/browse/branches/Volatility-1.4_rc1/volatility/plugins/taskmods.py#54

Original issue reported on code.google.com by mike.auty@gmail.com on 13 Mar 2011 at 8:21

GoogleCodeExporter commented 9 years ago
Ok, so here's a patch that shifts the ValidAS check from inside the address 
space out to the profile.  If the check isn't found in VolatilityMagic, then 
the check *is assumed to have passed*, so this fails open rather than closed.  
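
To make the fail-open behaviour concrete, here's a minimal sketch; the 
get_check accessor and the registration dict are invented for illustration, and 
only the fail-open semantics come from the patch itself:

```python
class Profile(object):
    def __init__(self):
        self.magic_checks = {}  # name -> callable taking an address space

    def get_check(self, name):
        return self.magic_checks.get(name)

def is_valid_address_space(addr_space, profile):
    # Keying on the AS class name would let Pae and NonPae spaces carry
    # different checks while sharing one by default (see the notes below).
    check = profile.get_check(addr_space.__class__.__name__)
    if check is None:
        return True  # no check registered for this profile: fail open
    return check(addr_space)
```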

Some things to note:

* Currently the checks for Pae and NonPae are the same.  We can change that by 
making the VolatilityMagic lookup use the __class__.__name__ (and both can 
still point to the same check object if necessary).
* I've moved it to after the JK caching function setup, because otherwise the 
cache wouldn't ever be used for the check.
* The caching is disabled by default because people were worried about it.  To 
enable it requires adding --cache-dtb to the command line.
* The cache only stores the first lookup table, so doesn't offer a huge saving 
on Pae spaces.
* This patch only affects the new intel code.  The legacy stuff still checks a 
hardwired address.

So this won't do anything to reduce the performance hit, but it will make 
things easier for the linux profiles and so on.  The next patch needs to come 
from someone else to offer a better check, or better caching (although the 
cache needs building on every check, or the check becomes useless)...

I'm probably also going to try working up a patch that makes the dtb caching 
function use the existing cache framework (so that it's enabled/disabled with 
the single --no-cache option).  Since we've been getting by without it for such 
a long time, I don't think it'll have that much of an impact, but it will mean 
that ASes such as Ieee1394 will never try to cache the DTB, which is good.  5:)

Original comment by mike.auty@gmail.com on 25 Mar 2011 at 11:11


GoogleCodeExporter commented 9 years ago
Ok, so I noticed something interesting and have reworked the DTB caching code 
based on this.  It now caches each read_long_phys/read_long_long_phys (memoizes 
the results), when config.NO_CACHE is false.  Those functions are private and 
not used anywhere else throughout the code, so it should only affect address 
lookup times.  I've also removed two instances from the hive ASes since they 
weren't using them.
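
A minimal sketch of that memoization, assuming a stand-in class for the real 
paged AS (no_cache here mirrors config.NO_CACHE):

```python
import struct

class PagedMemory(object):
    def __init__(self, base, no_cache=False):
        self.base = base            # the underlying (physical) AS
        self.no_cache = no_cache    # mirrors config.NO_CACHE
        self._read_cache = {}

    def read_long_phys(self, addr):
        # Read a 32-bit little-endian word at addr, caching the result so
        # repeated table walks don't re-read the underlying file.
        if self.no_cache:
            return struct.unpack('<I', self.base.read(addr, 4))[0]
        if addr not in self._read_cache:
            self._read_cache[addr] = struct.unpack(
                '<I', self.base.read(addr, 4))[0]
        return self._read_cache[addr]
```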

This will now do a few more reads (because rather than reading the whole 0x1000 
block, it'll read entries as necessary), but it will ensure that each is read 
only once (which wasn't true previously for PTEs).  I don't really have a 
Windows machine to test whether this improves read performance, so I'd 
appreciate someone testing out this patch.

I also need to talk to scudette sometime about allowing address spaces to block 
caches that involve them.  Ideally there'd be a tainting system so that data 
derived from a non-cachable address space doesn't get cached.  Unfortunately at 
the moment, I'm not sure there's anything stopping a live AS from being cached. 
 Also, since this doesn't use the cache mechanism (because that pickles to/from 
disk so wouldn't help the disk IO issue), it can't determine whether it should 
cache or not.  I'm hoping to add a cachable attribute to ASes, true by default, 
and then anything that needs to check can traverse the AS stack with AND.  The 
trick then is to work that into the cache system...
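
A hedged sketch of that tainting idea, with illustrative names only: every AS 
carries a cachable flag, true by default, and a consumer ANDs the flags down 
the stack before trusting any cache.

```python
class BaseAddressSpace(object):
    cachable = True

    def __init__(self, base=None):
        self.base = base  # the AS below us in the stack, if any

    def is_cachable(self):
        # One non-cachable layer anywhere in the stack taints the result.
        result, layer = self.cachable, self.base
        while layer is not None:
            result = result and layer.cachable
            layer = layer.base
        return result

class Ieee1394AddressSpace(BaseAddressSpace):
    cachable = False  # live memory: results must never be cached
```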

Original comment by mike.auty@gmail.com on 26 Mar 2011 at 2:13


GoogleCodeExporter commented 9 years ago
So scudette quite rightly pointed out that the above memoizing patch won't 
really help with the dlllist situation.  Other wild options include cloning the 
existing AS rather than making a new one, and just setting the dtb in order to 
keep the cache (although since the problem is the check traversing the entire 
DTB, there isn't much lookup reuse), or getting the lower layer to cache reads 
(which would solve the lower AS being able to say whether it caches or not), 
but that wouldn't solve the root problem either, since there'd still be little 
lookup reuse.

So in the interim, we've got to get the check working more efficiently.  
However, since I haven't heard any complaints against moving the check into the 
profile, I'll commit patch 1 shortly (I just need to check the inheritance of 
the VOLATILITY_MAGIC from winxpsp2)...

Original comment by mike.auty@gmail.com on 28 Mar 2011 at 9:05

GoogleCodeExporter commented 9 years ago
Ok, so patch one has just been committed as r906.  We could still do with 
improvements to the check itself, and we'll also need a new check for linux 
once we merge the more recent changes in the linux branch.  Just a friendly 
reminder, this will fail open, i.e. the linux profiles will not carry out any 
check and will succeed without a problem until a check is added...  5:)

So anyone made any progress on getting a more efficient check in place?

Original comment by mike.auty@gmail.com on 28 Mar 2011 at 6:36

GoogleCodeExporter commented 9 years ago
Ok, Labarum_X committed and it all looks good, therefore:

This issue was fixed by r943.  5:)

Original comment by mike.auty@gmail.com on 13 Apr 2011 at 6:04