ClaudeZoo / volatility

Automatically exported from code.google.com/p/volatility
GNU General Public License v2.0

Using pyvmiaddressspace.py from LibVMI #214

Closed: GoogleCodeExporter closed this issue 9 years ago

GoogleCodeExporter commented 9 years ago
Has anyone had any success using LibVMI (http://vmitools.sandia.gov/) with 
Volatility yet? The example code provided with LibVMI works fine, including the 
Python example, but with Volatility I just see a crash.

I copied pyvmiaddressspace.py to volatility/plugins/addrspaces/ and called it 
with python vol.py -l honey-xp --profile=WinXPSP3x86 modules.

This is what I get:

Volatile Systems Volatility Framework 2.1_alpha
Traceback (most recent call last):
  File "vol.py", line 135, in <module>
    main()
  File "vol.py", line 126, in main
    command.execute()
  File "/share/src/volatility-svn/volatility/commands.py", line 101, in execute
    func(outfd, data)
  File "/share/src/volatility-svn/volatility/plugins/modules.py", line 38, in render_text
    for module in data:
  File "/share/src/volatility-svn/volatility/win32/modules.py", line 33, in lsmod
    PsLoadedModuleList = tasks.get_kdbg(addr_space).PsLoadedModuleList
  File "/share/src/volatility-svn/volatility/win32/tasks.py", line 48, in get_kdbg
    kdbgo = obj.VolMagic(addr_space).KDBG.v()
  File "/share/src/volatility-svn/volatility/obj.py", line 808, in v
    return self.get_best_suggestion()
  File "/share/src/volatility-svn/volatility/obj.py", line 834, in get_best_suggestion
    for val in self.get_suggestions():
  File "/share/src/volatility-svn/volatility/obj.py", line 826, in get_suggestions
    for x in self.generate_suggestions():
  File "/share/src/volatility-svn/volatility/plugins/overlays/windows/windows.py", line 661, in generate_suggestions
    for val in scanner.scan(self.obj_vm):
  File "/share/src/volatility-svn/volatility/plugins/kdbgscan.py", line 67, in scan
    for offset in scan.DiscontigScanner.scan(self, address_space, offset, maxlen):
  File "/share/src/volatility-svn/volatility/scan.py", line 145, in scan
    for match in BaseScanner.scan(self, address_space, o, l):
  File "/share/src/volatility-svn/volatility/scan.py", line 103, in scan
    data = address_space.read(self.base_offset, l)
  File "/share/src/volatility-svn/volatility/plugins/addrspaces/intel.py", line 295, in read
    return self.__read_bytes(vaddr, length, pad = False)
  File "/share/src/volatility-svn/volatility/plugins/addrspaces/intel.py", line 276, in __read_bytes
    buf = self.__read_chunk(vaddr, chunk_len)
  File "/share/src/volatility-svn/volatility/plugins/addrspaces/intel.py", line 260, in __read_chunk
    return self.base.read(paddr, length)
  File "/share/src/volatility-svn/volatility/plugins/addrspaces/pyvmiaddressspace.py", line 52, in read
    return self.vmi.read_pa(addr, length)
ValueError: Unable to read memory at specified address

Original issue reported on code.google.com by tamas.k....@gmail.com on 20 Feb 2012 at 9:45

GoogleCodeExporter commented 9 years ago
Hi,

This is the first time I've ever seen the address space you're talking about.  
It looks very simplistic, but it should probably catch the exception and return 
either None or, to more closely match the existing file mechanism, return ''.

Also, if you're contacting the author, please let him know that it would be 
much better to accept names such as "-l vmi://vmname" rather than just 
"-l vmname", using the urlparse module.  It would then allow the plugin to 
very quickly throw out the location value if it isn't designed for the vmi 
interface...
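
For reference, a rough sketch of how that check might look in the address 
space's __init__ (illustrative only, not the actual plugin; the LibVMI 
initialization itself is elided):

import urlparse
import volatility.addrspace as addrspace

class PyVmiAddressSpace(addrspace.BaseAddressSpace):
    """Proof-of-concept address space backed by LibVMI (sketch)."""

    def __init__(self, base, config, **kwargs):
        addrspace.BaseAddressSpace.__init__(self, base, config, **kwargs)
        location = urlparse.urlparse(config.LOCATION)
        # Reject anything that isn't a vmi:// URL before doing any real work
        self.as_assert(location.scheme == "vmi", "Location is not a vmi:// URL")
        # location.netloc now holds the VM name, e.g. "honey-xp-sp2";
        # the pyvmi/LibVMI setup would go here.
        self.vm_name = location.netloc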

Original comment by mike.auty@gmail.com on 20 Feb 2012 at 7:27

GoogleCodeExporter commented 9 years ago
I'm just posting the link to the vmitools listserv for the thread on this issue 
in case anyone else wants it:

https://groups.google.com/group/vmitools/browse_thread/thread/a46984ae3af46f0a

Original comment by jamie.l...@gmail.com on 21 Feb 2012 at 8:11

GoogleCodeExporter commented 9 years ago
I'm the author of the pyvmi address space.  The code was put together quickly 
as a proof of concept.  It does work just fine on my end, so debugging this 
particular problem is a bit challenging.  But I'm looking into what's going on.

As a side note, I'm planning for better integration between LibVMI and 
Volatility down the road.  This will involve improvements to this address 
space.  So the suggestions above are appreciated.

Original comment by bdpa...@gmail.com on 24 Feb 2012 at 6:10

GoogleCodeExporter commented 9 years ago
Hiya Bryan!  Nice to have you here!  5:)

If you've got any questions or want anyone to review your code, do please get 
in contact with us, either directly by email or IRC, or over the mailing list.  
We're sometimes a bit slow to respond, but we're always very happy to help, and 
appreciate you extending volatility to handle new address spaces!  Awesome 
work, thanks!  5:)

Original comment by mike.auty@gmail.com on 24 Feb 2012 at 9:26

GoogleCodeExporter commented 9 years ago
Mike, your suggestion of catching the exception and returning None fixed the 
above problem.  So thanks for that.  I'll probably be pinging you guys in a bit 
as I think about the way to get the best performance out of a 
Volatility--pyvmi--LibVMI tool chain.  Until then, cheers!

Original comment by bdpa...@gmail.com on 24 Feb 2012 at 10:28

GoogleCodeExporter commented 9 years ago
Cool, Tamas, does that solve your problem?  I'm going to close this bug in a 
few days unless someone indicates they're still suffering from this problem...

Original comment by mike.auty@gmail.com on 10 Mar 2012 at 7:28

GoogleCodeExporter commented 9 years ago
It works partially. It seems like the "scan" plugins don't work with this 
address space.

Original comment by tamas.k....@gmail.com on 10 Mar 2012 at 7:47

GoogleCodeExporter commented 9 years ago
Hmmm, ok.  Could you run imageinfo (which runs kdbgscan as part of it) on one 
of the images, and please paste the output here?

Original comment by mike.auty@gmail.com on 10 Mar 2012 at 7:49

GoogleCodeExporter commented 9 years ago
Mike, the imageinfo output is:

volatility-2.0#> python vol.py -l vmi://honey-xp-sp2 imageinfo
Volatile Systems Volatility Framework 2.0
Determining profile based on KDBG search...

          Suggested Profile(s) : WinXPSP3x86, WinXPSP2x86 (Instantiated with WinXPSP2x86)
                     AS Layer1 : JKIA32PagedMemoryPae (Kernel AS)
Traceback (most recent call last):
  File "vol.py", line 135, in <module>
    main()
  File "vol.py", line 126, in main
    command.execute()
  File "/share/src/volatility-2.0/volatility/commands.py", line 101, in execute
    func(outfd, data)
  File "/share/src/volatility-2.0/volatility/plugins/imageinfo.py", line 37, in render_text
    for k, v in data:
  File "/share/src/volatility-2.0/volatility/cache.py", line 534, in generate
    for x in g:
  File "/share/src/volatility-2.0/volatility/plugins/imageinfo.py", line 81, in calculate
    yield ('AS Layer' + str(count), tmpas.__class__.__name__ + " (" + tmpas.name + ")")
AttributeError: 'PyVmiAddressSpace' object has no attribute 'name'

The pslist output is what's expected:

volatility-2.0#> python vol.py -l vmi://honey-xp-sp2 pslist
Volatile Systems Volatility Framework 2.0
 Offset(V)  Name                 PID    PPID   Thds   Hnds   Time
---------- -------------------- ------ ------ ------ ------ -------------------
0x810a8bd0 System                    4      0     51    234 1970-01-01 00:00:00
0x80f62da0 smss.exe                304      4      3     20 2012-03-12 03:10:11
0x81073c08 csrss.exe               564    304      9    314 2012-03-12 03:10:11
0x81045da0 winlogon.exe            588    304     24    528 2012-03-12 03:10:12
0x80f0d430 services.exe            632    588     24    272 2012-03-12 03:10:12
0x80f0abe8 lsass.exe               644    588     24    348 2012-03-12 03:10:12
0x80efb020 svchost.exe             804    632     19    195 2012-03-12 03:10:14
0xffbd3490 svchost.exe             908    632     13    229 2012-03-12 03:10:15
0xffbbe7f8 svchost.exe            1008    632     66   1144 2012-03-12 03:10:15
0xffbae520 svchost.exe            1172    632      5     57 2012-03-12 03:10:20
0xffb937c0 svchost.exe            1300    632     14    210 2012-03-12 03:10:21
0xffb8d3e8 explorer.exe           1384   1368     13    288 2012-03-12 03:10:21
0xffb7c020 spoolsv.exe            1524    632     15    124 2012-03-12 03:10:22
0xffb1e8e8 alg.exe                1260    632      7    103 2012-03-12 03:11:33
0xffb10da0 wscntfy.exe            1880   1008      1     25 2012-03-12 03:11:34

But for example psscan shows no result (but no crash either):

volatility-2.0#> python vol.py -l vmi://honey-xp-sp2 psscan
Volatile Systems Volatility Framework 2.0
 Offset     Name             PID    PPID   PDB        Time created             Time exited
---------- ---------------- ------ ------ ---------- ------------------------ ------------------------

Original comment by tamas.k....@gmail.com on 11 Mar 2012 at 10:14

GoogleCodeExporter commented 9 years ago
Ok, that's relatively easy to fix up.  The address space needs a ".name" 
attribute added somewhere in the __init__ function; just a string, the 
contents shouldn't matter.  I'll be taking a look into those to figure out why 
they're necessary and possibly removing them (I think they're a nasty hack used 
to tell the difference between _EPROCESS spaces and kernel address spaces)...
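
For anyone following along, the change amounts to something like this in the 
address space's __init__ (a sketch; the string itself is arbitrary):

def __init__(self, base, config, **kwargs):
    addrspace.BaseAddressSpace.__init__(self, base, config, **kwargs)
    # imageinfo's "AS Layer" output expects every address space to expose
    # a .name attribute; the contents don't matter.
    self.name = "PyVmiAddressSpace"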

@bdpayne, do you have the address space stored in a repository anywhere?  
Somewhere that we could both see the latest source to produce patches against?  
It would make collaboratively fixing it up much easier...  5:)

Original comment by mike.auty@gmail.com on 11 Mar 2012 at 10:24

GoogleCodeExporter commented 9 years ago
Ok Tamas, I've made a fix at our end to correct the lack of a name.  If you're 
ok testing out our unstable trunk, please try the latest subversion revision, 
r1537.

Original comment by mike.auty@gmail.com on 11 Mar 2012 at 11:33

GoogleCodeExporter commented 9 years ago
Hi Mike,
I checked out the latest svn code and enabled debugging in LibVMI. It still 
doesn't work, but I think the problem is on LibVMI's end:

python vol.py -l vmi://honey-xp-sp2 psscan
Volatile Systems Volatility Framework 2.1_alpha
 Offset(P)  Name             PID    PPID   PDB        Time created             Time exited
---------- ---------------- ------ ------ ---------- ------------------------ ------------------------
LibVMI Version 0.6
--found KVM
LibVMI Mode 4
--got id from name (honey-xp-sp2 --> 5)
**set image_type = honey-xp-sp2
--libvirt version 9010
--qmp: virsh qemu-monitor-command honey-xp-sp2 '{"execute": "pmemaccess", 
"arguments": {"path": "/tmp/vmiwVMZqg"}}'
--kvm: using custom patch for fast memory access
--completed driver init.
**set page_offset = 0x00000000
--qmp: virsh qemu-monitor-command honey-xp-sp2 '{"execute": 
"human-monitor-command", "arguments": {"command-line": "info registers"}}'
LibVMI Version 0.6
--found KVM
LibVMI Mode 4
--got id from name (honey-xp-sp2 --> 5)
**set image_type = honey-xp-sp2
--libvirt version 9010
--qmp: virsh qemu-monitor-command honey-xp-sp2 '{"execute": "pmemaccess", 
"arguments": {"path": "/tmp/vmiCEtBOi"}}'
--kvm: using custom patch for fast memory access
--completed driver init.
**set page_offset = 0x00000000
--qmp: virsh qemu-monitor-command honey-xp-sp2 '{"execute": 
"human-monitor-command", "arguments": {"command-line": "info registers"}}'
--MEMORY cache set 0x002f3000
--MEMORY cache set 0x002f4000
--MEMORY cache set 0x002f4000
--MEMORY cache set 0x002f4000
--MEMORY cache hit 0x002f3000
--MEMORY cache hit 0x002f4000
--MEMORY cache set 0x017c0000
--MEMORY cache hit 0x002f4000
--MEMORY cache set 0x017c0000
--MEMORY cache hit 0x002f4000
--MEMORY cache set 0x017c0000
--MEMORY cache set 0x002f7000
--MEMORY cache hit 0x002f7000
libvir: Domain error : invalid domain pointer in virDomainFree
libvir: error : invalid connection pointer in virConnectClose

Original comment by tamas.k....@gmail.com on 12 Mar 2012 at 2:51

GoogleCodeExporter commented 9 years ago
imageinfo on the other hand is fixed:

 python vol.py -l vmi://honey-xp-sp2 imageinfo
Volatile Systems Volatility Framework 2.1_alpha
Determining profile based on KDBG search...

          Suggested Profile(s) : WinXPSP3x86, WinXPSP2x86 (Instantiated with WinXPSP2x86)
                     AS Layer1 : JKIA32PagedMemoryPae (Kernel AS)
                     AS Layer2 : PyVmiAddressSpace (Unnamed AS)
                      PAE type : PAE
                           DTB : 0x2f3000
                          KDBG : 0x80544ce0
                          KPCR : 0xffdff000
             KUSER_SHARED_DATA : 0xffdf0000
           Image date and time : 2012-03-12 07:52:00 UTC+0000
     Image local date and time : 2012-03-12 02:52:00 -0500
          Number of Processors : 1
                    Image Type : Service Pack 2

Original comment by tamas.k....@gmail.com on 12 Mar 2012 at 2:54

GoogleCodeExporter commented 9 years ago
The address space is not in a public repository just yet.  In the meantime, 
I'm attaching the latest version.  I've tweaked it slightly to see if I could 
resolve the issues described here.  But so far, I'm seeing the same result as 
Tamas: imageinfo works, but psscan breaks.  Note that I'm using Volatility 2.0, 
so setting self.name seems to have helped a bit.

Original comment by bdpa...@gmail.com on 12 Mar 2012 at 8:23

Attachments:

GoogleCodeExporter commented 9 years ago
Brilliant, thanks!

Unfortunately I'm not sure what else we can do from this end.  It's possible 
we're masking an exception at some point, but given that libvir appears to 
return an error in comment 12, our options are limited.

I'll leave this open for now, do please post back here if you make any 
progress...  5:)

Original comment by mike.auty@gmail.com on 12 Mar 2012 at 10:40

GoogleCodeExporter commented 9 years ago
How is psscan supposed to work?  I've instrumented each function in the pyvmi 
address space to print a message when it is called.  What I see with psscan is 
that init completes successfully and then no other functions are ever called.

As a comparison, for pslist I see lots of calls to read and is_valid_address.  
And pslist outputs the expected information.

Thoughts?

Original comment by bdpa...@gmail.com on 13 Mar 2012 at 4:35

GoogleCodeExporter commented 9 years ago
Hmmmm, so psscan ought to load up a physical address space (so the pyvmi 
layer), then a virtual address space (IA32 on top of the pyvmi layer), and then 
start reading big chunks of the physical address space into buffers, after 
which the searching will go on through the buffer.  If you run volatility with 
"-d -d -d" you should see lots of additional debugging output letting you know 
how the image was instantiated.

One thing to note is that, critically, the Address Space interface requires you 
to provide a "get_available_addresses" function that yields a tuple for each 
run of contiguous data in the form (offset, size).  Without that the scanner 
will probably assume that the physical address space doesn't contain any data 
and give up early.  You'll probably just want to yield (0, 
self.vmi.get_memsize()) or something similar...

Also, we've now added a requirement on implementing zread, which should always 
return as much data as was requested even if it can't be read, padded with null 
bytes if necessary.
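
To make that concrete, here is roughly what those two methods might look like 
for this address space (a sketch only, reusing the pyvmi calls already seen in 
this thread, read_pa and get_memsize):

def get_available_addresses(self):
    # The guest's physical memory is a single contiguous run starting at 0.
    yield (0, self.vmi.get_memsize())

def zread(self, addr, length):
    # Like read(), but must always return exactly `length` bytes,
    # padding anything that couldn't be read with zeros.
    try:
        data = self.vmi.read_pa(addr, length)
    except ValueError:
        data = ""
    return data + "\x00" * (length - len(data))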

Original comment by mike.auty@gmail.com on 13 Mar 2012 at 8:49

GoogleCodeExporter commented 9 years ago
FYI, we're working on this one now.  It's starting to look like Volatility is 
getting upset when it attempts a read and the read fails or only returns 
partial data.

One question.  If get_available_addresses provides a bunch of small ranges as 
tuples, will that force the scanner to read smaller chunks (i.e., one or more 
reads per range)?  Or is there some preset min chunk size that the scanner will 
always use?

Original comment by bdpa...@gmail.com on 15 Mar 2012 at 2:50

GoogleCodeExporter commented 9 years ago
So, that's a very good question.  The scan plugin reads in 
Constants.SCAN_BLOCKSIZE bytes of data (and that's currently set to 1024 * 1024 
* 10), and apparently expects read to return as much as it was able to read.  
In most address spaces, if a read failure occurs it'll return 0 bytes, so we'll 
potentially be missing some data from non-physical scans.  I'll try and look 
into that sometime soon.

get_available_addresses should only return discontiguous runs of memory (so it 
should never return one tuple that ends where another one starts).  As such, if 
get_available_addresses returns lots of small blocks, they'll all be distinct, 
and the scanner will scan each one individually.  Hope that makes sense and 
answers your question?
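
Put differently, the scanning logic is approximately this (a sketch of the 
idea, not the actual patch; check_buffer stands in for the scanner's real 
matching logic, and the real code also has to handle matches that span block 
boundaries):

BLOCKSIZE = 1024 * 1024 * 10    # constants.SCAN_BLOCKSIZE

def scan(address_space, check_buffer):
    for start, size in address_space.get_available_addresses():
        offset = start
        while offset < start + size:
            to_read = min(BLOCKSIZE, start + size - offset)
            data = address_space.zread(offset, to_read)
            # check_buffer returns the offsets of any hits within this buffer
            for hit in check_buffer(data):
                yield offset + hit
            offset += to_read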

Original comment by mike.auty@gmail.com on 15 Mar 2012 at 5:51

GoogleCodeExporter commented 9 years ago
Thanks, this is helpful.  I believe we're almost there with a working address 
space.  Details soonish :-)

Original comment by bdpa...@gmail.com on 15 Mar 2012 at 7:48

GoogleCodeExporter commented 9 years ago
Thanks Bryan, I would be happy to test the new address space when it's
ready! =)

I also hope it will fix some stability issues that I currently see, where
running a volatility check crashes the VM. I actually run multiple
volatility checks simultaneously, which may be what triggers it; could it be
that volatility+libvmi is not "thread safe"?

Original comment by tamas.k....@gmail.com on 15 Mar 2012 at 8:50

GoogleCodeExporter commented 9 years ago
Actually, I think that LibVMI+KVM is still somewhat beta.  Esp if you are using 
the qemu patch.

Tamas, I'll give you the updated code for testing when it's ready.

Original comment by bdpa...@gmail.com on 15 Mar 2012 at 8:58

GoogleCodeExporter commented 9 years ago
I just sent Tamas some code to test out.  The underlying issue seemed to be 
that pyvmi would occasionally fail on memory reads.  When this happened, it 
would return either a truncated read or nothing at all.  Volatility didn't like 
this.

Now pyvmi supports a zread style function that will read as much of the 
requested memory as possible, filling in the rest with zeros.  Volatility seems 
much happier with this.

I'll let Tamas confirm after he tests, but I believe that this issue is now 
resolved.

Original comment by bdpa...@gmail.com on 15 Mar 2012 at 11:03

GoogleCodeExporter commented 9 years ago
Thanks, that sounds good.  When you say "zread style function", do you mean a 
separate function called zread?  The read function should fail, or return None 
or '' if the requested data can't be read, and zread should be the function 
that returns zero padded data without errors.

So here's a patch for a scan plugin that takes into account the available 
addresses (as returned by get_available_addresses), and will throw an error if 
it can't read a chunk of data that get_available_addresses thinks it should...

Since this is more than an insignificant change to the scanning framework could 
those who're currently CCed please give it a quick review to make sure it makes 
sense?  Thanks!  5:)

Original comment by mike.auty@gmail.com on 17 Mar 2012 at 12:08

GoogleCodeExporter commented 9 years ago
Ok, so as it turns out, the patch should probably do debug.warning instead of 
raise IOError, or we should use zread, because I've found many instances where 
page table entries point outside of physical memory.  I'm attaching a simple 
pagechecker plugin for testing invalid pages, in case it's of use...

Original comment by mike.auty@gmail.com on 17 Mar 2012 at 5:01

Attachments:

GoogleCodeExporter commented 9 years ago
For those interested in the weird page problems, it turns out we already had an 
issue open about this (issue 182).  For everybody else, I'm attaching a new 
range-aware scanning plugin that uses zread (and deleting the old one).  It 
might be marginally faster on virtual address spaces, but it should at least be 
doing the right thing in more cases...

Original comment by mike.auty@gmail.com on 19 Mar 2012 at 9:28

Attachments:

GoogleCodeExporter commented 9 years ago
Mike, you said that read should just fail and zread should be the function that 
replaces failed bytes with zeros?  I just tested this with both Volatility 2.0 
and trunk and it fails for psscan in both cases.

However, if I just have read call zread, then it works with both.

Thoughts?

Original comment by bdpa...@gmail.com on 20 Mar 2012 at 3:20

GoogleCodeExporter commented 9 years ago
Ok, so this would need a bit of debugging to figure out what's going on and 
which read is returning the error.  Could you post the output from psscan when 
it fails (possibly including the output with "-d -d -d"?).  Also if you could 
provide instructions for converting a normal dd file into a vmi image, I could 
try and figure out what's going on that way too.  Thanks!  5:)

Original comment by mike.auty@gmail.com on 20 Mar 2012 at 4:36

GoogleCodeExporter commented 9 years ago
Debug output is attached.  You can't convert a dd image to a vmi image.  The 
vmi "image" is a running virtual machine :-)

Original comment by bdpa...@gmail.com on 20 Mar 2012 at 5:42

Attachments:

GoogleCodeExporter commented 9 years ago
In the recent release of LibVMI (http://code.google.com/p/vmitools/, version 
0.8) I have included an updated volatility address space (in 
tools/pyvmi/pyvmiaddressspace.py) to go along with the pyvmi wrapper.  The 
address space has read just calling zread, because this works for both 
Volatility 2.0 and trunk.  However, I have also commented out code for read 
that is closer to what I think read *should* be doing based on this discussion.

So, if you have a Xen (preferred) or KVM system that you want to play with, you 
can give this a shot and get a better feel for the error conditions that I'm 
seeing.

Original comment by bdpa...@gmail.com on 23 Mar 2012 at 4:38

GoogleCodeExporter commented 9 years ago
I'm afraid I don't have a Xen server, but I'm still investigating having the 
scanner call zread rather than just read, which should also fix the problem.

@otherdevs, anyone had a chance to look over my above patch?

Original comment by mike.auty@gmail.com on 24 Mar 2012 at 12:26

GoogleCodeExporter commented 9 years ago
"You can't convert a dd image to a vmi image.  The vmi "image" is a running 
virtual machine"

Dumping the VM memory to a format that Volatility understands would be nice. 
Unfortunately I didn't find anything that would make plain Xen memory dumps 
work with Volatility, so being able to use libvmi to dump memory to such a 
format would be a nice feature. I've only seen info about virtualbox memory 
dumps in the forensics cookbook.

Original comment by tamas.k....@gmail.com on 1 Apr 2012 at 7:27

GoogleCodeExporter commented 9 years ago
http://code.google.com/p/vmitools/source/browse/examples/dump-memory.c

This example code should dump VM memory into a raw physical memory file, which 
can be used by Volatility.
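
If you'd rather do it from Python, something along these lines should also 
work with the pyvmi wrapper (a sketch, assuming an already-initialized pyvmi 
handle; only the read_pa and get_memsize calls seen earlier in this thread are 
used):

def dump_to_file(vmi, path, chunk = 1024 * 1024):
    # Write the guest's physical memory out as a raw file Volatility can read.
    size = vmi.get_memsize()
    with open(path, "wb") as out:
        offset = 0
        while offset < size:
            length = min(chunk, size - offset)
            try:
                data = vmi.read_pa(offset, length)
            except ValueError:
                # unreadable regions (device memory etc.) become zero fill
                data = "\x00" * length
            out.write(data)
            offset += length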

Original comment by bdpa...@gmail.com on 1 Apr 2012 at 8:04

GoogleCodeExporter commented 9 years ago
Problems when trying to use Win7 64-bit with the latest Volatility:

python vol.py -l vmi://win7-sp1 --profile=Win7SP1x64 imageinfo
Volatile Systems Volatility Framework 2.1_alpha
Determining profile based on KDBG search...

Traceback (most recent call last):
  File "vol.py", line 173, in <module>
    main()
  File "vol.py", line 164, in main
    command.execute()
  File "/share/src/volatility-svn/volatility/commands.py", line 101, in execute
    func(outfd, data)
  File "/share/src/volatility-svn/volatility/plugins/imageinfo.py", line 34, in render_text
    for k, v in data:
  File "/share/src/volatility-svn/volatility/plugins/imageinfo.py", line 44, in calculate
    suglist = [ s for s, _, _ in kdbg.KDBGScan.calculate(self)]
  File "/share/src/volatility-svn/volatility/plugins/kdbgscan.py", line 104, in calculate
    for offset in scanner.scan(aspace):
  File "/share/src/volatility-svn/volatility/plugins/kdbgscan.py", line 67, in scan
    for offset in scan.DiscontigScanner.scan(self, address_space, offset, maxlen):
  File "/share/src/volatility-svn/volatility/scan.py", line 145, in scan
    for match in BaseScanner.scan(self, address_space, o, l):
  File "/share/src/volatility-svn/volatility/scan.py", line 103, in scan
    data = address_space.read(self.base_offset, l)
  File "/share/src/volatility-svn/volatility/plugins/addrspaces/intel.py", line 292, in read
    return self.__read_bytes(vaddr, length, pad = False)
  File "/share/src/volatility-svn/volatility/plugins/addrspaces/intel.py", line 273, in __read_bytes
    buf = self.__read_chunk(vaddr, chunk_len)
  File "/share/src/volatility-svn/volatility/plugins/addrspaces/intel.py", line 250, in __read_chunk
    paddr = self.vtop(vaddr)
  File "/share/src/volatility-svn/volatility/plugins/addrspaces/amd64.py", line 127, in vtop
    pdpte = self.get_pdpte(vaddr, pml4e)
  File "/share/src/volatility-svn/volatility/plugins/addrspaces/amd64.py", line 103, in get_pdpte
    return self._read_long_long_phys(pdpte_addr)
  File "/share/src/volatility-svn/volatility/plugins/addrspaces/intel.py", line 468, in _read_long_long_phys
    string = self.base.read(addr, 8)
  File "/share/src/volatility-svn/volatility/plugins/addrspaces/pyvmiaddressspace.py", line 54, in read
    return self.zread(addr, length)
  File "/share/src/volatility-svn/volatility/plugins/addrspaces/pyvmiaddressspace.py", line 71, in zread
    assert addr < self.vmi.get_memsize(), "addr too big"
AssertionError: addr too big

Original comment by tamas.k....@gmail.com on 3 Apr 2012 at 5:02

GoogleCodeExporter commented 9 years ago
That is your problem :-). An address space should just return null
bytes for reads outside its range, not raise. It is very common to
read addresses much larger than physical memory - e.g. when mapping IO
devices, the address is usually right at the top of the addressable
memory.

See also issue http://code.google.com/p/volatility/issues/detail?id=182
for more information.
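
In other words, the assert has to go. A sketch of the intended behaviour, 
building on the zread outline from earlier in this thread:

def zread(self, addr, length):
    # Reads past the end of physical memory are normal (memory-mapped
    # devices sit near the top of the address space); return zero fill
    # rather than asserting.
    if addr >= self.vmi.get_memsize():
        return "\x00" * length
    try:
        data = self.vmi.read_pa(addr, length)
    except ValueError:
        data = ""
    return data + "\x00" * (length - len(data))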

Original comment by scude...@gmail.com on 3 Apr 2012 at 6:43

GoogleCodeExporter commented 9 years ago
With the latest version of LibVMI both Volatility 2.1 and 2.2 work fine, but 
I'm observing a pretty significant performance degradation compared to 2.0.1:

Volatility-2.0.1# time python vol.py -l vmi://windows-xp-sp3.5 
--profile=WinXPSP3x86 psscan
...
real    0m3.845s
user    0m0.960s
sys     0m2.820s

volatility-2.1# time python vol.py -l vmi://windows-xp-sp3.5 
--profile=WinXPSP3x86 psscan
...
real    1m4.687s
user    1m0.850s
sys     0m2.210s

volatility-2.2# time python vol.py -l vmi://windows-xp-sp3.5 
--profile=WinXPSP3x86 psscan
...
real    1m9.516s
user    1m2.830s
sys     0m1.970s

It seems to me that after the scan prints the last line it finds, it hangs for 
a long time without printing anything, then exits. The scan continues to do 
something, as the CPU spins at 100% and LibVMI in debug mode reports that it's 
being queried. Can you guys think of anything that changed from 2.0.1 upwards 
that could explain this behavior?

Original comment by tamas.k....@gmail.com on 17 Oct 2012 at 10:31

GoogleCodeExporter commented 9 years ago
Hey Tamas, 

Hmm, I'm going to CC Mike Auty to see if he has any ideas. 

Have you noticed a performance degradation on raw memory dumps also or just 
when using the VMI address space? 

Thanks!

Original comment by michael.hale@gmail.com on 20 Oct 2012 at 5:10

GoogleCodeExporter commented 9 years ago
I only see the issue with the VMI address space, memory dumps are fine.

Original comment by tamas.k....@gmail.com on 20 Oct 2012 at 11:58

GoogleCodeExporter commented 9 years ago
Hmm, alright. My first troubleshooting idea would be to log the reads in the AS 
like this:

def read(self, addr, length):
    print hex(addr), hex(length)    # added: log every read
    try:
        memory = self.vmi.read_pa(addr, length)
    except:
        return None
    return memory

Or, instead of printing it, save each (addr, length) tuple to a pickle file. 
Then cut volatility out of the picture entirely: open the pickle file in a 
separate python script that just calls vmi.read_pa(addr, length) for each 
entry, and time that. Is that still slow?
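
The replay side could look something like this (a sketch; it assumes the 
(addr, length) tuples were pickled as a list to reads.pkl and that you already 
have an initialized pyvmi handle):

import pickle
import time

def replay(vmi, log_path = "reads.pkl"):
    reads = pickle.load(open(log_path, "rb"))
    start = time.time()
    for addr, length in reads:
        try:
            vmi.read_pa(addr, length)
        except ValueError:
            pass
    print "replayed %d reads in %.2fs" % (len(reads), time.time() - start)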

Original comment by michael.hale@gmail.com on 20 Oct 2012 at 3:38

GoogleCodeExporter commented 9 years ago
Thanks, it seems to be an issue somewhere in libvmi's cache cleanup that makes 
it hang. When I manually read the addresses logged in the address space I see 
no performance issues.

Original comment by tamas.k....@gmail.com on 24 Oct 2012 at 4:39

GoogleCodeExporter commented 9 years ago
Hey guys, it seems safe to close this issue now. We look forward to working 
more with LibVMI and PyVMI in the very near future. Keep up the good work!

Original comment by michael.hale@gmail.com on 1 Feb 2013 at 4:44