Wenzel opened this issue 3 years ago
Hi, yes, this would be very nice to add support for indeed :)
The existing plugin-based architecture for LeechCore should already allow for this. The best example is probably LiveCloudKd, which is already implemented as a separate plugin. It's found here: https://github.com/gerhart01/LiveCloudKd/tree/master/leechcore_device_hvmm
I'd be happy to offer whatever support you need to get this going; also let me know if you need any changes in LeechCore itself to better support it. I would think that most/all of the extra needs should already have been surfaced by the LiveCloudKd integration.
I would assume such an integration would work on both Windows and Linux (at least for VirtualBox)? "Linux support" just means it's supported on Linux; it's just not possible to support Hyper-V on Linux since there is no such product (yet, anyway).
Please let me know how you wish to proceed around this.
Also, on Windows LeechCore has a remote agent allowing for remote connections to it (from other Windows-based LeechCore libraries). If this becomes a reality I should really look into making an agent for Linux as well and allowing them to interact securely. The main issue there is authentication though; I'm guessing it will have to be some certificate-based scheme, which quickly becomes somewhat complicated, at least more complicated than the Active Directory Kerberos-based one I use today. But even if it's complicated, this would be a priority if I get the libmicrovmi integration.
I'm however not interested in removing functionality from LeechCore. I see no reason why I should remove LiveKd/LiveCloudKd support even if it should be added to libmicrovmi.
LeechCore also has an active volatility3 integration, as a layer supported by the Volatility devs. In theory, if you add LeechCore support it could be supported this way as well.
For memory analysis of live Windows systems, though, MemProcFS is the best :) Unfortunately it's Windows only, which is one of the main reasons why I'll be looking into "remote" LeechCore support for Linux hosts.
@ufrisk Sorry for the lack of feedback, I had small tasks to finish before looking at the plugin.
I started to have a look today, and I was able to compile a first version of the microvmi LeechCore plugin (`leechcore_device_microvmi.so`).
I'm now wondering how to actually load the plugin?
I tried from the Python interface:
Should I use an API to register it somewhere?
How do I associate a URL scheme with the plugin? (e.g. `microvmi://`)
:arrow_right: if you want to have a look at the code: https://github.com/Wenzel/LeechCore-plugins/blob/microvmi_plugin/leechcore_device_microvmi/leechcore_device_microvmi.c
Thanks! :wink:
This is most awesome; and no need to apologize for things taking time. I've been super busy as well.
The easiest way to test your plugin is probably to download PCILeech or MemProcFS and place your plugin alongside it.
The Python bindings (if you've installed LeechCore via pip) will load the .so files from the site-packages / pip install location, not from the current working directory. I think this is your error.
Otherwise the plugin looks OK, except that you'll need to set the memory map and/or the max physical address upon initialization. LiveCloudKd is a nice example of an external plugin (even though it's for Windows); its memory map initialization is found here: https://github.com/gerhart01/LiveCloudKd/blob/67ecd35506d33119704ec63e96500f5ab029e1ab/leechcore_device_hvmm/leechcore_device_hvmm.c#L421
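In rough terms, that initialization boils down to something like the sketch below. This is only a sketch under a couple of assumptions: it uses the `LcMemMap_AddRange()` helper and the `Config.paMax` field as declared in `leechcore_device.h`, and `DeviceExample_InitMemMap` / `paMaxGuest` are made-up placeholder names.

```c
#include "leechcore_device.h"

// Sketch only: describe the guest RAM layout to LeechCore at create time.
// Assumes LcMemMap_AddRange() and ctxLC->Config.paMax from leechcore_device.h;
// paMaxGuest is a placeholder for whatever the backend reports.
static BOOL DeviceExample_InitMemMap(PLC_CONTEXT ctxLC, QWORD paMaxGuest)
{
    // Add a single range covering [0, paMaxGuest). A real plugin would add
    // one range per memory run reported by the hypervisor.
    if(!LcMemMap_AddRange(ctxLC, 0, paMaxGuest, 0)) { return FALSE; }
    // And/or cap the max physical address so LeechCore knows where to stop.
    ctxLC->Config.paMax = paMaxGuest;
    return TRUE;
}
```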
Also, when you get it to work, write support may be interesting as well :) As for the Python side, I could probably co-bundle this in my Python package to make it super easy for the user. PCILeech, MemProcFS and also Volatility support LeechCore in the background.
Please let me know if you'll have any more questions and I'll try to do my best.
And huge thanks for looking into this awesome addition ❤️
I think I'm a bit confused by how the LeechCore plugin loader works, especially about the location where I'm supposed to put the plugins.
In the screenshot above, I placed my plugin `leechcore_device_microvmi.so` in the `site-packages/leechcorepyc/` directory, initialized leechcore, and monitored it with strace.
When I look at the output, I can see that leechcore attempted to open my plugin but failed because the path is wrong: it has appended an absolute path to another absolute path.
Is there something wrong with my environment? :thinking:
@ufrisk I believe you need to remove this line: https://github.com/ufrisk/LeechCore/blob/master/leechcore/leechcore.c#L259
since the `LoadLibraryA` wrapper is already responsible for calling `Util_GetPathLib()`.
Thanks for this excellent bug report. Unfortunately the device plugin path is not that well tested on Linux, as you noticed. Apologies for this.
Anyway, it should be fixed now if you update the pip package or download the new sources/binaries from GitHub (only in the LeechCore project atm).
I went with a slightly different approach, which more closely emulates LoadLibrary on Windows: https://github.com/ufrisk/LeechCore/blob/d067ee2aa7d9f7a87963c8e907d9f8bdf221dbff/leechcore/oscompatibility.c#L212 This is to avoid strange behavior on Windows. The result should be the same though.
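The general idea, as a rough sketch (this is not the actual oscompatibility.c code, just an illustration using standard dlopen()/dladdr()): resolve a bare module name against the directory the already-loaded leechcore shared object lives in, similar to how LoadLibrary on Windows searches next to the calling module, and only then fall back to the normal dlopen() search path.

```c
#define _GNU_SOURCE
#include <dlfcn.h>
#include <libgen.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>

// Rough sketch of the idea only (not the real oscompatibility.c implementation).
void* LoadLibraryA_sketch(const char *szModule)
{
    Dl_info info;
    char szDir[PATH_MAX], szFull[PATH_MAX];
    void *hLib;
    // For bare module names: locate the directory of the shared object that
    // contains this function (i.e. leechcore.so) and try there first.
    if(!strchr(szModule, '/') && dladdr((void*)LoadLibraryA_sketch, &info) && info.dli_fname) {
        strncpy(szDir, info.dli_fname, PATH_MAX - 1);
        szDir[PATH_MAX - 1] = '\0';
        snprintf(szFull, PATH_MAX, "%s/%s", dirname(szDir), szModule);
        if((hLib = dlopen(szFull, RTLD_NOW))) { return hLib; }
    }
    // Fall back to the default dlopen() search path.
    return dlopen(szModule, RTLD_NOW);
}
```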
Can you try to see if it's working and let me know?
Thanks for the bug fix!
I implemented init argument parsing, and debugged why I couldn't load my plugin (missing symbol).
But I'm facing an issue now: reading physical memory from leechcore always fails, and I don't know why.
I have a callback configured for `pfnReadContigious`; isn't that enough?
@ufrisk By the way, I would suggest printing the LoadLibrary error to your log somewhere in case something fails; without it, it was impossible to know what went wrong in my plugin:
```c
if((ctx->hDeviceModule = LoadLibraryA(szModule))) {
    if((ctx->pfnCreate = (BOOL(*)(PLC_CONTEXT, PPLC_CONFIG_ERRORINFO))GetProcAddress(ctx->hDeviceModule, "LcPluginCreate"))) {
        strncpy_s(ctx->Config.szDeviceName, sizeof(ctx->Config.szDeviceName), ctx->Config.szDevice, cszDevice);
        return;
    } else {
        FreeLibrary(ctx->hDeviceModule);
        ctx->hDeviceModule = NULL;
    }
} else {
    // suggested addition: on Linux LoadLibraryA wraps dlopen(), so dlerror() holds the failure reason
    fprintf(stderr, "LoadLibrary failed: %s\n", dlerror());
}
```
Okay, it seems the error is on my side after all: my API call doesn't read any bytes. I'm looking into it :)
Edit: found my issue, fix is on its way: https://github.com/Wenzel/libmicrovmi/pull/210
@ufrisk do you know if it's possible to run MemProcFS and mount the FUSE filesystem as root?
I'm getting an "Invalid argument" error here when mounting the filesystem. I didn't have this kind of error when working with KVM (which doesn't require running as root).
Thanks; I don't know how I could have missed this. It must be some security setting in FUSE I'd need to disable. I'll look into it.
About the LoadLibrary error: I'll add extra output there in extra verbose (-vv) mode, but in the next release of LeechCore. I don't think it's anything that affects the end user much, but I agree it would be very nice to have it there.
Anyway, I'll let you know when I've fixed MemProcFS.
The mount as root bug should now be fixed. Thanks for reporting this.
@ufrisk I confirm the fix is working :tada:
Here is a showcase of MemProcFS running via libmicrovmi:
:heavy_check_mark: KVM
:heavy_check_mark: Xen
Working on VirtualBox for Linux, and then I'll look into making the libmicrovmi VirtualBox driver compatible with Windows.
Working on MemProcFS for VirtualBox via libmicrovmi through FDP; we are investigating a segfault: https://github.com/thalium/icebox/issues/38
:heavy_check_mark: Aaaand we have VirtualBox support now :wink:
Woosh, this is totally awesome news and progress! I'm very much looking forward to the finished plugin :)
If you do need to chunk the memory reads to 4kB, the ReadScatter() function may be a better fit than ReadContigious().
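For illustration, a scatter read callback could look roughly like the sketch below. It assumes the `pfnReadScatter` hook and the `MEM_SCATTER` fields (`qwA`, `pb`, `cb`, `f`) from `leechcore_device.h`; `vmi_read_physical()` is a hypothetical stand-in for the libmicrovmi read call, not a real API.

```c
#include "leechcore_device.h"

// hypothetical stand-in for the underlying libmicrovmi physical read
extern BOOL vmi_read_physical(QWORD pa, PBYTE pb, DWORD cb);

// Sketch of a scatter read: each MEM_SCATTER entry is one page-sized request;
// set f = TRUE only for the entries that were actually read.
VOID DeviceExample_ReadScatter(PLC_CONTEXT ctxLC, DWORD cpMEMs, PPMEM_SCATTER ppMEMs)
{
    DWORD i;
    PMEM_SCATTER pMEM;
    for(i = 0; i < cpMEMs; i++) {
        pMEM = ppMEMs[i];
        if(pMEM->f) { continue; }   // already satisfied, skip
        if(vmi_read_physical(pMEM->qwA, pMEM->pb, pMEM->cb)) {
            pMEM->f = TRUE;
        }
    }
}

// registered at create time, e.g.: ctxLC->pfnReadScatter = DeviceExample_ReadScatter;
```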
Also, if possible, it would be super nice to have write capabilities.
Please let me know if you would need anything from my side and I'll try to look into it right away :)
I believe I spotted an issue with the verbosity command line handling:
Enabling `-vv` works:
While enabling `-vvv` hides the `-v` and `-vv` messages:
Is this by design, or is it an issue? I'm working with your latest release here: https://github.com/ufrisk/MemProcFS/releases/tag/v4.2
:heavy_check_mark: This adds memflow support as well, by passing the connector name (cc @ko1N). Inspecting an unmodified QEMU instance via memflow-qemu-procfs:
This should help to solve https://github.com/ufrisk/MemProcFS/issues/62
This is super nice; after this LeechCore will be able to integrate with pretty much any virtualization software on the market 👍
About the verbosity: you have to add both -vv and -vvv if you want both levels; I know it's a bit of a mess. The long-term plan is to implement a proper logging system so that you can enable more fine-grained logging of separate components if you wish. I just haven't gotten around to doing that yet. There have always been other more important things...
update:
Do you wish to test the plugin, maybe?
Wow, this looks super nice; I'll be super happy to test it. Unfortunately I'm a bit too busy during the weekdays; I'll do it on the weekend.
I'd be happy to include the package in my default binary packaging. I'm guessing it's for amd64 Linux only, right? Not aarch64 or Windows?
> Wow, this looks super nice; I'll be super happy to test it. Unfortunately I'm a bit too busy during the weekdays; I'll do it on the weekend.

There is no rush, and weekends are precious :)

> I'd be happy to include the package in my default binary packaging. I'm guessing it's for amd64 Linux only, right? Not aarch64 or Windows?

amd64 Linux only for now. Windows support is ongoing: I need to compile and distribute the VirtualBox driver and then package libmicrovmi. I'm not knowledgeable about cross-compilation with Cargo/Rust here, but it's an issue I can track.
Apologies for taking some time to look into this.
The module compiles fine on Ubuntu 18.04.
It seems like my default PCILeech/MemProcFS binaries won't work though, since I build on Ubuntu 20.04 (a more recent GLIBC is required). I'm still a bit undecided whether I should start building on 18.04 for better backwards compatibility or keep things as-is.
The module builds to a very reasonable size so I'd be happy to include it in my default binaries. I'd still have the GLIBC issue when running it on 18.04 though.
I've also installed icebox, microvmi (the .deb package) and Rust.
However, I get some kind of Rust error no matter what I do. Any idea what this may be due to?
hey @ufrisk
I guess it's my turn to fix bugs in my code :) Indeed, the KVM driver was panicking when the library it depends on couldn't be located and loaded. This was early implementation behavior that stayed around for too long, and the other drivers shouldn't panic anymore.
It has been fixed: https://github.com/Wenzel/libmicrovmi/pull/224 https://github.com/Wenzel/kvmi/pull/49
Along with a new release of libmicrovmi: https://github.com/Wenzel/libmicrovmi/releases/tag/v0.3.7
Also, you don't need to install the Rust compiler to run the library.
On a side note, you can use `export RUST_LOG=debug` to display as much information as possible while libmicrovmi is initializing.
I hope this helps get you going with running MemProcFS on icebox!
EDIT: congrats on going through the whole icebox setup and running a VM behind it; I'm sure the icebox team would be happy to hear about this integration :wink: (cc @bamiaux)
Many thanks for the update, and it's good to see the issue was fairly well known and is now resolved. There however seem to be more instances of the same issue in other places. I got a little further before hitting another similar crash. After installing `libxen-dev` it went away though, so no worries.
After the install of the missing `libxen-dev` it however fails to connect to icebox, and I have no clue why that may be. I tried as both user and root, both for MemProcFS and icebox. Any ideas?
About icebox: I first tried it on Ubuntu 20.04 but it was not possible. Some dependencies have changed names, and VirtualBox had a bug that caused compilation to fail on GCCs with a two-digit version number, which wasn't backported to icebox. On Ubuntu 18.04 everything went super smoothly though, thanks to the excellent install guide :)
On my way to fix the panic on https://github.com/Wenzel/xenstore/blob/master/src/libxenstore.rs#L41 :)
> After the install of the missing `libxen-dev` it however fails to connect to icebox, and I have no clue why that may be. I tried as both user and root, both for MemProcFS and icebox. Any ideas?

You shouldn't have to run it as root for icebox.
I suggest running `RUST_LOG=debug ./memprocfs ...` to get more debug info.
Thanks :)
How do you wish to proceed with this? Do you feel it's release-ready as-is, or do you prefer to look into Windows support first as well? Also, do you wish to keep it as a stand-alone plugin here and have me link to it, similarly to what I do with LiveCloudKd; or would you like me to co-bundle the .so in my x64 Linux release? (In that case I think I'll have to start building on Ubuntu 18.04 for better backwards compatibility; but as long as it doesn't cause any issues on more recent Linuxes, that's probably just a good thing.)
Hi @ufrisk ,
You uncovered new bugs and I'm glad that I could fix them before an official release :)
Regarding `liblibFDP.so`: this bug was introduced when I modified the crate to be compatible with Windows:
https://github.com/Wenzel/fdp/pull/18
I used a function to determine the library name based on the OS, and for Unix it also adds the `lib` prefix.
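For illustration only (the actual code is Rust in the fdp crate), the per-OS naming logic is equivalent to something like this C sketch; passing a name that already carries the `lib` prefix is what produced `liblibFDP.so`:

```c
#include <stdio.h>

// C sketch of the per-OS library-name logic described above; the real
// implementation lives in the Rust fdp crate.
static void library_file_name(const char *name, char *out, size_t cb)
{
#ifdef _WIN32
    snprintf(out, cb, "%s.dll", name);      // e.g. "FDP" -> "FDP.dll"
#else
    snprintf(out, cb, "lib%s.so", name);    // e.g. "FDP" -> "libFDP.so"
#endif
}

// Passing "libFDP" instead of "FDP" on Unix yields "liblibFDP.so" -- the bug above.
```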
As for the Xen driver, it shouldn't panic anymore: https://github.com/Wenzel/libmicrovmi/pull/228
> How do you wish to proceed with this? Do you feel it's release-ready as-is, or do you prefer to look into Windows support first as well?

Give me a few days to see if I can add Windows support as well, and then we can look at an official release.

> Also, do you wish to keep it as a stand-alone plugin here and have me link to it, similarly to what I do with LiveCloudKd; or would you like me to co-bundle the .so in my x64 Linux release?

That's totally up to you. If you feel like your users could benefit from having the microvmi plugin directly bundled with LeechCore / MemProcFS releases, I'm more than happy to see it used and integrated! :100:
I've just triggered the release for v0.3.8 with the fixes I mentioned above:
https://github.com/Wenzel/libmicrovmi/actions/runs/1284801306
You can test it to confirm :)
Thanks. It seems to be working alright 👍
I also saw you released a Windows version; I was unable to easily install IceBox on Windows though. I think I may have to uninstall VMware and also enable driver testing mode, so I'll just trust that it's working.
If you feel the plugin is ready enough I'll be super happy to bundle it in the Linux version. It's tiny enough and it may be useful for some users.
As far as the Windows version goes it's probably better to co-bundle it with libmicrovmi and have it as a separate download. I'm not too keen on co-bundling it with my Windows releases since it's quite large.
Also, I've started to compile on Ubuntu 18.04 so it should now be more backwards compatible.
Please let me know when you feel it's ready enough and I'll co-bundle it and tweet something about it :) And thank you for this awesome work!
> I also saw you released a Windows version; I was unable to easily install IceBox on Windows though. I think I may have to uninstall VMware and also enable driver testing mode, so I'll just trust that it's working.

I couldn't test it either; icebox is a complicated setup and I don't have much time on my hands for Windows here.

> If you feel the plugin is ready enough I'll be super happy to bundle it in the Linux version. It's tiny enough and it may be useful for some users.

Awesome!

> As far as the Windows version goes it's probably better to co-bundle it with libmicrovmi and have it as a separate download. I'm not too keen on co-bundling it with my Windows releases since it's quite large.

:+1:

> Please let me know when you feel it's ready enough and I'll co-bundle it and tweet something about it :) And thank you for this awesome work!

I feel like we are close to ready now.
I suppose the next step would be for me to make a PR from `mtarral/LeechCore-plugins` to `ufrisk/LeechCore-plugins`?
I added a short tutorial in my documentation to use MemProcFS on QEMU: https://wenzel.github.io/libmicrovmi/tutorial/memprocfs_qemu.html
Awesome; please feel free to make a pull request at any time and I'll add the plugin to my binary Linux version 👍 I'll also update some documentation.
Wow guys! Congratulations on a great cooperation. It's super cool to see LeechCore support VMs through the libmicrovmi plugin. I'm the creator of ufrisk/MemProcFS#62. I suppose I can now close the issue?
Once again thanks for this integration.
I have a bug report as well. It only seems to affect PCILeech sometimes, and not MemProcFS. When I try to do a write, microvmi segfaults. It would be much nicer if the write would just fail gracefully, if it has to fail at all. No biggie by any means, but I guess bug reports are welcome.
Ah yes, thanks for the bug report.
Indeed, the `write_physical` trait API has not been implemented in https://github.com/Wenzel/libmicrovmi/blob/master/src/driver/virtualbox.rs#L33
I should also open an issue so that libmicrovmi fails gracefully instead of panicking when an API is not implemented.
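On the LeechCore plugin side, failing gracefully essentially means leaving the failure flag unset when the backend write doesn't succeed. A rough sketch, assuming the `pfnWriteScatter` hook and the `MEM_SCATTER` fields from `leechcore_device.h`; `vmi_write_physical()` is a hypothetical stand-in for the libmicrovmi write call, not a real API:

```c
#include "leechcore_device.h"

// hypothetical stand-in for the underlying libmicrovmi physical write
extern BOOL vmi_write_physical(QWORD pa, PBYTE pb, DWORD cb);

// Sketch of a write callback that degrades gracefully: entries whose backend
// write fails are simply left with f == FALSE, so LeechCore reports a failed
// write instead of the whole process crashing.
VOID DeviceExample_WriteScatter(PLC_CONTEXT ctxLC, DWORD cpMEMs, PPMEM_SCATTER ppMEMs)
{
    DWORD i;
    PMEM_SCATTER pMEM;
    for(i = 0; i < cpMEMs; i++) {
        pMEM = ppMEMs[i];
        if(pMEM->f) { continue; }
        pMEM->f = vmi_write_physical(pMEM->qwA, pMEM->pb, pMEM->cb);
    }
}
```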
LeechCore is a physical memory acquisition library providing various methods from software to hardware.
On the software side, it can acquire live memory through 4 methods:
Through the libmicrovmi integration, support can be added to access the live memory of VMs running on:
:heavy_check_mark: KVM
:heavy_check_mark: Xen
:heavy_check_mark: VirtualBox
:heavy_check_mark: QEMU (via memflow)
Also, in the future, we could refactor and add LiveKd and LiveCloudKd as part of libmicrovmi, since they give access to a VM's physical memory and are totally in the scope of a libmicrovmi driver.
cc @ufrisk for his opinion on the matter, and on what the next step is to accomplish this.