memflow / cloudflow

memflow command line interface

No qemu_procfs connectors found #15

Closed lukrei closed 10 months ago

lukrei commented 10 months ago

When I run cloudflow -ef, it cannot find the connector after I issue the command: echo "my_qemu_vm qemu" >> /cloudflow/connector/new

According to your description, the name "my_qemu_vm qemu" must be exactly like this because it would try to connect to the first Windows VM found.
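
For completeness, this is roughly the sequence I run (just a sketch; "my_qemu_vm" is the libvirt domain name of my Windows guest, and I am not sure whether an extra step is needed to attach the win32 OS on top of the connector):

    # terminal 1: start the daemon, which FUSE-mounts the control filesystem at /cloudflow
    cloudflow -ef
    # terminal 2: ask cloudflow to instantiate the qemu connector for that VM
    echo "my_qemu_vm qemu" >> /cloudflow/connector/new
    # the newly created connector should then show up under /cloudflow/connector
    ls /cloudflow/connector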

All of the following packages were installed from source with the memflowup utility:

memflow-win32 - CorePlugin [installed: git 9d690f]
memflow-native - CorePlugin [installed: git 8b056b]
memflow-qemu - CorePlugin [installed: git 306636]
memflow-coredump - CorePlugin [installed: git 98f2eb]
memflow-kcore - CorePlugin [installed: git 2eea57]
memflow-pcileech - CorePlugin [installed: git edaca4]
memflow-kvm - CorePlugin [installed: git 3e782a]

Expected behaviour: cloudflow is able to find the connector after starting the daemon with the -ef parameter.

lukrei commented 10 months ago

This is the detailed output on my Debian trixie KVM hypervisor:

Does anyone have an idea how to fix this? I tried a complete reinstall of Debian trixie, but nearly everything from h33p's guide is different now.

These two error messages are the most concerning for me right now:

[ERROR] outdated/mismatched Connector plugins where found at: /usr/local/lib/memflow
[ERROR] outdated/mismatched Connector plugins where found at: /usr/local/lib/memflow

user@srvhyp01:~/projects/memflow-applied/scanflow$ cloudflow -ef -vvv
20:17:08 [INFO] scanning "/usr/local/lib/memflow" for libraries
20:17:08 [WARN] MEMFLOW_OS_WIN32 has invalid ABI.
20:17:08 [WARN] MEMFLOW_CONNECTOR_COREDUMP has invalid ABI.
20:17:08 [WARN] MEMFLOW_CONNECTOR_KVM has invalid ABI.
20:17:08 [WARN] MEMFLOW_CONNECTOR_KCORE has invalid ABI.
20:17:08 [WARN] MEMFLOW_CONNECTOR_PCILEECH has invalid ABI.
20:17:08 [WARN] MEMFLOW_OS_NATIVE has invalid ABI.
20:17:08 [WARN] MEMFLOW_CONNECTOR_QEMU has invalid ABI.
20:17:08 [WARN] MEMFLOW_CONNECTOR_KVM has invalid ABI.
20:17:08 [INFO] scanning "/home/luky/projects/memflow-applied/scanflow" for libraries
Mounting FUSE filesystem on /cloudflow
20:17:08 [INFO] filesystem mounted at /cloudflow
20:17:08 [INFO] please use 'umount' or 'fusermount -u' to unmount the filesystem
Initialized!
20:17:08 [INFO] Mounting /cloudflow
20:17:08 [DEBUG] (2) fuse::request: INIT(2) kernel: ABI 7.38, flags 0x73fffffb, max readahead 131072
20:17:08 [DEBUG] (2) fuse_mt::fusemt: init
20:17:08 [DEBUG] (2) fuse::request: INIT(2) response: ABI 7.8, flags 0x1, max readahead 131072, max write 16777216
20:17:08 [DEBUG] (2) fuse::request: ACCESS(4) ino 0x0000000000000001, mask 0o004
20:17:08 [DEBUG] (2) fuse_mt::fusemt: access: "/", mask=0o4
20:17:08 [DEBUG] (2) filer_fuse: access "/"
20:17:08 [DEBUG] (2) fuse::request: LOOKUP(6) parent 0x0000000000000001, name ".Trash"
20:17:08 [DEBUG] (2) fuse_mt::fusemt: lookup: "/", ".Trash"
20:17:08 [DEBUG] (2) fuse::request: LOOKUP(8) parent 0x0000000000000001, name ".Trash-1000"
20:17:08 [DEBUG] (2) fuse_mt::fusemt: lookup: "/", ".Trash-1000"
20:17:14 [DEBUG] (2) fuse::request: LOOKUP(10) parent 0x0000000000000001, name "connector"
20:17:14 [DEBUG] (2) fuse_mt::fusemt: lookup: "/", "connector"
20:17:14 [DEBUG] (2) fuse_mt::inode_table: adding 2 -> "/connector" with 0 lookups
20:17:14 [DEBUG] (2) fuse_mt::inode_table: lookups on 2 -> Some("/connector") now 1
20:17:14 [DEBUG] (2) fuse::request: LOOKUP(12) parent 0x0000000000000002, name "new"
20:17:14 [DEBUG] (2) fuse_mt::fusemt: lookup: "/connector", "new"
20:17:14 [DEBUG] (2) fuse_mt::inode_table: adding 3 -> "/connector/new" with 0 lookups
20:17:14 [DEBUG] (2) fuse_mt::inode_table: lookups on 3 -> Some("/connector/new") now 1
20:17:14 [DEBUG] (2) fuse::request: OPEN(14) ino 0x0000000000000003, flags 0x8401
20:17:14 [DEBUG] (2) fuse_mt::fusemt: open: "/connector/new"
20:17:14 [DEBUG] (2) fuse::request: FLUSH(16) ino 0x0000000000000003, fh 274877906944, lock owner 5185539541502418424
20:17:14 [DEBUG] (2) fuse_mt::fusemt: flush: "/connector/new"
20:17:14 [DEBUG] (2) fuse_mt::fusemt: initializing threadpool with 8 threads
20:17:14 [DEBUG] (5) filer_fuse: flush "/connector/new"
20:17:14 [DEBUG] (2) fuse::request: WRITE(18) ino 0x0000000000000003, fh 274877906944, offset 0, size 16, flags 0x6
20:17:14 [DEBUG] (2) fuse_mt::fusemt: write: "/connector/new" 0x10 @ 0x0
20:17:14 [ERROR] unable to find plugin with name 'qemu'.
20:17:14 [ERROR] possible available Connector plugins are:
20:17:14 [ERROR] outdated/mismatched Connector plugins where found at: /usr/local/lib/memflow/libmemflow_coredump.dev.so, /usr/local/lib/memflow/libmemflow_kvm.x86_64.so, /usr/local/lib/memflow/libmemflow_kcore.dev.so, /usr/local/lib/memflow/libmemflow_pcileech.dev.so, /usr/local/lib/memflow/libmemflow_qemu.dev.so, /usr/local/lib/memflow/libmemflow_kvm.dev.so
20:17:14 [ERROR] EoF
20:17:14 [DEBUG] (2) fuse::request: RELEASE(20) ino 0x0000000000000003, fh 274877906944, flags 0x8401, release flags 0x0, lock owner 0
20:17:14 [DEBUG] (2) fuse_mt::fusemt: release: "/connector/new"

h33p commented 10 months ago

Cloudflow is currently using a fairly old memflow version. You could try running cargo update on the repo so it switches to memflow 0.2.0-beta10, and then try again.
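
Roughly (a sketch, assuming you build the cloudflow binary from a local checkout of this repository):

    cd cloudflow        # your local checkout of memflow/cloudflow
    cargo update        # refresh Cargo.lock to newer dependency versions (ideally memflow 0.2.0-beta10)
    cargo build --release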

ko1N commented 10 months ago

Hey, thanks for the issue! This is a slight oversight on my part: https://github.com/memflow/cloudflow had older memflow versions in its lockfile. I just pushed a new version of cloudflow that pins it directly to the latest version, 0.2.0-beta10 (which all of the plugins you installed are also on).

Please reinstall cloudflow from the latest master branch and let me know if that resolves your issue.
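
If you installed cloudflow manually with cargo, the reinstall could look roughly like this (a sketch; the exact steps may differ if you originally installed it via memflowup):

    git clone https://github.com/memflow/cloudflow
    cd cloudflow
    # build and (re)install the cloudflow binary from the current master branch;
    # depending on the workspace layout you may need to point cargo at the specific package
    cargo install --path . --force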

I also fixed the compilation errors in cloudflow on the latest Rust version by updating the minidump dependency.

lukrei commented 10 months ago

I tried it, and I can now at least list processes by name with ls in /cloudflow/os/win/processes/by-name, and I also got a minidump from the Windows OS. So I guess cloudflow is working as expected now. Thank you very much!
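
For anyone hitting the same problem, this is the quick check that now works for me:

    # list Windows processes through the mounted cloudflow filesystem
    ls /cloudflow/os/win/processes/by-name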

ko1N commented 10 months ago

I'm glad to hear that! If you encounter anything else please let us know :+1: