Closed: bcvilnrotter closed this issue 1 year ago
Have you correctly imported the collection? Can you show the uploaded files under the Flow ID you reference?
It is generally easier to run the query in the flow's notebook because then the flow id and client id are already populated.
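As a sketch, the query in the flow's notebook would look something like this (the artifact names are taken from later in this thread; since the flow notebook already knows the flow id and client id, no parameters should be needed, but treat this as an assumption rather than a verified invocation):

```vql
// Apply the KAPE remapping for the imported collection. Inside the flow's
// notebook the flow id and client id are already part of the scope.
SELECT * FROM Artifact.Exchange.KapeFiles.Remapping()

// With the remapping applied, parsers can then be run as if against a live
// client, e.g. (hypothetically) parsing the collected $MFT:
SELECT * FROM Artifact.Windows.NTFS.MFT()
```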
The import collection appeared to run correctly. I'll post screenshots and a giant text log of all the output I got when importing the collection.
Also, what is the flow's notebook? I've been testing these commands in a notebook I created, but not one specific to any client_id or flow_id.
import collection code and output:
screenshot of the file directory containing all the artifacts that were imported using the importcollection artifact:
all endpoint devices in my local velociraptor gui:
output text from the VQL is attached: import-collection-VQL-output.txt
I'm also able to pull out the client_id and flow_id values to confirm that they are accurate:
These are the steps you use to import the offline collector:
1) Go to the server artifacts and select the Server.Utils.ImportCollection artifact
2) Specify the path on disk where your offline collection container exists (on the server - it is up to you to transfer it to the server somehow).
3) This will import the collection as if it was collected by a regular client - meaning it will create a collection under the client's database if the client is already loaded; otherwise it creates a new offline client. Just search for it, or check the logs above to see what client id it ended up with.
4) Now this is exactly as if you had collected it with a client/server architecture. You should see all the imported files in the uploads tab.
You can also click on the flow's notebook tab to get a notebook already in the flow's context. It already knows the flow id and client id so you just apply the remapping and go.
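The same import can be sketched as a VQL query run from a server notebook (this mirrors the GUI steps above; the parameter names and values here are assumptions and should be checked against the artifact definition on your server):

```vql
// Hypothetical sketch: import an offline collector container that has
// already been transferred to the server's local disk.
SELECT * FROM Artifact.Server.Utils.ImportCollection(
    // "auto" would create a new offline client if one does not exist yet.
    ClientId="auto",
    // Path on the *server's* filesystem to the collection container.
    Path="/path/to/Collection-HOSTNAME.zip")
```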
Thanks for this. I'm still getting the same error when running the above command using the artifact from the exchange server (Artifact.Exchange.KapeFiles.Remapping()) within the imported collection's flow notebook, as you described. Let me know if you see anything off with what I've done.
the full error listing is below:
Applying remapping [type:"permissions" permissions:"COLLECT_CLIENT" permissions:"FILESYSTEM_READ" permissions:"FILESYSTEM_WRITE" permissions:"READ_RESULTS" permissions:"MACHINE_STATE" permissions:"SERVER_ADMIN" type:"impersonation" os:"windows" hostname:"Virtual Host" env:{key:"SystemRoot" value:"C:\\Windows"} env:{key:"WinDir" value:"C:\\Windows"} disabled_functions:"amsi" disabled_functions:"lookupSID" disabled_functions:"token" disabled_plugins:"users" disabled_plugins:"certificates" disabled_plugins:"handles" disabled_plugins:"pslist" disabled_plugins:"interfaces" disabled_plugins:"modules" disabled_plugins:"netstat" disabled_plugins:"partitions" disabled_plugins:"proc_dump" disabled_plugins:"proc_yara" disabled_plugins:"vad" disabled_plugins:"winobj" disabled_plugins:"wmi" type:"mount" from:{accessor:"fs" prefix:"/clients/C.b93f605adc899302/collections/F.CDCMS44FSD9HC/uploads/auto"} on:{accessor:"auto" path_type:"auto"} type:"mount" from:{accessor:"fs" prefix:"/clients/C.b93f605adc899302/collections/F.CDCMS44FSD9HC/uploads/ntfs"} on:{accessor:"ntfs" path_type:"ntfs"} type:"shadow" from:{accessor:"raw_reg"} on:{accessor:"raw_reg"} type:"shadow" from:{accessor:"zip"} on:{accessor:"zip"} type:"shadow" from:{accessor:"data"} on:{accessor:"data"} type:"shadow" from:{accessor:"scope"} on:{accessor:"scope"} type:"shadow" from:{accessor:"gzip"} on:{accessor:"gzip"}]
parse_mft: Unable to open file C:$MFT: open \\?\C:\Users\Glapt\AppData\Local\Temp\clients\C.b93f605adc899302\collections\F.CDCMS44FSD9HC\uploads\ntfs\%5C%5C.%5CC%3A$MFT: The system cannot find the path specified.
DEBUG:Query Stats: {"RowsScanned":1,"PluginsCalled":2,"FunctionsCalled":0,"ProtocolSearch":12,"ScopeCopy":5}
I can also confirm that I do see the uploaded files looking similar to those provided in your screenshots:
Question: is it appropriate to run Server.Utils.ImportCollection through VQL in a server notebook with the KapeFiles.Targets collection on the server, rather than through the GUI? I'm using this command in another project, so I'm wondering whether using Server.Utils.ImportCollection in this way is good etiquette.
It looks like it is escaping the separator between the C: and the $MFT somehow (%5C is the URL-encoded form of \ and %3A of :, so the %5C%5C.%5CC%3A component in the failing path is an escaped form of the device path \\.\C:). In 0.6.7 the entire collector format was refactored and it should be a lot more robust now with special characters and escaping - can you please test the process with 0.6.7 rc1?
Note that because 0.6.7 has not been released yet (it is at RC1), Velociraptor will not automatically update to it. You will need to manually upload the 0.6.7 binary by clicking on the VelociraptorWindows tool definition, selecting the rc1 binary, and clicking upload file.
Thanks for the help!! Updating my environment to the 0.6.7 prerelease version of Velociraptor did in fact work for the flow's notebook.
The artifacts that appear to work with the remapping artifact:
The artifacts that do not appear to work using this method:
Unfortunately, I'm still not able to use this method in the server notebook or through a hunt. Perhaps that is way outside the scope of this artifact, but I would like more options for running Windows.KapeFiles.Remapping on a number of imported collections at once. Do you think this is a feature that could be added in the future?
Thanks a bunch for your help again, really appreciative of it!
Since this appears not to be a bug, I'm going to close this out and move the information from my last comment into a possible feature enhancement. Thanks a lot for walking me through this @scudette.
Hello all,
I had a feeling that this was already being tracked, but a quick search of the open issues revealed no results. Using the KapeFiles.Remapping artifact described in the post-processing blog post appears to give the same errors regardless of which environment I use.
Environment:
Below is a screenshot of the error, as well as the corresponding code snippet:
I realized that the parse_mft() plugin couldn't find the $MFT file because it was looking for C:\$MFT within the collections folder for the specified ClientId and FlowId rather than the specific location the collection was uploaded to. Additionally, the path where it looks for the $MFT file appears to be a bit off.
\?\C:\Users\\AppData\Local\Temp\clients\C.b93f605adc899302\collections\F.CDCMS44FSD9HC\uploads\ntfs\%5C%5C.%5CC%3A\$MFT
should be
\?\C:\Users\\AppData\Local\Temp\clients\C.b93f605adc899302\collections\F.CDCMS44FSD9HC\uploads\file\C\$MFT
But making some ad-hoc changes to the example provided in the blog post doesn't appear to help. (It appears that running the command below runs the Windows.NTFS.MFT() artifact correctly for the machine I'm running Velociraptor on, but it fails to remap the accessor for the KapeFiles.Targets collection.)
The same error has occurred for me in both Windows and Linux environments.