Und3rf10w / external_c2_framework

Python API for use with Cobalt Strike's External C2 specification

Add support for multiple clients #14

Closed Und3rf10w closed 6 years ago

Und3rf10w commented 6 years ago

Apparently, Cobalt Strike actually has support for multiple simultaneous external c2 client connections. Need to add functionality that can facilitate this. Most of the changes are going to have to be to the server logic.

Multiple go() commands can be sent to the c2 server, which will respond to each one with a new stager. A separate TCP socket will have to be opened for each client between the c2 server and the controller. The server will have to be able to identify which traffic being sent corresponds to which client.

Primary Tasks

The following major tasks will have to be completed in order to facilitate this:

Detailed Tasks

See this comment for more info

Blockers

Optimal Processing

Off the top of my head, there are two approaches that can be taken to modifying the server logic:

Approach 1

A new thread is used for each client; this thread opens the socket to the c2 server, requests a stager, uploads the stager, and communicates with the client at that client's specified block interval. In addition, each thread will have to maintain its own state.

This will also increase the rate of utilization of the transport, which could be an issue for transports built on an API that relies on credits or imposes rate limits. That being said, each client would be able to have a differently configured block time, which is certainly a desirable feature for some possible transports.

Approach 2

A new thread is used for each client; this thread opens the socket to the c2 server, requests a stager, uploads the stager, and communicates with the client every time a batch processing job happens. If a job does not correspond to the data for a given thread, that data is added to a queue that ensures it is processed during the next run of the corresponding job.

Each thread wouldn't have to maintain its state with the client, as the server would simply be cycling through the send_data_to_client, recv_data_from_client, send_data_to_c2, and recv_data_from_c2 jobs in the same order every time.
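A minimal sketch of that fixed rotation, assuming a shared batch interval (the four job names come from above; everything else, including the stub bodies, is illustrative):

import time

BATCH_INTERVAL = 60  # hypothetical shared block timer (seconds)

# stubs standing in for the four real jobs named above
def send_data_to_client(beacon_id): pass
def recv_data_from_client(beacon_id): pass
def send_data_to_c2(beacon_id): pass
def recv_data_from_c2(beacon_id): pass

beacons = ["0001", "0002"]  # illustrative beacon ids

while True:
    # every batch run walks the same four jobs, in the same order,
    # for every known beacon
    for beacon_id in beacons:
        send_data_to_client(beacon_id)
        recv_data_from_client(beacon_id)
        send_data_to_c2(beacon_id)
        recv_data_from_c2(beacon_id)
    time.sleep(BATCH_INTERVAL)  # all clients share this one block timer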

There are some disadvantages to this approach:

Each client would have the same block timer, and it would be difficult to quickly identify whether a client has died. There's also an argument that batch processing significantly increases the amount of generated artifacts, making it particularly susceptible to time-series pattern analysis.


Both approaches have merits and faults; ideally both could be added, with the user given the ability to determine which approach they'd like to use when starting a server.

Ultimately, I'd like to see both approaches added in so that the project may reach its long term goal of being able to support multiple clients that are all utilizing multiple and possibly even different transports and encoders for each client.

Und3rf10w commented 6 years ago

I'm gonna go ahead and close this as a wontfix for now. Please reference #26 for additional details. The new design will (read: should) easily allow for this in a very straightforward manner.

Und3rf10w commented 6 years ago

Thinking about it, this can hopefully get fixed before doing a rewrite, which should also give some ideas on how to best tackle this in the rewrite, or may eliminate a rewrite altogether. Going to reopen this issue for now and add a "help wanted" tag.

Und3rf10w commented 6 years ago

Perhaps I'm overcomplicating it? Need to test it, but maybe just wrap https://github.com/Und3rf10w/external_c2_framework/blob/dev/skeletons/frameworks/cobalt_strike/server/server.py#L50-L103 in threading.Thread() inside a while True loop? This would open a new socket to the c2 server for every beacon, which is what's required anyway. Obviously this would require Thread.daemon = True.
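A minimal sketch of that idea, assuming a hypothetical wait_for_new_client() check (handle_beacon stands in for the server.py logic linked above):

import threading
import time

def wait_for_new_client():
    # stub: the real server would block on the transport until a new
    # client announces itself
    time.sleep(5)
    return True

def handle_beacon():
    # stands in for server.py#L50-L103: open a new socket to the c2
    # server, request and upload the stager, then service this
    # beacon's traffic
    pass

while True:
    if wait_for_new_client():
        t = threading.Thread(target=handle_beacon)
        t.daemon = True  # as noted above, daemon threads are required
        t.start()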

Rhino Security Labs Fork

Looking at the RhinoSecurityLabs fork, it appears that this is essentially what they're doing:

  1. An empty array is made to store the beacons.
  2. A check is done to see if there are any new beacons.
  3. In that check, the id for that beacon is extracted (received from the client) and returned to server.py.
  4. createConnection() is called, a new socket is made, the transport is prepped, and configureStage.loadStager(sock, beaconId) is called.
  5. The beaconId is passed to commonUtils.sendData, which passes it to the transport.
  6. From within the transport, the beaconId is added to the header of the stager being sent, and the server waits for a metadata response from the client.
  7. To do this, commonUtils.retrieveData(beaconId) is called, which calls transport.retrieveData(beaconId), which then pulls the response specific to that beaconId.
  8. The socket object (to the c2 server) is returned, and a new thread is started that calls taskLoop(sock, beaconId). The beaconId is appended to the beacons dictionary in the server.
  9. In taskLoop(), checkForTasks(sock) is called, and once there's a new task, the new task and beaconId is passed to establishedSession.relayTask.
  10. Then basically steps 7-9 are repeated for the life of the beacon.
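Pulling those steps together, a rough sketch of the fork's server flow (the function names come from the steps above; the stub bodies and checkForNewBeacons are illustrative):

import threading

# illustrative stubs for the fork's functions named in the steps above
def checkForNewBeacons():
    return None  # steps 2-3: would return a new beaconId, if any

def createConnection():
    return None  # step 4: would open a new socket to the c2 server

def loadStager(sock, beaconId):
    pass  # steps 4-6: prep the transport, tag the stager with beaconId

def taskLoop(sock, beaconId):
    pass  # steps 9-10: checkForTasks/relayTask for the life of the beacon

beacons = []  # step 1: empty store for known beacons

while True:
    beaconId = checkForNewBeacons()
    if beaconId and beaconId not in beacons:
        sock = createConnection()
        loadStager(sock, beaconId)  # configureStage.loadStager in the fork
        beacons.append(beaconId)    # step 8
        t = threading.Thread(target=taskLoop, args=(sock, beaconId))
        t.daemon = True
        t.start()                   # steps 8-10 run per beacon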

Takeaways

While this is an effective approach for this specific implementation, there are a few nuances that prevent it from being merged directly into the main codebase, mostly centered around modularity concerns. To start, I'll list the insights that can be directly applied:

Directly Applicable Insights

Considerations

Tasks

A number of tasks will have to be created to attack this, but now there at least seems to be a clear way to approach it.

I'm probably missing something, but this seems to be a doable plan of attack. I'll likely make use of a project board to track the progress of this. If this approach works, it should also take care of #13, and I'll be able to close out milestone 2.

Und3rf10w commented 6 years ago

Copying the tasks from the above comment into the issue's main comment to facilitate GitHub project tracking.

Und3rf10w commented 6 years ago

Commit 3661883 adds a number of TODOs.

Realized it's beneficial for the server to keep track of two separate block timers: one per beacon, controlling how long to wait before checking the c2 server for new tasks, and one controlling how often to check the transport to see if any new beacons have arrived. Since this is an architecture problem, I added a new task to track this.
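As a minimal sketch, the two independent timers might look something like this (the names and values are hypothetical):

import time

BEACON_BLOCK_TIME = 5       # per-beacon: wait between c2 task checks
NEW_BEACON_CHECK_TIME = 30  # server-wide: wait between transport checks

def beacon_loop(beacon_id):
    while True:
        # check the c2 server for new tasks for this beacon
        time.sleep(BEACON_BLOCK_TIME)

def new_beacon_loop():
    while True:
        # check the transport for newly arrived beacons
        time.sleep(NEW_BEACON_CHECK_TIME)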

Und3rf10w commented 6 years ago

Commit 37e2fde adds more TODOs.

I realized we'll probably have to implement a queue to process new beacons as they're received. The above commit tracks SOME of that logic but doesn't fully realize it; for the project's tracking sake, I'll again add this as a new task since it's an architecture thing, and fully realize it in the next commit.
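A minimal sketch of that queue, assuming a transport checker feeding the main server loop (this uses Python 2's Queue module, named queue in Python 3):

import Queue  # named queue in Python 3

new_beacon_queue = Queue.Queue()

# transport-checker side: push the ids of beacons that just arrived
new_beacon_queue.put("0001")

# server-loop side: drain the queue and start a handler per new beacon
while not new_beacon_queue.empty():
    beacon_id = new_beacon_queue.get()
    # ...open the socket / start the per-beacon thread for beacon_id here...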

Und3rf10w commented 6 years ago

Commit 3f14db8 implements new_beacon_queue mentioned in the comment above.

Some thoughts on checking for new beacons:

task_loop currently handles c2 server interaction and task relaying fine, but needs to be able to decouple task retrieval in a way that would only ever grab responses for a specific beacon. This will likely best be done during the refactoring of existing transport logic. Something to consider, but we'll burn that bridge when we come to it.

Und3rf10w commented 6 years ago

Commit 1e6a910 adds TODOs and does some minor refactoring. In addition, it also adds a default for beacon.Beacon.beacon_id to account for weird cases.

The major things left to do are:

Some minor changes are probably going to have to be made to the client logic to account for the new data model, so I'll add another task to track that. This won't solve #22, so I won't tackle client abstraction yet, but that'll be easy to tackle after this.

I'm envisioning the common data model to look something like the following: {0001, NEW_TASK, AAAAAAA==}, where the third field is the base64'd raw data of the task/response. This could still be passed through an encoder module to obfuscate it, but it allows for simple handling. I'm opting not to assign each task/response its own ID, as there's no way to associate a particular task with a received response. If a database feature is ever implemented, then I'll add a timestamp to the entry for each task/response.

Und3rf10w commented 6 years ago

Commit 536d545 starts the common data model, mostly set up for the server.

Und3rf10w commented 6 years ago

Commit 0cd6618 refactors the client to utilize the new data model

Und3rf10w commented 6 years ago

Right now, the data model doesn't use task_name. This is subject to change, but I need to take a look at what I can do with a transport and try to understand whether that affects anything.

Und3rf10w commented 6 years ago

The current data model doesn't work, nor does it maintain order, but the basic idea is there:

>>> import base64
>>>
>>> client_id = 1
>>> task_data = "ayyyylmao"
>>> task_key = "NEWTASK"
>>> resp_key = "RESP4U"
>>> resp_data = "lolwtfbbq"
>>>
>>>
>>> def data_encode(data):
...     return base64.b64encode(data)
...
>>> def data_decode(data):
...     return base64.b64decode(data)
...
>>> task_frame = {client_id, task_key, data_encode(task_data)}
>>> print task_frame
set([1, 'YXl5eXlsbWFv', 'NEWTASK'])
>>> print task_frame[1]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'set' object does not support indexing

Researching alternatives

Und3rf10w commented 6 years ago

Found a viable solution in this Stack Overflow answer. It will require us to use the ast module, but it's distributed with Python, so it shouldn't be an issue.

Basically, the data model needs to be a list instead of the set I was accidentally building (the {...} literal above creates a set, not a dict). We will encode the data, rebuild the list, turn the list into a string, then encode that string.

To transform it back, we'll decode the encoded frame, use ast.literal_eval() to convert it back into a list, extract the relevant fields, base64-decode the data value, then continue as normal.

Here's a small script to demonstrate:

import ast
import base64

def encode_data(data):
    return base64.b64encode(data)

def decode_data(data):
    return base64.b64decode(data)

client_id = 15
key = "new_task"
data = "ayyylmao"

# build the frame as a list so order is preserved and indexing works
frame = [client_id, key, encode_data(data)]
print frame
print "is type: " + str(type(frame))

# stringify the whole list, then encode it for transport
print "encoding frame"
encoded_frame = encode_data(str(frame))
print encoded_frame + " is type: " + str(type(encoded_frame))

print "decoding encoded frame"
decoded_frame = decode_data(encoded_frame)
print decoded_frame + " is type: " + str(type(decoded_frame))

# ast.literal_eval safely turns the stringified list back into a real list
print "converting to list"
list_decoded_frame = ast.literal_eval(decoded_frame)
print list_decoded_frame
print " is type: " + str(type(list_decoded_frame))

# the data field always sits at index 2 of the frame
print "extracting encoded data"
decoded_data = decode_data(list_decoded_frame[2])
print "decoded data: " + decoded_data

And the output:

$ python test_encoding.py
[15, 'new_task', 'YXl5eWxtYW8=']
is type: <type 'list'>
encoding frame
WzE1LCAnbmV3X3Rhc2snLCAnWVhsNWVXeHRZVzg9J10= is type: <type 'str'>
decoding encoded frame
[15, 'new_task', 'YXl5eWxtYW8='] is type: <type 'str'>
converting to list
[15, 'new_task', 'YXl5eWxtYW8=']
 is type: <type 'list'>
extracting encoded data
decoded data: ayyylmao

Really, we probably SHOULD be using serialized json/dictionaries to do this, but let's keep it simple for right now.
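For comparison, a JSON version might look something like this (the field names are just illustrative):

import base64
import json

frame = {"client_id": 15, "key": "new_task",
         "data": base64.b64encode("ayyylmao")}
encoded_frame = base64.b64encode(json.dumps(frame))

decoded_frame = json.loads(base64.b64decode(encoded_frame))
print base64.b64decode(decoded_frame["data"])  # prints: ayyylmao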

Und3rf10w commented 6 years ago

Commit 490ee17 implements the changes mentioned above to the data model. At this point, still need to implement logic to check for new beacons, then refactor an existing transport to test this stuff out.

Und3rf10w commented 6 years ago

The transport is going to have to be passed the beacon_id so everything can key off of it in the third-party service. This is an annoying OPSEC nuisance and will probably have to be revisited. Hopefully users will be cognizant enough to modify modules as needed.

Refer to the following sequence diagrams for an idea of what this will look like:

Relaying a new task

(sequence diagram image)

Relaying a response

(sequence diagram image)

Und3rf10w commented 6 years ago

Commit 93b0c36 implements the relay-new-task logic in the server. Adding tasks to implement logic to associate a retrieved task's beacon_id with its corresponding beacon.sock.

Und3rf10w commented 6 years ago

DOCUMENTATION NOTE: Transports are now required to accept the beacon_id in sendData and retrieveData as an argument, regardless of whether it's used or not. Certain transports that rely on stream/near-real-time processing, such as transport_raw_tcp_socket, will likely have their own environment instantiated for each task_loop, so they would never actually use the beacon_id argument, but they should still accept it for the sake of uniformity.
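A sketch of what that contract looks like on the transport side (the bodies are illustrative):

def sendData(data, beacon_id):
    # tag the outgoing data with beacon_id so the third-party service
    # can key off of it
    pass

def retrieveData(beacon_id):
    # pull only the response corresponding to this beacon_id; a
    # stream-oriented transport may simply ignore the argument
    pass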

Und3rf10w commented 6 years ago

Commit 3199b9c implements task retrieval logic into the server. Need to replicate this into the client

Und3rf10w commented 6 years ago

Commit d7e65b7 implements the logic to pass the beacon_id around within both client implementations

Und3rf10w commented 6 years ago

Now need to settle on which transport to do this with. As I rewrite the transport, that'll give me insight on how to implement the new_beacon logic.

Und3rf10w commented 6 years ago

Commit f308189 tweaks transport_gmail to support passing the beacon_id in a beacon-independent way, in addition to a minor bugfix in the clients.

However, this has brought to light a bug in the project that is going to block the progress of getting this implemented. I am now tracking this bug in #29, and will add a task above for this as a blocker.

Und3rf10w commented 6 years ago

Well... I guess the good news is that we're almost half way done to getting this implemented...

Und3rf10w commented 6 years ago

Closed #29, and as a result added some new tasks here; thankfully that wasn't as difficult to rework as I was thinking.

Und3rf10w commented 6 years ago

Don't need to match clients to their associated sockets, as transport data retrieval will only ever be limited to the client's id, and thus associated with the proper beacon.Beacon object.

Und3rf10w commented 6 years ago

Commit fb67edc adds the requisite functions to transport_gmail, which is the transport I'm going to use to test this setup.

In theory, if I build an environment that uses transport_gmail and launch two clients with different IDs (and pipe names if on the same machine), everything should work, and then it would just be a matter of updating the rest of the transports to be compatible with the changes.

I do not have access to my testing environment at the moment, so this will have to wait until I am able to access it. But otherwise, yay! Great progress was made!

Und3rf10w commented 6 years ago

(screenshot of the working multi-client session)

W00T W00T!

Need to push a lot of code from the test environment up to this branch, but it FINALLY works!

Und3rf10w commented 6 years ago

Also should probably push additional info for the beacon object on session init, such as the desired arch and pipe name. This is a low level of effort to resolve.

Und3rf10w commented 6 years ago

Tested and verified with the gmail transport. Will push necessary changes up.

Und3rf10w commented 6 years ago

Commit 6b9c6e5 adds the changes that make multi-client support work. Need to polish it up, push additional beacon options, then modify other transports.

Und3rf10w commented 6 years ago

Commit messed up some of the indentation, but I'll fix that

Und3rf10w commented 6 years ago

Did not mean to close issue

Und3rf10w commented 6 years ago

Commit 0c6b4f1 adds the client modifications that I forgot to push as well

Und3rf10w commented 6 years ago

Commit 9a77e1f adds the logic to pass and receive the desired pipe name and arch from the client to the server, which will then use those options to load the correct stager from the cobalt strike c2 server.

Und3rf10w commented 6 years ago

Commit 29d55f2 adds a bunch of new logic for the rest of the existing transports to use the new model. Honestly, I'm gonna put off testing until the beta release is ready.

I do REALLY need to test the imgur one at least, as that'll get me to write a batch_task_loop, but I may just make that a whole new issue altogether.

Und3rf10w commented 6 years ago

Need to add documentation to the cobalt_strike framework for the new builder values of CLIENT_ID and PIPE_NAME. Possibly consider supporting some form of randomization in the pipe_name and/or client_id? Both are probably necessary if you want to seamlessly run multiple client applications on the same machine, because otherwise one has to manually build each individual client distribution with manually defined files for each unique instance.

Adding a new main task to track this; basically, need to determine the most seamless and user-friendly way to support this.
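One possible shape for that randomization (purely illustrative; nothing here exists in the builder yet):

import uuid

# give each build a unique CLIENT_ID and PIPE_NAME so multiple clients
# can run on the same machine without colliding
client_id = uuid.uuid4().hex[:8]
pipe_name = "pipe_" + uuid.uuid4().hex[:8]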

Und3rf10w commented 6 years ago

So the imgur transport is gonna require a bit of tweaking, regardless of the whole batch-vs-real-time processing argument. This will require me to complete the refactor of the client first, then refactor (read: rewrite and abstract) the way encoders and transports work.

In the meantime, I can probably merge this into dev and close it out. Getting closer...

Und3rf10w commented 6 years ago

Pull request created @ #30, so I guess I can close this out.