I'm gonna go ahead and close this as a wontfix for now. Please reference #26 for additional details. The new design will (read: should) easily allow for this in a very straightforward manner.
Thinking about it, this can hopefully get fixed before doing a rewrite, which should also give some ideas on how to best tackle this in the rewrite, or may eliminate a rewrite altogether. Going to reopen this issue for now and add a "help wanted" tag.
Perhaps I'm overcomplicating it? Need to attempt, but maybe just wrap https://github.com/Und3rf10w/external_c2_framework/blob/dev/skeletons/frameworks/cobalt_strike/server/server.py#L50-L103 in a `threading.Thread()` inside a `while True` block? This would open a new socket to the c2 server for every beacon, which is what's required anyways. Obviously would require `Thread.daemon = True`.
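Roughly what I have in mind, as an illustrative Python 3 sketch (the `beacon_loop`/`start_beacon_thread` names and the listener address are placeholders of mine, standing in for the linked server.py block, not code from the repo):

```python
import socket
import threading
import time

def beacon_loop(beacon_id):
    # Hypothetical stand-in for the block currently at server.py#L50-L103:
    # open a dedicated socket to the Cobalt Strike external c2 listener
    # (address is illustrative), stage the beacon, then relay tasks and
    # responses for this beacon only.
    c2_sock = socket.create_connection(("127.0.0.1", 2222))
    try:
        while True:
            # request tasks from the c2 server, relay to the beacon,
            # push responses back, sleep for the block time, repeat...
            time.sleep(5)
    finally:
        c2_sock.close()

def start_beacon_thread(beacon_id):
    # One thread (and one c2 socket) per beacon, daemonized so the
    # threads die with the main server process.
    t = threading.Thread(target=beacon_loop, args=(beacon_id,))
    t.daemon = True
    t.start()
    return t
```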
Looking at the RhinoSecurityLabs fork, it appears that this is essentially what they're doing in `server.py`:

- `configureStage.loadStager(sock, beaconId)` is called. The `beaconId` is passed to `commonUtils.sendData`, which passes it to the transport.
- `commonUtils.retrieveData(beaconId)` is called, which calls `transport.retrieveData(beaconId)`, which then pulls the response specific to that `beaconId`.
- `taskLoop(sock, beaconId)` is called. The `beaconId` is appended to the `beacons` dictionary in the server.
- Within `taskLoop()`, `checkForTasks(sock)` is called, and once there's a new task, the new task and `beaconId` are passed to `establishedSession.relayTask`.

While this is an effective approach for this specific implementation, there are a few nuances that prevent it from being merged directly into the main codebase, mostly centered around modularity concerns. To start, I'll list the insights that can be directly applied:
- A dedicated `socket` object for each beacon.
- Using a dictionary in the `server` to keep track of the sessions without having to rely on files or a database.
- The `server.taskLoop()` function is something that should have been done in the first place, and will likely be reused for the most part.
- The header containing the `beaconId` and `dataType` is really nice, and enables us to relay commands not only from the c2 server, but from the `external_c2_framework` itself, which will be extremely useful in the future. The idea of using a header for the data is somewhat explored in the existing imgur transport, but is nowhere near as useful or feature-complete. It is likely that all transports will have to be modified to support this.
- `transport.fetchNewBeacons()` is an essential feature to add, but will likely need to be reimplemented from scratch to maintain design goals (see below).

Rather than having a `fetchNewBeacons()` transport function inside every transport, a new universal task key `init_beacon` could be added, and `fetchNewBeacons()` could instead be moved to the `server` in a more modular and universal manner. `fetchNewBeacons()` would essentially have to be a call to `transport.retrieveData()`, and if either a new `beaconId` or the task key `init_beacon` is seen, then a new thread would start: a `taskLooper` thread. There would likely be two types of `taskLooper` threads, one for a normal transport, and one maybe called something to the effect of `batchTaskLooper`, which would allow us to account for either desired scenario.

An `active_beacon` object will likely need to be created that will store the `beaconId`, associated `sock` object, and any other information such as the beacon's desired `block_time` and/or associated `metadata`. This will enable us to easily store this information in a database if/when we go that route. In addition, we would only have to pass the `active_beacon` object around internally.

Perhaps a `beacon_task_queue` and `beacon_resp_queue`? Each item in either queue would only need the `beaconId` and the associated `data`. When new item(s) are added, the destination to route them to could be cross-referenced with the associated `active_beacon` object, looked up by the `beaconId`.
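As a rough sketch of how those pieces could fit together (Python 3; the class name `ActiveBeacon` and the `route_task` helper are illustrative names of mine, only the fields and the two queue names come from the notes above):

```python
import queue

class ActiveBeacon(object):
    """Illustrative container for everything the server tracks per beacon."""
    def __init__(self, beacon_id, sock, block_time=60, metadata=None):
        self.beacon_id = beacon_id      # id assigned to / reported by the beacon
        self.sock = sock                # this beacon's socket to the c2 server
        self.block_time = block_time    # the beacon's desired polling interval
        self.metadata = metadata or {}  # arch, pipe name, anything else useful

# Keyed by beacon_id so queued items can be routed back to the right beacon.
active_beacons = {}

# Each queue item only needs (beacon_id, data); the consumer cross-references
# active_beacons[beacon_id] to find where to send it.
beacon_task_queue = queue.Queue()
beacon_resp_queue = queue.Queue()

def route_task(beacon_id, data):
    beacon = active_beacons[beacon_id]  # look up the destination
    beacon_task_queue.put((beacon.beacon_id, data))
```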
A number of tasks will have to be created to attack this, but now there at least seems to be a clear way to approach it:

- Create an `active_beacon` object that will contain the `beaconId`, `socket` object, and other necessary data.
- Create a `task_looper` function in `server`.
- Create a `batch_task_looper` function in `server`.
- Create the `beacons` dictionary in `server`.
- Settle on the common data model consisting of the `beaconId`, `task_key`, and data.
- Modify the `server.main()` logic to handle batching and non-batching scenarios, threading `task_looper` if needed.

I'm probably missing something, but this seems to be a doable plan of attack. I'll likely make use of a project board to track the progress of this. If this approach works, it should also take care of #13, and I'll be able to close out milestone 2.
Copying tasks from above comment to main comment in issue to facilitate git project tracking
Commit 3661883 adds a number of TODOs.
Realized it's beneficial for the server to keep track of two separate block timers: one per beacon, for the amount of time to wait before checking the c2 server for new tasks, and one for how often to check the transport to see if any new beacons have arrived. Since this is an architecture problem, I added a new task to track this.
Commit 37e2fde adds more TODOs.
I realized we'll probably have to implement a `queue` to process new beacons when they are received. The above commit tracks SOME of that logic but doesn't fully realize it. For the project's tracking sake, I'll again add this as a new task since it's an architecture thing, and fully realize it in the next commit.
Commit 3f14db8 implements the `new_beacon_queue` mentioned in the comment above.
Some thoughts on checking for new beacons:

- New beacons should be added to the `active_beacons` list, and the populated/configured `beacon.Beacon` object should be added to the `new_beacon_queue` (sketched below).
- `task_loop` currently handles c2 server interaction and task relaying fine, but needs to be able to decouple task retrieval in a way that would only ever grab responses for a specific beacon. This will likely best be done during the refactoring of the existing transport logic. Something to consider, but we'll burn that bridge when we come to it.
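A minimal sketch of how these pieces could interact (Python 3; `check_for_new_beacons()` is a hypothetical helper name and I'm assuming `block_time` lives on the `beacon.Beacon` object; it also shows the two separate block timers mentioned earlier):

```python
import queue
import threading
import time

new_beacon_queue = queue.Queue()   # new beacons waiting to be set up
active_beacons = []                # beacon.Beacon objects the server knows about

# Server-wide timer: how often to ask the transport for brand-new beacons.
# This is separate from each beacon's own block timer used in its task loop.
NEW_BEACON_CHECK_INTERVAL = 30

def watch_for_new_beacons(transport):
    while True:
        for beacon in transport.check_for_new_beacons():  # hypothetical helper
            new_beacon_queue.put(beacon)
        time.sleep(NEW_BEACON_CHECK_INTERVAL)

def process_new_beacons():
    while True:
        beacon = new_beacon_queue.get()   # a populated/configured beacon.Beacon
        active_beacons.append(beacon)
        worker = threading.Thread(target=task_loop, args=(beacon,))
        worker.daemon = True
        worker.start()

def task_loop(beacon):
    while True:
        # retrieve/relay tasks and responses scoped to beacon.beacon_id only...
        time.sleep(beacon.block_time)     # per-beacon block timer
```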
Commit 1e6a910 adds TODOs and does some minor refactoring. In addition, it also adds a default for `beacon.Beacon.beacon_id` to account for weird cases.
The major things left to do are:

- Settle on the common data model consisting of the `beacon_id`, `data_type`, and `data`.

Some minor changes are probably going to have to be made to the client logic to account for the new data model, so I'll add another task to track that. This won't solve #22, so I won't tackle client abstraction yet, but that'll be easy to tackle after this.
I'm envisioning the common data model to look something like the following: `{0001, NEW_TASK, AAAAAAA==}`, where `data` is the base64'd raw data of the task/response. This could still be passed through an `encoder` module to obfuscate it, but allows for simple handling. I'm opting to not assign each task/response its own ID, as there's no way to associate a particular task with a received response. If a database feature is ever implemented, then I'll add a timestamp to the entry for a task/response.
Commit 536d545 starts the common data model, mostly set up for `server`.

Commit 0cd6618 refactors the `client` to utilize the new data model.
Right now, the data model doesn't use `task_name`; this is subject to change, but I need to take a look at what I can do with a transport and try to understand whether that affects anything.

The current data model doesn't work, nor does it maintain order, but the basic idea is there:
```
>>> import base64
>>>
>>> client_id = 1
>>> task_data = "ayyyylmao"
>>> task_key = "NEWTASK"
>>> resp_key = "RESP4U"
>>> resp_data = "lolwtfbbq"
>>>
>>>
>>> def data_encode(data):
...     return base64.b64encode(data)
...
>>> def data_decode(data):
...     return base64.b64decode(data)
...
>>> task_frame = {client_id, task_key, data_encode(task_data)}
>>> print task_frame
set([1, 'YXl5eXlsbWFv', 'NEWTASK'])
>>> print task_frame[1]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'set' object does not support indexing
```
Researching alternatives.

Found a viable solution from this Stack Overflow answer. It will require us to use the `ast` module, but it seems to be distributed with Python, so it shouldn't be an issue.

Basically, the data model needs to be a list instead of the set I was accidentally building above. We will encode the `data`, build the list, turn the list into a string, then encode that string. To transform it back, we'll decode the encoded data, use `ast.literal_eval()` to convert it back into a list, extract the relevant keys, base64-decode the `data` value, then continue as normal.
Here's a small script to demonstrate:
```python
import ast
import base64

def encode_data(data):
    return base64.b64encode(data)

def decode_data(data):
    return base64.b64decode(data)

client_id = 15
key = "new_task"
data = "ayyylmao"

frame = [client_id, key, encode_data(data)]
print frame
print "is type: " + str(type(frame))

print "encoding frame"
encoded_frame = encode_data(str(frame))
print encoded_frame + " is type: " + str(type(encoded_frame))

print "decoding encoded frame"
decoded_frame = decode_data(encoded_frame)
print decoded_frame + " is type: " + str(type(decoded_frame))

print "converting to list"
list_decoded_frame = ast.literal_eval(decoded_frame)
print list_decoded_frame
print " is type: " + str(type(list_decoded_frame))

print "extracting encoded data"
decoded_data = decode_data(list_decoded_frame[2])
print "decoded data: " + decoded_data
```
And the output:

```
$ python test_encoding.py
[15, 'new_task', 'YXl5eWxtYW8=']
is type: <type 'list'>
encoding frame
WzE1LCAnbmV3X3Rhc2snLCAnWVhsNWVXeHRZVzg9J10= is type: <type 'str'>
decoding encoded frame
[15, 'new_task', 'YXl5eWxtYW8='] is type: <type 'str'>
converting to list
[15, 'new_task', 'YXl5eWxtYW8=']
 is type: <type 'list'>
extracting encoded data
decoded data: ayyylmao
```
Really, we probably SHOULD be using serialized JSON/dictionaries to do this, but let's keep it simple for right now.
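For comparison, the JSON version would only be a few lines more; a quick Python 3 sketch (field names here are illustrative, not the project's actual model):

```python
import base64
import json

def build_frame(client_id, task_key, data):
    # Same three fields as the list frame, but self-describing, so field
    # order no longer matters.
    return json.dumps({
        "client_id": client_id,
        "task_key": task_key,
        "data": base64.b64encode(data.encode()).decode(),
    })

def parse_frame(frame):
    parsed = json.loads(frame)
    parsed["data"] = base64.b64decode(parsed["data"]).decode()
    return parsed

print(parse_frame(build_frame(15, "new_task", "ayyylmao")))
# {'client_id': 15, 'task_key': 'new_task', 'data': 'ayyylmao'}
```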
Commit 490ee17 implements the changes mentioned above to the data model. At this point, still need to implement logic to check for new beacons, then refactor an existing transport to test this stuff out.
The transport is going to have to be passed the `beacon_id` so everything can key off of it in the third-party service. This is an annoying OPSEC nuisance, and will probably have to be revisited. Hopefully users will be cognizant enough to modify modules as needed.
Refer to the following sequence diagrams for an idea of what this will look like:
Commit 93b0c36 implements the logic for relaying a new task into `server`. Adding tasks to implement logic to associate a retrieved task's `beacon_id` with its corresponding `beacon.sock`.
DOCUMENTATION NOTE: Transports are now required to accept the `beacon_id` in `sendData` and `retrieveData` as an argument, regardless of whether it's used or not. Certain transports that rely on stream/near real-time processing, such as `transport_raw_tcp_socket`, will likely have their own environment instantiated for each `task_loop`, so they would never actually use the `beacon_id` argument, but should still accept it for the sake of uniformity.
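To make that requirement concrete, a conforming transport module would look roughly like this (a sketch only; the exact parameter names and order are my assumption, not a spec from the repo):

```python
# Illustrative transport skeleton following the documentation note above:
# both functions take beacon_id even if the underlying channel never uses it.

def sendData(data, beacon_id):
    # Publish `data` keyed off beacon_id so multiple clients can share the
    # same third-party service without stepping on each other.
    pass

def retrieveData(beacon_id):
    # Pull only the response associated with this beacon_id. Stream-style
    # transports (e.g. transport_raw_tcp_socket) get their own environment
    # per task_loop and may ignore the argument, but must still accept it.
    return ""
```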
Commit 3199b9c implements the task retrieval logic in `server`. Need to replicate this in the `client`.
Commit d7e65b7 implements the logic to pass the `beacon_id` around within both client implementations.
Now need to settle on which transport to do this with. As I rewrite the transport, that'll give me insight into how to implement the `new_beacon` logic.
Commit f308189 tweaks `transport_gmail` to support passing of `beacon_id` in a beacon-independent way, in addition to a minor bugfix in the clients.
However, this has brought to light a bug in the project that is going to block the progress of getting this implemented. I am now tracking this bug in #29, and will add a task above for this as a blocker.
Well... I guess the good news is that we're almost halfway done with getting this implemented...
Closed #29, and as a result added some new tasks here; thankfully that wasn't as difficult to rework as I was thinking.
Don't need to match clients to their associated sockets, as transport data retrieval will only ever be limited to the client's id, and thus associated with the proper `beacon.Beacon` object.
Commit fb67edc adds the requisite functions to `transport_gmail`, which is the transport I'm going to use to test this setup.
In theory, if I build an environment that uses `transport_gmail` and launch two clients with different IDs (and pipe names if on the same machine), everything should work, and then it would just be a matter of updating the rest of the transports to be compatible with the changes.
I do not have access to my testing environment at the moment, so this will have to wait until I am able to access it. But otherwise, yay! Great progress was made!
W00T W00T!
Need to push a lot of code from the test environment up to this branch, but it FINALLY works!
Also should probably push additional info for the beacon object on session init, such as the desired arch and pipe name. This is a low level of effort to resolve.
Tested and verified with the gmail transport. Will push necessary changes up.
Commit 6b9c6e5 adds the changes that make multi-client support work. Need to polish it up, push additional beacon options, then modify other transports.
Commit messed up some of the indentation, but I'll fix that
Did not mean to close issue
Commit 0c6b4f1 adds the client modifications that I forgot to push as well
Commit 9a77e1f adds the logic to pass and receive the desired pipe name and arch from the client to the server, which will then use those options to load the desired stager from the Cobalt Strike c2 server.
Commit 29d55f2 adds a bunch of new logic for the rest of the existing transports to use the new model. Honestly, I'm gonna put off testing until the beta release is ready.
I do REALLY need to test the imgur one at least, as that'll get me to write a `batch_task_loop`, but I may just make that a whole new issue altogether.
Need to add documentation to the `cobalt_strike` framework for the new builder values of `CLIENT_ID` and `PIPE_NAME`. Possibly consider supporting some form of randomization in the `pipe_name` and/or `client_id`? Both are probably necessary if you want to seamlessly run multiple client applications on the same machine, because otherwise one has to manually build each individual client distribution with manually defined files for each unique instance.
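One low-effort option (just a sketch of the idea, not something the builder does today) would be generating both values at build time:

```python
import random
import string

def random_client_id():
    # Numeric id, unique enough for a handful of clients per server.
    return random.randint(1000, 9999)

def random_pipe_name(length=8):
    # Random suffix avoids pipe-name collisions when several clients
    # end up on the same host.
    suffix = "".join(random.choice(string.ascii_lowercase + string.digits)
                     for _ in range(length))
    return "pipe_" + suffix

print(random_client_id())
print(random_pipe_name())
```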
Adding a new main task to track this, but basically need to determine the most seamless and user-friendly way to support this.
So the imgur transport is gonna require a bit of tweaking, regardless of the whole batch vs real-time processing argument. This will require me to complete the refactor of the client first, then refactor (read: rewrite and abstract) the way encoders and transports work.
In the meantime, I can probably merge this into dev and close it out. Getting closer...
Pull request created @ #30, so I guess I can close this out.
Apparently, Cobalt Strike actually has support for multiple simultaneous external c2 client connections. Need to add functionality that can facilitate this. Most of the changes are going to have to be to the server logic.
Multiple `go()` commands can be sent to the c2 server, which will respond to each one with a new stager. A separate TCP socket will have to be opened for each client between the c2 server and the controller. The server will have to be able to identify which traffic being sent corresponds to which client.

Primary Tasks
The following major tasks will have to be completed in order to facilitate this:
Detailed Tasks
See this comment for more info
- Create an `active_beacon` object that will contain the `beaconId`, `socket` object, and other necessary data. - d2a5a96
- Create a `task_looper` function in `server`. - 3661883
- Create the `active_beacons` dictionary in `server` - 0cf26ec
- `task_looper` - 3661883
- Implement `new_beacon_queue` in `server`, a queue to safely process new beacons. - 3f14db8
- `server` - d520faa
- Associate a retrieved task's `beacon_id` with its corresponding `beacon.sock` - See this comment
- `transport.send_server_notification()` function for `client` to use - fb67edc
- `transport.check_for_new_clients()` function for `server` to use. Return `None` if new clients, else return encoded data. - fb67edc
- Refactor `transport_gmail` and validate it all works, fix any bugs that arise from it - 6b9c6e5
- Create a `batch_task_looper` function in `server`.

Blockers
Optimal Processing
Off the top of my head, there are two approaches that can be taken to modifying the server logic:
Approach 1
A new thread is used for each client; this thread opens the socket to the c2 server, requests a stager, uploads the stager, and communicates with the client at a block interval specified for that client. In addition, each thread will have to maintain its state.
This will also increase the rate of utilization of the `transport`, which could be an issue for `transports` that utilize an API that relies on credits or rate limits them. That being said, each client would be able to have differently configured block times, which is certainly a desirable feature for some possible `transports`.
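Sketched out, Approach 1 looks something like this (Python 3, illustrative names only; the point is that each client thread carries its own block time, at the cost of more frequent transport hits):

```python
import threading
import time

def client_thread(client_id, block_time, transport):
    # Opens its own socket to the c2 server, stages the client, then polls
    # the transport on this client's block interval; state lives in the thread.
    while True:
        transport.retrieveData(client_id)  # one transport hit per client per interval
        # ...relay any task/response traffic for this client...
        time.sleep(block_time)

def start_clients(clients, transport):
    # clients: iterable of (client_id, block_time) pairs, each with its own timer.
    for client_id, block_time in clients:
        t = threading.Thread(target=client_thread,
                             args=(client_id, block_time, transport))
        t.daemon = True
        t.start()
```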
Approach 2
A new thread is used for each client; this thread opens the socket to the c2 server, requests a stager, uploads the stager, and communicates with the client every time a batch processing job happens. If this job does not correspond to the data for the thread, the data will be added to a queue that will ensure it is processed during the next hit of the corresponding job.
Each thread wouldn't have to maintain its state with the client, as the server would simply be switching between `send_data_to_client`, `recv_data_from_client`, `send_data_to_c2`, and `recv_data_from_c2` jobs in the same order, every time.
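And Approach 2 as a sketch (again illustrative; the four job functions are stubs named after the jobs listed above):

```python
import time

BATCH_INTERVAL = 60  # in this approach, every client shares one block timer

# Stub jobs - in the real server these would hit the transport or the
# Cobalt Strike external c2 socket for the given client.
def send_data_to_client(client): pass
def recv_data_from_client(client): pass
def send_data_to_c2(client): pass
def recv_data_from_c2(client): pass

def batch_cycle(clients):
    # Same four jobs, in the same order, every pass; no per-thread state is
    # needed. Data that misses its job this pass waits for the next one.
    for job in (send_data_to_client, recv_data_from_client,
                send_data_to_c2, recv_data_from_c2):
        for client in clients:
            job(client)

def run(clients):
    while True:
        batch_cycle(clients)
        time.sleep(BATCH_INTERVAL)
```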
There are some disadvantages to this approach: each client would have the same `block` timer, and it would be difficult to quickly identify if a client has died. There's also an argument that batch processing significantly increases the amount of generated artifacts, being particularly susceptible to time series pattern analysis.

Both approaches have merits and faults; ideally both could be added in, and the user given the ability to determine which approach they'd like to use when starting a server.
Ultimately, I'd like to see both approaches added in so that the project may reach its long-term goal of being able to support multiple clients that are all utilizing multiple, and possibly even different, `transports` and `encoders` for each client.