cudeso / misp2sentinel

MISP to Sentinel integration
MIT License

MISP to multiple Sentinel #2

Open Sp-TT opened 1 year ago

Sp-TT commented 1 year ago

Hi, this is actually a feature request rather than an issue.

Have you considered adding a function that pushes to multiple Sentinel workspaces with a single MISP pull, instead of running multiple copies of the script with different config files?

Cheers

cudeso commented 1 year ago

This is actually a good suggestion for an enhancement @Sp-TT . I'll look into it for a future release. Ideally it would also take the misp_event_filters into account, meaning each Sentinel workspace can have its own set of filters.

As a workaround for now, you can install the script in different directories / venvs and run it from each with a different config.

lnfernux commented 1 year ago

Just did a quick PoC for this;

I think you can redo the main function in script.py like this:

def push_to_sentinel(tenant, client_id, client_secret, workspace):
    logger.info("Using Microsoft Upload Indicator API")
    config.ms_auth[TENANT] = tenant
    config.ms_auth[CLIENT_ID] = client_id
    config.ms_auth[CLIENT_SECRET] = client_secret
    config.ms_auth[WORKSPACE_ID] = workspace
    parsed_indicators, total_indicators = _get_misp_events_stix()
    logger.info("Found {} indicators in MISP".format(total_indicators))

    with RequestManager(total_indicators, logger) as request_manager:
        logger.info("Start uploading indicators")
        request_manager.upload_indicators(parsed_indicators)
        logger.info("Finished uploading indicators")
        if config.write_parsed_indicators:
            json_formatted_str = json.dumps(parsed_indicators, indent=4)
            with open("parsed_indicators.txt", "w") as fp:
                fp.write(json_formatted_str)

def main():
    # ms_auth is already a dict in config.py, so no json.loads is needed
    tenants = config.ms_auth
    for tenant, value in tenants.items():
        push_to_sentinel(tenant, value['client_id'], value['client_secret'], value['workspace_id'])

and config.py:

ms_auth = {
    '<tenant_1_Id>': {
        'client_id': '<client_id>',
        'client_secret': '<client_secret>',
        'graph_api': False,                                 # Set to False to use Upload Indicators API   
        'scope': 'https://management.azure.com/.default',   # Scope for Upload Indicators API
        'workspace_id': '<workspace_id>'
    },
    '<tenant_2_Id>': {
        'client_id': '<client_id>',
        'client_secret': '<client_secret>',
        'graph_api': False,                                 # Set to False to use Upload Indicators API   
        'scope': 'https://management.azure.com/.default',   # Scope for Upload Indicators API
        'workspace_id': '<workspace_id>'
    },
    '<tenant_n_Id>': {
        'client_id': '<client_id>',
        'client_secret': '<client_secret>',
        'graph_api': False,                                 # Set to False to use Upload Indicators API   
        'scope': 'https://management.azure.com/.default',   # Scope for Upload Indicators API
        'workspace_id': '<workspace_id>'
    }
}

This would allow looping through the different workspaces. You would also need additional SPNs for each workspace, or a single enterprise app that is added to all workspaces with the Microsoft Sentinel Contributor role.

This was just a basic test, so a proper PoC with an actual push is still needed. But the idea is there 👯
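To make the multi-tenant loop above a bit more robust, a failure against one tenant shouldn't abort uploads to the remaining workspaces. A minimal sketch of that idea — push_to_sentinel is stubbed out here, and the ms_auth layout mirrors the proposed config.py above, so both are assumptions rather than current repo behaviour:

```python
# Hedged sketch: loop over tenants, isolating failures per tenant.
# push_to_sentinel is a stand-in for the real upload logic in script.py.
import logging

logger = logging.getLogger(__name__)

ms_auth = {
    "tenant-1": {"client_id": "a", "client_secret": "b", "workspace_id": "w1"},
    "tenant-2": {"client_id": "c", "client_secret": "d", "workspace_id": "w2"},
}


def push_to_sentinel(tenant, client_id, client_secret, workspace):
    # Stand-in for the real upload logic
    logger.info("Uploading to workspace %s in tenant %s", workspace, tenant)


def main():
    failed = []
    for tenant, creds in ms_auth.items():
        try:
            push_to_sentinel(tenant, creds["client_id"],
                             creds["client_secret"], creds["workspace_id"])
        except Exception:
            # One misconfigured tenant should not stop the others
            logger.exception("Upload failed for tenant %s", tenant)
            failed.append(tenant)
    return failed
```

main() returns the list of tenants that failed, which could feed into exit codes or alerting.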

cudeso commented 1 year ago

Good approach! I've put it on the list to include once I merge the upload_indicators branch into main.

Kaloszer commented 11 months ago

Does this mean that, when there is more than one entry in tenants, the script currently pulls from MISP once per workspace (MISP API request > generate REST request > send to workspace 1, then MISP API request > generate REST request > send to workspace 2), rather than pulling once and fanning out (MISP API request > generate REST request > send to workspace 1 > send to workspace 2)?

If that's the case, this would be low-hanging fruit to fix and would improve performance.

lnfernux commented 9 months ago

The Azure Function currently supports the multiple-Sentinel mode, but as you said @Kaloszer, we can improve performance by fetching the indicators only once and then sending them to each workspace, instead of downloading them on every loop iteration.

Kaloszer commented 9 months ago

So something really 'dumb' like this should work, I guess:

https://github.com/cudeso/misp2sentinel/pull/69 (nice)

Just add a global variable: check if it is already populated and use it if so, otherwise parse the indicators. I don't have a way to test this yet, but I don't see why it wouldn't do the trick.

cudeso commented 9 months ago

Simple, but I like it ;-) I don't see a reason why this wouldn't work. I will check memory consumption on a large instance (with the locally run script, not the Azure Function).

Kaloszer commented 8 months ago

@cudeso

Looking into fixing the submitted code, I was wondering: why is there such a big drift between init.py and script.py? Shouldn't they be pretty much the same in the grand scheme of things?

Two files that execute pretty much the same logic both need to be maintained.

lnfernux commented 8 months ago

@Kaloszer

This is my fault, I stripped out all the unnecessary functions related to Graph API support.

lnfernux commented 8 months ago

Also, logging in Azure Functions works differently from local execution if you want it to print, so that accounts for part of the diff.

jusso-dev commented 7 months ago

I think this can be closed @cudeso ?

cudeso commented 7 months ago

If I'm not mistaken, this is only in the Azure Function, not in the locally hosted Python version. (Unfortunately I haven't gotten around to bringing both versions more in sync.)

Parasdeepkohli commented 6 days ago

Hi,

Is there any update on this? The workaround of installing the repo in multiple directories is quite wasteful of resources.

cudeso commented 4 days ago

Hi @Parasdeepkohli , I have been completely overwhelmed with $dayjob tasks and have not been able to work on it.

Parasdeepkohli commented 4 days ago

Hey @cudeso

Ahhh, that's understandable! Thanks for all the work you've been putting in despite your day job workload :-)