threatpatrols / opnsense-plugin-configsync

Synchronize OPNsense system configuration .xml files to various cloud-storage providers
https://documentation.threatpatrols.com/opnsense/plugins/configsync
BSD 2-Clause "Simplified" License

Encryption / handling of sensitive config data? #3

Open ChrisHardie opened 1 year ago

ChrisHardie commented 1 year ago

I'm new to OPNsense so maybe this is mitigated in some other way, but is it possible to encrypt the xml files before they are synced to the remote destination, assuming there's sensitive information contained in that file? Thank you.

ndejong commented 1 year ago

Good question

I've considered adding client-side encryption of the config before it is sent (over HTTPS) to the S3 provider, however what threat model would this aim to solve?
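To make the discussion concrete, here is a stdlib-only toy sketch of what "client-side encryption before upload" could look like. This is hypothetical: nothing like it exists in the plugin, the function names are illustrative, and hand-rolled crypto like this must never be used in production - a real implementation would use a vetted AEAD cipher from a maintained crypto library.

```python
# Toy illustration of client-side encryption before upload -- NOT production
# crypto, and not part of ConfigSync. A real implementation would use a
# vetted AEAD cipher from a maintained crypto library.
import hashlib
import hmac
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream with HMAC-SHA256 in counter mode."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        block = hmac.new(key, nonce + counter.to_bytes(8, "big"),
                         hashlib.sha256).digest()
        out.extend(block)
        counter += 1
    return bytes(out[:length])

def xor_encrypt(data: bytes, key: bytes, nonce: bytes) -> bytes:
    """XOR the data with the keystream; calling it again decrypts."""
    stream = _keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

# Example: encrypt config.xml bytes before they leave the host.
key = secrets.token_bytes(32)    # key management is the actual hard part
nonce = secrets.token_bytes(16)  # must never repeat for the same key
ciphertext = xor_encrypt(b"<opnsense>...</opnsense>", key, nonce)
assert xor_encrypt(ciphertext, key, nonce) == b"<opnsense>...</opnsense>"
```

Note that the sketch deliberately sidesteps the real problem: where does `key` live, who can read it, and who audits access to it - which is exactly what the provider KMS features below address.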

The main S3 providers (i.e. AWS S3, GCP, Backblaze and DigitalOcean) all provide server-side encryption-at-rest options, and the transport to those providers is protected by HTTPS.

Some of these providers also offer customer key-management-system (KMS) features that come with a deep and robust set of logging, audit and abuse detection around key usage.

And that's just the KMS - those same S3 providers provide similar logging, audit and abuse detection on S3 object stores as well.

The blunt answer is that if you are concerned about the safety of objects in your S3 then you probably need to spend more time configuring and building up the protections available to you with the various S3 service providers. When all implemented, then S3 object storage from the main providers is extremely robust...

However it certainly feels shallow to put it that way when getting S3, KMS, logging, audit, alerting all setup "right" represents significant time, learning and effort - I get it.

Putting aside the effort required to make S3 object storage from a tier-1 provider awesome, the layers of protection are currently:

... and finally, ConfigSync is designed to perform a one-way outbound sync to the S3 provider, without any ability to retrieve old config.xml files from inside the OPNsense UI. This design means that a compromise of the OPNsense host (a bad thing) does not also lead to a compromise of S3 keys with read permission on previous config.xml files. You are responsible for applying an appropriately restricted (write-only) policy to those credentials, but ConfigSync by design does not require any object read-access.
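For AWS at least, the write-only credential design described above could be expressed as an IAM policy along these lines (a hypothetical sketch - the bucket name is a placeholder and the exact set of actions the plugin requires may differ; consult the ConfigSync documentation for the authoritative policy):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ConfigSyncWriteOnly",
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::example-configsync-bucket/*"
    }
  ]
}
```

Because the policy grants no `s3:GetObject`, credentials stolen from the firewall cannot be used to read back previously synced config.xml files.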

This means that users retrieving previous config.xml files need to use their own cloud-platform logins and interfaces to get to the S3 bucket, which in turn triggers all the S3 provider's logging and audit alarms when those files are accessed.

So, is there a threat-model that client-side encryption would protect against?

Yes - in the case where an S3 bucket is inappropriately set up, or the cloud-provider account is compromised, or the S3 provider is not a first-tier provider with at-rest encryption or the fancy logging, audit and alerting. I'd suggest a hapless engineer probably has bigger problems in those cases.

Open to further discussion

ChrisHardie commented 1 year ago

I appreciate your thoughtful response here. You're right that the threat models where this extra bit of security would be useful are limited, though I always try to operate under the assumption that any remote server will eventually be compromised in a way that bypasses any claims that its owners or attackers are unable to decrypt what's stored there.

I guess I saw the option to encrypt settings exports in the core OPNsense UI, and that prompted me to wonder what, if any, encryption could happen automatically as a part of backup syncs as well. When I didn't see it mentioned anywhere, I thought I'd ask. I think it could be sufficient to add a note to the docs making it clear that users of this plugin are responsible for trusting and securing the storage destinations they use, and that they should understand what sensitive information is being exported as a part of that process. Maybe that's in the category of "duh" but at least it would come up in future searches like the one I did. :)

Thanks again.

ndejong commented 1 year ago

All good.

Following on, because I'm sure what you describe is a common train of thought.

Yes, the OPNsense interface does provide an option to encrypt config.xml files when exporting directly from the UI - however that is a different scenario, most likely one where you are simply trying to back up the file locally.

In the scenario where you are exporting to a local filesystem you probably do not have available all the transport encryption, storage encryption, secret-key management, access logging, access audit and access control that modern S3 storage providers offer. Without those things you really do want to make sure the exported config.xml file is not floating around on your local systems in clear text.

I can still see how a time-strapped engineer may recoil at the notion of putting together a well functioning S3, because yes it takes time, effort, understanding and cloud-providers like to charge for all that logging, audit, key-management.

Leaving this as a thought bubble then -

nhairs commented 1 year ago

My 2c, as I was asked to take a look at this.

Thoughts

Scenarios

I can think of a few scenarios where you might want to ensure that you have client side encryption:

The storage environment is shared and outside the control of the person administering OPNsense, and that environment does not and will not be changed to provide appropriate controls.

In this case I would question why you are using this storage environment to back up your potentially sensitive config. Even having such an environment goes against most cloud providers' best practices, and it would probably be more bang for buck to improve the management and security of the whole cloud environment than to try to compensate inside it.

The transport to the storage provider is not secure (e.g. MinIO over HTTP)

At first this seems like a scenario we might want to cover; however, taking a "secure defaults" / "secure by design" approach, I think it would be better to reject HTTP endpoints in the first place (potentially allowing this check to be overridden with a big red warning). Again, probably better bang for buck to set up HTTPS rather than trying to compensate for the lack of it.
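The "reject HTTP in the first place" idea could be as small as a scheme check at configuration time. A hypothetical sketch (`validate_endpoint` and its override flag are illustrative, not part of the plugin):

```python
# Hypothetical "secure by design" endpoint check -- not plugin code.
from urllib.parse import urlparse

def validate_endpoint(url, allow_insecure=False):
    """Accept only HTTPS S3 endpoints unless explicitly overridden."""
    scheme = urlparse(url).scheme.lower()
    if scheme == "https":
        return url
    if scheme == "http" and allow_insecure:
        # "big red warning" territory: transport is unencrypted
        return url
    raise ValueError("refusing non-HTTPS endpoint: %s" % url)
```

The override flag keeps lab setups (e.g. MinIO over HTTP) possible while making the insecure choice a deliberate one.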

Server Side Encryption is not enforced

It is possible to set up a storage provider without encryption by default / not enforced (i.e. relying on each request to specify which encryption to use), in which case encrypting the file beforehand can prevent leaks due to misconfiguration. However, again, at this point you should be spending time fixing the configuration rather than taking a "belt and braces" approach.
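For reference, assuming a boto3-style client, per-request server-side encryption is requested via the upload `ExtraArgs`. A hypothetical helper (the function and its parameters are illustrative, not plugin API) might look like:

```python
# Hypothetical helper building boto3 ExtraArgs that request server-side
# encryption on each upload -- the plugin does not currently expose this.
def build_extra_args(sse_algorithm="AES256", kms_key_id=None):
    """Return ExtraArgs requesting SSE-S3 by default, or SSE-KMS if a
    KMS key id is supplied."""
    extra = {}
    if sse_algorithm:
        extra["ServerSideEncryption"] = sse_algorithm
    if kms_key_id:
        extra["ServerSideEncryption"] = "aws:kms"
        extra["SSEKMSKeyId"] = kms_key_id
    return extra

# e.g. s3.upload_file("config.xml", bucket, key, ExtraArgs=build_extra_args())
```

Even so, per-request SSE only papers over a bucket misconfiguration; enforcing default encryption on the bucket itself is the more robust fix.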

Secure at all costs

I'm sure there may be some instances where the OPNsense configuration needs extra protection to make absolutely sure it doesn't go walkabout. However at this point I'd have the following questions:

Setting up cloud environments is hard / expensive

I can still see how a time-strapped engineer may recoil at the notion of putting together a well functioning S3, because yes it takes time, effort, understanding and cloud-providers like to charge for all that logging, audit, key-management.

The blunt answer is that if you are concerned about the safety of objects in your S3 then you probably need to spend more time configuring and building up the protections available to you with the various S3 service providers. When all implemented, then S3 object storage from the main providers is extremely robust.

:point_up: +1

It could be suggested that the tool do some "preflight" checks to ensure that the back-end is configured appropriately - especially since the current implementation does not allow for injecting the ServerSideEncryption extra arguments. However I feel that at this point you are better off using dedicated tools to check that your cloud environment is appropriately configured.
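For a sense of scale, such a preflight check could be as simple as inspecting a `get_bucket_encryption`-style response for a default-encryption rule. A hypothetical sketch (the response shape follows the AWS S3 API; `bucket_enforces_sse` is an illustrative name, not plugin code):

```python
# Hypothetical preflight check: does the bucket have default server-side
# encryption configured? Operates on a get_bucket_encryption-style dict.
def bucket_enforces_sse(encryption_response):
    """Return True if any rule applies server-side encryption by default."""
    rules = (encryption_response
             .get("ServerSideEncryptionConfiguration", {})
             .get("Rules", []))
    return any("ApplyServerSideEncryptionByDefault" in rule for rule in rules)
```

A dedicated cloud-posture tool would of course check far more than this one setting, which is the point of the paragraph above.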

That said, over time this is potentially going to become less of an issue as cloud providers also move to secure defaults.

Amazon S3 now applies server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption for every bucket in Amazon S3. Starting January 5, 2023, all new object uploads to Amazon S3 are automatically encrypted at no additional cost and with no impact on performance. SSE-S3, which uses 256-bit Advanced Encryption Standard (AES-256), is automatically applied to all new buckets and to any existing S3 bucket that doesn't already have default encryption configured. The automatic encryption status for S3 bucket default encryption configuration and for new object uploads is available in AWS CloudTrail logs, S3 Inventory, S3 Storage Lens, the Amazon S3 console, and as an additional Amazon S3 API response header in the AWS Command Line Interface (AWS CLI) and the AWS SDKs. (link)

Leading a horse to water

This feels more like a documentation problem. Oftentimes a user will ask for ThingA to solve their problem, not realising that ThingB is a much better solution. I think it is better to lead users towards better practices than to support their (potentially) misconceived ideas. Documentation is key for this.

Suggestions

Whilst tools should be somewhat flexible in supporting a variety of use-cases, I also strongly believe that tools should encourage users towards better practices, even if that means purposely restricting functionality.

Without a compelling use-case where client side encryption is the best solution, I don't think this feature should be supported.

That said I think we can do more to lead users of this tool towards better practices.

If it were me I'd make the following changes:

nhairs commented 1 year ago

So I've been thinking about this more, and whilst I stand by basically everything I've said above I'd like to add the following.

Whilst I believe that "experts" definitely should be leading / coercing users towards the better answers, I'm probably not enough of an expert to confidently assert that "there is unlikely to be a scenario where application-level encryption is required above cloud-native best practices".

I don't think it really changes my suggested changes above, but perhaps the documentation of "why application-level encryption is not provided" could also include "if you believe it should be provided, please comment on this issue with your use-case". It's not an emphatic "no", but it at least reduces the maintenance burden until there is a proven need for it.