Imagine this scenario. You are responsible for security and privacy at a banking startup. As you are building everything from scratch, you decide to build things with data security as one of the primary drivers of architectural decisions.
Your planned environment is a cloud microservice environment where all deployments and large-scale data access are performed by automation, not humans, and definitely never from client endpoints. Data is encrypted, tokenized, or hashed where appropriate, and each microservice has only the minimum privileges it needs to operate. Keys are stored in hardware security modules, and backups are stored in cloud environments with write-only credentials, ensuring that even a complete takeover of your environment won't let an attacker delete the backups. Support staff have access to customer data only for limited periods, and only when customers agree to it, through temporary access grants and step-up authentication. Your machine learning team develops models against anonymized data, and when they need to test them against production data, the test runs through a CI/CD pipeline, so they never access the data directly. Your entire infrastructure has egress filtering, with known-good destinations allow-listed, and every microservice authenticates to every other with mutual TLS. All data access logs are stored, retained, and available for investigation, and are also fed into anomaly detection. You don't even need to deploy DLP on laptops in "protect" mode, as data should never make it there anyway.
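To make the "write-only backup credentials" idea concrete, here is a minimal sketch assuming AWS S3 as the backup store (an assumption; the same pattern applies to other clouds). The policy lets backup automation upload objects but explicitly denies deletion and retention changes, so a full environment takeover cannot purge the backups. The bucket name is hypothetical.

```python
# Sketch of a "write-only" backup credential policy, assuming AWS S3.
# The bucket name is a hypothetical placeholder.
import json

WRITE_ONLY_BACKUP_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Allow uploads only.
            "Sid": "AllowBackupUploads",
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::example-backup-bucket/*",
        },
        {
            # Explicitly deny destructive actions, so this wins even if
            # another policy grants them elsewhere.
            "Sid": "DenyDestructiveActions",
            "Effect": "Deny",
            "Action": [
                "s3:DeleteObject",
                "s3:DeleteObjectVersion",
                "s3:PutBucketLifecycleConfiguration",
            ],
            "Resource": [
                "arn:aws:s3:::example-backup-bucket",
                "arn:aws:s3:::example-backup-bucket/*",
            ],
        },
    ],
}

print(json.dumps(WRITE_ONLY_BACKUP_POLICY, indent=2))
```

Pairing a policy like this with bucket versioning or object lock on the storage side gives the same guarantee from both directions.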
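The temporary, customer-approved support access could look like the sketch below. The names (`AccessGrant`, `is_access_allowed`) and the 15-minute step-up freshness window are illustrative assumptions, not a real library API: access is allowed only while the customer's consent window is open, only for the named agent, and only with recent step-up authentication.

```python
# Minimal sketch of temporary access grants with step-up authentication.
# All names and time windows are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AccessGrant:
    customer_id: str
    agent_id: str
    granted_at: datetime
    ttl: timedelta  # how long the customer's consent lasts

def is_access_allowed(grant, agent_id, last_step_up, now=None):
    """Allow access only within the consent window, for the named agent,
    and only if step-up authentication happened recently."""
    now = now or datetime.now(timezone.utc)
    within_consent = now < grant.granted_at + grant.ttl
    fresh_step_up = now - last_step_up < timedelta(minutes=15)
    return grant.agent_id == agent_id and within_consent and fresh_step_up

now = datetime.now(timezone.utc)
grant = AccessGrant("cust-1", "agent-7", granted_at=now, ttl=timedelta(hours=1))
print(is_access_allowed(grant, "agent-7", last_step_up=now))  # True: fresh grant and step-up
print(is_access_allowed(grant, "agent-7",
                        last_step_up=now - timedelta(hours=2)))  # False: stale step-up
```

In a real deployment the grant store, consent capture, and step-up check would each be separate audited services; the point is that every access decision combines all three signals.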
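Using the retained access logs for anomaly detection can be as simple as comparing each principal's activity against its historical baseline. The sketch below is a toy threshold detector under assumed names; a real pipeline would use proper statistical or ML-based scoring, but the shape of the check is the same.

```python
# Toy anomaly detector over data-access logs: flag any principal whose
# access count far exceeds its historical baseline. Names and the
# multiplier are illustrative assumptions.
from collections import Counter

def flag_anomalies(access_log, baseline, factor=10):
    """access_log: list of (principal, record_id) tuples.
    baseline: expected accesses per principal for the same window.
    Returns the set of principals exceeding factor x their baseline."""
    counts = Counter(principal for principal, _ in access_log)
    return {p for p, n in counts.items()
            if n > factor * baseline.get(p, 1)}

# A service suddenly reading 500 records against a baseline of 20 is flagged.
log = [("svc-ml", f"rec-{i}") for i in range(500)] + [("svc-billing", "rec-1")]
print(flag_anomalies(log, baseline={"svc-ml": 20, "svc-billing": 5}))  # {'svc-ml'}
```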