nongrata081 / devkit

development kit for dev workflow optimisation

study & choose security approach for containerized env (?) #114

Open nongrata081 opened 4 years ago

nongrata081 commented 4 years ago

Introduction: What you need to know about container security



Topics to be mentioned (remove from this list once added to the containerized architecture security visualization):

nongrata081 commented 4 years ago

Solutions:

nongrata081 commented 4 years ago

Resources:

nongrata081 commented 4 years ago

Another lesson learned is that software alone cannot guarantee security. Containerization also requires that organizations examine their processes and teams and potentially adjust to the new operational model. The ephemeral nature of containers may call for different procedures than those used with traditional servers. For example, incident response teams will need awareness of the roles, owners, and sensitivity levels of deployed containers before they can know the proper steps to take in the event of an ongoing attack.

Security threats and mitigations are ever-evolving, and no one resource can provide all the answers. Still, the NIST Application Container Security Guide offers a solid foundation and framework for security policy for containerized environments. It’s well worth a read for anyone involved in building, deploying, managing, and maintaining containers and containerized applications, and it's a must-read for security professionals as the industry transitions to this next phase of IT.

nongrata081 commented 4 years ago

https://searchitoperations.techtarget.com/tip/Container-OS-options-abound-Make-the-right-choice
http://techgenix.com/docker-centric-os/
https://coreos.com/

Study differences between:

nongrata081 commented 4 years ago

https://www.twistlock.com/
https://www.twistlock.com/2017/01/09/containerized-node-js-applications/

nongrata081 commented 4 years ago

https://www.mongodb.com/presentations/its-a-dangerous-world-from-os-through-application-securing-your-mongodb-infrastructure
https://stackoverflow.com/questions/33417852/should-i-secure-my-mongodb-database
http://knowledgebasement.com/how-to-secure-your-mongodb-database-server-on-ubuntu-14-04/
https://github.com/vitvegl/AppArmor-profiles/blob/master/ubuntu/i386/usr.bin.mongo-utils

nongrata081 commented 4 years ago

Clair is an open source project for the static analysis of vulnerabilities in application containers (currently including appc and docker).

nongrata081 commented 4 years ago

Container vulnerability scanning tools
https://www.google.nl/amp/s/techbeacon.com/security/10-top-open-source-tools-docker-security%3famp

nongrata081 commented 4 years ago

https://sysdig.com/blog/20-docker-security-tools/

nongrata081 commented 4 years ago

https://www.youtube.com/watch?v=wGz_cbtCiEA

nongrata081 commented 4 years ago

Containerized env security checklist

A container-specific host OS is a minimalist OS explicitly designed to only run containers, with all other services and functionality disabled, and with read-only file systems and other hardening practices employed. When using a container-specific host OS, attack surfaces are typically much smaller than they would be with a general-purpose host OS, so there are fewer opportunities to attack and compromise a container-specific host OS. Accordingly, whenever possible, organizations should use container-specific host OSs to reduce their risk. However, it is important to note that container-specific host OSs will still have vulnerabilities over time that require remediation.

Container-specific OSs:

nongrata081 commented 4 years ago

While most container platforms do an effective job of isolating containers from each other and from the host OS, it may be an unnecessary risk to run apps of different sensitivity levels together on the same host OS. Segmenting containers by purpose, sensitivity, and threat posture provides additional defense in depth. By grouping containers in this manner, organizations make it more difficult for an attacker who compromises one of the groups to expand that compromise to other groups. This increases the likelihood that compromises will be detected and contained and also ensures that any residual data, such as caches or local volumes mounted for temp files, stays within its security zone. In larger-scale environments with hundreds of hosts and thousands of containers, this grouping must be automated to be practical to operationalize. Fortunately, container technologies typically include some notion of being able to group apps together, and container security tools can use attributes like container names and labels to enforce security policies across them.
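The automated grouping described above can be sketched in a few lines. This is a toy placement check, not any real orchestrator's API: the `can_place` helper and the `sensitivity` label name are illustrative assumptions, standing in for how a security tool might use container labels to keep workloads of different sensitivity levels off the same host.

```python
# Toy sketch: enforce "one sensitivity level per host" when placing containers.
# The can_place() helper and the label names are illustrative, not a real API.

def can_place(container_labels, host_containers):
    """Allow placement only if every container already on the host
    shares the new container's 'sensitivity' label."""
    new_level = container_labels.get("sensitivity", "unclassified")
    return all(
        c.get("sensitivity", "unclassified") == new_level
        for c in host_containers
    )

host = [{"app": "billing", "sensitivity": "high"}]
print(can_place({"app": "invoices", "sensitivity": "high"}, host))  # True
print(can_place({"app": "blog", "sensitivity": "low"}, host))       # False
```

Real tools evaluate richer attributes (names, namespaces, network zones), but the principle is the same: the label carries the security intent, and the policy engine enforces it at scheduling time.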

nongrata081 commented 4 years ago

Traditional vulnerability management tools make many assumptions about host durability and app update mechanisms and frequencies that are fundamentally misaligned with a containerized model. For example, they often assume that a given server runs a consistent set of apps over time, but different application containers may actually be run on different servers at any given time based on resource availability. Further, traditional tools are often unable to detect vulnerabilities within containers, leading to a false sense of safety. Organizations should use tools that take the declarative, step-by-step build approach and immutable nature of containers and images into account in their design to provide more actionable and reliable results. These tools and processes should take both image software vulnerabilities and configuration settings into account. Organizations should adopt tools and processes to validate and enforce compliance with secure configuration best practices for images. This should include centralized reporting and monitoring of the compliance state of each image, and preventing non-compliant images from being run.
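At its core, image scanning works the way tools like Clair (mentioned above) do: enumerate the packages baked into the image and match them against a vulnerability feed. A minimal sketch, with a made-up one-entry feed (the CVE shown is the real Heartbleed advisory for that OpenSSL version, but real scanners consume full CVE databases):

```python
# Toy sketch of image scanning: match an image's package inventory
# against a vulnerability feed and fail non-compliant images.
# The one-entry feed is illustrative; real tools use full CVE databases.

VULN_FEED = {
    ("openssl", "1.0.1f"): ["CVE-2014-0160"],  # Heartbleed
}

def scan_image(packages):
    """Return the known CVEs for an image's (name, version) package list."""
    findings = []
    for pkg in packages:
        findings.extend(VULN_FEED.get(pkg, []))
    return findings

image = [("openssl", "1.0.1f"), ("curl", "7.68.0")]
findings = scan_image(image)
if findings:
    print("non-compliant image, block deployment:", findings)
```

Because images are immutable, a scan result stays valid for the image's whole lifetime, which is exactly why scanning can be made a hard gate in the build pipeline rather than a periodic host audit.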

nongrata081 commented 4 years ago

Security should extend across all tiers of the container technology. The current way of accomplishing this is to base security on a hardware root of trust, such as the industry standard Trusted Platform Module (TPM). Within the hardware root of trust are stored measurements of the host’s firmware, software, and configuration data. Validating the current measurements against the stored measurements before booting the host provides assurance that the host can be trusted. The chain of trust rooted in hardware can be extended to the OS kernel and the OS components to enable cryptographic verification of boot mechanisms, system images, container runtimes, and container images. Trusted computing provides a secure way to build, run, orchestrate, and manage containers.

nongrata081 commented 4 years ago

Deploy and use a dedicated container security solution capable of preventing, detecting, and responding to threats aimed at containers during runtime. Traditional security solutions, such as intrusion prevention systems (IPSs) and web application firewalls (WAFs), often do not provide suitable protection for containers. They may not be able to operate at the scale of containers, manage the rate of change in a container environment, or maintain visibility into container activity. Utilize a container-native security solution that can monitor the container environment and provide precise detection of anomalous and malicious activity within it.
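The core idea behind container-native runtime detection can be sketched simply: because an image declares exactly what it runs, anything else observed at runtime is suspect. The `EXPECTED` profile and `check_event` helper below are hypothetical; real tools build such per-image profiles automatically from observed or declared behavior.

```python
# Toy sketch of runtime anomaly detection: each image has a profile of
# expected processes; anything else observed at runtime is flagged.
# The profile and helper are illustrative, not a real product's API.

EXPECTED = {
    "web:1.0": {"httpd"},
}

def check_event(image, process):
    """Return an alert for a process not expected in this image, else None."""
    if process not in EXPECTED.get(image, set()):
        return f"ALERT: unexpected process '{process}' in image {image}"
    return None

print(check_event("web:1.0", "httpd"))    # normal, no alert
print(check_event("web:1.0", "/bin/sh"))  # a shell in a web container is anomalous
```

This whitelisting approach is far more precise in containers than on general-purpose servers, precisely because a well-built image contains only the app it was built for.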

nongrata081 commented 4 years ago

Container runtimes

Every host OS used for running containers has binaries that establish and maintain the environment for each container, also known as the container runtime. The container runtime coordinates multiple OS components that isolate resources and resource usage so that each container sees its own dedicated view of the OS and is isolated from other containers running concurrently. Effectively, the containers and the host OS interact through the container runtime. The container runtime also provides management tools and application programming interfaces (APIs) to allow DevOps personnel and others to specify how to run containers on a given host. The runtime eliminates the need to manually create all the necessary configurations and simplifies the process of starting, stopping, and operating containers. Examples of runtimes include Docker [2], rkt [3], and the Open Container Initiative Daemon [7].
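As a concrete illustration, low-level OCI runtimes such as runc are driven by a declarative `config.json` in the container bundle. A heavily trimmed sketch (not a complete, runnable bundle; a real one also defines namespaces, mounts, and capabilities):

```json
{
  "ociVersion": "1.0.0",
  "process": {
    "args": ["httpd", "-DFOREGROUND"],
    "cwd": "/"
  },
  "root": {
    "path": "rootfs",
    "readonly": true
  },
  "hostname": "web"
}
```

Higher-level runtimes like Docker generate this kind of specification from their own image metadata and CLI flags, which is what makes the "management tools and APIs" layer possible on top of the isolation primitives.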

nongrata081 commented 4 years ago

2.3.1 Image Creation, Testing, and Accreditation

In the first phase of the container lifecycle, an app’s components are built and placed into an image (or perhaps into multiple images). An image is a package that contains all the files required to run a container. For example, an image to run Apache would include the httpd binary, along with associated libraries and configuration files. An image should only include the executables and libraries required by the app itself; all other OS functionality is provided by the OS kernel within the underlying host OS.

The image creation process is managed by developers responsible for packaging an app for handoff to testing. Image creation typically uses build management and automation tools, such as Jenkins [8] and TeamCity [9], to assist with what is called the “continuous integration” process. These tools take the various libraries, binaries, and other components of an app, perform testing on them, and then assemble images out of them based on the developer-created manifest that describes how to build an image for the app.

Most container technologies have a declarative way of describing the components and requirements for the app. For example, an image for a web server would include not only the executables for the web server, but also some machine-parseable data to describe how the web server should run, such as the ports it listens on or the configuration parameters it uses.
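Using the Apache example above, a minimal Dockerfile captures both ideas: only the app's own files go in, and the run description is machine-parseable. This is a sketch built on the official `httpd:2.4` image from Docker Hub; the copied paths (`public-html`, `my-httpd.conf`) are hypothetical app files.

```dockerfile
# Build on the purpose-built httpd image, not a general-purpose OS image.
FROM httpd:2.4

# Add only what the app itself needs: site content and server config.
COPY ./public-html/ /usr/local/apache2/htdocs/
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf

# Machine-parseable description of how the server runs.
EXPOSE 80
```

The `FROM`/`COPY`/`EXPOSE` directives are exactly the developer-created manifest the build tools consume when assembling and testing the image.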

After image creation, organizations typically perform testing and accreditation. For example, test automation tools and personnel would use the images built to validate the functionality of the final form application, and security teams would perform accreditation on these same images. The consistency of building, testing, and accrediting exactly the same artifacts for an app is one of the key operational and security benefits of containers.

nongrata081 commented 4 years ago

2.3.2 Image Storage and Retrieval

Examples of registries include Amazon EC2 Container Registry [10], Docker Hub [11], Docker Trusted Registry [12], and Quay Container Registry [13].

Images are typically stored in central locations to make it easy to control, share, find, and reuse them across hosts. Registries are services that allow developers to easily store images as they are created, tag and catalog images for identification and version control to aid in discovery and reuse, and find and download images that others have created. Registries may be self-hosted or consumed as a service.

Registries provide APIs that enable automating common image-related tasks. For example, organizations may have triggers in the image creation phase that automatically push images to a registry once tests pass. The registry may have further triggers that automate the deployment of new images once they have been added. This automation enables faster iteration on projects with more consistent results.

Once stored in a registry, images can be easily pulled and then run by DevOps personas across any environment in which they run containers. This is another example of the portability benefits of containers; image creation may occur in a public cloud provider, which pushes an image to a registry hosted in a private cloud, which is then used to distribute images for running the app in a third location.

nongrata081 commented 4 years ago

2.3.3 Container Deployment and Management

Examples of orchestrators are Kubernetes [14], Mesos [15], and Docker Swarm [16].

Tools known as orchestrators enable DevOps personas or automation working on their behalf to pull images from registries, deploy those images into containers, and manage the running containers. This deployment process is what actually results in a usable version of the app, running and ready to respond to requests. When an image is deployed into a container, the image itself is not changed, but instead a copy of it is placed within the container and transitioned from being a dormant set of app code to a running instance of the app.

The abstraction provided by an orchestrator allows a DevOps persona to simply specify how many containers need to be running a given image and what resources, such as memory, processing, and disk need to be allocated to each. The orchestrator knows the state of each host within the cluster, including what resources are available for each host, and determines which containers will run on which hosts. The orchestrator then pulls the required images from the registry and runs them as containers with the designated resources.
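The placement decision itself can be sketched as a simple fit check: given each host's free resources and a container's request, pick a host that can accommodate it. The `pick_host` helper and the dictionaries are illustrative, not any real scheduler's API (real orchestrators also weigh spreading, affinity, and labels).

```python
# Toy sketch of an orchestrator's placement decision: return the first
# host with enough free memory and CPU for the requested container.
# Hosts and requests are illustrative dictionaries, not a real API.

def pick_host(hosts, request):
    """Return the name of a host that can fit the request, or None."""
    for name, free in hosts.items():
        if free["mem_mb"] >= request["mem_mb"] and free["cpus"] >= request["cpus"]:
            return name
    return None

hosts = {
    "host-a": {"mem_mb": 512, "cpus": 1},
    "host-b": {"mem_mb": 4096, "cpus": 4},
}
print(pick_host(hosts, {"mem_mb": 2048, "cpus": 2}))  # host-b
```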

Orchestration tools are also responsible for monitoring container resource consumption, job execution, and machine health across hosts. Depending on its configuration, an orchestrator may automatically restart containers on new hosts if the hosts they were initially running on failed. Many orchestrators enable cross-host container networking and service discovery. Most orchestrators also include a software-defined networking (SDN) component known as an overlay network that can be used to isolate communication between apps that share the same physical network.

When apps in containers need to be updated, the existing containers are not changed, but rather they are destroyed and new containers created from updated images. This is a key operational difference with containers: the baseline software from the initial deployment should not change over time, and updates are done by replacing the entire image at once. This approach has significant potential security benefits because it enables organizations to build, test, validate, and deploy exactly the same software in exactly the same configuration in each phase. As updates are made to apps, organizations can ensure that the most recent versions are used, typically by leveraging orchestrators. Orchestrators are usually configured to pull the most up-to-date version of an image from the registry so that the app is always up-to-date. This “continuous delivery” automation enables developers to simply build a new version of the image for their app, test the image, push it to the registry, and then rely on the automation tools to deploy it to the target environment.
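In Kubernetes terms, for example, this replace-not-patch model is what a Deployment's rolling update does: changing the image reference destroys the old containers and creates new ones. A trimmed sketch (name, replica count, and registry/image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.4.2  # updating this tag replaces containers
          imagePullPolicy: Always                # always pull the referenced image
```

Pushing a new image version and updating this tag is the entire "deploy" step; the orchestrator handles the replacement.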

This means that all vulnerability management, including patches and configuration settings, is typically taken care of by the developer when building a new image version. With containers, developers are largely responsible for the security of apps and images instead of the operations team. This change in responsibilities often requires much greater coordination and cooperation among personnel than was previously necessary. Organizations adopting containers should ensure that clear process flows and team responsibilities are established for each stakeholder group.

nongrata081 commented 4 years ago

https://habr.com/ru/company/acribia/blog/448704/

nongrata081 commented 4 years ago

how it all started (read an article and realized Docker is outdated, and that there is a need for a systematic approach to containerizing dev envs):

goodbye docker:
https://technodrone.blogspot.com/2019/02/goodbye-docker-and-thanks-for-all-fish.html
https://news.ycombinator.com/item?id=19351236
