anitsh / til

Today I Learn (til) - Github `Issues` used as daily learning management system for taking notes and storing resource links.
https://anitshrestha.com.np
MIT License

Container, Containerization #17


anitsh commented 4 years ago

Objectives:

  • [ ] What is a container?
  • [ ] Understand how containers work

A container is a lightweight bundle of an application and the components it needs to run, such as dependencies, libraries, and configuration files. It runs in an isolated environment on top of a traditional operating system or inside a virtualized environment, which makes it easy to port and flexible to deploy.

Containers

Linux containers are technologies that allow us to package and isolate applications together with their entire runtime environment, that is, all of the files necessary to run them. This makes it easy to move the contained application between environments (dev, test, production, etc.) while retaining full functionality.

In other words, a container provides a logical packaging mechanism that abstracts an application from the environment in which it actually runs. This lets container-based applications be deployed easily and consistently, regardless of whether the target environment is a private data center, the public cloud, or a developer’s personal laptop.

Containers are not a sandbox. While containers have revolutionized how we develop, package, and deploy applications, running untrusted or potentially malicious code without additional isolation is not a good idea. The efficiency and performance gains from using a single, shared kernel also mean that container escape is possible with a single vulnerability.

Containers provide isolation between the application environment and the external host system, support a networked, service-oriented approach to inter-application communication, and typically take configuration through environment variables and expose logs written to standard output and standard error.
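As a concrete illustration of that convention, here is a minimal Go sketch: configuration comes in through environment variables and logs go to the standard streams. The PORT variable and its fallback value are hypothetical choices, not something any container standard prescribes.

```go
package main

import (
	"log"
	"os"
)

func main() {
	// Configuration arrives via environment variables, which the
	// container engine or orchestrator injects at start time.
	port := os.Getenv("PORT") // hypothetical setting
	if port == "" {
		port = "8080" // fallback when the variable is unset
	}

	// The standard log package writes to stderr by default; the
	// container engine collects that stream.
	log.Printf("starting service on port %s", port)

	// Regular output goes to stdout, the other stream the engine captures.
	os.Stdout.WriteString("service configured entirely from the environment\n")
}
```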

Containers themselves encourage process-based concurrency and help maintain dev/prod parity by being independently scalable and bundling the process’s runtime environment.

Containers create consistent environments to rapidly develop and deliver cloud-native applications that can run anywhere.

Containers are also an important part of IT security. By building security into the container pipeline and defending your infrastructure, you can make sure your containers are reliable, scalable, and trusted.

Containers silo applications from each other unless you explicitly connect them. That means you don't have to worry about conflicting dependencies or resource contention, because you set explicit resource limits for each service. Importantly, this isolation is an additional layer of security, since your applications are not running directly in the host operating system's user space.
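Those per-service resource limits ultimately map to kernel control groups. Below is a rough Go sketch of the mechanism, assuming a cgroup v2 hierarchy mounted at /sys/fs/cgroup and root privileges; the group name til-demo and the 256 MiB cap are arbitrary examples, and container engines do the equivalent for you when you give a container a memory limit.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Create a new cgroup in the v2 unified hierarchy; requires root,
	// and the memory controller must be enabled in cgroup.subtree_control.
	cg := "/sys/fs/cgroup/til-demo" // arbitrary example name
	if err := os.MkdirAll(cg, 0o755); err != nil {
		panic(err)
	}

	// Cap memory for everything placed in this group at 256 MiB.
	if err := os.WriteFile(filepath.Join(cg, "memory.max"), []byte("268435456"), 0o644); err != nil {
		panic(err)
	}

	// Move the current process into the group; its children inherit the limit.
	pid := []byte(fmt.Sprint(os.Getpid()))
	if err := os.WriteFile(filepath.Join(cg, "cgroup.procs"), pid, 0o644); err != nil {
		panic(err)
	}

	fmt.Println("this process and its children are now limited to 256 MiB")
}
```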

Benefits Of Using Containers:

• Portability: Apps developed in containers have everything they need to run and can be deployed in multiple environments, including private and public clouds. Portability means flexibility, because you can more easily move workloads between environments and providers.
• Scalability: Containers can scale horizontally, meaning a user can run more identical containers within the same cluster to expand capacity when needed. By running only what you need when you need it, you can reduce costs dramatically.
• Efficiency: Containers require fewer resources than virtual machines (VMs) since they don’t need a separate operating system. You can run several containers on a single server, and they require less bare-metal hardware, which means lower costs.
• Increased security: Containers are isolated from each other, which means that if one container is compromised, others won’t be affected.
• Speed: Because containers share the host kernel and don’t boot a full operating system, starting and stopping a container takes seconds. This also allows for faster development and operational speed, as well as a faster, smoother user experience.

Containerization provides a clean separation of concerns: developers focus on their application logic and dependencies, while IT operations teams focus on deployment and management without bothering with application details such as specific software versions and configurations. It reinforces many of the principles from #174 The Twelve-Factor App, allowing easy scaling and management.

Containers have garnered broad appeal through their ability to package an application and its dependencies into a single image that can be promoted from development, to test, and to production. Containers make it easy to ensure consistency across environments and across multiple deployment targets like physical servers, virtual machines (VMs), and private or public clouds. With containers, teams can more easily develop and manage the applications that deliver business agility.

Applications: Containers make it easier for developers to build and promote an application and its dependencies as a unit. Containers can be deployed in seconds. In a containerized environment, the software build process is the stage in the life cycle where application code is integrated with needed runtime libraries.

Infrastructure: Containers represent sandboxed application processes on a shared Linux® operating system (OS) kernel. They are more compact, lighter, and less complex than virtual machines and are portable across different environments—from on-premises to public cloud platforms.

Kubernetes is the container orchestration platform of choice for the enterprise. With many organizations now running essential services on containers, ensuring container security has never been more critical.

Containers make it easier for developers to build and promote an application and its dependencies as a unit. They also make it easy to get the most use out of your servers by allowing multitenant application deployments on a shared host. You can easily deploy multiple applications on a single host, spinning up and shutting down individual containers as needed. Unlike traditional virtualization, you do not need a hypervisor to manage a guest operating system on each VM. Containers virtualize your application processes, not your hardware.
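To make "virtualizing processes, not hardware" concrete, here is a minimal Go sketch of the kernel primitive involved: cloning a process into new UTS, PID, and mount namespaces. Linux only, needs root, and it is only a sketch; real runtimes additionally set up cgroups, a root filesystem, capabilities, seccomp profiles, and more.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

// Usage: sudo go run main.go run /bin/sh
func main() {
	if len(os.Args) < 3 {
		fmt.Println("usage: run <command> [args...]")
		os.Exit(1)
	}
	switch os.Args[1] {
	case "run":
		parent()
	case "child":
		child()
	}
}

func parent() {
	// Re-exec ourselves as "child", but inside new UTS, PID, and mount namespaces.
	cmd := exec.Command("/proc/self/exe", append([]string{"child"}, os.Args[2:]...)...)
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	must(cmd.Run())
}

func child() {
	// Inside the new namespaces: this process sees itself as PID 1, and the
	// hostname change below is invisible to the host.
	fmt.Printf("running %v as PID %d\n", os.Args[2:], os.Getpid())
	must(syscall.Sethostname([]byte("container")))
	cmd := exec.Command(os.Args[2], os.Args[3:]...)
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	must(cmd.Run())
}

func must(err error) {
	if err != nil {
		panic(err)
	}
}
```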

Of course, applications are rarely delivered in a single container. Even simple applications typically have a frontend, a backend, and a database. And deploying modern microservices-based applications in containers means deploying multiple containers, sometimes on the same host and sometimes distributed across multiple hosts or nodes.

When managing container deployment at scale, you need to consider:

• Which containers should be deployed to which hosts?
• Which host has more capacity?
• Which containers need access to each other, and how will they discover each other?
• How do you control access to and management of shared resources such as network and storage?
• How do you monitor container health? (A minimal health-endpoint sketch follows this list.)
• How do you automatically scale application capacity to meet demand?
• How do you enable developer self-service while also meeting security requirements?
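On the health-monitoring question above, a common pattern is for the container to expose a small HTTP health endpoint that the orchestrator probes (for example, a Kubernetes liveness or readiness probe). A minimal Go sketch follows; the /healthz path and port 8080 are conventional but arbitrary choices.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// The orchestrator polls this endpoint to decide whether the
	// container is healthy and ready to receive traffic.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		// A real service would check its own dependencies (database,
		// queues, downstream APIs) before reporting success.
		w.WriteHeader(http.StatusOK)
		fmt.Fprintln(w, "ok")
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```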

You can build your own container management environment, which requires spending time integrating and managing individual components. Or you can deploy a container platform with built-in management and security features. This approach lets your team focus their energies on building the applications that provide business value rather than reinventing infrastructure.


Resources

304

anitsh commented 3 years ago

Open Container Initiative (OCI)

As described on the OCI official site, the Open Container Initiative (OCI) was launched on June 22nd, 2015 by Docker, CoreOS, and other partners as a lightweight project with the express purpose of creating open industry standards around container formats and runtimes.

The OCI currently contains two specifications: the Runtime Specification (runtime-spec) and the Image Specification (image-spec). The Runtime Specification outlines how to run a “filesystem bundle” that is unpacked on disk. A “filesystem bundle” is a set of files organized in a certain way, containing all the necessary data and metadata for any compliant runtime (e.g., Docker and CRI-O) to perform all standard operations against it.

At a high level, an OCI implementation would download an OCI image and then unpack that image into an OCI runtime filesystem bundle. At that point, the OCI runtime bundle would be run by an OCI runtime (e.g., Docker and CRI-O).
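To make the “filesystem bundle” idea concrete: a bundle is essentially an unpacked rootfs/ directory plus a config.json describing how to run it. Below is a deliberately simplified Go sketch that emits such a config. The real schema lives in the runtime-spec (github.com/opencontainers/runtime-spec) and has many more fields, so treat these structs as an illustration, not the official types.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Simplified, illustrative subset of an OCI runtime config.json.
type Process struct {
	Args []string `json:"args"`
	Env  []string `json:"env"`
	Cwd  string   `json:"cwd"`
}

type Root struct {
	Path     string `json:"path"`
	Readonly bool   `json:"readonly"`
}

type Config struct {
	OCIVersion string  `json:"ociVersion"`
	Process    Process `json:"process"`
	Root       Root    `json:"root"`
	Hostname   string  `json:"hostname"`
}

func main() {
	cfg := Config{
		OCIVersion: "1.0.2",
		Process: Process{
			Args: []string{"sh"},
			Env:  []string{"PATH=/usr/sbin:/usr/bin:/sbin:/bin"},
			Cwd:  "/",
		},
		Root:     Root{Path: "rootfs", Readonly: true},
		Hostname: "oci-demo",
	}

	// A compliant runtime reads a file like this from the bundle directory,
	// next to the unpacked rootfs/, and runs the described process.
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}
```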

anitsh commented 3 years ago

Container Functionalities

• Manager/Engines
• Image Builder
• Runtime
• Image Inspection and Distribution
• Network
• Storage
• Security
• Monitoring

Resource