team performs a pull from the repository for the application, which also pulls down
any container images on which the application’s image depends, and deploys the
application to a container host. No complicated deployment instructions are required,
as the deployment is essentially self-contained. This is a completely immutable
operation that ensures that the containerized application will always run the same
way. The operations team then monitors the application, provides feedback to the
developers in the form of insights and metrics that may lead to updated versions of
the application, and the entire life cycle begins again. The containerization of the
application also helps if a rollback of an application update is required. If version 2.5
has a problem, for instance, version 2.4 of the container can be quickly pulled and
deployed.
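As a minimal sketch of this workflow using the Docker command line (the registry address, image name, and version tags here are hypothetical), the pull, deploy, and rollback steps map to a handful of commands:

# Pull the application image; any image layers it depends on are pulled automatically
docker pull registry.contoso.com/stockapp/web:2.5

# Deploy it to the container host as a background (detached) container
docker run -d --name stockweb registry.contoso.com/stockapp/web:2.5

# Roll back: if version 2.5 has a problem, remove it and redeploy version 2.4
docker stop stockweb
docker rm stockweb
docker run -d --name stockweb registry.contoso.com/stockapp/web:2.4

Because the image is immutable and self-contained, the same commands produce the same running application on any compatible container host.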
The deployment is also fast. There is no need to create a new OS instance or hunt for
resources. The container is deployed to an existing container host, which can be
physical or virtual, and the deployment happens quickly, potentially in under a
second, which opens up new ways to run applications.
In the new “cloud era,” we see more microservices; an application is broken into its
component processes, and each process runs as its own microservice. Containers
embrace the microservice philosophy and enable it, which is best understood with an
example. Consider a stock application. The stock application’s web frontend runs in a
container, and a request is made for information about five stocks. A container
instance is created for each stock in the request (five containers in this case); the
application in each container researches its assigned stock, returns the required
information, and the container is then deleted. This approach is not only efficient
with resources but also scales the application, with the only limit being the resources
available in the farm of container hosts.
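A rough sketch of this per-request pattern from a PowerShell prompt (the stockworker image name and the stock symbols are hypothetical) might look like the following; the --rm switch deletes each container as soon as its work completes:

# Start one disposable container per stock symbol; in practice these would be
# launched in parallel rather than one after another
foreach ($symbol in "MSFT","AAPL","AMZN","GOOG","IBM") {
    docker run --rm stockworker $symbol
}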
Because each container shares a common container host instance, a greater density of
workloads is realized when compared to traditional machine virtualization, in which
every virtual machine requires its own complete OS instance.
Using a separate virtual machine for each application provides another benefit beyond
isolation: control of resource consumption. Containers provide resource controls (for
example, quality of service, or QoS) to ensure that one container does not consume
more than its fair share of resources, which would negatively impact other containers
on the same container host (the “noisy neighbor” problem). Container QoS allows
specific amounts of resources, such as processor, memory, and network bandwidth, to
be assigned to each container. QoS is covered later in this chapter, but it ensures
that a container does not consume more than its allotted amount of resources.
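As an illustration of these kinds of limits with the Docker command line (the limit values and image name are arbitrary examples), a container can be started with caps on processor and memory so that it cannot starve its neighbors:

# Limit the container to two CPU cores and 1 GB of RAM
docker run -d --cpus 2 --memory 1g registry.contoso.com/stockapp/web:2.5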
Containers rely on container images, which are analogous to a VHD with Hyper-V. A
container image is read-only (although layers can be added on top of images to create
new images, as you will explore later in this chapter). A container image is utilized by
container instances, and it is the container image that depends on other container
images. A container image consists of its metadata, which includes items such as its
name, commands to execute when using it to start a container instance, its