environments was incredibly challenging. A new
paradigm for deploying and managing infrastructure was
needed.
The IT infrastructure of the past had a number of
limitations, including high deployment and maintenance
costs, long waiting times for new infrastructure to be
ready for production (slowing down new application
deployments and hampering innovation and fast
prototyping), and difficulty in troubleshooting since each
environment was slightly different. Several technologies
have evolved to address these limitations. The one that
has had the biggest impact is public cloud computing.
Besides making infrastructure available almost
instantaneously and reducing the costs associated with
purchasing hardware, cloud computing has also changed
the way IT infrastructure is being provisioned,
configured, and consumed. With the advent of APIs,
public cloud infrastructure can more easily be
automated, and cookie-cutter templates can be created
for easy and fast deployment of new infrastructure.
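The cookie-cutter template idea can be sketched in a few lines of Python. This is a minimal illustration only: the JSON spec format and the provision function are hypothetical, standing in for the declarative files and provider API calls a real tool would use.

```python
import json

# Hypothetical machine-readable infrastructure definition. In practice,
# a file like this would be stored in a version control system such as Git.
SERVER_SPEC = """
{
  "servers": [
    {"name": "web-01", "cpu": 2, "memory_gb": 4},
    {"name": "web-02", "cpu": 2, "memory_gb": 4}
  ]
}
"""

def provision(spec_text):
    """Instantiate infrastructure from a spec: same input, same result."""
    spec = json.loads(spec_text)
    provisioned = []
    for server in spec["servers"]:
        # A real tool would call a cloud provider API here; this sketch
        # just records what would be created.
        provisioned.append(f"{server['name']}: {server['cpu']} vCPU, "
                           f"{server['memory_gb']} GB RAM")
    return provisioned

print(provision(SERVER_SPEC))
```

Because the spec, not a sequence of manual steps, is the source of truth, running the same template twice yields identical infrastructure, which is exactly the repeatability property discussed next.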
Infrastructure as code is all about representing
infrastructure through machine-readable files that can
be reproduced any number of times. It is a
common practice to store these files in a version control
system like Git and then use them to instantiate new
infrastructure—whether servers, virtual machines, or
network devices. In this way, infrastructure can be
treated as source code, with versioning, history, and easy
rollback to previous versions, if needed. Being able to
spin up new infrastructure in a repeatable, consistent
fashion becomes extremely useful when, for example,
traffic and application requests spike for a period of
time. In such cases, new servers, load balancers,
firewalls, switches,
and so on can be dynamically provisioned within seconds
to address the additional load. Once the traffic subsides
and the extra capacity is no longer needed, the