Fresh. Rapidly evolving. Disruptive. These are just some of the terms used to describe the containerization wave sweeping the tech industry. In addition to making it easier to scale resources vertically or horizontally, the containerization craze has, in a short five years, managed to tap into an element at the very heart of every modern business – applications. Now, IT operations professionals across the industry have caught on to the benefits of containerized environments. In this three-part series, we will discuss what containers are and how you can use them to improve the way your organization navigates the ever-evolving world of technology.

The History

Containers may seem like a technology that magically appeared out of the ether; however, like most technologies, they have evolved over time. The origin of the modern container dates back to the late 1970s, when the term chroot referred to the chroot(2) system call and the chroot(8) wrapper program. These were used to manually change a process's apparent root directory, creating what became known as a chroot jail. The chroot jail provided a basic form of process isolation, a concept that also traces back to early IBM systems such as the AS/400.
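
To make the jail concrete, here is a minimal sketch in Go of what a chroot jail does. It assumes a prepared directory at /tmp/jail containing a static /bin/sh binary (the path and layout are hypothetical), and it must be run as root:

    // chroot_jail.go – a minimal chroot jail sketch (Linux, run as root).
    // Assumes /tmp/jail exists and contains a static /bin/sh binary.
    package main

    import (
        "log"
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        // Confine this process's view of the filesystem to /tmp/jail.
        if err := syscall.Chroot("/tmp/jail"); err != nil {
            log.Fatal(err)
        }
        // Step into the new root before executing anything.
        if err := os.Chdir("/"); err != nil {
            log.Fatal(err)
        }
        // The shell below resolves every path inside the jail only.
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }

From inside that shell, /etc, /home, and the rest of the host filesystem are simply invisible: the same basic trick modern containers still rely on.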

These process isolation mechanisms were adopted into BSD in 1982 and later into the Linux operating system in the 1990s. Even so, containerization would lie largely dormant for nearly two decades after its inclusion in BSD, used only in niche cases that called for process isolation.

In the early 2000s, the concept became more relevant as development of, and interest in, virtualization picked up. FreeBSD expanded the chroot jail into FreeBSD Jails in 2000, which became the foundation for Linux-VServer in late 2001 and allowed a system's resources to be partitioned. Similar capabilities reached the Linux kernel with OpenVZ in 2005, although some of its core developers date the project's origins back to 1999. Regardless of the specific dates, these advancements, plus Sun Microsystems coining the term “Container” in its 2004 release of Solaris Containers, laid the foundation for the modern container.

These evolutions led to the introduction of control groups (cgroups), implemented around 2006 to enhance process isolation by governing CPU and memory usage. In 2008, LXC (Linux Containers) was introduced, combining cgroups, which allow limitation and prioritization of resources (CPU, memory, block I/O, network, etc.), with namespaces, which provide isolated environments.
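
To see what namespace isolation looks like in practice, here is a minimal sketch in Go (Linux only, run as root) that launches a shell inside fresh UTS, PID, and mount namespaces, similar in spirit to what LXC automates. Cgroup resource limits are configured separately through the cgroup filesystem and are omitted here:

    // namespaces.go – a minimal namespace isolation sketch (Linux, run as root).
    package main

    import (
        "log"
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        // Ask the kernel for fresh namespaces: UTS (hostname),
        // PID (process tree), and mount (filesystem mounts).
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
        }
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }

Inside that shell, changing the hostname affects only the isolated environment; the host's hostname stays untouched.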

The reliability and stability these features reached within the Linux ecosystem laid the foundation for the launch of Docker in 2013. Many understand Docker as the starting point of containers, but we must never forget the hard work that came before it: the disruptive change now available to us rests on 40+ years of continuous improvement.

What are containers?

Simply put, containers are a form of operating system virtualization that allows one or more self-contained, pre-packaged applications to run on a single host in isolated environments. Each container is pre-built to include all (and only) the executables, binaries, code, and configuration files required to run its application. Unlike a traditional server or virtual machine, a container leverages the host's existing kernel alongside its own packaged binaries, removing the need for a fully baked guest OS; this makes containers more lightweight and gives them significantly less overhead. It also makes them portable: a containerized application runs reliably every time, regardless of differences between host environments, so long as the host is suitable.
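
A quick way to see this kernel sharing for yourself: the small Go program below (a sketch, assuming a Linux host and Go 1.16+) prints the kernel version string from /proc/version, and it reports the same value whether you run it on the host or inside any container on that host.

    // kernel_version.go – demonstrates that containers share the host kernel.
    package main

    import (
        "fmt"
        "log"
        "os"
    )

    func main() {
        // /proc/version comes from the kernel itself; host and container
        // read the same value because only one kernel is running.
        v, err := os.ReadFile("/proc/version")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(string(v))
    }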

Containers are also versatile. A single container might run anything from a command-line tool, a small microservice, or a single software process up to a fully functioning, large-scale application. And like traditional servers, containers can be combined, or clustered, to run highly scalable, distributed applications composed of many microservices.

Container clusters commonly leverage orchestration engines, such as Docker Swarm and Kubernetes, which simplify and automate common management tasks like deployment, scheduling, scaling, storage, load balancing, and health monitoring. One major benefit of these engines, when combined with the portability of your containers, is the ease of scaling your application across the cluster, something that would take far more time and effort with traditional server computing.

The added ease of deployment and scaling makes containers a major asset for organizations that depend on the right combination of applications and digital infrastructure to craft their growth and security strategies. Perhaps most effectively, containers enable development teams to embrace agile workflows and keep pace with the ever-changing needs of the organization by making it easier to adopt continuous integration (CI) and continuous delivery/deployment (CD) practices, which translate directly into quicker product delivery for customers.

The Case for Containerized Environments

The most effective argument in the case for containers is one that has propelled technology innovations for years: efficiency. By their very nature, containers require fewer system resources than traditional or hardware virtual machine environments. With less overhead, increased portability, and a pathway to better application development, proponents of containers will tell you that the benefits outweigh the perceived challenges. Beyond creating an easier environment for developers, containerized environments benefit the end-user as well: CI and CD practices allow application code to reach production-ready environments much more quickly, which improves customer satisfaction by expediting bug fixes and new feature delivery.

The Challenges

While containers are growing in popularity, industry-wide adoption has yet to happen. Developers, CIOs, and other IT professionals are justifiably concerned about the challenges surrounding the use and deployment of containers across their networks. Recent industry analysis of container adoption showed that one in five developers sees security as the biggest challenge, while less technical issues, such as a lack of experience and expertise, also stand in the way. In part two of our series, we tackle how organizations can mitigate these challenges, starting with choosing the right container technology solutions to meet their ever-evolving needs.