1. Containers provide standardized development
Everybody wins when solution providers focus on creating value and not on the intricacies of the target environment. This is where containers shine.
With the wide-scale adoption of container technology products (like Docker) and the continued spread of standard container runtime platforms (like Kubernetes), developers have fewer compatibility concerns to consider. While it’s still important to be familiar with the target environment, the specific operating system, installed utilities, and services matter less as long as we can work with the same platform during development.
For workloads targeting on-premises environments, the runtime platform can be selected based on the level of orchestration needed. Some teams decide to run their handful of services via Docker Compose. This is typical for development and testing environments, and not unheard of for production installations. For use cases that warrant a full-blown container orchestrator, Kubernetes (and derivatives like OpenShift) is still dominant.
Those developing for the cloud can choose from a plethora of options. Kubernetes is present on all major cloud platforms, but there are also options for monolithic workloads, from semi-managed to fully managed services that get simple web applications out the door.
For those venturing into serverless, the deployment unit is typically either a container image or source code that the platform then turns into a container.
While smaller firms seem to react faster to the container trend, larger companies are catching up. Enterprise clients recognize the benefits of building and shipping software using containers – and other cloud-native technologies. Shipping solutions as containers is becoming the norm.
2. Limited exposure means more security
Targeting container platforms (as opposed to traditional OS packages) comes with benefits for developers, but also with security consequences. For example, if we’re given a completely managed Kubernetes platform, the client’s IT team is responsible for configuring and operating the cluster in a secure fashion. Our developers can thus focus on the application we deliver and, thanks to container technology, we can further limit exposure to attacks and vulnerabilities.
This ties into the basic idea of containers: by packaging only what is strictly necessary for your application, you also reduce the potential attack surface. This can be achieved by building images from scratch or by choosing secure, minimal base images to enclose your deliverables.
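As an illustration, here is a minimal sketch of such a Dockerfile; it assumes your build produces a statically linked binary, and "myapp" is a placeholder:

    # Illustrative Dockerfile: package only the application binary, nothing else.
    # "myapp" is a placeholder for a statically linked executable from your build.
    FROM scratch
    COPY myapp /myapp
    # Use a numeric UID, since an empty base image has no /etc/passwd
    USER 10001
    ENTRYPOINT ["/myapp"]

With no shell, package manager, or system libraries in the image, there is simply less for an attacker to exploit.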
There are also cases where the complete packaging process is handled by your development tools. Spring Boot, for example, incorporates buildpacks, which can build OCI images from your web applications in an efficient and reliable way. This relieves developers from hunting for base images and reduces the need for various optimizations.
[Diagram: how buildpacks work. Source: https://buildpacks.io/docs/concepts/]
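With a recent Spring Boot project (2.3 or later), producing an OCI image is a single Maven goal; the image name below is illustrative:

    # Build an OCI image via Cloud Native Buildpacks, no handwritten Dockerfile needed
    ./mvnw spring-boot:build-image -Dspring-boot.build-image.imageName=acme/myapp:1.0.0

Gradle users can achieve the same with the bootBuildImage task.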
Developers using Docker Desktop can also try local security scanning to spot vulnerabilities before they enter code and artifact repositories: https://docs.docker.com/engine/scan/
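Usage is a single command; the image name is a placeholder for your own build output:

    # Scan a locally built image for known vulnerabilities
    docker scan acme/myapp:1.0.0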
3. Containers support diverse developer environments
While Adnovum specializes in web and mobile application development, within those boundaries we utilize a wide range of technologies. Supporting such heterogeneous environments can be tricky.
Imagine we have one Spring Boot developer who works on Linux, and another one who develops the Angular frontend on a Mac. They both rely on a set of tools and dependencies to develop the project on their machine:
- A local database instance
- Test doubles (mocks, etc.) for third-party services
- Browsers (sometimes multiple versions)
- Developer tooling, including runtimes and build tools
It can be difficult to support these tools across multiple operating systems if they’re installed natively. Instead, we try to push as many of these as possible into containers. This helps us align the developer experience and reduce maintenance costs across platforms.
Our developers working on Windows or Mac can use Docker Desktop, which not only allows them to run containers but also brings along additional functionality. For example, we can use Docker Compose out of the box, which means we don’t need to worry about whether people can install it on various operating systems. Multiplied across many such tools, this adds up to significant cognitive and cost relief for your support team.
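As a sketch of what this looks like in practice, the Compose file below starts a local database and a test double for a third-party service; the images, ports, and credentials are illustrative:

    # docker-compose.yml (illustrative; adjust images and settings to your stack)
    services:
      db:
        image: postgres:15
        environment:
          POSTGRES_PASSWORD: dev-only-secret
        ports:
          - "5432:5432"
      third-party-mock:
        image: wiremock/wiremock:3.3.1
        ports:
          - "8081:8080"

A single docker compose up -d then brings up the same stack on Linux, Windows, and Mac alike.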
Outsourcing your dependencies this way is also useful if your developers need to work across multiple projects at once.
4. Containers aid reproducibility
As professional software makers, we want to ensure that we not only provide excellent solutions for our clients, but that we can trace back any issue to the exact code change which produced the artifact – typically a container image for web applications. Eventually, we may also need to rebuild a fixed version of said artifact. This can prove to be challenging because build environments evolve over time.
Automation (specifically infrastructure as code, IaC) is key for providing developers with a reliable and scalable build infrastructure. We want to be able to re-create environments swiftly in case of software or hardware failure, or provision infrastructure components according to older configuration parameters for investigations. Our strategy is to manage all infrastructure via tools like Ansible or Terraform, including our data center and cloud environments. Whenever possible, we run services as containers instead of installing them as traditional packages.
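As a sketch of that last point, a single Ansible task (using the community.docker collection; the service name and image are illustrative) can run a service as a container rather than installing a package:

    # Illustrative Ansible task: run a service as a container, not a distro package
    - name: Run reverse proxy as a container
      community.docker.docker_container:
        name: reverse-proxy
        image: nginx:1.25
        ports:
          - "443:443"
        restart_policy: unless-stopped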
We push for hermetic builds because they bootstrap their own dependencies, which significantly reduces reliance on whatever is installed in the build context your specific CI/CD platform offers. Historically, we had challenges supporting automated UI tests that relied on browsers installed on the machine. As the number of our projects grew, their expectations for browser versions diverged. This became difficult to support even with our dedication to automation.
Eventually, we adopted bootstrapping and containers in our automated builds, allowing teams to define which version of Chrome or Java their project needs. During the CI/CD pipeline, the required version is downloaded before the build unless it’s already cached.
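For UI tests, this can be as simple as starting a pinned browser container per project; the image tag below is illustrative:

    # Run the exact Chrome version a project's UI tests expect
    docker run -d -p 4444:4444 --shm-size=2g selenium/standalone-chrome:4.11.0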
Immutability means our dependencies and thus our products never change after they’re built. However, Docker tags are mutable by design, which can be confusing.
For example, let’s say your Dockerfile starts like this:

    FROM acme:1.2.3
It would be logical to assume that whenever you (re-)build your own image, the same base image is used. In reality, the tag could point to a different image if somebody publishes a new image under the same tag. To make sure you use the exact same image as before, refer to images by their digest. This is a trade-off between usability and security: while using digests brings you closer to truly reproducible builds, it also means that if the authors of a base image publish a new version under the same tag, your builds won’t pick it up. Therefore, use base images from trusted sources and introduce vulnerability scanning into your pipelines.
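A minimal sketch of how to do this, continuing the acme:1.2.3 example from above:

    # Resolve the digest your current base image tag points to
    docker inspect --format '{{index .RepoDigests 0}}' acme:1.2.3
    # ...then pin it in your Dockerfile, e.g.:
    #   FROM acme:1.2.3@sha256:<digest from above>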
By combining automation, hermetic builds, and immutability, we can rebuild older versions of our code. This may be required to reproduce a bug, or to address vulnerabilities before shipping a fixed artifact.
While we still see room for improvement on our journey towards reproducibility, employing containers along the way was a decision we would make again.
Develop best practices to get the most out of Docker
Containers, especially Docker, can be a significant boost for all groups of developers. As with most topics, best practices come through experience and through the right sources for learning. To get the most out of Docker’s wide range of features, make sure to consult the documentation.
This blog post was originally published on Docker's website.