Anyone who tries to keep up with the state of the container universe had better set aside a good chunk of time to do so. There are several players involved, feature sets are updated regularly, business alliances come fast and furious, and the hype is turned up to 11. If your goal is to match the container landscape, its applications, and open source software to your business needs and technical stack, it's easy to get confused.

Here at {code} we’re product- and company-agnostic and friends with everyone.

From our point of view, everyone involved in open source is motivated to do the right thing. We start from a premise of shared purpose, with an expectation of good faith based on every player's desire to be involved. We share several values. Among them: by working together, we can make progress faster for the community, for vendors' customers, and ultimately for end users, far more than any one party could achieve working alone. Together we can architect the best solutions and build the pieces that enable progress and innovation.

I don't mean to say that everything is perfect. But we are all working together to build an ecosystem and computing environment that make it easier to create and operate smarter cloud native applications that are truly portable and resilient. A critical piece of fulfilling this vision in the container space, if a late-comer, is storage, and it is progressing rather well.

Another important element in the progression of these environments is the container platforms that support cloud native applications. Containers introduced into our computing models a new consumption layer focused on applications. This is heading in the right direction. Instead of purposefully building infrastructure from the bottom up to support specific software and its needs, the existing consumption model is being flipped, redefined, and made portable. Compute-oriented consumption layers such as software-defined infrastructure (virtualization) and IaaS (cloud) may still be used, but they appeal less to consumers than layers focused on applications.

When consuming cloud services, we start with the assumption that a myriad of services and decisions have been made for us. We don't consume and carve up infrastructure for operating systems; in cloud native, we deploy applications that natively consume the infrastructure services they need. For this, a new application layer is introduced through container platforms. Supporting it means building and/or managing infrastructure that is flexible, elastic, and interoperable on behalf of applications, on top of any cloud. These are the components in cloud native environments that DevOps teams – the people with their hands on the keyboard – need to evaluate to determine the best solution for their applications.
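To make this inversion concrete, here is a minimal sketch (not from the original article) of an application declaring the infrastructure it needs and letting the platform consume it on the application's behalf. It assumes a Kubernetes cluster reachable through a local kubeconfig and the official `kubernetes` Python client; the `demo-data` claim and `demo-app` Deployment names are hypothetical.

```python
# A minimal sketch: the application declares what it needs (storage, compute)
# and the platform provisions and schedules infrastructure to satisfy it.
# Assumes the official `kubernetes` Python client; names are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig for cluster access
core = client.CoreV1Api()
apps = client.AppsV1Api()

# The application asks for 1Gi of storage; the platform decides how to satisfy it.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        resources=client.V1ResourceRequirements(requests={"storage": "1Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)

# The application itself: a Deployment that mounts the claimed storage.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-app"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "demo"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(
                    name="demo",
                    image="nginx:alpine",
                    volume_mounts=[client.V1VolumeMount(
                        name="data", mount_path="/data")],
                )],
                volumes=[client.V1Volume(
                    name="data",
                    persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                        claim_name="demo-data"),
                )],
            ),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
```

The point is not this particular API, but the direction of the relationship: the application states what it needs, and the platform schedules infrastructure on its behalf, wherever the cluster happens to run.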

But as with all technology trends, maturity brings more advanced implementation questions. Up to this point, early pioneers have had to blaze the path, which is time-consuming (since mistakes are inevitable) and expensive (because qualified scouts demand high salaries). As these capabilities become more mainstream, newcomers and existing IT teams instead face the age-old buy-versus-build discussion. Should we invest in creating a bespoke internal environment that ticks every checkbox on the wish list? Or is it faster and cheaper to invest in an externally provided solution? And up to what point should it be a black box?

An ideal platform enabling this is not prescriptive about the applications it supports. The environment provides flexible, application-oriented consumption capabilities. It supports patterns like microservices without being so specialized that it discounts others. It consumes infrastructure on behalf of smarter applications. It enables numerous infrastructure services to be scheduled as dependencies of the workloads they satisfy. It reinforces cloud native patterns of portability, elasticity, and scalability. It solidifies the innovative spirit of containers and provides key semantics that can be reused by more specialized services, which in turn can focus on particular use cases.

Choices about solutions require thinking about future strategy. Am I choosing a platform that I can leverage to optimize different types of applications? For example, Cloud Foundry (PaaS) includes a broad set of features supporting applications. It is prescriptive and focused on a specific operational model built around non-persistent applications. It brokers dependent services where necessary to support applications. As a black box it works. But since the solution is built with a prescriptive purpose, it appeals only to certain applications.

The impact of a focused platform also trickles down through dependent projects. A narrower perspective influences decisions about the lower layers. This includes schedulers such as Diego, which get built quickly and with low friction, but in a silo, to support something very specific. That becomes a significant disadvantage in open source: a component used by only one project lacks the contributors and momentum of independent, reusable components such as Kubernetes.

How far up the stack does this new application-oriented consumption layer go? The ideal solution provides a level of reusable functionality. In this case, container schedulers provide the key requirements for portability across on-prem and off-prem environments. For example, we are likely to see organizations run database services (DBaaS) and platform-supporting developer services (PaaS) on top of this new application-oriented consumption layer. PaaS would then specialize in a simple, pure application-developer focus. Different PaaS offerings may even be used for specific application needs or developer teams. DBaaS would specialize in making the consumption and management of relational data services simple. Choosing a solution that enables this extends both the reach and relevance of containers and of operational knowledge to best support these new emerging environments.
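As an illustrative sketch only, here is how a simple data service could ride on that same layer, reusing the scheduler's storage semantics rather than reinventing them. It again assumes Kubernetes as the scheduling layer and the official `kubernetes` Python client; the `demo-db` StatefulSet and its settings are hypothetical.

```python
# Illustrative only: a basic database service built on the same
# application-oriented layer, reusing the scheduler's storage semantics.
# Assumes the official `kubernetes` Python client; names are hypothetical.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

statefulset = client.V1StatefulSet(
    metadata=client.V1ObjectMeta(name="demo-db"),
    spec=client.V1StatefulSetSpec(
        service_name="demo-db",  # a matching headless Service is assumed to exist
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "demo-db"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-db"}),
            spec=client.V1PodSpec(containers=[client.V1Container(
                name="postgres",
                image="postgres:9.6",
                env=[client.V1EnvVar(name="POSTGRES_PASSWORD", value="example")],
                volume_mounts=[client.V1VolumeMount(
                    name="pgdata", mount_path="/var/lib/postgresql/data")],
            )]),
        ),
        # The scheduler provisions a dedicated volume for each replica,
        # the same storage semantics the application example relied on.
        volume_claim_templates=[client.V1PersistentVolumeClaim(
            metadata=client.V1ObjectMeta(name="pgdata"),
            spec=client.V1PersistentVolumeClaimSpec(
                access_modes=["ReadWriteOnce"],
                resources=client.V1ResourceRequirements(
                    requests={"storage": "5Gi"}),
            ),
        )],
    ),
)
apps.create_namespaced_stateful_set(namespace="default", body=statefulset)
```

A PaaS layered on top would consume the same primitives for developer-facing workloads, which is exactly what makes the layer reusable.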

| Platform | Application patterns | Scheduler component | Scheduler is reusable | Scheduling is extensible | Components act as independent projects | Lowest scheduling layer | Storage scheduling |
|---|---|---|---|---|---|---|---|
| Cloud Foundry | 12-factor | Diego | No | No | Less | Application | No |
| Docker UCP | Container | Swarm | Yes | No | Some (see Moby) | Containers | No |
| Mesosphere DC/OS | Any | Mesos | Yes | Frameworks | Less | Tasks, Resources | Planned |
| Red Hat OpenShift | Container | Kubernetes | Yes | Controllers | More | Containers | Yes |

We are now in a phase where pioneering and expensive efforts are being replaced by useful environments and solutions. Organizations are making choices that solve multiple problems, not just one. Portability and application-focused operations are becoming a reality as the result of highly interoperable components. These are very exciting times. I am looking forward to all of the innovation in the cloud native space that is about to take place on top of extensible orchestrators and schedulers.