“There is no such thing as a stateless architecture” – Jonas Bonér

Applications need data; it is the foundation of why our businesses exist. When containers first emerged, their primary purpose was to run stateless services. As the technology matured over a short period of time, the need for containerized applications to have direct access to data grew. Both modern and traditional applications require different types of storage, whether file, block, or object, to back documents, relational databases, streaming media, and more.

Containers promise application portability far beyond what virtualization can achieve, since a hypervisor carries the overhead of hardware emulation. Achieving that portability will depend on interoperability among container orchestrators. Even for modern cloud-native applications, storage is a critical component because applications can take advantage of persistent storage platforms and build around their features. See Clint Kitson’s blog Understanding Storage in a Cloud Native Context for more information.

Container orchestrators and runtimes make specific requests of storage services for operations such as Create/Remove, Inspect/List, Attach/Detach, and Mount/Unmount. This has led to separate attempts at solving the “storage issue” across container orchestrators and has created a divide in the industry. API requests for storage orchestration against external platforms take one of two forms. The first is an in-tree driver: native code built into the container orchestrator. The second is an out-of-tree driver delivered as a plugin. Each approach has its own pros and cons: an in-tree driver is subject to the release cycle of the container orchestrator, while an out-of-tree plugin may not be able to offer enhanced features that are tied tightly to a container orchestrator.
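
To make those requests concrete, here’s a minimal Python sketch of the kind of volume lifecycle interface an orchestrator expects a storage platform to implement. The class and method names are purely illustrative and don’t correspond to any orchestrator’s actual API.

from abc import ABC, abstractmethod

class VolumeLifecycle(ABC):
    """Illustrative only: these operations mirror the requests orchestrators
    make of storage platforms, not any specific orchestrator's API."""

    @abstractmethod
    def create(self, name: str, opts: dict) -> dict: ...

    @abstractmethod
    def remove(self, name: str) -> None: ...

    @abstractmethod
    def inspect(self, name: str) -> dict: ...

    @abstractmethod
    def list(self) -> list: ...

    @abstractmethod
    def attach(self, name: str, node: str) -> str: ...

    @abstractmethod
    def detach(self, name: str, node: str) -> None: ...

    @abstractmethod
    def mount(self, name: str, path: str) -> str: ...

    @abstractmethod
    def unmount(self, name: str) -> None: ...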

Docker
Docker was the first to tackle external storage, introducing the Docker Volume Driver Interface in 1.7 Experimental. Docker also has the Docker Plugin model, introduced in 1.13, as well as the Docker Store. Docker discovers UNIX domain socket (.sock) plugins by looking for them in the plugin directory located at /run/docker/plugins. This is an example of the out-of-tree model.

Plugins using UNIX domain socket files must run on the same Docker host, whereas plugins defined with .spec or .json files can run on a different host when a remote URL is specified. This makes centralizing storage functionality part of the plugin’s responsibility. The interface accepts JSON/RPC over HTTP. The interfaces exposed by this out-of-tree model give full volume lifecycle and orchestration capabilities to the Docker CLI. However, advanced storage features such as snapshotting or replication are not exposed to the Docker CLI.
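
To give a rough idea of what that JSON-over-HTTP exchange looks like, the Python sketch below posts volume driver requests to a plugin’s UNIX socket. The rexray.sock path and volume name are assumed examples, and error handling is left out.

import http.client
import json
import socket

class UnixSocketConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a UNIX domain socket instead of TCP."""

    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

def call(socket_path, endpoint, payload):
    """POST a JSON body to a volume plugin endpoint and decode the reply."""
    conn = UnixSocketConnection(socket_path)
    conn.request("POST", endpoint, body=json.dumps(payload).encode("utf-8"),
                 headers={"Content-Type": "application/json"})
    return json.loads(conn.getresponse().read())

# Example calls against a plugin socket (rexray.sock is an assumed name).
sock = "/run/docker/plugins/rexray.sock"
print(call(sock, "/Plugin.Activate", {}))                          # handshake
print(call(sock, "/VolumeDriver.Create", {"Name": "vol1", "Opts": {}}))
print(call(sock, "/VolumeDriver.Mount", {"Name": "vol1", "ID": "demo"}))
print(call(sock, "/VolumeDriver.Unmount", {"Name": "vol1", "ID": "demo"}))
print(call(sock, "/VolumeDriver.Remove", {"Name": "vol1"}))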

Mesos
Mesos supported only local storage until v0.23. mesos-module-dvdi was created to address this gap; its features were subsequently merged upstream and are available in Mesos 1.0+. This module relies on an ongoing project called DVDCLI, which packages the Docker Volume Driver CLI for Mesos and allows any Docker volume driver to be used with any Mesos containerizer. Similar to Docker, JSON allows the framework to talk to DVDCLI, which in turn uses JSON/RPC over HTTP to talk to the Docker Volume Driver Interface.

As mentioned before, since this approach reuses Docker volume drivers, it is an out-of-tree plugin with the same volume lifecycle capabilities and limitations as those available through the Docker CLI.
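
Here’s a rough Python sketch of what that hand-off can look like, shelling out to DVDCLI the way an isolator or executor might. It assumes DVDCLI’s mount and unmount subcommands with the --volumedriver and --volumename flags, and that mount prints the host mount path; the driver and volume names are placeholders.

import subprocess

def dvdcli_mount(driver, volume):
    """Ask DVDCLI to mount a volume via the named Docker volume driver.
    Assumes the mount path is printed to stdout on success."""
    out = subprocess.run(
        ["dvdcli", "mount",
         "--volumedriver=" + driver,
         "--volumename=" + volume],
        check=True, capture_output=True, text=True)
    return out.stdout.strip()

def dvdcli_unmount(driver, volume):
    subprocess.run(
        ["dvdcli", "unmount",
         "--volumedriver=" + driver,
         "--volumename=" + volume],
        check=True)

# Placeholder driver and volume names for illustration.
path = dvdcli_mount("rexray", "mesos-data")
print("volume mounted at", path)
dvdcli_unmount("rexray", "mesos-data")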

Kubernetes
Kubernetes is unique in that it has both in-tree and out-of-tree drivers. We’ve discussed these at length in Storage in Kubernetes Explained and What’s new with storage in Kubernetes 1.6, but let’s recap.

The in-tree drivers are native to the Kubernetes codebase and ship as part of its standard distribution. These drivers expose API commands for their storage platforms based on the interfaces Kubernetes makes available, such as Mount/Unmount, Create/Delete, and so on. Kubernetes performs all of its functions for pod creation and looks to the driver to make the specific API calls for the actions needed. In-tree drivers can also take advantage of Kubernetes features like Dynamic Provisioning and Storage Classes. The disadvantage is that any bug fix or new feature for the storage platform is dependent on the Kubernetes release cycle, which can mean 3-6 months of waiting for a fix plus continual maintenance and rebasing of the code.
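
As a hedged illustration of Dynamic Provisioning and Storage Classes, the Python sketch below builds a StorageClass and a PersistentVolumeClaim as plain dictionaries and prints them as JSON, which kubectl apply -f accepts just like YAML. The kubernetes.io/aws-ebs provisioner and its gp2 parameter are stand-ins for whichever in-tree driver your cluster actually uses.

import json

# Illustrative manifests only: the provisioner and its parameters are
# stand-ins for whichever in-tree driver your cluster actually uses.
storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "fast"},
    "provisioner": "kubernetes.io/aws-ebs",
    "parameters": {"type": "gp2"},
}

# A claim that names the StorageClass triggers dynamic provisioning, so the
# in-tree driver's Create call happens on the user's behalf.
claim = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "fast",
        "resources": {"requests": {"storage": "8Gi"}},
    },
}

# kubectl accepts JSON as well as YAML:
#   python storageclass_example.py | kubectl apply -f -
print(json.dumps({"apiVersion": "v1", "kind": "List",
                  "items": [storage_class, claim]}, indent=2))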

Out-of-tree drivers use the Flexvolume interface. Flexvolume enables users to write their own drivers and add support for their volumes in Kubernetes. Vendor drivers should be installed in the volume plugin path, /usr/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor~driver>/, on every kubelet node and on the master nodes. This lets drivers live outside of the core Kubernetes code, so feature updates and bug fixes can be released on their own schedule. The Flexvolume interface expects volume creation and deletion to happen outside of it; therefore, only Attach/Detach and Mount/Unmount capabilities are available rather than the entire volume lifecycle.
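
To show the shape of that interface, here’s a minimal Python sketch of a Flexvolume driver executable. It handles only init, mount, and unmount and answers “Not supported” for everything else; the exact call signatures have varied across Kubernetes versions, so treat the argument handling as an assumption rather than a spec.

#!/usr/bin/env python3
# Minimal Flexvolume driver skeleton (illustrative only). kubelet runs the
# executable with a subcommand and reads a JSON result from stdout.
import json
import os
import sys

def respond(status, message=""):
    print(json.dumps({"status": status, "message": message}))
    sys.exit(1 if status == "Failure" else 0)

def do_mount(mount_dir, options):
    # options arrives as a JSON string of volume parameters from the pod spec.
    opts = json.loads(options)
    os.makedirs(mount_dir, exist_ok=True)
    # ...call the storage platform here to attach and mount the real volume...
    respond("Success")

def do_unmount(mount_dir):
    # ...unmount and detach the volume here...
    respond("Success")

if __name__ == "__main__":
    cmd = sys.argv[1] if len(sys.argv) > 1 else ""
    if cmd == "init":
        respond("Success")
    elif cmd == "mount":
        # Assumed layout: mount <mount dir> ... <json options> (varies by version).
        do_mount(sys.argv[2], sys.argv[-1])
    elif cmd == "unmount":
        do_unmount(sys.argv[2])
    else:
        # Create/Delete and other lifecycle operations are outside Flexvolume's scope.
        respond("Not supported")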

All Together Now
Wrapping all of these together gives us a view of how fragmented this space can be and of the differences among the three orchestrators.

All of this means storage vendors need to create multiple integrations to be supported across the container ecosystem. The Container Storage Interface (CSI) project is in its early stages, but it will be key to the success of storage and containers going forward.

In the meantime, we’ve been “all in” here at {code} and have a storage integration for every container orchestrator. We know today’s and tomorrow’s containerized applications need storage, and there is a solution for every scenario: REX-Ray with Docker and Mesos, FlexREX with Kubernetes, the REX-Ray plugin for Docker, and a native ScaleIO driver for Kubernetes.

Explore each of these options in the {code} Labs using ScaleIO and Vagrant. If you want to learn more about what happens behind the scenes of the REX-Ray Docker Plugin, watch this session by Clint Kitson from DockerCon US 2017.