Ceph support coming to REX-Ray

Software-based distributed storage provides more freedom and flexibility when designing modern data centers. Because of this, adoption of software-based storage has rapidly accelerated in recent years. The success of Dell EMC’s ScaleIO, which REX-Ray has supported since the beginning, is a great example.

REX-Ray has flourished into a stable storage orchestration tool used by organizations ranging from startups to enterprises. That success has sparked an ever-growing community, with many members contributing back and new feature requests coming in all the time. The community asked for it, and we’re glad to announce that Ceph RADOS Block Device (RBD) support is currently in progress and being tested for the next major release of REX-Ray.

On April 21, 2016, the codename “Jewel” release arrived, the first Ceph release in which CephFS was considered stable. The release excited the community because it gives users more choices for data storage. Three days later, on April 24, 2016, the REX-Ray community asked for Ceph RBD support in REX-Ray issue #390. Shortly after, issue #191 was created in the tracker for libStorage (the workhorse behind REX-Ray), and development began in September 2016. {code} recognized that the container community needed a Ceph implementation providing uniform support across container platforms (including Docker, Mesos, and Kubernetes), packaged with the features and architectural simplicity that REX-Ray delivers and backed by the commitment of a major organization.

Development has been split into three phases, each adding support for different functionality. Version 1 covers the standard volume operations: create, delete, attach/detach, and mount/unmount (sketched below). Versions 2 and 3 will add more advanced functionality such as snapshots, volume copy, volume locking, and customized deployments.
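To make Version 1 concrete, here is a minimal sketch of the kind of volume lifecycle an RBD driver has to implement. The interface and types below are hypothetical illustrations in Go, not the actual libStorage driver API; a real driver would back these calls with RBD image creation, `rbd map`, and filesystem mounts.

```go
// Hypothetical sketch only: these types illustrate the Version 1 operations
// and are not the actual libStorage driver interface.
package main

import "fmt"

// Volume is a minimal, illustrative representation of an RBD-backed volume.
type Volume struct {
	Name   string
	SizeGB int64
	Pool   string // Ceph pool holding the RBD image, e.g. "rbd"
}

// VolumeDriver captures the standard operations planned for Version 1.
type VolumeDriver interface {
	Create(name string, sizeGB int64) (*Volume, error)
	Remove(name string) error
	Attach(name, host string) (device string, err error) // e.g. rbd map -> /dev/rbd0
	Detach(name, host string) error
	Mount(device, target string) error
	Unmount(target string) error
}

// rbdDriver is a stub so the sketch compiles; a real implementation would
// call out to Ceph (librbd or the rbd CLI) in each method.
type rbdDriver struct{ pool string }

func (d *rbdDriver) Create(name string, sizeGB int64) (*Volume, error) {
	return &Volume{Name: name, SizeGB: sizeGB, Pool: d.pool}, nil
}
func (d *rbdDriver) Remove(name string) error                 { return nil }
func (d *rbdDriver) Attach(name, host string) (string, error) { return "/dev/rbd0", nil }
func (d *rbdDriver) Detach(name, host string) error           { return nil }
func (d *rbdDriver) Mount(device, target string) error        { return nil }
func (d *rbdDriver) Unmount(target string) error              { return nil }

func main() {
	var drv VolumeDriver = &rbdDriver{pool: "rbd"}
	vol, _ := drv.Create("data01", 10)
	fmt.Printf("created %s/%s (%d GB)\n", vol.Pool, vol.Name, vol.SizeGB)
}
```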

Ceph RBD support (Version 1) is projected to land in REX-Ray 0.7.0, which should be available by the end of 2016. Track feature completion in REX-Ray issue #527 and the current status of the implementation in libStorage pull request #347.

We’re excited to add Ceph to the growing list of supported storage platforms, giving users more choice in how they provide persistent storage for containers. This continues the rapid trend of adopting software-based storage to build distributed storage platforms on premises or in the cloud. Keep an eye out for a new addition to the {code} Labs that makes it easy to create a Ceph and REX-Ray environment. Let us know what you think, and please share any suggestions or feature requests through the project links.

Announcing the {code} Catalyst program!

The open source community is fascinating; we meet, interact and engage with people from different organizations and backgrounds on a daily basis. The {code} team participates in the open source community because it provides freedom and flexibility in using and creating software, drives technology shifts, and produces solutions that might not have been possible without that openness and engagement.

Based on the belief that our community is what makes us better and stronger, we are proud to announce a new community initiative from the {code} team:

The {code} Catalyst Program!


The program

The {code} Catalyst program is focused on promoting thought-leading members of the open source community by creating a candid dialogue between open source advocates, developers and project managers across company boundaries. Our goal is to create an ecosystem of innovative open source advocates who lead and advance emerging technology to support software-based infrastructures.

“The number one thing for me when looking at a community to engage with is a passion for sharing and intellectual curiosity.”

Mike Coleman, Technology Evangelist, Docker Inc., {code} Catalyst

Topics covered in the program’s regularly scheduled webinars and workshops include open source use and adoption at large enterprises, automation and orchestration, CI/CD innovations, monitoring and metrics, storage and container data persistence, and of course deep-dives into {code} by Dell EMC’s open source projects.

The members

According to Merriam-Webster, a catalyst is a substance that increases the rate of a reaction, spurring rapid change. With a catalyst, reactions occur faster because they require less activation energy and often only tiny amounts of the catalyst are needed.

Based on this definition we see the {code} Catalysts as influential advocates of open source. They educate others on projects they are involved in, engage in conversations to advance software-based infrastructure and share real-life experiences with other members of the larger {code} Community.

{code} Catalyst members have vast knowledge of industry-changing open source projects that redefine how modern data centers are run, from containerization to automation to large-scale CI/CD pipeline implementations.

“The freedom of choice, collaboration and contribution inspires me to work with open source and the open source community.”

Ajeet Raina, Project Lead Engineer, Dell Technologies, {code} Catalyst

As a community we also see the value of our members getting to know each other on a personal level. Keep your eyes open for our in-person {code} Assemblies and join us for entertaining activities at prominent industry events such as OSCON, CloudNativeCon, DockerCon, Open Source Summit, MesosCon and Dell EMC World.

Make sure to follow our #codecatalyst members on Twitter here.

Do you think you or someone you know fits the bill of being a {code} Catalyst?

Apply here: codedellemc.com/community


A Cloud Native Proposal: Storage Extensibility with libStorage

Kubernetes has exploded onto the scene, gaining attention from everyone. As we mentioned last week, it is at a fork (no pun intended) in the road, where the early inclusion of certain providers threatens to limit growth to more providers and platforms. For instance, the user experience with Amazon AWS includes the native ability to consume EBS storage, but moving applications off of AWS while still using Kubernetes will prove cumbersome. Our involvement in the Kubernetes Storage SIG is meant to ensure both an optimal user experience and heterogeneous storage platform support.

Last week at KubeCon, {code} and REX-Ray were recognized on the cloud native ecosystem slide. The project is widely known for its focus on an optimal, flexible architecture and a phenomenal user experience for consumers. This important position is based on our commitment to open source and our contributions to heterogeneous orchestration tools and frameworks for storage. This week we are proud to bring you the latest {code} Lab, demonstrating our proposed libStorage integration with Kubernetes.

REX-Ray is widely used to provide storage orchestration on application platforms. It is built on top of a lesser-known but easily consumable and powerful API, model, and framework called libStorage. Today, this framework has proven successful with Docker, Mesos, and Cloud Foundry by abstracting storage functionality into a common library that is compatible across platforms and storage providers. The {code} engineering team has been hard at work on a proposal (PR #28599) for a fully functional volume API for Kubernetes, giving Kubernetes the same extensibility across numerous storage platforms.

[Figure: proposed Kubernetes and libStorage architecture]

Watch the video to see it in action and try Exploring Kubernetes with libStorage Persistent Volumes on AWS from the {code} Labs yourself. All AWS functionality takes place entirely by way of libStorage. The lab walks through the entire process of building a fully functional fork of Kubernetes with libStorage, plus a few application demos. The application deployments cover all three Kubernetes scenarios: direct pod volume mapping, persistent volume claims, and dynamic Storage Class provisioning. (Don’t stop reading; more technical details follow the video.)

Ensuring optimal user experience and growth of storage persistence use cases in this space is critical. This tends to mean a few things:

  • Application platforms should not support storage platforms directly. Instead, there must be a common storage API that is advertised and explicitly supported by a storage platform or framework/plugin.
  • Adhering to the storage specification should be easy and flexible. An out-of-the-box Go framework will be an option to make building an integration as easy as possible, while a completely native implementation of the API in any language of choice keeps things flexible.
  • Achieving storage functionality must not depend on a particular runtime implementation. Containerizing services is not required, only optional if and when necessary. (A rough client sketch follows this list.)
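As a rough illustration of the first and third points, the sketch below shows how thin a platform-side client of such a common storage API could be. The endpoint path, port, payload shapes, and types are hypothetical placeholders rather than the actual libStorage REST API; the point is that an application platform needs only an HTTP/JSON client, with no storage-vendor SDKs and no additional long-running processes on the node.

```go
// Hypothetical sketch: a thin client for a common storage API.
// The endpoint, port, and payload shapes below are illustrative placeholders,
// not the real libStorage REST API.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// VolumeRequest is what an application platform would send, regardless of
// whether the backing storage is ScaleIO, Ceph RBD, EBS, or something else.
type VolumeRequest struct {
	Name   string `json:"name"`
	SizeGB int64  `json:"sizeGB"`
}

type Volume struct {
	ID   string `json:"id"`
	Name string `json:"name"`
}

// createVolume POSTs to a (hypothetical) central storage API endpoint.
func createVolume(apiURL, service string, req VolumeRequest) (*Volume, error) {
	body, err := json.Marshal(req)
	if err != nil {
		return nil, err
	}
	// e.g. http://storage-controller:7979/volumes/ceph-rbd (illustrative URL)
	resp, err := http.Post(fmt.Sprintf("%s/volumes/%s", apiURL, service),
		"application/json", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var vol Volume
	if err := json.NewDecoder(resp.Body).Decode(&vol); err != nil {
		return nil, err
	}
	return &vol, nil
}

func main() {
	vol, err := createVolume("http://storage-controller:7979", "ceph-rbd",
		VolumeRequest{Name: "pg-data", SizeGB: 20})
	if err != nil {
		fmt.Println("create failed:", err)
		return
	}
	fmt.Println("created volume:", vol.Name, vol.ID)
}
```

Swapping ScaleIO, Ceph RBD, or EBS behind that endpoint would not change a line of a client like this, which is exactly the portability the proposal is after.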

Today the API represents commonality across the features of Docker, Mesos, Cloud Foundry, and Kubernetes. Specifically, we expect the integration with Kubernetes to achieve the following:

  • Heterogeneous storage platform and framework support
  • Reduced Kubernetes dependencies by way of minimal client with no storage platform specific dependencies in the core of Kubernetes
  • Testing burden shifted from Kubernetes to libStorage and storage platforms
  • A consistent experience for both operations and consumers where volume functionality is portable across application platforms
  • No extra node requirements or installs and an ability to centralize control of volume functionality
  • Automatic volume attachment for Persistent Volumes (PV)
  • Support for PV & PersistentVolumeClaim (PVC) binding and consumption
  • Support for dynamic provisioning and consumption of volumes using Storage Classes

Today the libStorage integration with Kubernetes is just a proposal. There is no doubt the work will continue and evolve to keep pace with the quickly moving cloud native ecosystem, and we are excited to play a part; both the REX-Ray and libStorage projects are a great starting place to accelerate it.

Take the lab for a spin and let us know what you think.

{code} in the Cloud Native Landscape at KubeCon Seattle 2016

Cloud Native discussions are in full swing in Seattle at this week’s KubeCon. The {code} team is here contributing to and engaging with the community to make Kubernetes a successful platform and choice for customers. The team’s focus continues to be storage solutions for container platforms, including Kubernetes. We even had a great validation moment where, during the opening keynote, {code} was highlighted as a leading provider for cloud-native storage.


Our team has a proven track record of successfully making storage technologies available to modern and open source infrastructure.

  1. REX-Ray debuted alongside Docker 1.7 as one of only three volume drivers available when volume plugin support was first released to the public. This early involvement helped {code} drive the adoption of stateful applications in Docker containers.
  2. Since the release of Mesos 0.23, the Docker Volume Driver has been supported natively, and that support has carried forward into Mesos 1.0 storage. This is made possible through our isolator module.
  3. More recently, REX-Ray has been integrated into Cloud Foundry to enable persistent applications and external volumes through a volume service broker and a BOSH implementation.
  4. Core to REX-Ray’s success with these platforms has been its focused approach to providing storage orchestration across different platforms. REX-Ray introduced an evolutionary package called libStorage, which has already expanded REX-Ray’s architecture choices and can play an even bigger role in ensuring an excellent user experience while bringing storage closer to platforms such as Kubernetes.

We have made great progress solving industry-wide challenges with adopting storage to successfully containerize stateful applications. All of these solutions can be taken for a test run in the {code} Labs repo.

Over the past few months, we’ve watched Kubernetes show all the signs of growing into a large, community-driven container scheduler. As the community grows, the challenges of integrating enterprise strategies, such as storage, into application platforms have to be addressed.

There is wide agreement that one of the key aspects of open source software is the user experience. So far the Kubernetes core team has opted to prefer storage drivers that are maintained as part of the project. This tends to lead towards an excellent user experience once the driver is mature since there are no extra tools, drivers, or long running processes to maintain.

Question: Can external storage providers be introduced to Kubernetes while maintaining an excellent user experience?

The big challenge is figuring out how to provide this without bringing platform dependencies into the core project. This isn’t necessarily a new challenge in the industry, but it has to be approached in a community-driven way. We are excited to participate in the Kubernetes Storage SIG to help solve these and many more challenges around external storage functionality within Kubernetes. In fact, the last meeting had 30 community members from Google, Red Hat, customers, and storage partners.


Are you looking for a way to get involved? Or have questions? Reach out to the lead {code} engineers Vladimir Vivien and Steve Wong on this project. Join the {code} community on Slack and be a part of the #project-REXRay and #kubernetes channels. Additionally, make sure to join the #sig-storage channel within the Kubernetes community on Slack.

Introducing RackHD CLI

 


RackHD is a technology stack for enabling automated hardware management and orchestration through cohesive APIs. It serves as an abstraction layer between other management layers and the underlying, vendor-specific physical hardware. Essentially, it makes bare-metal data centers easier to manage from a deployment and configuration perspective.

Out of the box, RackHD does not include a command-line interface (CLI). CLIs are critical to system administrators who want to gain quick insights into an application and enable more powerful behavior through scripting. Since {code} already hosts Go bindings via gorackhd, it seemed like a natural fit to pair gorackhd with Cobra to kickstart a simple, cross-platform RackHD CLI.

RackHD CLI is still in its infancy; we are only on version 0.1.0. This release lays the groundwork for a project that is easy to contribute to and expand on. The current capabilities of the RackHD CLI are listing RackHD nodes, SKUs and tags. Nodes can be tagged with arbitrary labels, and the list of nodes can be filtered by tag, as sketched below.
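For a feel of how such a CLI hangs together, here is a minimal Cobra sketch along the lines of the node-listing behavior described above. The command layout, the --tag flag, and the fetchNodes helper are hypothetical placeholders, not the actual RackHD CLI source; a real command would fetch nodes through the gorackhd bindings instead of returning canned data.

```go
// Illustrative sketch of a Cobra-based "nodes" command with tag filtering.
// Command names, the --tag flag, and fetchNodes are hypothetical placeholders,
// not the actual RackHD CLI implementation.
package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

type node struct {
	ID   string
	Name string
	Tags []string
}

// fetchNodes stands in for a call through the gorackhd bindings to the
// RackHD API; here it returns canned data so the sketch runs on its own.
func fetchNodes() ([]node, error) {
	return []node{
		{ID: "1a2b3c", Name: "node-01", Tags: []string{"compute"}},
		{ID: "4d5e6f", Name: "node-02", Tags: []string{"k8s-worker"}},
	}, nil
}

func hasTag(n node, tag string) bool {
	for _, t := range n.Tags {
		if t == tag {
			return true
		}
	}
	return false
}

func main() {
	var tag string

	nodesCmd := &cobra.Command{
		Use:   "nodes",
		Short: "List RackHD nodes, optionally filtered by tag",
		RunE: func(cmd *cobra.Command, args []string) error {
			nodes, err := fetchNodes()
			if err != nil {
				return err
			}
			for _, n := range nodes {
				if tag == "" || hasTag(n, tag) {
					fmt.Printf("%s\t%s\t%v\n", n.ID, n.Name, n.Tags)
				}
			}
			return nil
		},
	}
	nodesCmd.Flags().StringVar(&tag, "tag", "", "only show nodes with this tag")

	rootCmd := &cobra.Command{Use: "rackhdcli"}
	rootCmd.AddCommand(nodesCmd)
	if err := rootCmd.Execute(); err != nil {
		os.Exit(1)
	}
}
```

Invoked as something like `rackhdcli nodes --tag k8s-worker`, it would print only the nodes carrying that tag.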


Producing the RackHD CLI enabled Travis Rhoden, a {code} team member, to start a fun little side project that integrated RackHD and Kubernetes via “kube-up.sh”. This script is a developer-oriented tool for creating new Kubernetes clusters on a variety of providers, such as GCE, AWS, or Azure. Because it is a Bash script, adding RackHD support to kube-up.sh required a CLI that could allocate new nodes for a Kubernetes cluster. The RackHD driver for kube-up.sh is located here on GitHub. The kube-up.sh tool has since been deprecated in Kubernetes, but this was still a useful exercise that shows the potential of a RackHD CLI.

We at {code} believe the allocation and consumption of compute resources should be easy, regardless of whether those resources are virtual, in the cloud, or bare metal in your own data center. The RackHD CLI is another tool that lets developers interface with bare-metal machines in much the same way they would with nodes from a cloud provider. Get started by deploying a Vagrant instance of RackHD and using the RackHD CLI to pull information from it, and while you’re at it, don’t forget to contribute!

And if you’re curious about past projects with RackHD, check out the RackHD Machine Driver and a home lab.