Introducing RackHD CLI



RackHD is a technology stack for enabling automated hardware management and orchestration through cohesive APIs. It serves as an abstraction layer between other management layers and the underlying, vendor-specific physical hardware. Essentially, it makes bare-metal data centers easier to manage from a deployment and configuration perspective.

Out of the box, RackHD does not include a command line interface (CLI). CLIs are critical for system administrators who want quick insight into an application and more powerful behavior through scripting. Since {code} already hosts golang bindings via gorackhd, pairing gorackhd with Cobra was a natural fit to kickstart a simple, cross-platform RackHD CLI.

RackHD CLI is still in its infancy – we are only on version 0.1.0. This release lays the groundwork for a project that is easy to contribute to and expand on. The RackHD CLI can currently list RackHD nodes, SKUs and tags; nodes can be tagged with arbitrary labels, and the list of nodes can be filtered by tag.
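To make the tagging model concrete, here is a rough Python sketch of the filtering idea. The real CLI is written in Go, and the node fields and tag names below are invented for illustration:

```python
# Hypothetical node records, shaped loosely like RackHD node metadata.
nodes = [
    {"id": "n1", "sku": "Intel", "tags": ["docker", "rack-a"]},
    {"id": "n2", "sku": "AMD",   "tags": ["rack-b"]},
    {"id": "n3", "sku": "Intel", "tags": ["docker"]},
]

def filter_by_tags(nodes, required):
    """Return only the nodes that carry every requested tag."""
    return [n for n in nodes if set(required) <= set(n["tags"])]

print([n["id"] for n in filter_by_tags(nodes, ["docker"])])   # ['n1', 'n3']
```

Arbitrary labels plus set-style filtering is what lets a script carve a pool of bare-metal nodes into groups it can operate on.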


Producing RackHD CLI enabled Travis Rhoden, a {code} team member, to start a fun little side-project that integrated RackHD and Kubernetes with “”. This script is a developer-oriented tool for creating new Kubernetes clusters on a variety of providers, such as GCE, AWS, or Azure. Because it is a BASH script, adding RackHD support to it required a CLI that could allocate new nodes for a Kubernetes cluster. The RackHD driver for it is located in GitHub. The tool has since been deprecated in Kubernetes, but this was still a useful exercise to show the potential of a RackHD CLI.

We at {code} believe the allocation and consumption of compute resources should be easy, regardless of whether those resources are virtual, in the cloud or on bare metal in your own data center. The RackHD CLI is another tool that enables developers to interface with bare-metal machines in much the same way they would with nodes from a cloud provider. Get started by deploying a Vagrant instance of RackHD and using the RackHD CLI to get information about it – and while you’re at it, don’t forget to contribute!

And if you’re curious about past projects with RackHD, check out the RackHD Machine Driver and a home lab.

Docker Announcements, Wienerschnitzel, and Presentation Links from ContainerCon Berlin 2016

Once again the Linux Foundation has pulled off a great event! Last week we went to Berlin for LinuxCon/ContainerCon Europe. With good speakers, sponsors, food and more, we had a blast all while focusing on our favorite topic, container technologies.

Docker Announcements

It’s something of a long-running joke that it wouldn’t be a conference unless Docker open-sourced a new piece of software. Docker didn’t disappoint when Solomon Hykes (Founder and CTO of Docker) announced InfraKit as a new part of its arsenal. Taken directly from the InfraKit repo itself:

InfraKit is a toolkit for creating and managing declarative, self-healing infrastructure. It breaks infrastructure automation down into simple, pluggable components. These components work together to actively ensure the infrastructure state matches the user’s specifications. Although InfraKit emphasizes primitives for building self-healing infrastructure, it also can be used passively like conventional tools.

Without going too deep into the weeds – InfraKit is an architecture of pluggable components that work together to deploy a replacement host when one goes down. Think of InfraKit as an orchestrator like Docker Swarm, but for hosts rather than containers.
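The “actively ensure the infrastructure state matches the user’s specifications” idea boils down to a reconciliation loop. Here is a minimal Python sketch of that loop – this is the concept only, not InfraKit’s actual API, and the `create`/`destroy` callbacks stand in for provider plugins:

```python
def reconcile(desired_count, actual_hosts, create, destroy):
    """Drive the actual set of hosts toward the declared desired count."""
    diff = desired_count - len(actual_hosts)
    if diff > 0:
        for _ in range(diff):          # too few hosts: spawn replacements
            actual_hosts.append(create())
    elif diff < 0:
        for _ in range(-diff):         # too many hosts: scale back down
            destroy(actual_hosts.pop())
    return actual_hosts

# A failed host disappears from the actual set; the next pass heals it.
counter = [2]
def create():
    counter[0] += 1
    return "host-%d" % counter[0]

hosts = ["host-1", "host-2"]           # host-3 just died and dropped out
hosts = reconcile(3, hosts, create, destroy=lambda h: None)
print(hosts)   # ['host-1', 'host-2', 'host-3']
```

Run continuously, a loop like this is what turns a static spec (“I want three hosts”) into self-healing infrastructure.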

A logical comparison would be AWS Auto-Scaling Groups that respawn EC2 instances when one has failed. Docker takes this further and uses a plug-in architecture to go beyond just EC2. It’s interesting to note that Docker Machine isn’t used under the covers. This would have been an easy target to use as a service with the array of plug-ins that have already been developed.

Container platforms are going beyond the container itself. Companies like Docker are looking to create solutions where the platform manages its own infrastructure. The {code} team is in that space with the recent release of the Docker Machine Driver for RackHD v0.2.0 and looks to continually push the envelope. InfraKit looks to have a healthy future – a RackHD plug-in would enable bare-metal container infrastructure to be managed by the platform itself. A revolutionary idea.

DockerCon 2017 has been announced for April 17-20 in Austin, Texas. We will have to wait until then to see if Docker has any more repos to open up. Or will we?

{code} Community Assembly dinner

As we all know, work/life balance is key. To keep this balance strong the {code} team is excited to announce we have started a new supper series at various events called {code} Community Assembly.

In Berlin we invited community members to join us for a night filled with German food, drinks and lively conversation. We went to Prater Gaststätte, the oldest biergarten in Berlin, where the discussions ranged from containers, to broken database schemas, to finding love. After these events, the {code} team is always reminded of how cool our community members are – thank you to all who joined us!

If you would like to be a part of our community gatherings and enjoy meeting smart, like-minded people while eating good food (let’s be honest, who doesn’t), then please reach out to Jonas Rosland on our {code} Community Slack.

The Open Source Storage Summit

October 7, 2016, marked our third time hosting the Open Source Storage Summit. This was our largest one yet, with over 50 people joining us for a full day of technical sessions and hands-on labs: the first lab, led by Jonas Rosland, focused on using Docker and REX-Ray; then David vonThenen guided us through a lab on Mesos, Marathon and REX-Ray.

Nick Thomas from Klöckner gave a funny and informative presentation on how they’ve automated their CI/CD pipeline with the help of Rancher, GitLab and their own project called SHIA. Want to learn more? Have a look at Nick’s presentation here.


Our very own {code} team member, David vonThenen, spoke on Software-Defined Storage and Container Schedulers with a focus on how he’s created a Mesos framework to automatically deploy ScaleIO on Mesos nodes. The most frequently asked questions were about the possibility of doing this for other storage solutions, how you manage Mesos nodes and ScaleIO and, naturally, how Mesos’ containerizer differs from Docker’s. Check out David’s slides here.


We like you, be our friend? To make sure you don’t miss out on any of our future events, register for our newsletter and join the {code} Community Slack!

RackHD Machine Driver v0.2.0

We are excited to announce the latest release of the RackHD Machine Driver, v0.2.0, which allows users to manage Docker hosts through RackHD. With the first version, the Machine Driver allowed a user to configure Docker on a RackHD node that already had an Operating System (OS) installed. With this new release, a RackHD workflow can be applied to the node at provision time, allowing for OS installs. Additionally, users no longer have to specify a RackHD node to use; they can instead specify a SKU and the Driver will automatically pick a node for them. Let’s break down why this is important.

Most Docker Machine drivers target cloud-based and/or virtualized infrastructure. You use Docker Machine to say “give me a Docker instance running on AWS”. If Amazon isn’t your cloud provider of choice, you can use Rackspace, Google, Digital Ocean, etc. A key component of cloud infrastructure is this ability to say “give me a machine” without caring about where the machine physically is (ignoring things like availability zones for now). You also don’t know its IP address in advance. You do choose what machine characteristics you need, such as CPU, RAM, disk space, and an OS.

Contrast this with working with bare-metal servers in your own data center. It’s not uncommon for each node to have a hard-coded IP address, often related to its position in a rack or row. When you want to use a node, you often need to know which node you are using so you can look up its IP address. Or perhaps a group of nodes is assigned to you and you are told “these are your machines, and these are their IP addresses”. If that node breaks, a technician has to go to that exact server and try to fix it, rather than just deleting it (and ceasing to pay for it) and creating a new one.

RackHD helps ease this burden by centrally managing resources in an automated way. RackHD can handle IP address assignment and takes on the task of OS installation. Advanced workflows can be written to perform tasks based on data collected from the machine itself, such as installing a specific OS on all servers from a specific manufacturer. RackHD does this with the concept of a SKU: a set of rules that categorize a machine based on information gathered during discovery. When a node is first discovered by RackHD, data is collected from sources such as dmi, lshw, lspci, IPMI, ohai, etc. You may not know what these are, but the results end up in catalogs that give you access to fine-grained details about the motherboard, processor, manufacturer, firmware versions, etc. You then define rulesets to categorize a machine into a SKU. Examples would be “nodes with an Intel chipset go in a SKU named ‘Intel’”, or “nodes with a BMC and an AMD processor go in a SKU named ‘docker’”. Perhaps you see where this is going now.
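The rule matching itself is simple to picture. Here is a toy Python sketch of SKU categorization – RackHD’s real SKU definitions are JSON rules matched against catalog data, and the catalog keys and rules below are invented for illustration:

```python
# Invented catalog snippets, standing in for data gathered from dmi/lshw/IPMI.
catalogs = {
    "n1": {"cpu.vendor": "GenuineIntel", "bmc.present": True},
    "n2": {"cpu.vendor": "AuthenticAMD", "bmc.present": True},
    "n3": {"cpu.vendor": "AuthenticAMD", "bmc.present": False},
}

# Each SKU is a set of key/value rules that must all hold for a node.
skus = {
    "Intel":  {"cpu.vendor": "GenuineIntel"},
    "docker": {"cpu.vendor": "AuthenticAMD", "bmc.present": True},
}

def categorize(catalog, skus):
    """Return the first SKU whose every rule matches the node's catalog."""
    for name, rules in skus.items():
        if all(catalog.get(k) == v for k, v in rules.items()):
            return name
    return None

print({node: categorize(cat, skus) for node, cat in catalogs.items()})
# {'n1': 'Intel', 'n2': 'docker', 'n3': None}
```

Once nodes fall into SKUs like this, “give me a machine from the ‘docker’ SKU” becomes as natural as asking a cloud provider for an instance type.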

With the latest RackHD Machine Driver you can now consume bare-metal resources in much the same way you consume cloud-based ones. If you set up appropriate SKU logic in RackHD to give you a pool of nodes that are available for use with Docker, you can use Docker Machine to grab one and configure it. You don’t have to care which physical node you are grabbing; you merely specify the SKU, much like specifying a machine type from your cloud provider. If the nodes in your SKU don’t already have an OS on them (part of your SKU logic could be to automatically install CoreOS on a node in the SKU, for example), you can give the Machine Driver the name of the RackHD workflow to install the OS of your choice, such as Ubuntu.

We think this opens a whole new model of consumption for bare-metal resources. With the rise of private and hybrid cloud, more people are trying to efficiently use their private, local resources rather than reaching out to the cloud. But the cloud is convenient! The combination of RackHD and the RackHD Machine Driver works toward making the use of bare-metal resources every bit as convenient as on-demand cloud-based infrastructure.

ScaleIO Framework for Apache Mesos

At {code} we believe that scale-out applications are at the heart of persistence in Platform 3, but there is additional complexity associated with those applications when deployed into production. It just so happens that Mesosphere thinks the same way. Florian Leibert, Co-Founder and CEO of Mesosphere, recently wrote an article entitled Welcome to the Era of Container 2.0. Simply put, Container 2.0 means the co-existence of stateless and stateful containers on the same container runtime. Leibert makes a compelling case for how DC/OS is already delivering on this vision through a rich set of services offered and built on the two-level scheduling provided by the Mesos Framework interface.


The message behind the Container 2.0 story is that we should not think of the things we deploy as applications, but instead as services that are managed by the application platform and easily consumed by its end users. This idea is embedded in the latest Mesos 1.0 release, which includes features and APIs through which services can both provide and consume storage easily.

The recent work being done inside the {code} team aligns with this goal.

The ScaleIO Framework v0.1.0, a new {code} project, takes the software-defined storage platform Dell EMC ScaleIO and wraps its capabilities into an Apache Mesos Framework. It automatically deploys and configures ScaleIO on Mesos Agents to enable external volume consumption for your persistent applications.

Deploying ScaleIO is as simple as launching any other task in Mesos when using the ScaleIO Framework. Almost instantly, all of the software (the ScaleIO packages, REX-Ray and mesos-module-dvdi) is rolled out and configured without any manual intervention. Within a couple of minutes, ScaleIO is ready to provision volumes for all of your container needs.

The ScaleIO Framework will evolve in the near future to include:

  • Ability to provision the entire ScaleIO cluster from scratch
  • Support for additional platforms (CentOS/RHEL, CoreOS)
  • Ability to monitor operational aspects of ScaleIO

Give this Framework a try and provide some feedback about your experience using it! You can find more information on the ScaleIO Framework’s GitHub page, with specific details about support, software requirements and how to launch the Framework on Mesos. There’s even a simple AWS CloudFormation template to spin up an entire ScaleIO and Apache Mesos cluster to test it out in under 4 minutes! Check out the video at the end to see it in action.

You can find me, along with the {code} team, at ContainerCon EU in Berlin, October 4th through 6th. I will be speaking there in a session entitled Game Changer: Software-Defined Storage and Container Schedulers on Thursday, October 6th at 5pm, covering this topic in more detail. I hope to see you there!

Lederhosen, Oktoberfest and ContainerCon


Earlier this year, LinuxCon and ContainerCon merged together into a single event. This merger created a much larger open source community attendance at all LinuxCon/ContainerCon events, and we expect to see the same next week in Berlin, Germany. At previous conferences we have seen developers, advocates and many others in the industry discussing topics such as deploying containers in enterprise applications, security around containers and cloud storage. This has contributed to increasingly dynamic conversations that lead directly to industry growth, and we have no doubt this trend will continue at the Berlin event.

As the team prepares for the conference – which conveniently coincides with Oktoberfest! – we are polishing our beer steins and packing our lederhosen. Coderhosen? Just kidding! Or are we? Here’s where you can find us next week:

  • Booth D39 in the expo area

  • Friday’s Open Source Storage Summit (from 09:30-15:30 – we are providing lunch)
    • The agenda is focused on container platforms and we’ve added hands-on labs for persistent storage on Docker and Mesos.

Please come by our booth and say hello — and don’t forget to pack your lederhosen!

{code} Represents at the first Dell EMC Public Event!

The Dell EMC Forum in Dallas was an event unlike any other. After all, it was the first united appearance of Dell and EMC as one company. Talk about exciting times!
As attendees walked into the Irvine Conference Center, they were greeted by a modern setting almost like something out of Star Trek. With over 1,200 registered to attend, the crowd was a mixture of customers, partners and Dell EMC employees, many of whom made sure to stop by the {code} booth to try out the virtual reality experience (the HTC Vive paired with Dell Alienware) and talk about open source.

During this one-day event, Kenny Coleman (Developer Advocate, {code}) gave an introductory session on containers. Kenny detailed how containers can lay the foundation for the future of infrastructure. He also broke down why one should choose a container platform and how a container is different from a virtual machine. With a standing-room-only crowd of engaged attendees, Kenny demonstrated container capabilities using Minecraft, foraging for inventory as a means of collecting data. He then simulated a server failure that wiped out the container; through features of REX-Ray (one of our core projects, focused on data persistence for containers), the container was restarted on another host with all of its data intact.

Thank you, Dell EMC Forum Dallas team! We look forward to being a part of the Dell EMC Forums in New York City, NY and Long Beach, CA later in 2016. We can’t wait to see you there!

Multiplayer Minecraft in Moments

By Akira Wong, Intern from UC Irvine and Proud Heng, Intern from UC San Diego

We learned a ton about how containers impact operating applications over this past summer with the {code} team. Before we began this “Summer of {code},” we needed a goal. We wanted a way to challenge ourselves and it was important that we find a project that was relevant to us and provided opportunities to explore containers, open source and what it means to be a part of the {code} team. After doing some research we came up with our research question:

What would it take to run Minecraft in a container?

Building blocks, adventure, and a playground in one game – chances are you or someone you know plays Minecraft. To properly grasp this phenomenon, check out this short video featured by WIRED.

If you have ever looked into the technical aspects of actually running a Minecraft server, you’ve probably noticed that it is notoriously tedious and even difficult to set up at times. After all, there are numerous hurdles to overcome – to deploy a server one must configure dependencies, open ports, and accept Minecraft’s end user license agreement. While this may seem trivial to some, by the time the rest of us figure out how to maintain and upgrade the server, our friends might have already moved on!

Basically, setting up and maintaining a Minecraft server is tough. Players just want to play with their friends and family.

So why not make it an easier process in the long run with containers and persistent storage?

Docker to the rescue!

Docker is a rising star in infrastructure management technology. It’s a tool that allows you to easily deploy applications. We’ll be using what Docker calls containers to run our Minecraft server. Containers let us create a bundle of everything that’s needed, skipping the tedious setup and instead focusing on easy deployment of applications such as Minecraft servers. In addition, containers can run nearly anywhere – on your laptop, your home server, in a virtual machine or on cloud hosting such as Amazon’s EC2. Even if you don’t understand the technical details, it’s time to get excited. This means that not only can we deploy a Minecraft server in less than fifteen minutes, it can also stay online forever. If we hand off the configuration to Docker and the hosting to cloud environments, then we have more time to focus on what we actually care about – playing Minecraft.

However, there’s one tiny problem – there’s always a hiccup.

In exchange for portability and automatic deployment, Docker enforces “statelessness”. This means that a container is meant to live, perform its service, and die. Ultimately, there is no difference between individual container instances. This poses a major problem for our prospective server – persistence. If data created by our server is deleted as soon as the container stops running, the sustainability of our Minecraft server is in question.

If you’ve been following along, maybe you already see the looming problem, but for those who don’t quite grasp it…

Imagine you’ve already set up your Minecraft server in a container – it was a breeze: after reading and sharing this blog, you made an AWS account and got a Minecraft server online quickly. Things have been going great. Two weeks have passed, and you and the other players have assembled a castle, built a farm, and established friendly relations with the local villagers. However, you’ve had a nagging feeling in the back of your head lately. What was that thing about containers being stateless? Would our Minecraft world data persist even if our container was killed? What would happen to all our work if this container ever shut down?

As soon as this thought occurs to you, disaster strikes! A janitor trips over a cord, there’s a combination hurricane, tsunami, earthquake, zombie apocalypse and all of AWS goes down. Darkness. Days go by and eventually AWS comes back online. You hesitantly restart the container, trembling as you log in. Only to find…

Everything is gone.

  • That castle? Disappeared.
  • Your watermelons? Never to be eaten.
  • That villager you named Villager McVillageface? You’ll never see him again.

All you can see is the empty expanse of a plains biome. Your players burst into tears when you break the news and you struggle to hold it together yourself. If only this disaster could have been averted. If only there were some way to achieve persistence in containers – then the world may have been saved.

As it turns out, there is a way to save state in containers. Using what Docker calls volume mounts, it is possible to mount local storage into the container. Then, upon restarting a container with the same volume mount, the data will still be there. However, note that this applies only to local storage; if the container host goes up in flames, or you ever want to upgrade it, you will need to jump through more hoops.
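The difference between container-local state and mounted state can be sketched in a few lines of Python – a toy model of the idea, nothing Docker-specific:

```python
class Container:
    """A toy container: scratch state dies with it, a mounted volume does not."""
    def __init__(self, volume=None):
        self.scratch = {}            # container-local state
        self.volume = volume         # externally mounted volume (a shared dict)

    def save(self, key, value):
        target = self.volume if self.volume is not None else self.scratch
        target[key] = value

# Without a volume, "restarting" (creating a fresh container) loses the data.
c1 = Container()
c1.save("world", "castle+farm")
c2 = Container()                     # restart: brand-new container
print("world" in c2.scratch)         # False

# With a volume mount, the new container sees the old data.
vol = {}
c3 = Container(volume=vol)
c3.save("world", "castle+farm")
c4 = Container(volume=vol)
print(c4.volume["world"])            # castle+farm
```

The volume here is just a dict that outlives each `Container` object – which is exactly the role a mounted storage volume plays for a real container.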

Fortunately, we can use REX-Ray to circumvent these issues and connect to remote cloud storage volumes. REX-Ray is an open source project from {code} by Dell EMC that allows us to future-proof our Minecraft server. By storing the data remotely, we don’t have to worry if our container or even the whole server is lost, while also laying the groundwork for quickly upgrading our server. REX-Ray handles the details of creating storage volumes and connecting one to our Minecraft server container. Using Docker’s volume mounts and REX-Ray, we can save our Minecraft world data so there’s no longer any need to worry about it being lost forever…

Sound good? Hop over to our GitHub project for detailed instructions and you’ll be up and running in no time.