
Simplify Kubernetes deployments with Helm (Part 1)

Jul 20, 2017 | Announcements, Migration, MSP

In this multi-post blog series, I’ll share best practices for simplifying Kubernetes deployments using Helm. This first post gives a basic introduction to Helm and, mainly, discusses why you need a tool like Helm. Separately, we’re also releasing a series of blog posts on best practices for provisioning and managing Kubernetes clusters, so stay tuned! For now, let’s start exploring Helm.

A little background

We recently migrated a customer to Kubernetes. Their application was deployed as Docker containers on VMs—no scheduler. Our customer, in turn, had multiple customers and personalized each deployment for each customer.

Before the migration, Ansible scripts managed the containers:

  • Configuration files were generated via Ansible templates and Jinja2
  • Docker container deployments used Ansible Docker modules, with Docker environment variables to personalize parameters (a rough sketch of this pattern follows the list)
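
As a rough sketch of that pre-migration pattern (task names, paths, and variables here are illustrative, not our customer’s actual playbook), a Jinja2 template rendered each config file and the Ansible docker_container module started the container with per-customer environment variables:

    - name: Render application config from a Jinja2 template
      template:
        src: app.conf.j2
        dest: /etc/myapp/app.conf

    - name: Run the application container with per-customer settings
      docker_container:
        name: myapp
        image: "registry.example.com/myapp:{{ app_version }}"
        env:
          CUSTOMER_NAME: "{{ customer_name }}"
          CACHE_SIZE_MB: "{{ cache_size_mb }}"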

We realized we could continue improving the Ansible scripts to cover common deployment requirements, like rolling deployments and scaling the number of containers for a given microservice up or down. However, doing that in Ansible seemed like reinventing the wheel, so we decided to look for a scheduler. The customer wanted a cloud-agnostic solution and the app was already dockerized, so we decided to go the Kubernetes route.

The buzz on Kubernetes

Let’s briefly talk about Kubernetes. There is no denying Kubernetes is THE thing at the moment, and there is a vibrant community behind it, even though many of our customers use AWS and therefore already have access to EC2 Container Service (ECS). Kubernetes, of course, has come a long way and is rapidly adding features. Both Azure and Google have native support for Kubernetes, so if you are considering a multi-cloud strategy, Kubernetes is probably the best route.

In an ideal situation, you should be able to interact with multiple Kubernetes clusters—regardless of which cloud—and keep the deployment the same. When you are getting started with Kubernetes, you will work with rather static configurations. You may be able to make simple changes, such as increasing the number of replicas or changing a parameter here and there, but for complex apps you need the ability to templatize Kubernetes manifests and provide a set of configuration parameters that allow users to customize their deployments. You also need dependency management, because you want some services to wait for other services to be up. This is where Helm can help.
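
As a preview of what that looks like in Helm, here is a minimal, illustrative sketch (the values and manifest fragment are made up for this example): the manifest references named values, and each deployment overrides those values instead of editing the manifest itself.

    # values.yaml -- default parameters a user can override per deployment
    replicaCount: 2
    image:
      repository: nginx
      tag: "1.13"

    # templates/deployment.yaml (fragment) -- placeholders are filled in
    # from values.yaml, or from --set overrides at install time
    spec:
      replicas: {{ .Values.replicaCount }}
      template:
        spec:
          containers:
            - name: web
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"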

What is Helm?

Helm is a tool that streamlines installing and managing Kubernetes applications. Think of it like apt/yum/homebrew for Kubernetes. It also provides other useful features, such as templating and dependency management.

Helm uses a packaging format called charts. A chart is a collection of files that describes a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.
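
Concretely, a chart is just a directory with a conventional layout. A typical chart looks something like this (file names are illustrative):

    mychart/
      Chart.yaml          # chart metadata: name, version, description
      values.yaml         # default, user-overridable configuration values
      templates/          # templated Kubernetes manifests
        deployment.yaml
        service.yaml
        secrets.yaml
      charts/             # optional charts this chart depends on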

When you use Kubernetes, you deploy several resources, like deployments, replication controllers, services, and namespaces, to make your application functional. You can have the history of each of your deployments, but there is no straightforward way to manage the history of all your deployment resources as a single unit. Helm provides that visibility, because all the resources that belong to a chart are treated as a unit.
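
For example, every install creates a named release, and a handful of standard Helm commands operate on that release as a whole (my-release is a placeholder name):

    helm list                      # list releases and their current revision
    helm history my-release        # revision history for one release
    helm rollback my-release 1     # roll the entire release back to revision 1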

Getting started with Helm

The Helm client is easily set up in a macOS environment: use Homebrew and run brew install kubernetes-helm. After you set up the Kubernetes context so you can reach your cluster, all that is left is to run helm init, which installs the Tiller server on the Kubernetes cluster. The Helm client interacts with Tiller to install charts (curated applications).
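
In short, the setup on macOS boils down to two commands (assuming kubectl already points at your cluster):

    # Install the Helm client
    brew install kubernetes-helm

    # Install the Tiller server into the cluster your current context points to
    helm init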

Let’s try our first example:

helm install --set persistence.enabled=false stable/mysql

This will create a pod running a MySQL container. Notice that we added persistence.enabled=false to the command line. The mysql chart is written so that, by default, it requires a persistent volume, but for this example we don’t really care. Check the output and you’ll see that Helm created everything else required for a MySQL configuration under a randomly generated release name. In my case it was “joyous-pike”. It also presents you with a nicely formatted readme specific to the newly created configuration:

NAME:   joyous-pike
LAST DEPLOYED: Sat Jul  8 23:09:43 2017
NAMESPACE: default
STATUS: DEPLOYED
 
RESOURCES:
==> v1/Secret
NAME               TYPE    DATA  AGE
joyous-pike-mysql  Opaque  2     0s
 
==> v1/Service
NAME               CLUSTER-IP  EXTERNAL-IP  PORT(S)   AGE
joyous-pike-mysql  10.0.0.210  <none>       3306/TCP  0s
 
==> v1beta1/Deployment
NAME               DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
joyous-pike-mysql  1        1        1           0          0s
 
NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
joyous-pike-mysql.default.svc.cluster.local
 
To get your root password run:
 
    kubectl get secret --namespace default joyous-pike-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo
 
To connect to your database:
 
1. Run an Ubuntu pod that you can use as a client:
 
    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il
 
2. Install the mysql client:
 
    $ apt-get update && apt-get install mysql-client -y
 
3. Connect using the mysql cli, then provide your password:
    $ mysql -h joyous-pike-mysql -p
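
Because everything above belongs to the single release joyous-pike, you can check on it or tear it all down with one command each, for example:

    helm status joyous-pike        # re-display the release status and notes
    helm delete joyous-pike        # remove the resources the release created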

We’ll talk about Helm in more detail in future blog posts. A few key takeaways for now:

  • We can deploy an application with one command.
  • All resources needed are nicely packaged in configurable YAML files.
  • We can set parameters on the command line when needed (an alternative using a values file is sketched below).
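
As an alternative to --set, the same override can live in a small values file (the file name here is hypothetical), which is easier to keep in version control:

    # mysql-values.yaml
    persistence:
      enabled: false

Then install with helm install -f mysql-values.yaml stable/mysql.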

Want to read the next part of this blog series?

Read “Simplify Kubernetes deployments with Helm (Part 2)”
