Running Kubernetes Locally via Minikube

2016-10-08

In this post I'll show you how to run Kubernetes locally via Minikube. A basic understanding of Kubernetes is required. Let's get started!

Minikube

Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.
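
If you haven't used it before, you can quickly verify that both Minikube and kubectl are installed and on your PATH before continuing:

minikube version
kubectl version --client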

More documentation can be found on GitHub.

k8s-scrum

k8s-scrum is the demo project that we will be using in this tutorial. The project contains 2 microservices: k8s-product-owner and k8s-developer.

k8s-product-owner has a REST endpoint mapped on '/meeting'. When the endpoint is called, the product owner has a meeting with a couple of important people and they decide to request new development during an active sprint...

@RequestMapping("/meeting")
public DeveloperResponse addToActiveSprint() {
    LOG.info("Requesting out of scope development");
    return restTemplate.getForObject("http://k8s-developer-service/develop", DeveloperResponse.class);
}

As you can see, we send the request to 'k8s-developer-service', which is a regular Kubernetes service.
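
The snippet only shows the controller method. Below is a minimal sketch of how the RestTemplate behind it could be wired up (the class name RestClientConfig is my own; the actual configuration in the repository may differ). Because the hostname 'k8s-developer-service' is resolved by the cluster's DNS, a plain RestTemplate is enough and no extra service-discovery library is needed for this demo.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestClientConfig {

    // Plain RestTemplate: "k8s-developer-service" is resolved by Kubernetes DNS,
    // so no client-side load balancing is required here.
    @Bean
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}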

The k8s-developer microservice receives the develop request and returns an appropriate response to the k8s-product-owner microservice:

@RequestMapping("/develop")
public DeveloperResponse develop() {
    LOG.info("Received new develop request during an active sprint");
    return new DeveloperResponse("Put it on the backlog!");
}
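
Both services exchange a DeveloperResponse object over HTTP. The real class lives in the repository; a minimal sketch of what it could look like (the field name 'message' and the Jackson annotations are assumptions) is:

import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;

public class DeveloperResponse {

    private final String message;

    // @JsonCreator/@JsonProperty let Jackson deserialize the response
    // on the product-owner side without a no-arg constructor.
    @JsonCreator
    public DeveloperResponse(@JsonProperty("message") String message) {
        this.message = message;
    }

    public String getMessage() {
        return message;
    }
}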

The code can be found on GitHub.

Making it work

We first need to create the Kubernetes cluster (I use VirtualBox):

minikube start
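
Once the command finishes you can check that the cluster is up and running (the exact output depends on your Minikube version):

minikube status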

Once the cluster is started we need to point our Docker client at the Docker environment within Minikube, so that the images we build end up there:

eval $(minikube docker-env)
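
This exports a few DOCKER_* environment variables so that your local Docker client talks to the Docker daemon inside the Minikube VM. You can confirm it worked by checking where the client now points (the IP address should be that of the Minikube VM):

echo $DOCKER_HOST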

Now run docker ps and docker images and you'll see something like this:

CONTAINER ID        IMAGE                                                        COMMAND                  CREATED             STATUS              PORTS               NAMES
bf8d8677d0dd        gcr.io/google_containers/kubernetes-dashboard-amd64:v1.4.0   "/dashboard --port=90"   5 minutes ago       Up 5 minutes                            k8s_kubernetes-dashboard.90e9da9f_kubernetes-dashboard-zw9pf_kube-system_8ed2ed5b-8d78-11e6-94cb-36f5793698cf_e9253648
18ccd6bce12d        gcr.io/google_containers/pause-amd64:3.0                     "/pause"                 5 minutes ago       Up 5 minutes                            k8s_POD.2225036b_kubernetes-dashboard-zw9pf_kube-system_8ed2ed5b-8d78-11e6-94cb-36f5793698cf_fe538489
76e143fa4be3        gcr.io/google-containers/kube-addon-manager-amd64:v2         "/opt/kube-addons.sh"    5 minutes ago       Up 5 minutes                            k8s_kube-addon-manager.a1c58ca2_kube-addon-manager-minikube_kube-system_3e8322eb546e1d90d2fb7cac24d6d6a2_86db91dd
371a813f1d05        gcr.io/google_containers/pause-amd64:3.0                     "/pause"                 6 minutes ago       Up 6 minutes                            k8s_POD.d8dbe16c_kube-addon-manager-minikube_kube-system_3e8322eb546e1d90d2fb7cac24d6d6a2_eea99a51
REPOSITORY                                            TAG                 IMAGE ID            CREATED             SIZE
gcr.io/google_containers/kubernetes-dashboard-amd64   v1.4.0              436faaeba2e2        2 weeks ago         86.27 MB
gcr.io/google-containers/kube-addon-manager-amd64     v2                  a876fb07f9c2        4 months ago        231.1 MB
gcr.io/google_containers/pause-amd64                  3.0                 99e59f495ffa        5 months ago        746.9 kB

Those are the containers/images configured by Minikube. Now let's build our own images into the Minikube Docker environment. Navigate into each of the two microservices and run the following command in both of them:

mvn clean package docker:build

When you run docker images you will see that we now have our images in the docker environment within Minikube:

REPOSITORY                                            TAG                 IMAGE ID            CREATED              SIZE
jdruwe/k8s-developer                                  latest              f446544d04f0        21 seconds ago       195.5 MB
jdruwe/k8s-product-owner                              latest              58ce7d2ff470        About a minute ago   195.5 MB
frolvlad/alpine-oraclejdk8                            slim                f8103909759b        2 weeks ago          167.1 MB
gcr.io/google_containers/kubernetes-dashboard-amd64   v1.4.0              436faaeba2e2        2 weeks ago          86.27 MB
gcr.io/google-containers/kube-addon-manager-amd64     v2                  a876fb07f9c2        4 months ago         231.1 MB
gcr.io/google_containers/pause-amd64                  3.0                 99e59f495ffa        5 months ago         746.9 kB

We can now create a new Kubernetes deployment:

kubectl run k8s-product-owner --image=jdruwe/k8s-product-owner --port=8080 --image-pull-policy=Never 

Do the same for the other microservice:

kubectl run k8s-developer --image=jdruwe/k8s-developer --port=8080 --image-pull-policy=Never

Remember to turn off imagePullPolicy: Always (here we pass --image-pull-policy=Never), as otherwise Kubernetes won't use the images you built locally and will pull my public image from Docker Hub instead.
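
You can confirm that both deployments were created:

kubectl get deployments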

Open up the dashboard using 'minikube dashboard'. You should see something like this:

Both deployments have 1 pod running by default; you can also see them using kubectl get pods:
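
kubectl get pods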

We have the pods up and running, so the next step would be to configure a LoadBalancer service to allow web traffic to the k8s-product-owner pod(s). The load balancer itself is provisioned outside of Kubernetes by a cloud provider, and unfortunately that is not supported in Minikube at the moment:

Features that require a Cloud Provider will not work in Minikube. These include: LoadBalancers...

They do provide a workaround:

Any service of type NodePort can be accessed over the Minikube VM's IP address, on the NodePort. Let's create a service that does just that:

kubectl expose deployment k8s-product-owner --type=NodePort
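
Kubernetes assigns the service a port from the NodePort range (30000-32767 by default); you can see which one with:

kubectl get svc k8s-product-owner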

We can figure out its external IP address and port using the following command:

minikube service k8s-product-owner --url

Navigating to the URL and its /meeting endpoint will fail at this point because we have not yet configured a Kubernetes service for the developer pod(s). Remember the ...k8s-developer-service... in the REST call? Create the service now:

kubectl expose deployment k8s-developer --port=80 --target-port=8080 --name=k8s-developer-service

Let's try that REST call again:
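
For example with curl, reusing the URL from the minikube service command:

curl $(minikube service k8s-product-owner --url)/meeting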

The product-owner pod now calls the developer pod and returns its response!

Optional

You could try out scaling the k8s-developer deployment just for fun :)

kubectl scale --replicas=2 deployment/k8s-developer

You should now have 2 pods running:

You can also view the registered endpoints (developer pods):

kubectl describe svc k8s-developer-service

Calls to the service will now be distributed among its 2 running pods. If you have a comment or question, just drop me a line below.

Created by Jeroen Druwé