This is part 4 in a series of posts related to
Kubernetes. Previous posts were about the
kubelet;
the Kubernetes API, etcd, and kubectl;
and the kube-scheduler.
This time we will explore kube-controller-manager
and its role in
managing replication of pods (a.k.a. scaling).
We will pick up where part 3 left off. If you do not already have a VM set up in the state it was in at the end of that post, you will want to do that now.
What About Scaling
We will see how the kube-controller-manager
helps simplify scaling up or
down a set of pods in the cluster.
The Kubernetes kube-controller-manager watches the cluster state for
differences between the current state and the desired state, and works
to reconcile them. That is pretty much the definition of every
controller in Kubernetes, and kube-controller-manager bundles several
of them. The ones that ship with Kubernetes today are the replication
controller, endpoints controller, namespace controller, and
serviceaccounts controller. For this post we are only interested in the
replication controller included with kube-controller-manager, since
that governs scaling of pods.
Introducing Deployments
Before we get into scaling our pods, we need to learn about a new Kubernetes resource — the deployment.
A deployment wraps the pod definition with additional information,
including the number of replicas (i.e. instances) of a pod. Below is an
example deployment for our nginx pod. Note that the spec: block nested
under template: is exactly the pod definition we used in previous
posts.
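The original definition is not reproduced here, so what follows is a sketch of what it might look like. The apiVersion, labels, file name, and the plain nginx image are assumptions; the inner spec: under template: is where the pod definition from the earlier posts goes.

```yaml
# nginx-deployment.yaml -- a sketch of a deployment wrapping the nginx pod.
# The apiVersion, labels, and image are assumptions; adjust to your cluster version.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2                 # we desire two instances of the pod
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:                     # the pod spec from the previous posts
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```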
Let’s pull this definition into our VM and deploy it.
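Roughly, the steps look like the following. The URL and file name are placeholders, and the kubectl get deployments output shown in the comments is approximate for Kubernetes releases of that era.

```sh
# Inside the VM (the URL is a placeholder for wherever the file lives):
wget -q https://example.com/nginx-deployment.yaml
kubectl create -f nginx-deployment.yaml

kubectl get deployments
# NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
# nginx     2         0         0            0           10s
```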
At this point nothing will happen, much like when we deployed pods
without explicit node assignments before the kube-scheduler was
running. This time, we need the kube-controller-manager to act on
deployment resources.
What we see above is that we want (i.e. desire) two instances of the
pod described in the template. This is determined by the replicas: 2
line in the deployment specification. We also see “CURRENT”,
“UP-TO-DATE”, and “AVAILABLE” at 0. They will stay that way until the
kube-controller-manager is running to act on the deployment.
Deploying the Controller-Manager
In yet another terminal - we are up to five now - let’s get the
kube-controller-manager
in place.
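Assuming the binary is fetched the same way as the other components in earlier posts, and that the API server is listening on localhost:8080 without authentication, the steps might look like this. The release version in the URL and the --master flag value are assumptions.

```sh
# In the fifth terminal, inside the VM (version is a placeholder):
wget -q https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kube-controller-manager
chmod +x kube-controller-manager
sudo ./kube-controller-manager --master=http://localhost:8080
```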
Back in our original terminal window, run the following to see what we have going on besides pods in our cluster.
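One way to see the non-pod pieces of the cluster is kubectl get componentstatuses, which reports on the scheduler, the controller-manager, and etcd. This is an assumption about what was run here, and the output in the comments is approximate.

```sh
kubectl get componentstatuses
# NAME                 STATUS    MESSAGE              ERROR
# scheduler            Healthy   ok
# controller-manager   Healthy   ok
# etcd-0               Healthy   {"health": "true"}
```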
As a recap, etcd stores the state of the cluster and is accessed only
through the kube-apiserver. kube-scheduler ensures that pods are
assigned to kubelet “nodes” that have sufficient resources to run them.
Now we have added kube-controller-manager
with the expectation that
it helps us quickly scale up or down the instances of a pod. Let’s take
another look at the current state of the cluster.
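Checking the deployment and its pods again might look like this; the pod name suffixes are illustrative.

```sh
kubectl get deployments
# NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
# nginx     2         2         2            2           5m

kubectl get pods
# NAME                     READY     STATUS    RESTARTS   AGE
# nginx-4217019353-9r0g1   1/1       Running   0          1m
# nginx-4217019353-zq3tb   1/1       Running   0          1m
```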
We see above that the deployment has been processed now and that we have two instances of the pod described in the deployment running.
Scaling in Action
Now let’s test out scaling down.
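A sketch of scaling down to a single replica; the pod names carry over from the illustration above.

```sh
kubectl scale deployment nginx --replicas=1

kubectl get pods
# NAME                     READY     STATUS        RESTARTS   AGE
# nginx-4217019353-9r0g1   1/1       Terminating   0          3m
# nginx-4217019353-zq3tb   1/1       Running       0          3m
```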
We told the cluster that we only needed one instance in the nginx
deployment, and it promptly selected one of the pods to terminate so
that only one would be left running, per our request.
If we scale back up to three quickly enough, we can see something interesting.
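Scaling back up to three before the terminating pod is gone might look like this, again with illustrative pod names.

```sh
kubectl scale deployment nginx --replicas=3

kubectl get pods
# NAME                     READY     STATUS        RESTARTS   AGE
# nginx-4217019353-9r0g1   1/1       Terminating   0          4m
# nginx-4217019353-zq3tb   1/1       Running       0          4m
# nginx-4217019353-c7d2f   1/1       Running       0          5s
# nginx-4217019353-x8k4n   1/1       Running       0          5s
```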
Note that the pod that was terminating continues to terminate and two new pods are spun up to meet the current needs of the deployment. Once a pod begins terminating it will not be “resuscitated”.
Cleaning Up
When we are done, we can spin down the deployment and its associated
pods using the delete command through kubectl, as we have done with
pods in past posts.
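Deleting the deployment (and with it the pods) is a single kubectl command; the resource name matches the sketch above.

```sh
kubectl delete deployment nginx
# deployment "nginx" deleted

kubectl get pods
# The remaining pods show a Terminating status until they are gone.
```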
After a minute or so all the pods related to the deployment will have finished terminating.
A last cleanup step is to exit
from the Vagrant box and destroy
it.
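A sketch of the final teardown, assuming the usual Vagrant workflow:

```sh
exit                 # leave the Vagrant box, back to the host shell
vagrant destroy -f   # tear down the VM without prompting
```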
What’s Next
In a follow-up article, we will explore how newer versions of Kubernetes
— v1.10+ — change how we deploy pods and start up our controllers
like kube-scheduler and kube-controller-manager.