Kubernetes By Component - Part 2

K8s API Server, etcd, and kubectl


This is part 2 in a series of posts related to Kubernetes. Part 1 was about the kubelet. Like part 1, this post is inspired by another 2015 post by Kamal Marhubi and is intended to update and expand on the topics with my own thoughts and learnings. We will explore kube-apiserver, its relationship to etcd, and get a taste of why kubectl is so nice.

Our Goal

As in the previous post, my goal is to provide a deeper sense of what each component in a Kubernetes cluster is doing by building up a cluster one component at a time. There is a lot of Kubernetes that can feel like “magic”, and it sometimes takes a bit of hands-on time with a component in isolation to get a feel for what it is doing.

If kubelet is the base component, then kube-apiserver (the Kubernetes API service) is the next level up. It serves as the primary interaction point for every other Kubernetes component and fronts the central store for all state in Kubernetes - etcd. kubectl, by comparison, is a convenient CLI for interacting with the Kubernetes API.

Kubernetes Cluster State in etcd

Kubernetes was built around the idea that all components are stateless and largely function by asking the API server for the desired state, comparing that to the current state, making changes, and saving the results back to etcd via the Kubernetes API. If one of these components is restarted, it picks up where it left off by starting its cycle of interaction with the Kubernetes API over again.
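
As a concrete example of that pattern, the kubelet is essentially asking the API server for the pods bound to its node and reconciling what Docker is running against that list. Once the API server and node are up later in this post, you can approximate that query yourself with a field selector (ubuntu-artful is the node name we end up with below):

$ curl --stderr /dev/null \
'http://localhost:8080/api/v1/pods?fieldSelector=spec.nodeName=ubuntu-artful' \
| jq '.items[].metadata.name'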

etcd seems to have been chosen because it is a reliable, distributed key-value store. I believe I read somewhere that we may see this portion of Kubernetes become “pluggable” in the future, but for now etcd is all there is.

One very important thing to know is that only the Kubernetes API server talks directly to etcd - every other component only knows about the API service.

Priming the Environment

As we did in part 1 of this series, we will use a Vagrant box running Ubuntu to house our work. This provides a consistent platform and also contains everything we do nicely.

Download and install Vagrant if you have not already. You may also need to install VirtualBox if you do not have it.

From there, open a terminal and provision your Vagrant box with the following commands:

$ vagrant init ubuntu/artful64
$ vagrant up

Update: Later in this post, I ran into an issue with ubuntu/artful64 related to disk space being constrained to ~2GB instead of the expected ~10GB. It’s possible that this has been addressed by the time you are reading this, but in case it hasn’t, there is a workaround for the bug at the appropriate point in the article.

From there, you will ssh into the Vagrant box and install Docker using the instructions on the Ubuntu page (also copied below). The sudo apt-get update step will probably take a few minutes to complete but ensures you are installing the latest packages. You will also probably be asked about using additional disk space at a few of these steps; I answered Y as I went through. This is similar to the previous article, but ruby was added to help with some YAML-to-JSON conversion later on.

$ vagrant ssh
$ sudo apt-get update
$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    jq \
    ruby \
    software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo apt-key fingerprint 0EBFCD88
pub   rsa4096 2017-02-22 [SCEA]
      9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
uid           [ unknown] Docker Release (CE deb) <docker@docker.com>
sub   rsa4096 2017-02-22 [S]

$ sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
$ sudo apt-get update
$ sudo apt-get install docker-ce
$ docker --version
Docker version 17.12.0-ce, build c97c6d6
$ sudo docker run hello-world

Docker should have said “hi” and you should be ready for the next step.

Then we need to grab kubelet - later it will instruct Docker to run containers for us.

$ wget -q --show-progress --https-only --timestamping \
https://storage.googleapis.com/kubernetes-release/release/v1.9.2/bin/linux/amd64/kubelet
kubelet               100%[================================>] 140.95M  16.MB/s   in 9.1s
$ chmod +x kubelet

At this point we are ready to start tackling setting up etcd and kube-apiserver.

Firing up the API Server

Now we pick back up where Kamal’s post jumps in with “Starting the API server”. As he said, we first need to get etcd running. We create a directory for storing its state and then get it running with Docker.

$ mkdir etcd-data
$ sudo docker run --volume=$PWD/etcd-data:/default.etcd \
--detach --net=host quay.io/coreos/etcd > etcd-container-id

One thing to note is that we are using --net=host so that the API server can talk to etcd at 127.0.0.1 on the default port 2379.
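
If you want to convince yourself that etcd is actually reachable there before moving on, you can hit etcd’s own version endpoint (this is plain etcd, nothing Kubernetes-specific yet):

$ curl http://127.0.0.1:2379/version

It should respond with a small JSON document reporting the etcd server and cluster versions.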

And then we grab the Kubernetes API server binary:

$ wget https://storage.googleapis.com/kubernetes-release/release/v1.9.2/bin/linux/amd64/kube-apiserver
$ chmod +x kube-apiserver

With the binary in place and the correct permissions set, we can fire it up. (Makes me remember The Crow to say that, but that’s probably not the best imagery here. ;) ) We need to tell kube-apiserver where to find etcd and the IP range of the cluster. 10.0.0.0/16 will work for us, but longer term you may want to better understand what this choice implies.

$ ./kube-apiserver \
--etcd-servers=http://127.0.0.1:2379 \
--service-cluster-ip-range=10.0.0.0/16
I0225 18:17:12.348380    9163 server.go:121] Version: v1.9.2
error creating self-signed certificates: mkdir /var/run/kubernetes: permission denied

Permission denied?! The next step only makes sense in the context of this VM - rerun that last command with sudo. You will want to be very cautious about running processes under sudo in any other environment.
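
As an aside, the error is only about the default directory kube-apiserver uses for its self-signed certificates (/var/run/kubernetes). If you would rather not run the server as root, pointing --cert-dir at a directory you can write to should also work - consider this an untested-here alternative (kube-certs is just an arbitrary directory name); the rest of this post sticks with sudo to match the original flow.

$ ./kube-apiserver \
--cert-dir=$PWD/kube-certs \
--etcd-servers=http://127.0.0.1:2379 \
--service-cluster-ip-range=10.0.0.0/16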

$ sudo ./kube-apiserver \
--etcd-servers=http://127.0.0.1:2379 \
--service-cluster-ip-range=10.0.0.0/16

You should see something like the following, and the prompt will not return while this server is running.

...
I0225 18:19:13.301716    9172 trace.go:76] Trace[648158743]: "Create /apis/apiregistration.k8s.io/v1beta1/apiservices" (started: 2018-02-25 18:19:12.29934746 +0000 UTC m=+2.925071183) (total time: 1.00233459s):
Trace[648158743]: [1.002257739s] [1.002097969s] Object stored in database

In another terminal, we need to ssh back into the Vagrant box and use curl to check the API server out.

$ vagrant ssh
$ curl http://localhost:8080/api/v1/nodes
{
  "kind": "NodeList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/nodes",
    "resourceVersion": "33"
  },
  "items": []
}

That gave us a list of nodes running in the cluster. It’s empty because we haven’t set any up yet.
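
This is also a good spot to convince yourself that the state behind that response really lives in etcd. The following is a rough sketch - it assumes the quay.io/coreos/etcd image ships etcdctl and that the API server is using the etcd3 storage backend (the default for this version), which keeps its keys under /registry:

$ sudo docker exec -e ETCDCTL_API=3 $(cat etcd-container-id) \
etcdctl get /registry --prefix --keys-only | head

You should see keys for namespaces, IP ranges, and the other objects the API server created on startup - and nothing else in our setup ever reads or writes them directly.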

Kamal points out that resourceVersion is used for concurrency control. It allows a client to send back changes to a resource along with a reference to the version it had before the change, so the server can determine whether there was a conflicting write since the client last fetched the resource.
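
If you want to see that concurrency control in action, here is a sketch using the default namespace (which the API server creates on its own): fetch it, overwrite its resourceVersion with a stale value, and try to PUT it back. The ruby one-liner is only there to tamper with the JSON.

$ curl --stderr /dev/null http://localhost:8080/api/v1/namespaces/default > ns.json
$ ruby -rjson \
-e 'ns = JSON.parse(ARGF.read); ns["metadata"]["resourceVersion"] = "1"; puts JSON.generate(ns)' \
< ns.json > ns-stale.json
$ curl --stderr /dev/null \
--header "Content-Type: application/json" \
--request PUT --data @ns-stale.json \
http://localhost:8080/api/v1/namespaces/default | jq '.reason, .code'

The server should reject the write with reason Conflict and code 409 because the resourceVersion we sent no longer matches what is stored.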

Kamal also makes use of the jq utility to parse JSON from these sorts of responses. For this part, we only want to see the items so we can try the following and see that we also have no pods currently.

$ curl --stderr /dev/null http://localhost:8080/api/v1/pods | jq '.items'
[]

Our API server is running and now it’s time to add a node.

Adding a Node

In the previous post, we started with kubelet watching a directory for pod manifests. This time we want to add pods via the API server. To do that we need to start up kubelet pointing to the API server.

In order for kubelet to find the API server, we need to tell it where to look using a kubeconfig file. For our purposes, download a file called kubeconfig into the current directory using the link below.

$ wget https://raw.githubusercontent.com/joshuasheppard/k8s-by-component/master/part2/kubeconfig
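
If you are curious what is inside (or would rather write it by hand), a minimal kubeconfig for this setup looks roughly like the following - the cluster and context names are arbitrary, and the only part that really matters is the server address, since our API server is listening insecurely on localhost:8080. The file at the link above may differ slightly.

apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://localhost:8080
  name: local
contexts:
- context:
    cluster: local
  name: local
current-context: local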

Again, the kubelet command will need to be run with sudo in the Vagrant box for ease of learning; this is not recommended in other settings.

$ sudo ./kubelet --kubeconfig=$PWD/kubeconfig

The above command is one of the departures from Kamal’s post. There, he used --api-servers (instead of --kubeconfig), a flag which is no longer available.

Having kubelet talk to an API server opens up some interesting opportunities - for one, we can point another node at the same API server to join the cluster without needing to reconfigure the API server at all.

To verify that kubelet knows about our API server we can open up (yet) another terminal session and ask about node(s).

$ vagrant ssh
$ curl --stderr /dev/null http://localhost:8080/api/v1/nodes \
| jq '.items' | head
[
  {
    "metadata": {
      "name": "ubuntu-artful",
      "selfLink": "/api/v1/nodes/ubuntu-artful",
      "uid": "5cef461b-1a62-11e8-b471-025d03b2c119",
      "resourceVersion": "66",
      "creationTimestamp": "2018-02-25T19:30:40Z",
      "labels": {
        "beta.kubernetes.io/arch": "amd64",

Now that we have a running node, let’s add a pod.

Adding a Pod

Much like in the previous post, we want to get a simple pod definition in place to test this out. We will reuse much of the prior definition with an important change - we need to specify where the pod should run. This is not needed in a complete Kubernetes cluster - because there is a component called kube-scheduler to “schedule” pods on nodes - but since we are building this up one component at a time we need to take on a little more responsibility in our pod definition.

$ wget https://raw.githubusercontent.com/joshuasheppard/k8s-by-component/master/part2/nginx.yaml

The important difference from the definition in part 1 is adding nodeName: ubuntu-artful where the value is the name of the node as found in the prior curl result.
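
For reference, the manifest is shaped roughly like this (abridged - the actual file in the linked repo carries over the full container list from part 1, so yours may contain more than one container):

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeName: ubuntu-artful
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80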

Another inconvenience here is that while a more complete Kubernetes cluster can be given YAML files as input via kubectl, the raw requests we are sending to kube-apiserver here will use JSON, so we need to convert. Kamal offers up the following as a way to convert our YAML file appropriately.

$ ruby -ryaml -rjson \
-e 'puts JSON.pretty_generate(YAML.load(ARGF))' < nginx.yaml > nginx.json

Now we can submit the request to the API server to add the pod using Kamal’s example. Make sure the API server knows the payload is JSON by setting the Content-Type header.

$ curl \
--stderr /dev/null \
--header "Content-Type: application/json" \
--request POST http://localhost:8080/api/v1/namespaces/default/pods \
--data @nginx.json | jq 'del(.spec.containers, .spec.volumes)'
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "nginx",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/nginx",
    "uid": "310002db-1a66-11e8-b471-025d03b2c119",
    "resourceVersion": "252",
    "creationTimestamp": "2018-02-25T19:58:04Z"
  },
  "spec": {
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeName": "ubuntu-artful",
    "securityContext": {},
    "schedulerName": "default-scheduler"
  },
  "status": {
    "phase": "Pending",
    "qosClass": "BestEffort"
  }
}

After a few moments, we should be able to see that the pod is now running on our node.

$ curl --stderr /dev/null http://localhost:8080/api/v1/namespaces/default/pods | \
jq '.items[] | { name: .metadata.name, status: .status } | del(.status.containerStatuses)'
{
  "name": "nginx",
  "status": {
    "phase": "Failed",
    "message": "Pod The node was low on resource: [DiskPressure].",
    "reason": "Evicted",
    "startTime": "2018-02-25T19:58:04Z"
  }
}

OK, so now we are learning something else - we need to be sure that we have enough disk available for our pods.
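
Before fixing the disk, it is worth seeing where that eviction came from. The kubelet reports conditions on the node object, and one of them is DiskPressure:

$ curl --stderr /dev/null http://localhost:8080/api/v1/nodes | \
jq '.items[0].status.conditions[] | {type: .type, status: .status}'

In this state, the DiskPressure condition should show a status of "True".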

Brief Side-Trip into Vagrant Image Disk Size

Checking the current disk usage, we see the following:

$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       2.2G  2.0G  153M  93% /

After a bit of Googling, I found that this is a known issue with the ubuntu/artful64 Vagrant box that we are using. While it will be nice when the underlying bug is addressed (it wasn’t as of 2018-02-25), we can get around the problem for now with the following:

$ sudo resize2fs /dev/sda1
resize2fs 1.43.5 (04-Aug-2017)
Filesystem at /dev/sda1 is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 2
The filesystem on /dev/sda1 is now 2621179 (4k) blocks long.

$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       9.7G  2.0G  7.7G  21% /

Back to Provisioning Our Pod

If we check the pod status again at this point, we will see that it is still marked as failed.

$ curl --stderr /dev/null http://localhost:8080/api/v1/namespaces/default/pods | \
> jq '.items[] | { name: .metadata.name, status: .status } | del(.status.containerStatuses)'
{
  "name": "nginx",
  "status": {
    "phase": "Failed",
    "message": "Pod The node was low on resource: [DiskPressure].",
    "reason": "Evicted",
    "startTime": "2018-02-25T19:58:04Z"
  }
}

If we try to POST the pod again we will get a rejection like the following:

$ curl \
--stderr /dev/null \
--header "Content-Type: application/json" \
--request POST http://localhost:8080/api/v1/namespaces/default/pods \
--data @nginx.json | jq 'del(.spec.containers, .spec.volumes)'
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "pods \"nginx\" already exists",
  "reason": "AlreadyExists",
  "details": {
    "name": "nginx",
    "kind": "pods"
  },
  "code": 409
}

Here again, we find something that would be less of an issue in a more complete Kubernetes cluster. The pod isn’t getting rescheduled now that we have more space because nothing will recreate it for us - we created a bare pod directly rather than via a controller (such as a Deployment) that would replace it, and we have no kube-scheduler to place a replacement anyway. So we delete the failed pod ourselves:

$ curl \
--stderr /dev/null \
--request DELETE http://localhost:8080/api/v1/namespaces/default/pods/nginx

That command tells the API server to remove the pod called nginx. We can confirm it is gone with the following:

$ curl --stderr /dev/null http://localhost:8080/api/v1/namespaces/default/pods | \
jq '.items[] | { name: .metadata.name, status: .status } | del(.status.containerStatuses)'

Hopefully, you will see what I saw here, which is nothing. Now we can resubmit the pod definition and verify it went as expected.

$ curl \
--stderr /dev/null \
--header "Content-Type: application/json" \
--request POST http://localhost:8080/api/v1/namespaces/default/pods \
--data @nginx.json | jq 'del(.spec.containers, .spec.volumes)'
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "nginx",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/nginx",
    "uid": "8ae6c7e4-1a6d-11e8-b471-025d03b2c119",
    "resourceVersion": "608",
    "creationTimestamp": "2018-02-25T20:50:41Z"
  },
  "spec": {
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeName": "ubuntu-artful",
    "securityContext": {},
    "schedulerName": "default-scheduler"
  },
  "status": {
    "phase": "Pending",
    "qosClass": "BestEffort"
  }
}
$ curl --stderr /dev/null http://localhost:8080/api/v1/namespaces/default/pods | \
jq '.items[] | { name: .metadata.name, status: .status } | del(.status.containerStatuses)'
{
  "name": "nginx",
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2018-02-25T20:50:41Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2018-02-25T20:50:53Z"
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2018-02-25T20:50:53Z"
      }
    ],
    "hostIP": "10.0.2.15",
    "podIP": "172.17.0.2",
    "startTime": "2018-02-25T20:50:41Z",
    "qosClass": "BestEffort"
  }
}

The pod is now running and assigned the IP 172.17.0.2 by Docker. Kamal references this as a good place to start learning about Docker’s network configuration.

Before we move on, we should double check that nginx is actually listening on that IP:

$ curl --stderr /dev/null http://172.17.0.2 | head -4
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
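
If you are curious where that address is recorded on the Docker side, the kubelet’s Docker integration runs a small “pause” container (named k8s_POD_...) that owns the pod’s network namespace, and that is the container holding the IP. Roughly - the exact container names on your box may differ, and sudo docker ps will show them:

$ sudo docker ps --filter name=k8s_POD_nginx --format '{{.ID}} {{.Names}}'
$ sudo docker inspect --format '{{.NetworkSettings.IPAddress}}' \
$(sudo docker ps --filter name=k8s_POD_nginx --format '{{.ID}}')

The second command should print 172.17.0.2, matching what the API reported.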

Kubernetes CLI - kubectl

Now that we have learned how to interact with the Kubernetes API server directly we can take a step up in terms of abstractions. You could use curl and jq and manually convert YAML to JSON - or you could use the great CLI that Kubernetes offers - kubectl. Much like Kamal said, I think you will like it.

To get started, let’s fetch the client:

$ wget https://storage.googleapis.com/kubernetes-release/release/v1.9.2/bin/linux/amd64/kubectl
$ chmod +x kubectl

Once we have this, getting a list of nodes (or pods) is pretty simple:

$ ./kubectl get nodes
NAME            STATUS    ROLES     AGE       VERSION
ubuntu-artful   Ready     <none>    1h        v1.9.2
$ ./kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
nginx     2/2       Running   0          20m

Again, Kamal was right - this is much easier to type, and the responses are easier to read. To test it out, we will spin up a copy of the existing nginx pod using kubectl. Start by duplicating the pod manifest YAML file and replacing the name.

$ sed 's/^  name:.*/  name: nginx-the-second/' nginx.yaml > nginx2.yaml

Then we can use kubectl create to start that second copy.

$ ./kubectl create --filename nginx2.yaml
pod "nginx-the-second" created
$ ./kubectl get pods
NAME               READY     STATUS    RESTARTS   AGE
nginx              2/2       Running   0          24m
nginx-the-second   2/2       Running   0          27s

Depending on how quickly you execute that second command, you may find that some of the containers are still being allocated. Give it a moment and run ./kubectl get pods again until you see what I have above.
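
If you would rather not keep re-running that by hand, kubectl can also stream changes as they happen (press Ctrl-C to stop watching):

$ ./kubectl get pods --watch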

If we want to get more information about a pod, we can use kubectl describe:

$ ./kubectl describe pods/nginx-the-second | head
Name:         nginx-the-second
Namespace:    default
Node:         ubuntu-artful/10.0.2.15
Start Time:   Sun, 25 Feb 2018 21:14:56 +0000
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           172.17.0.3
Containers:
  nginx:

And once again, we can double check that this second copy of nginx is serving requests at the IP listed above.

$ curl --stderr /dev/null http://172.17.0.3 | head -4
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>

Wrapping Up

At this point, we have a sense of what it means to start a pod in Kubernetes from the command line. There are a number of additional components you would find in a more complete Kubernetes cluster but it helps to take it a piece at a time. If nothing else, we can better appreciate how each new component makes our lives simpler by removing work we previously had to do ourselves.

Before we leave this, we should look at how to clean up our running pods. I think you will find this a bit easier than using curl.

$ ./kubectl delete pods/nginx pods/nginx-the-second
pod "nginx" deleted
pod "nginx-the-second" deleted
$ ./kubectl get pods
NAME               READY     STATUS        RESTARTS   AGE
nginx              2/2       Terminating   0          34m
nginx-the-second   2/2       Terminating   0          9m
$ ./kubectl get pods
No resources found.
$ sudo docker ps
CONTAINER ID        IMAGE                 COMMAND                 CREATED             STATUS              PORTS               NAMES
f1f55af993cd        quay.io/coreos/etcd   "/usr/local/bin/etcd"   4 hours ago         Up 4 hours                              wizardly_tereshkova
$ sudo docker stop $(cat etcd-container-id)
f1f55af993cd82393959648647bfc734ff8e1c1e611be4aadaa34bab4ad46280
$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

A last cleanup step is to exit from the Vagrant box and destroy it.

$ exit
$ vagrant destroy

What’s Next

In a future post, we will look at kube-scheduler so that we no longer need to specify the node our pod runs on.