As I prepared to present the four previous posts
as “Dissecting Kubernetes” at KCDC X,
I tried to upgrade from v1.9 to v1.11 of Kubernetes and
found that when we only have the
kubelet and the
kube-apiserver, we run into a permissions issue we didn’t have before.
Picking Up From Part 2
In part 2 of this series,
we have an Ubuntu Vagrant box running the
kubelet pointing to a
kube-apiserver with
etcd behind it, and we can talk with it using
kubectl. All’s well until you send a pod definition to it
using the v1.11.x binaries.
```shell
$ wget -q --show-progress --https-only --timestamping \
  https://raw.githubusercontent.com/joshuasheppard/k8s-by-component/master/part5/nginx-nodeName.yaml
$ ./kubectl create -f nginx-nodeName.yaml
Error from server (Forbidden): error when creating "nginx-nodeName.yaml": \
  pods "nginx" is forbidden: error looking up service account default/default: \
  serviceaccount "default" not found
```
All I had done at this point was build a new VM as I had done in the prior posts, but instead pulled the new (at the time) v1.11.0 binaries, and ran through the steps. Clearly I needed to learn more about how the expectations around Service Accounts had changed sometime after v1.9.x.
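For context, nginx-nodeName.yaml is just a bare pod pinned to a node by name, since there is no kube-scheduler in this cluster yet. The real file lives in the repo linked above; this is a hypothetical reconstruction, and the node name vagrant is an assumption:

```shell
# Hypothetical reconstruction of nginx-nodeName.yaml (the actual file is in the
# linked repo). With no kube-scheduler running, nodeName pins the pod directly
# to the kubelet's node. "vagrant" is an assumed node name.
cat > nginx-nodeName.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeName: vagrant
  containers:
  - name: nginx
    image: nginx
EOF
```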
Learning About Service Accounts
After a lot of digging — I was reading every line because I really didn’t know what I was looking for yet — I found the following in 1.10’s “Other Notable Changes > Auth” as the 11th bullet:
“Default enabled admission plugins are now…”, a list that now includes the ServiceAccount plugin.
To this point I hadn’t run into admission plugins
or needed them in my stripped-down demo of Kubernetes. As I look through
the documentation I just linked to, I remember it being less clear a few
months ago when I originally found it. Even so, when you scroll way down
you eventually find the
ServiceAccount admission controller block,
which essentially points you to another page describing service accounts.
Reading through that page, I see that when you don’t specify a service
account for your pod — and I didn’t in my
nginx-nodeName.yaml — it
gets assigned the
default service account in the namespace. From there
I made the connection… something needed to set up the
service account in the
default namespace, and nothing had done that for me
when I only had the
kubelet and kube-apiserver in my cluster.
Just to check the state of the cluster I ran the command I
found on that last page — after all it claimed, “Every namespace has a
default service account resource called default.”
```shell
$ ./kubectl get serviceAccounts
No resources found.
```
Sure enough, no dice. So… what is responsible for setting that up?
Controller Manager - Doer of “Everything”
After some Googling, I found this post
on StackOverflow. It references that the
kube-controller-manager
wasn’t running as part of the problem they were running into. And in the
answer, I found that:
The default service account for each namespace is created by the service account controller, which is a loop that is part of the kube-controller-manager binary. So, verify that binary is running, and check its logs for anything that suggests it can’t create a service account, make sure you set the “--service-account-private-key-file=somefile” to a file that has a valid PEM key.
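In other words, the service account controller is just reconciling a tiny object into each namespace. A sketch of the equivalent manual step, purely hypothetical — with a working kube-controller-manager you never do this by hand:

```shell
# Write out the ServiceAccount object the controller would otherwise create.
# It could then be applied with: ./kubectl create -f default-sa.yaml
# (Shown only to illustrate what the controller reconciles; not a step I took.)
cat > default-sa.yaml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: default
EOF
```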
My plans had been to build out a cluster one component at a time and the
kube-controller-manager was one of the last things
I was adding. If this was true, not only did I need to revisit the order
but I also needed to learn about creating PEM keys.
Reading a little further in the answer I saw this:
Alternatively, if you want to make some progress without service accounts, and come back to that later, you can disable the admission controller that is blocking your pods by removing the “ServiceAccount” option from your api-server’s
--admission-controllers flag. But you will probably want to come back and fix that later.
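On the v1.11 binaries, that flag spelling has since changed; the escape hatch would presumably look like the following, using the newer --disable-admission-plugins flag rather than the deprecated spelling from the older answer. A sketch only, not what I ended up doing:

```shell
# Hypothetical variant of the kube-apiserver invocation from this series,
# with the ServiceAccount admission plugin explicitly disabled.
# This is the "cheat" the answer describes, adapted to the v1.10+ flag name.
sudo ./kube-apiserver \
  --etcd-servers=http://127.0.0.1:2379 \
  --service-cluster-ip-range=10.0.0.0/16 \
  --disable-admission-plugins=ServiceAccount
```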
So I could cheat this if I really needed to. That said, I wanted to find a good balance between using the defaults and keeping the demo and explanations simple. Add to this, the post on StackOverflow was from 2015 and I had zero guarantee it was still accurate in 2018 given how fast Kubernetes is moving.
For now, I was determined to figure this out.
Creating PEM Keys - “The Hard Way”
Looking at what Kelsey Hightower had done in Kubernetes The Hard Way, I knew that it would not work well with my more simplistic approach to incrementally building up a cluster. But maybe I could borrow some of the ideas and preserve my approach — all the while learning a bit more. :)
After re-reading his instructions, I decided I would try keeping my
kube-controller-manager invocation as simple as possible and only add the
--service-account-private-key-file=service-account-key.pem parameter. After
all, it was the only parameter that seemed targeted at service accounts.
But to do that I needed to generate
service-account-key.pem. And in
the instructions to generate the Service Account key pair
it references the
service-account-csr.json. And to generate any of those I needed to add
cfssljson to my install steps alongside Docker.
In the end, I added a few lines to my setup-1-docker.sh and cobbled together generate-service-account-pems.sh. It wasn’t quite as simple a story as before, but I could talk to it as part of covering how I prepared the VM before getting started.
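As an aside, if pulling in cfssl feels heavy for a demo, the same key pair could presumably be produced with plain openssl: the kube-controller-manager only needs a valid PEM private key for signing tokens, and the kube-apiserver the matching public key for verifying them. A hedged sketch, not the script I actually used:

```shell
# Generate a 2048-bit RSA private key for signing service account tokens
# (what kube-controller-manager's --service-account-private-key-file consumes).
openssl genrsa -out service-account-key.pem 2048

# Extract the matching public key (what kube-apiserver's
# --service-account-key-file would consume for token verification).
openssl rsa -in service-account-key.pem -pubout -out service-account.pem
```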
Pulling It All Together
After all of this, starting from a fresh Ubuntu Bionic Vagrant VM, the demo script becomes the following.
```shell
$ vagrant up
$ vagrant ssh
$ wget -q --show-progress --https-only --timestamping \
  "https://raw.githubusercontent.com/joshuasheppard/k8s-by-component/master/part5/setup-1-docker.sh" \
  "https://raw.githubusercontent.com/joshuasheppard/k8s-by-component/master/part5/setup-2-download-k8s-1.11.sh"
$ chmod +x setup-1-docker.sh setup-2-download-k8s-1.11.sh
$ ./setup-1-docker.sh
$ ./setup-2-download-k8s-1.11.sh
$ sudo docker run --volume=$PWD/etcd-data:/default.etcd \
  --detach --net=host quay.io/coreos/etcd > etcd-container-id
$ sudo ./kube-apiserver \
  --etcd-servers=http://127.0.0.1:2379 \
  --service-cluster-ip-range=10.0.0.0/16
```
And in a second terminal:
```shell
$ vagrant ssh
$ sudo ./kubelet --kubeconfig=$PWD/kubeconfig
```
And in a third terminal:
```shell
$ vagrant ssh
$ sudo ./kube-controller-manager --kubeconfig=$PWD/kubeconfig \
  --service-account-private-key-file=service-account-key.pem
```
And in a fourth terminal we can now verify the service account and deploy our pod:
```shell
$ vagrant ssh
$ ./kubectl get serviceaccounts
NAME      SECRETS   AGE
default   1         6s
$ ./kubectl create -f nginx-nodeName.yaml
pod/nginx created
$ ./kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   2/2     Running   0          19s
```
Updating “Dissecting Kubernetes”
Now that I have this working and have a reasonable understanding of why,
I need to work through how to best integrate this into the conference
session I will give in October at dev up
called Dissecting Kubernetes. As of right now, I
think I will add a minute of explanation of the VM setup to cover
the certificate generation script and then introduce the
kube-controller-manager after I get the permissions error when I
try to deploy the first pod through the
kube-apiserver. While it
may mean hitting the
kube-controller-manager twice — once before
and once after the
kube-scheduler — I think it is fair given how central
kube-controller-manager really is to the control plane.