As I prepared to present the four previous posts (part 1, part 2, part 3, part 4) as “Dissecting Kubernetes” at KCDC X, I tried to upgrade from v1.9 to v1.11 of Kubernetes and found that when we only have the kubelet and the kube-apiserver, we run into a permissions issue we didn’t have before.
Picking Up From Part 2
In part 2 of this series, we have an Ubuntu Vagrant box running kubelet pointed at kube-apiserver with etcd behind it, and we can talk to it using kubectl. All’s well until you send a pod definition to it while using the v1.11.x binaries.
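To make that concrete, here is a sketch of the kind of thing I was sending; treat the pod name, image, and node name below as illustrative stand-ins for my actual nginx-nodeName.yaml.

```sh
# Illustrative stand-in for nginx-nodeName.yaml: a bare pod pinned to a node
# with nodeName, since this stripped-down cluster has no scheduler. The pod
# name, image, and node name are assumptions.
cat > nginx-nodeName.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeName: ubuntu-bionic   # must match the kubelet's registered node name
  containers:
  - name: nginx
    image: nginx
EOF

# With the v1.11.0 binaries this create is rejected with a service-account
# related error instead of starting the pod.
kubectl create -f nginx-nodeName.yaml
```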
All I had done at this point was build a new VM as I had done in the prior posts, but instead pulled the new (at the time) v1.11.0 binaries, and ran through the steps. Clearly I needed to learn more about how the expectations around Service Accounts had changed sometime after v1.9.x.
Learning About Service Accounts
To begin this research, I dug into CHANGELOG-1.10.md and CHANGELOG-1.11.md in the Kubernetes repo looking for some mention of serviceaccount or “service account”.
After a lot of digging — I was reading every line because I really didn’t know what I was looking for yet — I found the following in 1.10’s “Other Notable Changes > Auth” as the 11th bullet:
Default enabled admission plugins are now NamespaceLifecycle, LimitRanger, ServiceAccount, …
To this point I hadn’t run into admission plugins or needed them in my stripped-down demo of Kubernetes. As I look through the documentation I just linked to, I remember it being less clear a few months ago when I originally found it. Even so, when you scroll way down you eventually find the ServiceAccount admission controller block, which essentially points you to another page describing ServiceAccounts.
Reading through that page, I see that when you don’t specify a service account for your pod (and I didn’t in my nginx-nodeName.yaml), it gets assigned the default service account in the namespace. From there I made the connection… something needed to set up the default service account in the default namespace, and nothing had done that for me when I only had kube-apiserver and kubelet in my cluster.
Just to check the state of the cluster, I ran the command I found on that last page; after all, it claimed, “Every namespace has a default service account resource called default.”
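For reference, that command is just kubectl listing the service accounts in the current namespace:

```sh
# Lists service accounts in the current (default) namespace; per the docs,
# a "default" entry should already be here.
kubectl get serviceaccounts
```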
Sure enough, no dice. So… what is responsible for setting that up?
Controller Manager - Doer of “Everything”
After some Googling, I found this post on StackOverflow. Part of the problem described there was that the kube-controller-manager wasn’t running. And in the answer, I found this:
The default service account for each namespace is created by the service account controller, which is a loop that is part of the kube-controller-manager binary. So, verify that binary is running, and check its logs for anything that suggests it can’t create a service account, make sure you set the “--service-account-private-key-file=somefile” to a file that has a valid PEM key.
Crap.
My plan had been to build out a cluster one component at a time, and the kube-controller-manager was one of the last things I was adding. If this was true, not only did I need to revisit the order, but I also needed to learn about creating PEM keys.
Reading a little further in the answer I saw this:
Alternatively, if you want to make some progress without service accounts, and come back to that later, you can disable the admission controller that is blocking your pods by removing the “ServiceAccount” option from your api-server’s --admission-controllers flag. But you will probably want to come back and fix that later.
So I could cheat this if I really needed to. That said, I wanted to find a good balance between using the defaults and keeping the demo and explanations simple. Add to this, the post on StackOverflow was from 2015 and I had zero guarantee it was still accurate in 2018 given how fast Kubernetes is moving.
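Had I taken that shortcut with the v1.11 binaries, it would have looked something like the sketch below. Note that the flag has been renamed since that 2015 answer; --disable-admission-plugins is the spelling the v1.10+ apiserver accepts, and the other flags are just the minimal ones from my demo.

```sh
# The shortcut I decided against: start the apiserver with the ServiceAccount
# admission plugin disabled so pods are admitted without a service account.
sudo ./kube-apiserver \
  --etcd-servers=http://127.0.0.1:2379 \
  --disable-admission-plugins=ServiceAccount
```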
For now, I was determined to figure this out.
Creating PEM Keys - “The Hard Way”
When I was learning Kubernetes, I bumped into Kelsey Hightower’s Kubernetes The Hard Way. In it, I remember him working with CA certificates and including those when bootstrapping the control plane.
Looking at what Kelsey had done, I knew that it would not mesh well with my simpler approach of incrementally building up a cluster. But maybe I could borrow some of the ideas and preserve my approach, all the while learning a bit more. :)
After re-reading his instructions, I decided I would try keeping my kube-controller-manager invocation as simple as possible and only add the --service-account-key-file=service-account.pem parameter. After all, it was the only parameter that seemed targeted at service accounts.
But to do that I needed to generate the service-account.pem. The instructions to generate the Service Account key pair reference the ca.pem, ca-key.pem, ca-config.json, and service-account-csr.json. And to generate any of those I needed to add go, cfssl, and cfssljson to my install steps alongside Docker.
In the end, I added a few lines to my setup-1-docker.sh and cobbled together generate-service-account-pems.sh. It wasn’t quite as simple a story as before, but I could talk to it as part of covering how I prepared the VM before getting started.
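As a rough picture of what generate-service-account-pems.sh ended up doing, here is a minimal sketch modeled on the Certificate Authority and Service Account steps from Kubernetes The Hard Way; the JSON contents are trimmed-down assumptions rather than the exact files from my script.

```sh
# Minimal CA signing config for cfssl (an assumption; the real ca-config.json
# may carry more profiles).
cat > ca-config.json <<'EOF'
{
  "signing": {
    "default": { "expiry": "8760h" },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
EOF

# Certificate signing requests for the CA and the service account key pair.
cat > ca-csr.json <<'EOF'
{ "CN": "Kubernetes", "key": { "algo": "rsa", "size": 2048 } }
EOF

cat > service-account-csr.json <<'EOF'
{ "CN": "service-accounts", "key": { "algo": "rsa", "size": 2048 } }
EOF

# Produces ca.pem and ca-key.pem.
cfssl gencert -initca ca-csr.json | cfssljson -bare ca

# Produces service-account.pem and service-account-key.pem.
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  service-account-csr.json | cfssljson -bare service-account
```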
Pulling It All Together
After all of this, and starting from a fresh Ubuntu-Bionic Vagrant VM, the demo script becomes the following.
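Roughly, the first terminal boils down to something like this; I’m assuming etcd runs in the background here, and the binary paths and flag values are illustrative rather than the literal script.

```sh
# Assumption: etcd was downloaded alongside the Kubernetes binaries; its
# defaults already listen for clients on http://localhost:2379.
./etcd &

# kube-apiserver in the foreground. service-account.pem comes from
# generate-service-account-pems.sh and lets the apiserver verify the tokens
# the controller-manager will sign.
sudo ./kube-apiserver \
  --etcd-servers=http://127.0.0.1:2379 \
  --service-account-key-file=service-account.pem
```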
And in a second terminal:
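Here I’m assuming the second terminal is the kubelet; the kubeconfig filename is a stand-in for whatever points it at http://127.0.0.1:8080, as in part 2.

```sh
# --fail-swap-on=false keeps the kubelet from refusing to start if the VM has
# swap enabled (the default since v1.8 is to bail out when swap is on).
sudo ./kubelet --kubeconfig=kubeconfig.yaml --fail-swap-on=false
```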
And in a third terminal:
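And I’m assuming the third terminal is the new piece, the kube-controller-manager, pointed at the insecure apiserver port and at the signing key. The flag is the one the StackOverflow answer names, and the key filename matches what cfssljson -bare service-account emits.

```sh
# The service account and token controllers run in here; the private key is
# what the token controller uses to sign the default service account's token.
sudo ./kube-controller-manager \
  --master=http://127.0.0.1:8080 \
  --service-account-private-key-file=service-account-key.pem
```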
And in a fourth terminal we can now verify the service account and deploy our pod:
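The fourth terminal comes down to something like this; the --server flag is only there to be explicit about which apiserver we mean.

```sh
# The default service account should now exist...
kubectl --server=http://127.0.0.1:8080 get serviceaccounts

# ...so the ServiceAccount admission plugin no longer rejects the pod.
kubectl --server=http://127.0.0.1:8080 create -f nginx-nodeName.yaml
kubectl --server=http://127.0.0.1:8080 get pods
```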
Updating “Dissecting Kubernetes”
Now that I have this working and have a reasonable understanding of why, I need to work through how best to integrate this into the conference session I will give in October at dev up called Dissecting Kubernetes. As of right now, I think I will add a minute of explanation of the VM setup to cover the certificate generation script and then introduce the kube-controller-manager after I get the permissions error when I try to deploy the first pod through the kube-apiserver. While it may mean hitting the kube-controller-manager twice, once before and once after the kube-scheduler, I think it is fair given how important the kube-controller-manager really is to the control plane.