Common and not-quite-so-common tasks
- A supplement to the kubernetes.io Cheatsheet page
- Note: This is a study aid. If studying for the test, remember that only the official kubernetes.io docs can be used.
Create pod-based deployment and expose:
Name: nginx-test
Image: nginx:1.17
Limits: 200m CPU, 512 Mi RAM
Port: 80
Run commands: echo hello and ls
Make 2 replicas
Always restart
Record it for rollouts
Label the app for QA and add to the Test and Dev namespace
Expose it to the public via NodePort.
$ kubectl create ns testdev
$ kubectl run --generator=run-pod/v1 nginx-test --image=nginx:1.17 --image-pull-policy=IfNotPresent --port=80 --labels="app=nginx-test,env=qa" --replicas=2 --limits="cpu=200m,memory=512Mi" --restart=Always --namespace=testdev --record --dry-run -o=yaml > nginxtest.yaml
$ vi nginxtest.yaml
...
name: nginx-test
namespace: testdev
...
<esc>:wq
Note:
Add the namespace back in. Saving to a YAML file lets us review the spec and make any further additions. If you don't use --dry-run and -o=yaml to a file, the --namespace passed on the command line works. However, if you output to a YAML file, the namespace entry is missing from it, so we have to add it back. This has been a "bug" since 2017, through 1.16.3; it was closed in GitHub as not a bug because --namespace is expected to be used with apply. In other words, you can do $ kubectl apply -f nginxtest.yaml -n=testdev instead of the command below.
$ kubectl create -f nginxtest.yaml
$ kubectl expose pod nginx-test --type=NodePort --port=80 --record -n=testdev
Perform a rolling update of the pod to a new image:
$ kubectl set image pod nginx-test nginx-test=nginx:1.9.1 --record -n=testdev
Note:
You cannot do a rollout history on a pod. It returns: No history viewer has been implemented for "Pod".
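For example, trying it against the pod updated above just returns that message:
$ kubectl -n=testdev rollout history pod/nginx-test
< message returned: no history viewer has been implemented for "Pod" >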
Create a deployment of NGINX:
Create in prod namespace.
Use image version 1.17.
Use save-config so that apply can be used later.
$ kubectl create ns prod
$ kubectl create deployment nginx1 --image=nginx:1.17 --namespace=prod --save-config
$ kubectl -n=prod get deployments -o=wide
Perform a rolling update to deployment to new image:
Update the nginx1 deployment just created back to image 1.9.1, to test an old package on the NGINX version that was current at the time.
$ kubectl set image deployment nginx1 nginx=nginx:1.9.1 --record -n=prod
$ kubectl -n=prod get deployments -o=wide
$ kubectl -n=prod rollout history deployment/nginx1
Note: Unlike the dry-run-to-YAML case above, creating the deployment directly from kubectl doesn't drop the namespace; however, no change-cause note is recorded in the rollout history for revision #1, the first deployment.
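If you want a change-cause to show for revision #1, one option (not required; names below match the nginx1 deployment above) is to write the kubernetes.io/change-cause annotation that --record normally sets:
$ kubectl -n=prod annotate deployment nginx1 kubernetes.io/change-cause="created with image nginx:1.17"
$ kubectl -n=prod rollout history deployment/nginx1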
Update the nginx1 deployment to the newer 1.17.6 version:
$ kubectl set image deployment nginx1 nginx=nginx:1.17.6 --record -n=prod
$ kubectl -n=prod rollout history deployment/nginx1
Update the nginx1 deployment to the older 1.17.5 version:
$ kubectl set image deployment nginx1 nginx=nginx:1.17.5 --record -n=prod
$ kubectl -n=prod rollout history deployment/nginx1
Revert back to the newer 1.17.6 version:
Note: Since we are only going back one level, we could have left off the --to-revision flag.
$ kubectl -n=prod rollout undo deployment nginx1 --to-revision=3
$ kubectl -n=prod rollout history deployment/nginx1
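For reference, since this was a rollback of only one revision, the equivalent command without the flag would simply be:
$ kubectl -n=prod rollout undo deployment nginx1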
Expose as NodePort to make public:
$ kubectl -n=prod expose deployment nginx1 --type=NodePort --port=80
Our NGINX app is more popular, scale it:
$ kubectl -n=prod scale deployment nginx1 --replicas=3 --record
Note: Notice that in K8s 1.16.3, the scale overwrites the previous history entry where the rollout image was returned to 1.17.6.
Tangent:
BTW, you can expose a pod. It doesn't have to be a deployment.
$ kubectl expose pod nginx1 --type=NodePort --port=80
Create a busybox pod and have it run a command:
$ kubectl run --generator=run-pod/v1 busybox --image=busybox:1.28 --command -- sleep 3600
or
$ kubectl run --generator=run-pod/v1 busybox-test --image=busybox --image-pull-policy=IfNotPresent --port=80 --labels="app=busybox-test,env=qa" --replicas=1 --limits="cpu=200m,memory=256Mi" --restart=Always --namespace=testdev --record -- sleep 4800
or
$ kubectl run --generator=run-pod/v1 busybox-test --image=busybox --image-pull-policy=IfNotPresent --port=80 --labels="app=busybox-test,env=qa" --replicas=1 --limits="cpu=200m,memory=256Mi" --restart=Always --namespace=testdev --record --dry-run -o=yaml -- sleep 4800 > busyboxtest.yaml
$ vi busyboxtest.yaml
name: busybox-test
namespace: testdev
...
<esc>:wq
$ kubectl create -f busyboxtest.yaml
Create a busybox pod to use for testing services inside the Cluster:
You can use the nginx-test pod created above and exec into it, or run a temporary busybox pod. Without --rm, the temporary container will stay running after you exit the shell.
$ kubectl -n=testdev exec -t -i nginx-test -- bash
or
$ kubectl run --generator=run-pod/v1 busybox --image=busybox:1.28 --rm -it -- sh
/ # nc -z -v -w 2 <object> <port>
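For example, to check the nginx1 service exposed in the prod namespace earlier (service name, namespace, and port assumed from above):
/ # nc -z -v -w 2 nginx1.prod.svc.cluster.local 80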
Don't use this interactive approach if you need to pipe something out of the pod.
Better way when you need to pipe/output data from the pod:
Issue the command without a shell. This way we can take the output and write it to a file, like:
$ kubectl run --generator=run-pod/v1 tmpbusybox --image=busybox:1.28 --command -- sleep 1000
then you can pipe out information directly:
$ kubectl exec tmpbusybox -- nslookup 10-244-0-11.default.pod.cluster.local > /root/tmpbusybox.txt
$ kubectl exec tmpbusybox -- nslookup nginx1.default.svc > /root/tmpbusybox-svc.txt
$ kubectl exec tmpbusybox -- nslookup nginx1 > /root/tmpbusybox-svc2.txt
Get Info on Pods and Deployments:
$ kubectl describe pod nginx-test
$ kubectl describe deployment nginx1
$ kubectl -n=testdev logs nginx-test
$ kubectl -n=testdev logs pod/busybox-test
$ kubectl logs nginx1-5d5cfb97b6-pkj2v
$ kubectl get events
$ kubectl get ep -n=testdev
$ curl 192.168.23.48
Note: Despite what the documentation says, I could not resolve the service externally at nginx-test.apps, nginx-test.apps.svc.cluster.local, 192-168-23-48.apps, etc. Curl'ing the NodePort IP did work, though. Cluster DNS is only valid on the hosts that are part of the cluster; on those hosts I could resolve a service as servicename.default.svc. So you still need external DNS, just as we have historically.
Create role and role binding (RBAC) for access to a development and test environment:
$ vi devrole.yaml
# This role allows "matt" and other developers in devstaff to manage pods and deployments in the "devtest" namespace
# rules - apiGroups - "" indicates the core API group
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: devtest
  name: devstaff
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["pods", "deployments", "services", "endpoints"]
  verbs: ["get", "watch", "list", "create", "update", "patch", "delete"]
<esc>:wq
$ kubectl create -f devrole.yaml
$ vi devrolebinding.yaml
# This role binding allows "matt" to create/modify pods and deployments in the "devtest" namespace.
# subject - name is case sensitive
# roleRef - name must match the name of the Role to bind
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: devstaff-binding
  namespace: devtest
subjects:
- kind: User
  name: matt
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: devstaff
  apiGroup: rbac.authorization.k8s.io
<esc>:wq
$ kubectl create -f devrolebinding.yaml
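To sanity-check the role and binding, kubectl auth can-i supports impersonating the user (resources and namespace below match the role created above); both commands should answer yes:
$ kubectl auth can-i create deployments -n=devtest --as=matt
$ kubectl auth can-i delete pods -n=devtest --as=matt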
More Advanced Tasks:
Use JSONPath and custom-columns to do custom reporting:
Tip: use kubectl get <object> -o=json to determine the structure of the JSON hierarchy to walk.
$ kubectl get nodes -o=custom-columns='Node:.metadata.name,CPU:.status.capacity.cpu' --sort-by=.status.capacity.cpu
$ kubectl get nodes --sort-by=.status.addresses[0].address -o=wide
Use --selector to get the nodes that aren't master control planes and save the count to a file:
Hint: We have to use jsonpath (or --no-headers); otherwise the header row gets counted too. Since jsonpath prints the names space-separated on one line, count words rather than lines.
$ kubectl get nodes -o jsonpath='{.items[*].metadata.name}' --selector='!node-role.kubernetes.io/master' | wc -w > /root/numworkernodes.txt
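An alternative that skips jsonpath entirely is to suppress the header row and count lines instead of words:
$ kubectl get nodes --selector='!node-role.kubernetes.io/master' --no-headers | wc -l > /root/numworkernodes.txt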
Create a new certificate for the apiserver-etcd-client:
$ openssl x509 -req -in /etc/kubernetes/pki/apiserver-etcd-client.csr -CA /etc/kubernetes/pki/etcd/ca.crt -CAkey /etc/kubernetes/pki/etcd/ca.key -CAcreateserial -out /etc/kubernetes/pki/apiserver-etcd-client.crt
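To double-check the resulting certificate's subject and validity dates (plain openssl, same path as above):
$ openssl x509 -in /etc/kubernetes/pki/apiserver-etcd-client.crt -noout -subject -dates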
Isolate all pods in a namespace using a network policy:
$ kubectl create ns pergatory
$ kubectl annotate ns pergatory "net.beta.kubernetes.io/network-policy={\"ingress\": {\"isolation\": \"DefaultDeny\"}}"
Renew main "alpha" certs w/kubeadm:
$ kubeadm alpha certs check-expiration
$ kubeadm alpha certs renew --use-api &
< message returned: [certs] certificate request "kubeadm-cert-kube-apiserver-k8master1" created >
$ kubectl certificate approve kubeadm-cert-kube-apiserver-k8master1
Verify with:
$ kubectl get csr
User CSR Creation and Approval:
1. Create the namespace
$ kubectl create ns my-dev
2. Create CSR for Matt:
Get the IPs of the service/app.
$ kubectl get ep -n=my-dev
... 192.168.23.48:80 ...
$ kubectl get svc -n=my-dev
... 10.110.56.11 ...
$ cat <<EOF | cfssl genkey - | cfssljson -bare matt
{
  "CN": "matt",
  "key": {
    "algo": "rsa",
    "size": 4096
  },
  "names": [{
    "O": "matt",
    "email": "matt@mw.net"
  }]
}
EOF
3. Use the CSR w/API server:
$ cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: matt
spec:
  groups:
  - system:authenticated
  - matt
  request: $(cat matt.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - client auth
EOF
4. Approve the request:
$ kubectl certificate approve matt
...
or
... in bulk:
$ kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
Confirm:
$ kubectl get csr
5. Get the actual approved crt file:
$ kubectl get csr matt -o jsonpath='{.status.certificate}' | base64 --decode > matt.pem
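A typical next step (not in the original notes) is to build matt a kubeconfig entry from the approved cert; matt-key.pem is whatever key file cfssljson produced in step 2, and the cluster name kubernetes is the kubeadm default, so adjust as needed:
$ kubectl config set-credentials matt --client-certificate=matt.pem --client-key=matt-key.pem --embed-certs=true
$ kubectl config set-context matt-context --cluster=kubernetes --namespace=my-dev --user=matt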
Create a cluster service account and bind to a cluster role:
In the production namespace, create a service account, and create a cluster-wide role and binding so that users with a service account clusterstorageviewer can view/list the persistent storage for all pods and deployments.
$ kubectl -n=prod create serviceaccount clusterstorageviewer
$ kubectl create clusterrole clusterstorageviewer-role --resource=persistentvolumes --verb="list"
$ kubectl create clusterrolebinding clusterstorageviewer-rolebinding --clusterrole=clusterstorageviewer-role --serviceaccount=prod:clusterstorageviewer
$ kubectl -n=prod get deployment nginx-deploy -o=yaml > nginxdeploy.yaml
Add serviceAccountName under the pod template spec section (spec.template.spec).
$ vi nginxdeploy.yaml
...
spec:
  template:
    spec:
      serviceAccountName: clusterstorageviewer
...
<esc>:wq
$ kubectl apply -f nginxdeploy.yaml
Alternatively, if you didn't want a copy of the YAML file, you could make this type of edit successfully with kubectl edit on the deployment directly.
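To confirm the binding works for the service account (the full name format is system:serviceaccount:<namespace>:<name>), this should answer yes:
$ kubectl auth can-i list persistentvolumes --as=system:serviceaccount:prod:clusterstorageviewer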
Update your cluster so that all ingress is blocked by default. Afterwards, specifically allow access to the nginx production deployment.
Create a default deny ingress policy:
$ vi ingressdefdeny.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: prod
spec:
  podSelector: {}
  policyTypes:
  - Ingress
<esc>:wq
$ kubectl create -f ingressdefdeny.yaml
Create policy to allow nginx1 production deployment:
Note: Since we select pods by their metadata labels (app: nginx1), only the nginx1 pods are opened up; the empty ingress rule (- {}) allows all inbound traffic to those pods.
$ vi ingressnginx1allow.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: prod-nginx-allow
  namespace: prod
spec:
  podSelector:
    matchLabels:
      app: nginx1
  policyTypes:
  - Ingress
  ingress:
  - {}
<esc>:wq
$ kubectl create -f ingressnginx1allow.yaml
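A quick way to test both policies, assuming the CNI plug-in enforces NetworkPolicy (e.g. Calico): run a throw-away busybox in the prod namespace and nc the nginx1 service. It should connect, while pods in prod without an allow policy should not:
$ kubectl -n=prod run --generator=run-pod/v1 tmpbb --image=busybox:1.28 --rm -it -- sh
/ # nc -z -v -w 2 nginx1 80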
Check and Fix an Issue with the Kube-System static control-plane pods running on the Masters and worker Nodes.
$ kubectl get pods -n=kube-system
To look at their configuration files:
$ cd /etc/kubernetes/manifests/
$ ls -l
... etcd.yaml
... kube-apiserver.yaml
... kube-controller-manager.yaml
... kube-scheduler.yaml
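The usual flow from here (node and pod names below are placeholders): describe the failing static pod, check the kubelet, then edit the manifest; the kubelet recreates a static pod automatically when its manifest file changes:
$ kubectl -n=kube-system describe pod kube-apiserver-<nodename>
$ journalctl -u kubelet | tail -50
$ vi /etc/kubernetes/manifests/kube-apiserver.yaml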
Single Master Node - Kubeadm Single Control Plane Set-up:
On the kubernetes.io documentation Getting Started tab, start with the Installing kubeadm page, then proceed to the Creating a single control-plane cluster with kubeadm page. The steps to disable swap are not on the first page; you have to search for those (or look below). The instructions are a bit all over the place on the second page as well, since they have the network add-on table below the init section.
Steps for the first page:
$ sudo -i
Remove swap caching from your system:
# vi /etc/fstab
< put a comment mark, #, in front of the swap line, and save >
So you don't have to reboot, disable it for this session, too:
# swapoff -a
Setup IPTables:
# modprobe br_netfilter
# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# sysctl --system
Install docker.io. It must be docker.io, not just docker.
# apt-get update && apt-get install docker.io
Install with or without a specific version specified.
# apt-get install kubeadm=1.17.0 kubelet=1.17.0 kubectl=1.17.0
# apt-mark hold kubelet kubeadm kubectl
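Note: the apt-get lines above assume the Kubernetes apt repository is already set up; at the time of these notes that looked roughly like the following (check the current Installing kubeadm page, as the repository has since moved):
# apt-get update && apt-get install -y apt-transport-https curl
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
# apt-get update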
Steps for the second page:
# kubeadm init --control-plane-endpoint=12.34.56.78:6443 --pod-network-cidr=<pick from the CNI network add-on table on kubernetes.io> --upload-certs
Or, alternatively, use a config YAML file:
$ vi kubeadminit.yaml
<a>
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.16.2
controlPlaneEndpoint: "k8master.mindwatering.net"
networking:
  podSubnet: 192.168.0.0/16
<esc>:wq
# kubeadm init --config=kubeadminit.yaml --upload-certs > kubeadm-results.txt
Once you issue the init, you can then install the network add-on. Use kubectl apply -f along with the URL for the chosen network add-on's manifest.
Calico:
--pod-network-cidr=192.168.0.0/16
Flannel:
--pod-network-cidr=10.244.0.0/16
Update the config path so kubectl works:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
After this, you can use the kubeadm join command listed in the output. You may want to pipe the init command to tee (kubeadm init ... | tee savefilename.out) to save the join command for later.
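If you lose that output, the join command can be regenerated later on the master:
# kubeadm token create --print-join-command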