Overview:
- OpenShift cluster, applications, and users administration
- Maintenance and Troubleshooting of Kubernetes clusters
- Security management of the cluster
- User Provisioned Infrastructure (UPI)
- Cluster applications consist of multiple resources that are configured together, and each resource has a definition document and a configuration applied.
- Declarative paradigm of resource management: specify desired states that the system will configure, vs. imperative commands that manually configure the system step-by-step
Software Alphabet Soup:
- OpenShift Client (oc), the OpenShift command-line interface
- Red Hat OpenShift Container Platform (RHOCP, also abbreviated OCP or ROCP in these notes), based on Kubernetes
- Single-node OpenShift (SNO), a single-node implementation, meaning an RHOCP cluster running on a single node (host server), often on bare metal
Using Deployment Strategies:
- Deployment strategies change or upgrade applications/instances with or without downtime so that users barely notice a change
- Users generally access applications through a route handled by a router, so updates can focus on the DeploymentConfig object features or routing features.
- Most deployment strategies are supported through the DeploymentConfig object, and additional strategies are supported through router features.
- - Object features impact all routes that use the application
- - Router features impact targeted individual routes
- If a deployment readiness check fails, the DeploymentConfig object retries running the pod until it times out. The default timeout is 10m (600s), set in dc.spec.strategy.*params --> timeoutSeconds
Rolling Deployment Updates:
- Default deployment strategy when none specified in the DeploymentConfig object
- Replaces instances of previous application/deployment with new versions by deploying new pods and waiting for them to become "ready" before scaling down the old version instances.
- - Waiting on the new version to become "ready" before continuing acts as a "canary test", and this method is a form of "canary deployment".
- Aborts if the new pods do not become ready; the deployment then rolls back to its previous version.
- Should not be used if the old application version is not compatible with, and cannot run alongside, the new version. The application should be designed for "N-1" compatibility.
- The rollingParams defaults:
- - updatePeriodSeconds - wait time for individual pod updates: 1
- - intervalSeconds - wait time after update for polling deployment status: 1
- - timeoutSeconds (optional) - wait time for scaling up/down event before timeout: 600
- - maxSurge (optional) - maximum percentage or number of instance rollover at one time: "25%"
- - maxUnavailable (optional) - maximum percentage or number of instances down/in-process at one time: "25%" (or 1 in OC)
- - pre and post - default to {}, are lifecycle hooks to be done before and after the rolling update
- If you want faster rollouts, use a high maxSurge value. If you want to stay within low resource quotas and partial unavailability is acceptable, limit with maxUnavailable (see the sketch below).
- If you need complex checks (such as end-to-end workload workflows against the new instance(s)), use a custom or blue-green deployment strategy instead of a simpler rolling update.
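For reference, a minimal sketch of how these parameters appear in a DeploymentConfig strategy stanza (the values shown are the defaults listed above, for illustration only):
  spec:
    strategy:
      type: Rolling
      rollingParams:
        updatePeriodSeconds: 1
        intervalSeconds: 1
        timeoutSeconds: 600
        maxSurge: "25%"
        maxUnavailable: "25%"
        pre: {}
        post: {}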
Important:
In RHOCP, maxUnavailable is 1 for all machine config pools. Red Hat recommends not changing this value for the control plane pool; update one control plane node at a time.
Rolling Deployment Updates Order:
1. Executes pre lifecycle hook
2. Scales up new replication controller-based instances on surge count/percentage
3. Scales down old replication controller-based instances based on the maxUnavailable count/percentage
4. Repeats the scaling in steps 2-3 until the new replication controller has reached the desired replica count and the old replication controller has been scaled to 0
5. Executes post lifecycle hook
Example rolling deployment demo from RH documentation:
Set up an application to roll over:
[admin@rocp ~]$ oc new-app quay.io/openshifttest/deployment-example:latest
[admin@rocp ~]$ oc expose svc/deployment-example
[admin@rocp ~]$ oc scale dc/deployment-example --replicas=3
The following tag command will cause a rollover:
[admin@rocp ~]$ oc tag deployment-example:v2 deployment-example:latest
Watch the v1 to v2 rollover with:
[admin@rocp ~]$ oc describe dc deployment-example
Perform a rolling deployment update using the ROCP Developer Perspective:
- web console --> Developer perspective --> Topology view --> select/highlight application node --> Overview (tab/panel)
- In the Overview panel, confirm Update Strategy: Rolling, click Actions (dropdown) --> select Start Rollout
Edit Deployment configuration, image settings, and environmental variables in the ROCP Developer Perspective:
- web console --> Developer perspective --> Topology view --> click/open application --> Details (panel)
- In the Details panel, click Actions (dropdown) --> select Edit Deployment
- In the Edit Deployment window, edit the options desired:
- - Click Pause rollouts to temporarily disable updated application rollouts
- - Click Scaling to change the number of instance replicas
- - Click Save (button)
Recreate Deployment Update:
- Recreate deployment strategy
- Incurs downtime because, for a brief period, no instances of your application are running.
- Old code and new code do not run at the same time.
- Basic rollout behavior
- Use when:
- - Migration data transformation hooks have to be run before the new deployment starts
- - Application doesn't support old and new versions of code running in a rolling deployment
- - Application requires a ReadWriteOnce (RWO) volume, which cannot be shared between multiple replicas
- Supports pre, mid, and post lifecycle hooks
- The recreateParams are all optional
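For reference, a minimal sketch of a Recreate strategy stanza with a mid lifecycle hook (the container name and migration command are illustrative, not from this example):
  spec:
    strategy:
      type: Recreate
      recreateParams:
        mid:
          failurePolicy: Abort
          execNewPod:
            containerName: mysql
            command: ["/bin/sh", "-c", "/opt/migrate-data.sh"]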
Recreate Deployment Updates Order:
1. Executes pre lifecycle hook
2. Scales down previous deployment to 0 instances
3. Executes mid lifecycle hook
4. Scales up new deployment
5. Executes post lifecycle hook
Note:
- If the number of replicas is > 1, the first instance is validated for readiness (waits for "ready") before the rest of the instances are scaled up. If the first replica fails, the recreate deployment fails and aborts.
Perform a recreate deployment update using the ROCP Developer Perspective:
- web console --> Developer perspective --> Topology view --> click/open application node --> Details (panel)
- In the Details panel, click Actions (dropdown) --> select Edit Deployment Config
- - In the YAML editor, change the spec.strategy.type to Recreate
- - Click Save (button)
- web console --> Developer perspective --> Topology view --> highlight/select application node --> Overview (tab/panel)
- In the Overview panel, confirm Update Strategy: Recreate, click Actions (dropdown) --> select Start Rollout
Imperative Commands vs. Declarative Commands:
Imperative commands in Kubernetes directly manipulate the state of the system by executing specific commands, while declarative management defines the desired state in configuration files that the system converges toward.
The imperative approach lets the administrator issue step-by-step commands, where the result of each command gives the administrator the flexibility to adapt based on the previous command's response. The declarative approach uses written instructions, called manifests, which Kubernetes reads and applies to the cluster to reach the state the resource manifest defines. The industry generally prefers the latter due to:
- Increased reproducibility/consistency
- Better version control
- Better GitOps methodology
Resource Manifest:
- A file in YAML or JSON format, and thus a single document that can readily be version-controlled
- Simplifies administration by encapsulating all the attributes of an application in a file, or a set of related files, which can then be run repeatedly with consistent results, enabling the CI/CD pipelines of GitOps.
Imperative command example:
[admin@rocp ~]$ kubectl create deployment mysql-pod --port 3306 --image registry.ocp4.mindwatering.net:8443/mysql:latest --env="MYSQL_USER=dbuser" --env="MYSQL_PASSWORD=hardpassword" --env="MYSQL_DATABASE=dbname"
deployment.apps/mysql-pod created
Adding the --save-config and --dry-run=client options (with -o yaml) renders what would have been created, in that resource's configuration format, without sending it to the cluster, so it can be redirected into a manifest file.
[admin@rocp ~]$ kubectl create deployment mysql-pod --namespace=mysql-manifest --port 3306 --image registry.ocp4.mindwatering.net:8443/mysql:latest --replicas=1 --env="MYSQL_USER=dbuser" --env="MYSQL_PASSWORD=hardpassword" --env="MYSQL_DATABASE=dbname" --save-config --dry-run=client -o yaml > ~/mysql-deployment.yaml
[admin@rocp ~]$ cat mysql-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: mysql-manifest
  annotations:
    ...
  creationTimestamp: null
  labels:
    app: mysql-pod
  name: mysql-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql-pod
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mysql-pod
    spec:
      containers:
      - image: registry.ocp4.mindwatering.net:8443/mysql:latest
        name: mysql
        env:
        - name: MYSQL_USER
          value: dbuser
        - name: MYSQL_PASSWORD
          value: hardpassword
        - name: MYSQL_DATABASE
          value: dbname
        ports:
        - containerPort: 3306
        resources: {}
status: {}
Notes:
- The order of the parameters matters. For example, if the --env options are moved earlier in the command, they are not added.
- Never include passwords in plain text like this; abstract them behind a Secret and reference the credential instead.
- Add the Service resource manifest to this one as a single file, separated by the --- delimiter, or keep them in separate files that are loaded together (see the sketch below).
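As an illustration of those last two notes, a hedged sketch (the Secret name mysql-credentials is hypothetical, not from the example above): the password would be referenced from a Secret in the container spec, and the Service would be appended to the same file after a --- delimiter.
        env:
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-credentials
              key: password
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-pod
  namespace: mysql-manifest
spec:
  selector:
    app: mysql-pod
  ports:
  - port: 3306
    targetPort: 3306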
The declarative command syntax:
[admin@rocp ~]$ kubectl create -f ~/mysql-deployment.yaml
IMPORTANT:
- The kubectl create command above does not take into account the current running state of the resource, in this case the mysql-pod resource. Executing kubectl create -f against a manifest for a live resource gives an error because mysql-pod is already running. When using kubectl create to create/deploy a resource, the --save-config option produces the annotations required for future kubectl apply commands to operate.
- In contrast to kubectl create, the kubectl apply -f command is declarative and tries to apply updates without causing issues: it considers the difference between the current resource state in the cluster and the intended resource state expressed in the manifest. If the resource specified in the manifest file does not exist, then the kubectl apply command creates the resource. If any fields in the last-applied-configuration annotation of the live resource are not present in the manifest, then the command removes those fields from the live configuration. After applying changes to the live resource, the kubectl apply command updates the last-applied-configuration annotation of the live resource to account for the change.
- The kubectl apply command compares three sources: the manifest file, the live configuration of the resource(s) in the cluster, and the configuration stored in the last-applied-configuration annotation.
To help verify syntax and whether an applied manifest update would succeed, use the --dry-run=server and --validate=true flags. The --dry-run=client option does not include the server-side validation by the cluster's resource controllers that --dry-run=server provides.
[admin@rocp ~]$ kubectl apply -f ~/mysql-deployment.yaml --dry-run=server --validate=true
deployment.apps/mysql-pod created (server dry-run)
Diff Tools vs Kubectl Diff:
Kubernetes resource controllers automatically add annotations and attributes to the live resource, which makes the output of generic OS text-based diff tools report many differences that have no impact on the resource configuration, causing confusion and wasted time. The kubectl diff command confirms whether a live resource matches the resource configuration that a manifest provides. Because other tools cannot know all the details of how controllers might change a resource, the kubectl diff tool lets the cluster determine whether a change is meaningful. Moreover, GitOps tools depend on the kubectl diff command to determine whether anyone changed resources outside the GitOps workflow.
OC Diff Update:
Like the kubectl diff command, oc diff compares the running deployment's configuration against the file specified in the diff. Applying manifest changes may not generate new pods for changes to secrets and configuration maps, because these elements are only read at deployment/pod start-up. If the configuration changes require a restart, it has to be done separately: the pod could be deleted, but the oc rollout restart command stops and replaces pods to minimize downtime.
[admin@rocp ~]$ oc diff -f mysql-pod.yaml
or
[admin@rocp ~]$ cat mysqlservice.yaml | oc diff -f -
[admin@rocp ~]$ oc rollout restart deployment mysql-pod
OC Patch Update:
The oc patch command allows partial YAML or JSON snippets to be applied to live resources in a repeatable, declarative way. The patch applies to a deployment/pod regardless of whether the patched field already exists in the manifest YAML file - existing configuration is updated, and new configuration is added.
[admin@rocp ~]$ oc patch deployment mysql-pod -p '<insert-json-snippet>'
deployment/mysql-pod patched
[admin@rocp ~]$ oc patch deployment mysql-pod --patch-file ~/mysql-deploypatch.yaml
deployment/mysql-pod patched
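For example, a hedged sketch of an inline strategic merge patch that scales the mysql-pod deployment to 3 replicas:
[admin@rocp ~]$ oc patch deployment mysql-pod -p '{"spec":{"replicas":3}}'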
CLI Reference:
docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/cli_tools/index#cli-developer-commands
Creating Manifests from Git:
Maintaining application manifests in Git provides version control and the ability to deploy new versions of apps from Git. When you set up your Git (or Git Bash) access, you typically create a folder structure in a specific location on the workstation where it is run.
In this example, our git folder/project is: ~/gitlab.mindwatering.net/mwdev/mysql-deployment/
Version tags are applied when you commit; this Git repo has tags v1.0 and v1.1.
a. Login:
[admin@rocp ~]$ oc login -u myadminid -p <mypassword> https://api.ropc.mindwatering.net:6443
Login successful ...
b. Create new OC project:
[admin@rocp ~]$ oc new-project mysql-deployment
Now using project "mysql-deployment" on server ...
Note:
- To switch projects, use the command: oc project <project-name>
c. Clone the repo and switch to the v1.1 tag:
[admin@rocp ~]$ cd ~/gitlab.mindwatering.net/mwdev/
[admin@rocp ~]$ git clone https://github.mindwatering.net/mwdev/mysql-deployment.git
Cloning into 'mysql-deployment' ...
[admin@rocp ~]$ git log --oneline
... (HEAD -> main, tag: <branchversion>, origin ...
<Note the tag version number. That's the version of the application manifest for the mysql-deployment app>
[admin@rocp ~]$ cd mysql-deployment/
[admin@rocp ~]$ git checkout v1.1
branch 'v1.1' set up to track 'origin/v1.1' ...
d. In the app's folder, validate the v1.1 version of the mysql-deployment application can be deployed:
[admin@rocp ~]$ oc apply -f . --validate=true --dry-run=server
<confirm dry run>
e. After a successful dry-run, deploy the application:
[admin@rocp ~]$ oc apply -f .
f. Watch the deployments and pod instances and confirm the new app becomes available and its pods have a running state:
(Technically, this watch command looks at all deployments and pods in the current project, not just the one just deployed, so we will likely see much more than just the new app.)
[admin@rocp ~]$ watch oc get deployments,pods
Every 2.0s: oc get deployments,pods ...
NAME READY UP-TO-DATE AVAILABLE AGE
...
deployment.apps/mysql-pod 1/1 1 1 60s
...
NAME READY STATUS RESTARTS AGE
...
pod/mysql-pod-6fddbbf94f-2pghj 1/1 Running 0 60s
...
<ctrl>+c (to end the watch)
g. Review the current deployment manifest:
[admin@rocp ~]$ oc get deployment mysql-pod -o yaml
<view output>
h. Confirm still in the git working folder, and delete the current running deployment:
[admin@rocp ~]$ pwd
.../gitlab.mindwatering.net/mwdev/mysql-deployment/
[admin@rocp ~]$ oc delete -f .
<view components deleted>
OpenShift CLI (oc) Common Commands and Options:
Login to OpenShift:
[admin@rocp ~]$ oc login <server_url> --token=<your_token>
Get Cluster Information
[admin@rocp ~]$ oc status
Get Projects
[admin@rocp ~]$ oc projects
Switch to a Project
[admin@rocp ~]$ oc project <project_name>
List All Pods
[admin@rocp ~]$ oc get pods
Describe a Pod
[admin@rocp ~]$ oc describe pod <pod_name>
Get Pod Logs
[admin@rocp ~]$ oc logs <pod_name>
Execute a Command Inside a Pod
[admin@rocp ~]$ oc exec <pod_name> -- <command>
List All Deployments
[admin@rocp ~]$ oc get deployments
View Deployment Details
[admin@rocp ~]$ oc describe deployment <deployment_name>
List All DeploymentConfigs
[admin@rocp ~]$ oc get deploymentconfigs
View DeploymentConfig Details
[admin@rocp ~]$ oc describe deploymentconfig <deployment_config_name>
Trigger a New Deployment Rollout (Kubernetes Deployment)
- Restart a Kubernetes Deployment to trigger a new rollout.
[admin@rocp ~]$ oc rollout restart deployment/<deployment_name>
Trigger a New Deployment Rollout (OpenShift DeploymentConfig)
- Trigger a new rollout using an OpenShift DeploymentConfig.
[admin@rocp ~]$ oc rollout latest <deployment_config_name>
Scale Down and Up to Rollout (Kubernetes Deployment)
- Trigger a new rollout by scaling down and then back up.
[admin@rocp ~]$ oc scale deployment <deployment_name> --replicas=0
[admin@rocp ~]$ oc scale deployment <deployment_name> --replicas=1
List Services
[admin@rocp ~]$ oc get svc
Describe a Service
[admin@rocp ~]$ oc describe svc <service_name>
List Routes
[admin@rocp ~]$ oc get routes
Describe a Route
[admin@rocp ~]$ oc describe route <route_name>
List Builds
[admin@rocp ~]$ oc get builds
Start a New Build
[admin@rocp ~]$ oc start-build <build_name>
List Image Streams
[admin@rocp ~]$ oc get is
Describe an Image Stream
[admin@rocp ~]$ oc describe is <image_stream_name>
Scale a Deployment
[admin@rocp ~]$ oc scale --replicas=<number_of_replicas> deployment/<deployment_name>
Scale a DeploymentConfig
[admin@rocp ~]$ oc scale --replicas=<number_of_replicas> deploymentconfig/<deployment_config_name>
Delete an Application
[admin@rocp ~]$ oc delete all -l app=<application_name>
Expose a Service as a Route
[admin@rocp ~]$ oc expose svc/<service_name>
Kubernetes Kustomize:
Kustomize Overview:
- As a standalone tool, customizes Kubernetes objects through a kustomization file
- Makes declarative changes to application configurations and components while preserving the original base YAML files
- Group in a directory the Kubernetes resources that constitute your application, and then use Kustomize to copy and adapt these resource files to your environments and clusters.
- Starting with Kubernetes 1.14, kubectl supports declarative management of Kubernetes objects using kustomization files; oc integrates the Kustomize tool as well.
- Features:
- - Generating resources from other sources
- - Setting cross-cutting fields for resources
- - Composing and customizing collections of resources
Kustomize File Structure:
Add a kustomization.yaml file to your code's base directory. The kustomization.yaml file has a resources field that lists all resource files. As the name implies, all resources in the base directory form a common resource set. A kustomization.yaml file can compose a base application from those common resources by referring to one or more directories as bases.
Below is a very basic example showing the contents of a kustomization:
[admin@rocp ~]$ tree ~/kustomcodedemo/base/
base
├── configmap.yaml
├── deployment.yaml
├── secret.yaml
├── service.yaml
├── route.yaml
└── kustomization.yaml
[admin@rocp ~]$ cat ~/kustomcodedemo/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- configmap.yaml
- deployment.yaml
- secret.yaml
- service.yaml
- route.yaml
To render/view resources contained in kustomization:
[admin@rocp ~]$ kubectl kustomize ~/kustomcodedemo/base/
<review resources/yaml>
To apply the kustomization(s), use kubectl apply with the -k option:
[admin@rocp ~]$ kubectl apply -k ~/kustomcodedemo/base/
Kustomize Overlays:
- Overlays are declarative YAML artifacts, or patches, that override general settings without modifying the original base files
Overlay example:
Notes:
- The overlay folder is a peer of the base folder
- The overlay folders contain relative references to the base code (e.g. ../../base).
- The dev overlay kustomization.yaml changes the namespace to dev-env.
- The test overlay kustomization.yaml changes the namespace to test-env, and contains the testing patches
- The prod overlay kustomization.yaml contains one patch that loads the file patch.yaml, and sets allowNameChange: true so that patch.yaml can change the name.
[admin@rocp ~]$ tree ~/kustomcodedemo/
base
├── configmap.yaml
├── deployment.yaml
├── secret.yaml
├── service.yaml
├── route.yaml
└── kustomization.yaml
overlay
├── development
│   └── kustomization.yaml
├── testing
│   └── kustomization.yaml
└── production
    ├── kustomization.yaml
    └── patch.yaml
[admin@rocp ~]$ cat ~/kustomcodedemo/overlay/development/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: dev-env
resources:
- ../../base
[admin@rocp ~]$ cat ~/kustomcodedemo/overlay/testing/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: test-env
patches:
- patch: |-
    - op: replace
      path: /metadata/name
      value: mysql-pod-test
  target:
    kind: Deployment
    name: mysql-pod
- patch: |-
    - op: replace
      path: /spec/replicas
      value: 15
  target:
    kind: Deployment
    name: mysql-pod
resources:
- ../../base
commonLabels:
  env: test
[admin@rocp ~]$ cat ~/kustomcodedemo/overlay/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: prod-env
patches:
- path: patch.yaml
  target:
    kind: Deployment
    name: mysql-pod
  options:
    allowNameChange: true
resources:
- ../../base
commonLabels:
  env: prod
[admin@rocp ~]$ cat ~/kustomcodedemo/overlay/production/patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-pod-prod
spec:
  replicas: 5
Apply the production overlay:
[admin@rocp ~]$ kubectl apply -k ~/kustomcodedemo/overlay/production
deployment.apps/mysql-pod-prod created
...
Delete the testing overlay:
[admin@rocp ~]$ oc delete -k ~/kustomcodedemo/overlay/testing
<view pod containers deleted>
Configuration and Sensitive Data with Kustomize:
ConfigMaps and Secrets store configuration or sensitive data that other Kubernetes objects, such as Deployment pods, use. The source of truth for ConfigMaps and Secrets is usually external to the cluster, such as a .properties file or an SSH key file. Kustomize has secretGenerator and configMapGenerator fields, which generate a Secret or ConfigMap from files or literals.
Using ConfigMaps Notes:
- All entries in an application.properties become a single key in the ConfigMap generated.
- Each variable in the .env file becomes a separate key in the ConfigMap generated.
- ConfigMaps can also be generated from literal key-value pairs; add an entry to the literals list in configMapGenerator.
Using Secrets Notes:
- Generate Secrets from files or literal key-value pairs.
- To generate a Secret from a file, add an entry to the files list in secretGenerator
Using ConfigMaps:
Example configMapGenerator that loads from an application.properties file:
[admin@rocp ~]$ vi ~/kustomcodemap/base/application.properties
FOO=Bar
<esc>:wq (to save)
[admin@rocp ~]$ vi ~/kustomcodemap/base/kustomization.yaml
configMapGenerator:
- name: example-configmap-1
  files:
  - application.properties
<esc>:wq (to save)
To validate your configMap generation:
[admin@rocp ~]$ kubectl kustomize ~/kustomcodemap/base/
apiVersion: v1
data:
  application.properties: |
    FOO=Bar
kind: ConfigMap
metadata:
  name: example-configmap-1-1abcd7891e
Example configMapGenerator that loads from an .env file:
[admin@rocp ~]$ vi ~/kustomcodemap/base/.env
FOO=Bar
<esc>:wq (to save)
[admin@rocp ~]$ vi ~/kustomcodemap/base/kustomization.yaml
configMapGenerator:
- name: example-configmap-1
  envs:
  - .env
<esc>:wq (to save)
To validate your configMap generation:
[admin@rocp ~]$ kubectl kustomize ~/kustomcodemap/base/
apiVersion: v1
data:
  FOO: Bar
kind: ConfigMap
metadata:
  name: example-configmap-1-23abcd123a
Example of a configMap from a literal key-value pair:
[admin@rocp ~]$ vi ~/kustomcodemap/base/kustomization.yaml
configMapGenerator:
- name: example-configmap-2
  literals:
  - FOO=Bar
<esc>:wq (to save)
To validate your configMap generation:
[admin@rocp ~]$ kubectl kustomize ~/kustomcodemap/base/
apiVersion: v1
data:
  FOO: Bar
kind: ConfigMap
metadata:
  name: example-configmap-2-a1abcde2ab
Example of application.properties, deployment, with a generated ConfigMap:
[admin@rocp ~]$ vi ~/kustomcodemap/base/application.properties
FOO=Bar
<esc>:wq (to save)
[admin@rocp ~]$ vi ~/kustomcodemap/base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: my-app
        volumeMounts:
        - name: config
          mountPath: /config
      volumes:
      - name: config
        configMap:
          name: example-configmap-1
<esc>:wq (to save)
[admin@rocp ~]$ vi ~/kustomcodemap/base/kustomization.yaml
resources:
- deployment.yaml
configMapGenerator:
- name: example-configmap-1
  files:
  - application.properties
<esc>:wq (to save)
To validate your configMap generation:
[admin@rocp ~]$ kubectl kustomize ~/kustomcodemap/base/
apiVersion: v1
data:
  application.properties: |
    FOO=Bar
kind: ConfigMap
metadata:
  name: example-configmap-1-g4hk9g2ff8
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-app
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - image: my-app
        name: app
        volumeMounts:
        - mountPath: /config
          name: config
      volumes:
      - configMap:
          name: example-configmap-1-g4hk9g2ff8
        name: config
Example of configMapGenerator with all three types (file, envs, and literal key):
[admin@rocp ~]$ cd ~/kustomconfigmaps/
[admin@rocp ~]$ vi application.properties
Day=Monday
Enabled=True
<esc>:wq (to save)
[admin@rocp ~]$ vi configmap2.env
Greet=Welcome
Enable=True
<esc>:wq (to save)
[admin@rocp ~]$ cat kustomization.yaml
...
configMapGenerator:
- name: configmap-props
  files:
  - application.properties
- name: configmap-envs
  envs:
  - configmap2.env
- name: configmap-literals
  literals:
  - name="configmap-literal"
  - description="literal key-value pair"
...
[admin@rocp ~]$ kubectl kustomize .
apiVersion: v1
data:
  application.properties: |
    Day=Monday
    Enabled=True
kind: ConfigMap
metadata:
  name: configmap-props-5g2mh569b5
---
apiVersion: v1
data:
  Enable: "True"
  Greet: Welcome
kind: ConfigMap
metadata:
  name: configmap-envs-92m84tg9kt
---
apiVersion: v1
data:
  description: literal key-value pair
  name: configmap-literal
kind: ConfigMap
metadata:
  name: configmap-literals-k7g7d5bffd
---
...
Using Secrets:
Example Secret that loads from a password.txt file:
[admin@rocp ~]$ vi ~/kustomcodeshh/base/password.txt
username=admin
password=reallygoodpassword
<esc>:wq (to save)
[admin@rocp ~]$ vi ~/kustomcodeshh/base/kustomization.yaml
secretGenerator:
- name: example-secret-1
  files:
  - password.txt
<esc>:wq (to save)
Example Secret that loads from a literal list:
[admin@rocp ~]$ vi ~/kustomcodeshh/base/kustomization.yaml
secretGenerator:
- name: example-secret-2
  literals:
  - username=admin
  - password=reallygoodpassword
<esc>:wq (to save)
To validate your Secret generation:
[admin@rocp ~]$ kubectl kustomize ~/kustomcodeshh/base/
apiVersion: v1
data:
  password: cmVhbGx5Z29vZHBhc3N3b3Jk
  username: YWRtaW4=
kind: Secret
metadata:
  name: example-secret-2-t52t6g96d8
type: Opaque
Note: the Secret data values in the generator output are base64-encoded.
To apply:
[admin@rocp ~]$ kubectl apply -k ~/kustomcodeshh/base/
<confirm created okay>
View the created pod:
[admin@rocp ~]$ oc get all
<view containers>
Example of secret generators with all three types (file, envs, and literal key):
[admin@rocp ~]$ cd ~/kustomsecrets2/
[admin@rocp ~]$ cat kustomization.yaml
...
secretGenerator:
- name: secret-file
  files:
  - password.txt
- name: secret-envs
  envs:
  - secret-mysql.env
- name: secret-literal
  literals:
  - MYSQL_ADMIN_PASSWORD=postgres
  - MYSQL_DB=mysql
  - MYSQL_USER=user
  - MYSQL_PASS=reallygoodpassword
configMapGenerator:
- name: db-config
  literals:
  - DB_HOST=database
  - DB_PORT=5432
...
ConfigMap generatorOptions:
- Alter the default behavior of Kustomize generators.
- Workload resources, e.g. deployments, do not detect changes to configuration maps and secrets unless the name changes.
- By default, a kustomize configMapGenerator and a secretGenerator append a hash suffix to the name of the generated resource(s), which means the name changes each apply, which means the deployment updates.
- Use generatorOptions with disableNameSuffixHash: true to disable the hash suffix
Example of configMapGenerator without a hash each time:
[admin@rocp ~]$ vi ~/kustomdisablegeneratedhash/kustomization.yaml
...
configMapGenerator:
- name: my-configmap
  literals:
  - name="configmap-nohash"
  - description="literal key-value pair"
generatorOptions:
  disableNameSuffixHash: true
  labels:
    type: generated-disabled-suffix
  annotations:
    note: generated-disabled-suffix
...
[admin@rocp ~]$ kubectl kustomize ~/kustomdisablegeneratedhash/
apiVersion: v1
data:
  description: literal key-value pair
  name: configmap-nohash
kind: ConfigMap
metadata:
  annotations:
    note: generated-disabled-suffix
  labels:
    type: generated-disabled-suffix
  name: my-configmap
...
Deploy Packaged Templates
Templates Overview:
- A Kubernetes custom resource that describes a set of Kubernetes resource configurations
- Have varied use cases, and can create any Kubernetes resource
- Can have parameters
- Processing templates and provided parameter values creates a set of Kubernetes resources.
- - To run oc process on templates in a namespace, you must have write permissions on that namespace.
- The Template resource is a Kubernetes extension that Red Hat OpenShift (OCP) provides.
- - The Cluster Samples Operator populates templates and image streams in the "openshift" namespace.
- - The operator can be set during installation to opt-out of adding templates.
- - The operator can be set to restrict the list of templates provided.
- - Unprivileged users can read the templates in the "openshift" namespace by default. They can extract a template from the openshift namespace and create a copy in their own projects, where they have wider permissions; by copying a template to a project, they can use the oc process command on the template in that project's namespace (see the example after this list).
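A hedged sketch of that copy-and-process flow (the project name myproject is hypothetical, and <template-name> stands for any template in the openshift namespace):
[admin@rocp ~]$ oc get template <template-name> -o yaml -n openshift > my-template.yaml
[admin@rocp ~]$ oc create -f my-template.yaml -n myproject
[admin@rocp ~]$ oc process <template-name> -n myproject -p SOME_PARAM=value | oc apply -f -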
Methods to Create Kubernetes Related Resources Using Templates:
- Create using the CLI
- Upload a template to a project or the global template library using the web console
Login and display the list of templates in the openshift namespace:
[admin@rocp ~]$ oc login -u myadminid -p <mypassword> https://api.ropc.mindwatering.net:6443
[admin@rocp ~]$ oc get templates -n openshift
NAME DESCRIPTION PARAMETERS OBJECTS
...
Examine one of the templates listed:
[admin@rocp ~]$ oc describe template <template-name> -n openshift
Notes:
- The describe includes:
- - Name, Namespace, Created, Labels, Description, Annotations, and other meta-data
- - Parameters
- - - Name, Description, Required (or not)
- - Object Labels
- - Message (notes)
- - Objects (resources that make up the template, e.g. configMaps, secrets, services, etc.)
View just the parameters needed for the template from the list:
[admin@rocp ~]$ oc process --parameters <template-name> -n openshift
NAME ... GENERATOR VALUE
...
View just the parameters needed for a template contained in a file (folder):
[admin@rocp ~]$ oc process --parameters -f template-name.yaml
NAME ... GENERATOR VALUE
...
To view the manifest file for the template:
[admin@rocp ~]$ oc get template <template-name> -o yaml -n openshift
apiVersion: template.openshift.io/v1
kind: Template
labels:
  template: <template-name>
metadata:
...
Create a new project and an app/deployment in the new project:
a. Create the new project:
[admin@rocp ~]$ oc new-project mysql-fromtemplate
Now using project "packaged-templates" on server ...
b. Create the new app from template with passed credentials (parameters):
Create a new app from a template and passing parameters like:
oc new-app --template=<template-name> -p PARAMETER_ONE=valueone -p PARAMETER_TWO=valuetwo
Notes:
- The oc new-app command cannot update an app deployed previously from an earlier version of a template.
- The oc process command can apply parameters to a template, either from templates in the cluster or from local files on the workstation, to produce manifests that can be used to deploy or update the application.
[admin@rocp ~]$ oc new-app --template=mysql-template -p MYSQL_USER=user1 -p MYSQL_PASSWORD=mypasswd
--> Deploying template "mysql-fromtemplate/mysql-template" to project mysql-fromtemplate
...
--> Success
Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
'oc expose service/mysql'
Run 'oc status' to view your app.
b-alt.: Create the new app from template with credentials in a file as the parameter:
[admin@rocp ~]$ vi mysqlUserCred.env
MYSQL_USER=user1
MYSQL_PASSWORD=mypasswd
IMAGE=registry.ocp4.mindwatering.net:8443/mysql:latest
<esc>:wq (to save)
[admin@rocp ~]$ oc new-app --template=mysql-template --param-file=mysqlUserCred.env
--> Deploying template "mysql-fromtemplate/mysql-template" to project mysql-fromtemplate
...
--> Success
Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
'oc expose service/mysql'
Run 'oc status' to view your app.
c. Wait for deployment to become ready:
[admin@rocp ~]$ watch oc get pods
NAME READY STATUS RESTARTS AGE
...
mysql-1-deploy 0/1 Completed 0 84s
...
<cntrl>c (to exit watch)
d. Expose the service and view the app in a browser:
[admin@rocp ~]$ oc expose service/mysql
<view result port>
[admin@rocp ~]$ oc status
<review app info>
[admin@rocp ~]$ oc get routes
<view host and port - we can then open this host and port in a web browser>
e. Update the application from a new test2 version of the application image:
[admin@rocp ~]$ cp mysqlUserCred.env mysqlUserCredTest.env
[admin@rocp ~]$ vi mysqlUserCredTest.env
...
IMAGE=registry.ocp4.mindwatering.net:8443/mysql:test2
<esc>:wq (to save)
f. Process the template with the new test2 image and confirm the new image version with a diff:
[admin@rocp ~]$ oc process mysql-template --param-file=mysqlUserCredTest.env | oc diff -f -
...
- name: INIT_DB
+ value: "false"
...
+ image: registry.ocp4.mindwatering.net:8443/mysql:test2
...
Notes:
- The template image has been switched from latest to test2.
- The db will not be reinitialized since the INIT_DB parameter was not included. (The default is false.)
g. Apply the new test2 image to the existing app deployed:
[admin@rocp ~]$ oc process mysql-template --param-file=mysqlUserCredTest.env | oc apply -f -
secret/mysql configured
deployment.apps/do123-mysql-app configured
service/do123-mysql-app unchanged
route.route.openshift.io/do123-mysql-app unchanged
h. Use watch to verify the deployment is running:
[admin@rocp ~]$ watch oc get pods
NAME READY STATUS RESTARTS AGE
do123-mysql-app-a5f101bc2-abcde 1/1 Running 0 60s
mysql-1-abcde 1/1 Running 0 53m
mysql-1-deploy 0/1 Completed 0 53m
Helm Applications - Helm Charts
Overview: Deploying and updating applications from resource manifests packaged as Helm charts.
Helm:
- Open-source application for managing K8s app lifecycles
- Helm Chart:
- - A package that describes a set of K8s resources to be deployed
- - Defines values customizable during deployment
- - Contains hooks executed at different points during installation and updates.
- - - Automate tasks for applications that are more complex than purely manifest-based deployments can handle
- Includes functions to distribute charts and updates
- Not as complex a model as K8s Operators
- Release: the deployment / app result of deploying/running the chart
- Versions: the chart can have multiple versions for upgrades and app fixes
- Minimum parameters for chart release/installation:
- - Deployment target namespace
- - Default values to override
- - Release name
- Helm charts distributed by/as:
- - Folders/files
- - Archives
- - Container images
- - Repository URLs
Notes:
- Typically, a Helm chart release does not create a namespace, and namespaced resources in the chart omit a namespace declaration. Helm uses the namespace passed (parameter) for the deployment, and Helm creates namespaced resources in "this" namespace.
- When installing a release, Helm creates a secret with the release details. If the secret is deleted or corrupted, Helm cannot operate with the former release anymore. (Secret is kind/type: helm.sh/release.v1)
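To see those release secrets, a hedged example (the mwsql namespace matches the Helm example later in these notes; owner=helm is the label Helm 3 places on its release secrets, which are named sh.helm.release.v1.<release-name>.v<revision>):
[admin@rocp ~]$ oc get secrets -n mwsql -l owner=helm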
Chart Structure:
example/
├── Chart.yaml
├── templates
│   └── example.yaml
└── values.yaml
Chart.yaml: Contains chart metadata, including the name, version, maintainer, and repository source of the chart.
- View this info with the command helm show chart <chartname> (e.g. example)
templates folder: Contains the resources/manifest files that make up the app / deployment
- can contain any K8s resources
- can include a namespace, or non-namespaced
- typically use the release name with the type of resource as a suffix releasename-deploy, releasename-sa, releasename-i
values.yaml: Contains the default values (like parameters) for the chart
- View these values with the command: helm show values <chartname> (e.g. example)
[admin@rocp ~]$ helm show chart mwMySQL
apiVersion: v1
description: A Helm chart for MW MySQL
name: mwMySQL
version: 0.0.1
maintainers:
- email: devaccount@mindwatering.net
  name: MW Developer
sources:
- https://git.mindwatering.net/mwmysql
[admin@rocp ~]$ helm show values mwMySQL
...
image:
  repository: "mwmysql"
  tag: "1.1.10"
  pullPolicy: IfNotPresent
...
route:
  enabled: true
  host: null
  targetPort: http
...
resources: {}
...
Notes:
- All chart values can be overridden / configured using a yaml file. (e.g. --values valuesoverride.yaml )
- If you want to customize the route, set the route.host key in a values override file.
Dry run a release, and install a Helm chart release:
a. Login:
[admin@rocp ~]$ oc login -u myadminid -p <mypassword> https://api.ropc.mindwatering.net:6443
Login successful ...
b. Create new OC project:
[admin@rocp ~]$ oc new-project mysql-chartsdeployment
Now using project "mysql-deployment" on server ...
c. Create values override file:
[admin@rocp ~]$ vi valuesoverride.yaml
image:
  repository: registry.ocp4.mindwatering.net:8443/mysql
  tag: test2
route:
  host: mysql-test2.apps.mindwatering.net
<esc>:wq (to save)
d. Perform dry-run:
[admin@rocp ~]$ helm install mwmysql mwMySQL --namespace mwsql --dry-run --values valuesoverride.yaml
The release preview includes 3 sections:
- The Metadata: name, last deployed date/time, namespace, status, revision, hooks, manifest, etc.
- The K8s resources: Deployment, ConfigMaps, ReplicaSets, Service, ServiceAccount, Ingress, etc.
- Notes : Instructions from the developer/owner for deployment processes for release, upgrades, management ports, etc.
Note:
- The install, and the upgrade command below, install the latest chart version unless overridden with the --version option
- The helm install syntax supports --values valuesoverride.yaml or -f valuesoverride.yaml
e. If the dry-run looks correct, install w/o the dry-run parameter:
[admin@rocp ~]$ helm install mwmysql mwMySQL --namespace mwsql --values valuesoverride.yaml
f. View the new Helm deployment along with previous ones in the mwmysql namespace:
[admin@rocp ~]$ helm list --namespace mwsql
NAME NAMESPACE REVISION ... STATUS CHART APP VERSION
mwmysql    mwsql    1    ...    deployed    mwMySQL-0.0.1    ...
g. Confirm all pods running:
[admin@rocp ~]$ oc get pods -n mwsql
<confirm the STATUS column shows Running>
h. View route:
[admin@rocp ~]$ oc get route --namespace mwsql
NAME HOST/PORT
mwMySQL ...
Note:
- Use -n or --namespace to limit
- Use -A or --all-namespaces to view all namespaces; without a namespace option, oc get pods and oc get route show only the current project/namespace
Bring up the app in a browser and verify working.
Upgrade a Helm Release:
- Upgrade using the <release-name> used above
- Upgrade defaults to the latest chart version if not overridden with --version
- Always use the dry-run as conflicts or issues upgrading may occur
[admin@rocp ~]$ helm upgrade mwmysql mwMySQL --namespace mwsql --dry-run --values valuesoverride.yaml
<confirm output is desired, then re-run w/o the --dry-run again>
View Helm Release History:
- View releases by the <release-name> used above
[admin@rocp ~]$ helm history mwmysql
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
1 <date/time> superseded ...
2 <date/time> deployed ...
Revert to previous Helm release (number):
- Revert using a <release-name> and a <revision-number>
- Use after a history view so you have the revision number needed
Warning:
- Use warily - Rollbacks can be very very bad when the new version of the app is not compatible with the previous application version (e.g. db schema changes, app login changes, etc.)
[admin@rocp ~]$ helm rollback mwmysql 1
Rollback was a success! Happy Helming!
Helm Repositories:
- Use the helm repo command to set-up a Helm Chart repository
- Repo commands:
- - helm repo add: add a new repository
- - helm repo list: list repositories
- - helm repo update: update repository(s)
- - helm repo remove: remove a repository
- - helm search repo: searches all configured repos and lists the available charts (by default, the command displays only the latest version of each chart; use --versions to list all versions)
Note:
- The helm repo command updates the local configuration and does not affect any running cluster resources
- The helm repo add and helm repo remove commands update the following config file: ~/.config/helm/repositories.yaml on the administrative workstation
Use the following syntax:
helm repo add <repo-name> <repo-url>
[admin@rocp ~]$ helm repo add mwmysql-charts https://mysqlcharts.mindwatering.net
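After adding a repository, refresh the local cache and search it (a hedged example using the repository added above; the chart names returned depend on what the repository publishes):
[admin@rocp ~]$ helm repo update
[admin@rocp ~]$ helm search repo mwmysql-charts --versions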
OCP Authentication and Authorization:
Authentication Components:
User:
- Entities that interact with the API server; a User identifies a person and is assigned roles either directly or via group membership
- The authentication layer assigns unauthenticated users the system:anonymous virtual user
Identity:
- Keeps a record of successful authentication attempts from a user and his/her identity provider
- Data about the source of authentication is stored on the identity resource
Service Account:
- Enables application communication with the API independently of user credentials
- Access is given to applications or other components without borrowing a user's credentials
Group:
- Represents a specific set of users
- Has Users as members of the group
- Authorization policies map groups to assign permissions
- OCP has system groups
- OCP sets up virtual groups provisioned automatically by the cluster
- The authentication layer assigns unauthenticated users to the system:unauthenticated virtual group
Role:
- Defines API operations that a user has permissions to perform on specific resource types
- Assigning roles grants permissions to users, groups, and service accounts
- K8s and OCP use role-based-access-control (RBAC) policies to determine user privileges
OCP API Authentication Methods Supported:
- OAuth access tokens
- X.509 client certificates
The OCP Authentication Operator
- Provides the Authentication operator, which runs an OAuth server.
- Provides OAuth access tokens to users when they attempt to authenticate to the API. An identity provider must be configured and available to the OAuth server.
- Uses an identity provider to validate the identity of the requester, reconciles the local user with the identity provider, and creates the OAuth access token for the user.
- Automatically creates identity and user resources after a successful login.
Note:
- In native K8s, OAuth and OpenID Connect (OIDC) authentication is provided by an OAuth proxy and an identity provider (e.g. Keycloak)
Identity Providers:
- The OCP OAuth server can be configured to use one or multiple identity providers at the same time.
- An OAuth custom resource is updated with the identity provider(s)
- Includes the more common identity providers:
- - HTPasswd: Validates usernames and passwords against a secret that stores credentials that are generated by using the htpasswd command.
- - Keystone: Enables shared authentication with an OpenStack Keystone v3 server.
- - LDAP: Configures the LDAP identity provider to validate usernames and passwords against an LDAPv3 server via simple bind authentication.
- - GitHub: Configures a GitHub identity provider to validate usernames and passwords against GitHub or the GitHub Enterprise OAuth authentication server.
- - OpenID Connect: Integrates with an OpenID Connect identity provider by using an Authorization Code Flow.
Authenticating as a Cluster Administrator:
- OCP provides two ways to authenticate API requests with cluster administrator privileges:
- - Use the kubeconfig file which embeds an X.509 client certificate that never expires
- - Authenticate as the kubeadmin virtual user. Successful authentication grants an OAuth access token.
Steps:
- Configure an identity provider
- Create any additional users and/or groups w/in the identity provider
- Grant them different access levels by assigning roles to the users/groups
Authenticating with the X.509 Certificate
- During installation, the OpenShift installer creates a unique kubeconfig file in the auth directory.
- The kubeconfig file contains specific details and parameters for the CLI to connect a client to the correct API server, including an X.509 certificate.
- Add KUBECONFIG to the user's shell start-up (.bashrc, etc.) to make it available to the K8s kubectl and OCP oc commands
Export path and X.509 authentication using KUBECONFIG file:
[admin@rocp ~]$ export KUBECONFIG=/home/admin/auth/kubeconfig
[admin@rocp ~]$ oc get nodes
<no password prompt - nodes information is presented>
Alternately, you can pass the path as part of the command:
(But other than an exam question, why would you?)
[admin@rocp ~]$ oc --kubeconfig /home/admin/auth/kubeconfig get nodes
<no password prompt - nodes information is presented>
Authenticating with the kubeadmin Virtual User
- The kubeadmin virtual user created at the end of the OCP installation
- Dynamically generates a unique kubeadmin password for the cluster
- Stores the kubeadmin secret in the kube-system namespace; the secret contains the hashed password for the kubeadmin user
- The kubeadmin user has cluster administrator privileges.
- The login path, username, and password for console access are printed near the end of the installation log
...
INFO Access the OpenShift web-console here:
https://api.ropc.mindwatering.net
INFO Login to the console with user: kubeadmin, password: abCD_EfgH_1234_9876_dcba
After configuring an identity provider, the local kubeadmin user can be deleted "for better security":
[admin@rocp ~]$ oc delete secret kubeadmin -n kube-system
<view confirmation>
Warning:
If you lose the KUBECONFIG file's X.509 certificate and you delete the kubeadmin user, there is no other way to administer your cluster when the identity provider has an outage. You will have 100% security in your security analysis, and 0% cluster management productivity.
HTPasswd Identity Provider:
- Uses the Linux htpasswd utility to create a temporary htpasswd file, and applies the file to an OCP/K8s secret
- Requires httpd-tools and the oc binaries installed
Create HTPasswd User Steps:
a. Prepare htpasswd file
b. Create secret in OCP
c. Create htpasswd identity provider custom resource (cr)
a. Create htpasswd file:
[admin@rocp ~]$ cd ~
[admin@rocp ~]$ htpasswd -c -B -b ocpuser1.htpasswd ocpuser1 SuperHumanPwd123
Notes:
-c = create file
-B = bcrypt password hashing
-b = take the password from the command line rather than prompting for it interactively
ocpuser1.htpasswd = name of the file (for the -c)
ocpuser1 = login id of the user
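To add more users to the same file before creating the secret, omit -c so the existing file is updated rather than overwritten (a hedged example; ocpuser2 and its password are hypothetical):
[admin@rocp ~]$ htpasswd -B -b ocpuser1.htpasswd ocpuser2 AnotherGoodPwd456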
b. Create the secret:
[admin@rocp ~]$ oc login -u myadminid -p <mypassword> https://api.ropc.mindwatering.net:6443
[admin@rocp ~]$ oc create secret generic htpass-ocpuser1-secret --from-file htpasswd=ocpuser1.htpasswd -n openshift-config
secret/htpass-ocpuser1-secret created
htpass-ocpuser1-secret = name of the secret created, used in the mapping CR created next.
c. Create the CR file:
Note:
- Two methods to approach updating the identity provider, either:
- - Download the current OAuth config as YAML, edit and add the new HTPasswd section
-or-
- - If a new cluster, create a file with just what's needed for HTPasswd authentication
[admin@rocp ~]$ oc get oauth cluster -o yaml > ~/htpassocpuser1.yaml
Edit the downloaded yaml adding the new spec identityProviders section, or create a file with these contents.
[admin@rocp ~]$ vi htpassocpuser1.yaml
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: mwlocal
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-ocpuser1-secret
<esc>:wq (to save)
Note:
name: mwlocal = the identity provider name; it prefixes the identities created in OCP for users who log in through this provider
mappingMethod: claim = with the default claim value, a login fails if the user name is already mapped to an identity from another identity provider
fileData name: htpass-ocpuser1-secret = the secret just created above
If the yaml config was downloaded and edited, then do a replace:
[admin@rocp ~]$ oc replace -f ~/htpassocpuser1.yaml
- or -
If the yaml file is new, you can alternately do an apply:
[admin@rocp ~]$ oc apply -f ~/htpassocpuser1.yaml
The system will perform a redeployment of the openshift-authentication pods, watch:
[admin@rocp ~]$ watch oc get pods -n openshift-authentication
Try out the new login:
Browser --> api.ropc.mindwatering.net --> Instead of kube:admin (button), choose mwlocal (button), and login as the ocpuser1 user.
Update HTPasswd User Steps:
a. Update the htpasswd file
b. Update the secret in OCP
c. Watch the openshift-authentication pods redeployment
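Note: if the original htpasswd file is no longer on the workstation, it can first be recovered from the secret before step a (a hedged sketch; oc extract writes a file named after the secret key, htpasswd, into the target directory):
[admin@rocp ~]$ oc extract secret/htpass-ocpuser1-secret -n openshift-config --to ~/ --confirm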
a. Update the htpasswd file:
[admin@rocp ~]$ cd ~
[admin@rocp ~]$ htpasswd -b ocpuser1.htpasswd ocpuser1 SuperHumanPwd1234
b. Update the secret in OCP:
[admin@rocp ~]$ oc login -u myadminid -p <mypassword> https://api.ropc.mindwatering.net:6443
[admin@rocp ~]$ oc set data secret/htpass-ocpuser1-secret --from-file htpasswd=ocpuser1.htpasswd -n openshift-config
Note:
- After updating the secret in OCP, the openshift-authentication namespace pods are redeployed
- Monitor redeployment as desired/needed
c. Watch the openshift-authentication pods redeploy:
[admin@rocp ~]$ watch oc get pods -n openshift-authentication
Delete HTPasswd User Steps:
a. Delete user from htpasswd
b. Delete the password from the secret
c. Remove the user resource
d. Remove the identity resource
a. Delete a user HTPasswd file (credential):
[admin@rocp ~]$ oc login -u myadminid -p <mypassword> https://api.ropc.mindwatering.net:6443
[admin@rocp ~]$ htpasswd -D ocpuser1.htpasswd ocpuser1
b. Delete the password from the secret:
[admin@rocp ~]$ oc set data secret/htpass-ocpuser1-secret --from-file htpasswd=ocpuser1.htpasswd -n openshift-config
c. Remove the user resource:
[admin@rocp ~]$ oc delete user ocpuser1
user.user.openshift.io "ocpuser1" deleted
d. Remove the identity resource:
- Confirm the identity provider name/prefix:
[admin@rocp ~]$ oc get identities | grep ocpuser1
mwlocal:ocpuser1 mwlocal ocpuser1 ocpuser1 ...
- Remove ocpuser1
[admin@rocp ~]$ oc delete identity mwlocal:ocpuser1
identity.user.openshift.io "mwlocal:ocpuser1" deleted
Add a user to the cluster-admin role (cluster administration privileges):
[admin@rocp ~]$ oc adm policy add-cluster-role-to-user cluster-admin ocpuser1
clusterrole.rbac.authorization.k8s.io/cluster-admin added: "ocpuser1"
Note:
- When you execute the oc adm policy add-cluster-role-to-user cluster-admin new-admin command, a naming collision can occur with an existing cluster role binding object.
- In that case, the system creates a new object and appends -n to the name, where n is a numeral that starts at 0 and increments.
To view the new cluster role binding:
- Get all cluster-admin bindings:
[admin@rocp ~]$ oc get clusterrolebinding | grep ^cluster-admin
- Display the most recent -n binding:
[admin@rocp ~]$ oc describe clusterrolebinding cluster-admin-<n>
Role-based Access Control (RBAC):
Overview:
- Technique for managing access to resources in K8s/OCP
- Determines whether a user can perform certain actions within the cluster or project
- Role types:
- - cluster
- - local (by project)
- The two-level hierarchy of role types enables:
- - Reuse across multiple projects through the cluster roles
- - Enables customization inside individual projects through local roles
- - Authorization evaluation uses both the cluster role bindings and the local role bindings to allow or deny an action on a resource
Authorization Process:
The authorization process is managed by three RBAC Objects: rules, roles, and bindings.
RBAC Objects:
Rule
- Allowed actions for objects or groups of objects
Role
- Sets of rules. Users and groups can be associated with multiple roles
Binding
- Assignment of users or groups to a role
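A minimal sketch of these objects expressed as manifests (the pod-reader role, the ocpuser1 subject, and the mwwordpress namespace are illustrative):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: mwwordpress
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: mwwordpress
subjects:
- kind: User
  name: ocpuser1
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io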
RBAC Scope
Red Hat OpenShift Container Platform (RHOCP) defines two groups of roles and bindings depending on the user's scope and responsibility: cluster roles and local roles.
RBAC levels:
- Cluster RBAC: roles and bindings that apply across all projects.
- Local RBAC: roles and bindings that are scoped to a given project. Local role bindings can reference both cluster and local roles.
Managing RBAC with the CLI:
- Cluster administrators perform these commands
- Rules defined by an action and a resource
- Use the oc adm policy command to add and remove cluster roles and namespace roles
- - To add a cluster role to a user, use the add-cluster-role-to-user subcommand
- - To add a new user, use oc create user
- Query access with the who-can subcommand
Add ocpmgr1 as a cluster admin:
[admin@rocp ~]$ oc adm policy add-cluster-role-to-user cluster-admin ocpmgr1
To remove ocpmgr1 from being a cluster admin:
[admin@rocp ~]$ oc adm policy remove-cluster-role-from-user cluster-admin ocpmgr1
To query who can perform an action on a resource (for example, who can delete the ocpuser1 user resource):
[admin@rocp ~]$ oc adm policy who-can delete user ocpuser1
Out-of-the-box Default Roles:
cluster-admin
- Have superuser access to the cluster resources. These users can perform any action on the cluster, and have full control of all projects
cluster-status
- Have access to cluster status information.
cluster-reader
- Have access to view most of the objects but cannot modify them
self-provisioner (cluster role)
- Can create their own projects (projectrequests resources)
- By default, the self-provisioners cluster role binding associates the self-provisioner cluster role with the system:authenticated:oauth group
- Users who authenticate through OAuth are automatically members of the system:authenticated:oauth group, so they can self-provision projects by default
admin (technically a cluster role, but limited by -n to a project namespace)
- Manage all project resources, including granting access to other users to access "this" project
- Uses the oc policy command to add and remove "this" project's namespace roles
- Gives access to project resources including quotas and limit ranges in "this" project
- Gives ability to create/deploy applications in "this" project
basic-user (technically a cluster role, but limited by -n to a project namespace)
- Have read access to "this" project
edit (technically a cluster role, but limited by -n to a project namespace)
- Gives a user sufficient access to act as a developer inside the project, but working under the access limits that a project admin (role) configured
- Can create, change, and delete common application resources on "this" project, such as services and deployments
- Cannot act on management resources such as limit ranges and quotas
- Cannot manage access permissions to "this" project
view (technically a cluster role, but limited by -n to a project namespace)
- Can view "this" project resources
- Cannot modify "this" project resources.
As project admin, give a user basic-user access to my current project:
- oc policy add-role-to-user <role-name> <username> -n <project-namespace>
[admin@rocp ~]$ oc policy add-role-to-user basic-user ocpuser1 -n mwwordpress
Note:
- The self-provisioner cluster role is NOT the self-provisioners cluster role binding
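To stop regular users from provisioning their own projects, the commonly documented approach is to remove the binding from the virtual group (a hedged example; the self-provisioners binding may be restored automatically unless its autoupdate annotation is also disabled):
[admin@rocp ~]$ oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth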
User Types:
- The User object represents a user who is granted permissions by adding roles to that user, or to that user's groups, via role bindings
- User must authenticate before they can access OpenShift Container Platform
- API requests with no authentication or invalid authentication are authenticated as requests by the anonymous system user
- After successful authentication, a policy determines what the user is authorized to do
Three Types:
Regular users
- Represents a Person with access to the platform
- Interactive users who are represented with the User object
System users
- Created automatically when the infrastructure is defined, mainly for the infrastructure to securely interact with the API
- Include:
- - A cluster administrator (with access to everything)
- - A per-node user
- - Users for routers and registries and various others
- - An anonymous system user is used (by default) for unauthenticated requests
- Have a system: prefix
- - Examples: system:admin, system:openshift-registry, and system:node:rocp.mindwatering.net
Service accounts
- System users that are associated with projects
- Consequently, also have the system: prefix, along with the project namespace - system:serviceaccount:<project-namespace>:<serviceaccount>
- Typically used by workloads to invoke Kubernetes APIs
- Some created automatically during project creation
- Project administrators create additional service accounts to grant extra privileges to workloads
- By default, service accounts have no roles - grant roles to service accounts to enable workloads to use specific APIs
- Represented with the ServiceAccount object
- - Examples: system:serviceaccount:default:deployer and system:serviceaccount:mwit:builder
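For example, a project admin can create a service account and grant it the view role so that a workload can read API objects; a minimal sketch (the mwapp-sa name is illustrative):
[admin@rocp ~]$ oc create serviceaccount mwapp-sa -n mwwordpress
serviceaccount/mwapp-sa created
[admin@rocp ~]$ oc policy add-role-to-user view -z mwapp-sa -n mwwordpress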
Group Management
- Represents a set of users
- Cluster administrators use the oc adm groups command to add groups or to add users to groups
Add new cluster mwdevelopers group:
[admin@rocp ~]$ oc adm groups new mwdevelopers
Adds the ocpdevuser1 user to the mwdevelopers group:
[admin@rocp ~]$ oc adm groups add-users mwdevelopers ocpdevuser1
List the role bindings that grant the self-provisioner cluster role (that is, which users/groups can provision new project namespaces):
[admin@rocp ~]$ oc get clusterrolebinding -o wide | grep -E 'ROLE|self-provisioner'
NAME ROLE ... GROUPS ...
self-provisioners ClusterRole/self-provisioner ... system:authenticated:oauth
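A common hardening step is to remove self-provisioning from all authenticated users and grant it only to a specific group; a sketch, reusing the mwdevelopers group from above:
[admin@rocp ~]$ oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth
[admin@rocp ~]$ oc adm policy add-cluster-role-to-group self-provisioner mwdevelopers
Note:
- The self-provisioners cluster role binding is marked for auto-update; OCP may warn that the change can be reverted unless the rbac.authorization.kubernetes.io/autoupdate annotation on the binding is set to "false".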
Setup Wordpress Project RBAC Example:
- As the cluster admin:
a. Login:
[admin@rocp ~]$ oc login -u myadminid -p <mypassword> https://api.rocp.mindwatering.net:6443
Login successful.
b. Create project
[admin@rocp ~]$ oc new-project mwwordpress
Now using project "mwwordpress" on server "https://api.ropc.mindwatering.net:6443".
...
c. Give the team leader, mwdevadmin, the admin role for the project:
[admin@rocp ~]$ oc policy add-role-to-user admin mwdevadmin
clusterrole.rbac.authorization.k8s.io/admin added: "mwdevadmin"
d. Create the developers group and add the developer:
[admin@rocp ~]$ oc adm groups new mwwpdevgroup
group.user.openshift.io/mwwpdevgroup created
[admin@rocp ~]$ oc adm groups add-users mwwpdevgroup ocpdevuser1
group.user.openshift.io/mwwpdevgroup added: "ocpdevuser1"
e. Create the testing group and add the testers:
[admin@rocp ~]$ oc adm groups new mwwptestgroup
group.user.openshift.io/mwwptestgroup created
[admin@rocp ~]$ oc adm groups add-users mwwptestgroup ocptestuser1
group.user.openshift.io/mwwptestgroup added: "ocptestuser1"
f. Review the groups:
[admin@rocp ~]$ oc get groups -n mwwordpress
NAME USERS
...
mwwpdevgroup ocpdevuser1
mwwptestgroup ocptestuser1
...
- As the project admin:
a. Login:
[admin@rocp ~]$ oc login -u mwdevadmin -p <mypassword> https://api.rocp.mindwatering.net:6443
Login successful.
...
Using project "mwwordpress".
b. Set edit privileges for the developers:
[admin@rocp ~]$ oc policy add-role-to-group edit mwwpdevgroup
clusterrole.rbac.authorization.k8s.io/edit added: "mwwpdevgroup"
c. Set view only/read privileges to the testers:
[admin@rocp ~]$ oc policy add-role-to-group view mwwptestgroup
clusterrole.rbac.authorization.k8s.io/view added: "mwwptestgroup"
d. Review the RBAC assignments:
[admin@rocp ~]$ oc get rolebindings -o wide -n mwwordpress | grep -v '^system:'
NAME ROLE AGE USERS GROUPS SERVICEACCOUNTS
admin ClusterRole/admin 60s admin
admin-0 ClusterRole/admin 45s mwdevadmin
edit ClusterRole/edit 30s mwwpdevgroup
view ClusterRole/view 15s mwwptestgroup
- As developer, deploy an Apache HTTP Server:
[admin@rocp ~]$ oc login -u ocpdevuser1 -p <password>
Login successful.
...
Using project "mwwordpress".
[admin@rocp ~]$ oc new-app --name mwwordpress httpd:2.4
...
--> Creating resources ...
deployment.apps "mwwordpress" created
service "mwwordpress" created
--> Success
...
Note:
- When a person does not have access, the command fails:
"Error from server (Forbidden): ..."
Network Security:
Summary:
- Allow and protect network connections to applications inside an OCP cluster
- - Protect External Traffic with TLS
- - Protect Internal Traffic with TLS
- - - Configure Network Policies (internal between apps or between projects)
- Restrict network traffic between projects and their pods
- Configure and use service certificates
External Network Methods:
- service types: NodePort and LoadBalancer
- API types: Ingress and Route
Internet --> Service (NodePort / LoadBalancer, round-robin load balancing) --> Project app pods
(or)
Internet --> Route / Ingress --> Service --> Project app pods
Encrypting Routes
- Routes can be either encrypted or unencrypted
- - Unencrypted routes are the simplest to configure, because they require no key or certificates
- - Encrypted routes encrypt traffic to and from the pods.
- - Encrypted routes support several types of transport layer security (TLS) termination to serve certificates to the client
- - Encrypted routes specify the TLS termination type of the route
TLS Termination Types (OpenShift Platform Route Encryption):
- Edge
- Passthrough
- Re-encryption
Edge Termination Type:
- TLS termination occurs at the router, before the traffic is routed to the pods
- The router serves the TLS certificates; the routes are configured with the TLS certificate and key
- If TLS is not set up on the route, OCP assigns its own default certificate to the router for TLS termination
- Because TLS is terminated at the router, connections from the router to the internal network endpoints are not encrypted
- For better performance, routers send requests directly to the pods (the service endpoints) rather than through the service's cluster IP
Client --> HTTPS --> Edge route (router - tls.crt / tls.key encryption) --> HTTP --> Container/Pod (Application)
Edge Example:
[admin@rocp ~]$ oc create route edge --service api-frontend --hostname api.apps.mindwatering.net --key api.key --cert api.crt
Notes:
- If --key and --cert are omitted, the OCP ingress operator provides a certificate from the internal CA.
- - The oc describe route output will not reference the certificate; to view the certificate provided, run: oc get secrets/router-ca -n openshift-ingress-operator -o yaml
Passthrough Termination Type:
- Encrypted traffic is sent straight to the destination pod without TLS termination from the router
- Application is responsible for serving certificates for the traffic
- Passthrough is a common method for supporting mutual authentication between the application and a client that accesses it
Client --> HTTPS --> Pass-through route (router) --> HTTPS --> Container/Pod --> Application (tls.crt / tls.key encryption)
Mounts: /usr/local/etc/ssl/certs from tls-certs (ro)
    volumeMounts:
    - name: tls-certs
      readOnly: true
      mountPath: /usr/local/etc/ssl/certs
...
Re-encryption Termination Type:
- Re-encryption is a variation on edge termination, whereby the router terminates TLS with a certificate, and then re-encrypts its connection to the endpoint, which likely has a different certificate
- - The external certificate uses a public name: my-app.mindwatering.net
- - The internal certificate uses an internal service name: my-app.project-namespace.svc.cluster.local
- The full path of the connection is encrypted, even over the internal network
- The router uses health checks to determine the authenticity of the host
- The internal certificate is created by the service-ca controller, which generates and signs service certificates for internal traffic
- - Creates a secret populated with a signed certificate and key
- - Deployment mounts the secret as a volume to use the signed certificate/key
Client --> HTTPS --> Edge route (router - tls.crt / tls.key encryption) --> HTTPS (.local certificate) --> Container/Pod --> Application (tls.crt / tls.key .local encryption - service-ca certificate and key)
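A re-encrypt route is created much like an edge route; a sketch with illustrative file and service names (when the pod serves a certificate signed by the internal service CA, --dest-ca-cert can usually be omitted because the router already trusts that CA):
[admin@rocp ~]$ oc create route reencrypt mwwordpress-reenc --service mwwordpress-https --hostname mwwordpress.apps.mindwatering.net --cert api.crt --key api.key --dest-ca-cert internal-ca.crt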
Expose an unencrypted Apache HTTP app:
[admin@rocp ~]$ oc expose svc mwwordpress-http --hostname mwwordpress-http.apps.mindwatering.net
route.route.openshift.io/mwwordpress-http exposed
[admin@rocp ~]$ oc get routes
NAME HOST/PORT PATH SERVICES PORT ...
mwwordpress-http mwwordpress-http.apps.mindwatering.net mwwordpress-http 8080 ...
browser --> mwwordpress-http.apps.mindwatering.net (http)
Expose an encrypted Apache HTTP app:
(Reuses the insecure service created above)
[admin@rocp ~]$ oc create route edge mwwordpress-https --service mwwordpress-http --hostname mwwordpress-https.apps.mindwatering.net
route.route.openshift.io/mwwordpress-https created
browser --> mwwordpress-https.apps.mindwatering.net (https)
Notes:
- The certificate is signed by the OCP internal CA, so it will not be trusted unless the CA's parent certificate is imported into the workstation/browser
- The traffic is encrypted at the route edge, but traffic from the router to the service/pod is still unencrypted
Delete the encrypted route:
[admin@rocp ~]$ oc delete route mwwordpress-https
route.route.openshift.io "mwwordpress-https" deleted
Expose Passthrough Encrypted Apache HTTP app:
a. Create the certificate and key files:
- Create the key file:
[admin@rocp ~]$ openssl genrsa -out mwwordpressinternal.key 4096
- Create the certificate signing request (CSR):
[admin@rocp ~]$ openssl req -new -key mwwordpressinternal.key -out mwwordpressinternal.csr -subj "/C=US/ST=North Carolina/L=Wake Forest/O=Mindwatering/CN=mwwordpress-https.apps.mindwatering.net"
- Create a passphrase file and populate it with the CA key's passphrase (used by -passin below):
[admin@rocp ~]$ vi mwwordpressinternalpassphrase.txt
abcd123abcd123abcd123...
<esc>:wq (to save)
- Create the signed certificate file (the -extfile option points to an OpenSSL extensions file, e.g. a mwwordpressinternal.ext file containing the subjectAltName entries for mwwordpress-https.apps.mindwatering.net):
[admin@rocp ~]$ openssl x509 -req -in mwwordpressinternal.csr -passin file:mwwordpressinternalpassphrase.txt -CA mwinternal-CA.pem -CAkey mwinternal-CA.key -CAcreateserial -out mwwordpressinternal.crt -days 1825 -sha256 -extfile mwwordpressinternal.ext
Certificate request self-signature ok
subject=C = US, ST = North Carolina, L = Wake Forest, O = Mindwatering, CN = mwwordpress-https.apps.mindwatering.net
b. Create the K8s secret that stores the crt and key files:
[admin@rocp ~]$ oc create secret tls mwwordpressinternal-certs --cert mwwordpressinternal.crt --key mwwordpressinternal.key
secret/mwwordpressinternal-certs created
c. Update the mwwordpress deployment for the new passthrough ingress:
- Export the current deployment to a yaml file:
[admin@rocp ~]$ oc get deployment mwwordpress -o yaml > mwwordpress.yaml
- Edit mwwordpress.yaml and update the volume mounts section and adjust the service port as needed:
[admin@rocp ~]$ vi mwwordpress.yaml
apiVersion: apps/v1
kind: Deployment
...
        volumeMounts:
        - name: mwwordpressinternal-certs
          readOnly: true
          mountPath: /usr/local/etc/ssl/certs
...
      volumes:
      - name: mwwordpressinternal-certs
        secret:
          secretName: mwwordpressinternal-certs
---
apiVersion: v1
kind: Service
...
  ports:
  - name: mwwordpress-https
    port: 8443
    protocol: TCP
    targetPort: 8443
...
<esc>:wq (to save)
d. Apply (redeploy) the deployment:
[admin@rocp ~]$ oc apply -f mwwordpress.yaml
Note:
- If creating a new deployment, instead do:
[admin@rocp ~]$ oc create -f mwwordpress.yaml
e. Verify the mwwordpressinternal-certs secret is mounted:
[admin@rocp ~]$ oc set volumes deployment/mwwordpress
deployment/mwwordpress
  secret/mwwordpressinternal-certs as mwwordpressinternal-certs
    mounted at /usr/local/etc/ssl/certs
f. Create the passthrough route:
[admin@rocp ~]$ oc create route passthrough mwwordpress-https --service mwwordpress-https --port 8443 --hostname mwwordpress-https.apps.mindwatering.net
route.route.openshift.io/mwwordpress-https created
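To confirm the route was admitted and that TLS now terminates inside the pod, a quick check (curl -k skips CA verification, which is expected here because the certificate was signed by the internal mwinternal CA):
[admin@rocp ~]$ oc get route mwwordpress-https
[admin@rocp ~]$ curl -k https://mwwordpress-https.apps.mindwatering.net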
Configure Network Policies:
Review:
- Restricts network traffic between pods and projects in the cluster
- Restricts by configuring isolation policies for individual pods
- Control network traffic between pods by using labels instead of IP addresses
- Create logical zones in the SDN that map to your organization network zones
- - With logical zones, the location of running pods becomes irrelevant, because with network policies, you can separate traffic regardless of where it originates
- To manage network communication between pods in two namespaces:
- - Assign a label to the namespace that needs access to another namespace
- - Create a network policy that selects these labels
- - Reuse a network policy to select labels on individual pods to create ingress or egress rules
- Use selectors under spec to assign which destination pods are affected by the policy, and selectors under spec.ingress to assign which source pods are allowed.
- Do not require administrative privileges, giving developers more control over the applications within their project(s)
- Are K8s-native resources, managed with the oc command
- Do not block traffic between pods that use host networking (such as ingress router pods) on the same node
Important:
- If a pod does not match any network policies, then OpenShift does NOT restrict traffic to that pod.
- When creating an environment to allow network traffic only explicitly, you must include a deny-all policy.
- If a deny-all policy exists, traffic from the OCP ingress routers and the OCP monitoring stack is blocked as well, unless policies explicitly allow it (see example c4b below)
a. Assign network labels to the namespace:
Notes:
- Assumes namespaces specify a VLAN network, and internal departments are segmented/isolated on VLANS.
- Example:
mwnet02 namespace = mwnet02 network VLAN
mwnet03 namespace = mwnet03 network VLAN
[admin@rocp ~]$ oc label namespace mwnet02 network=mwnet02
[admin@rocp ~]$ oc label namespace mwnet03 network=mwnet03
...
b. Allow two internal department's apps to communicate with each other:
- mwitapp on mwnet02
- mwcsrapp on mwnet03
Notes:
- podSelector: If empty ({}), the policy applies to all pods in the namespace; if specified, the policy applies only to the pods that match the labels
- ingress: Defines the list of ingress traffic rules that this policy allows
- from: Limits the traffic source; the source is NOT limited to only "this" project/namespace
- ports: Limits the destination ports on the selected pods
Important:
- This is similar in concept to an internal NAT and the old DMZ idea, where the NAT side has access to the DMZ, but the DMZ has no access back to the NAT side (other than replies to traffic/requests coming from the NAT side)
- To allow traffic in both directions, the other project/namespace owner, or developer, needs to create the companion policy
- Network policies manage security between namespaces that typically sit on the same network segment; even though the namespaces share the same VLAN/network in reality, the policies still isolate their traffic
c1. Allow pods labeled role=mwinternal-apps in the #3 Mindwatering tenant (mwnet03) to communicate with pods labeled deployment=mwitapp in the #2 tenant (mwnet02), and only on port 9443:
[admin@rocp ~]$ vi mwnet02-mwnet03-netpolicy_mwitapp.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: mwnet02-policy
  namespace: mwnet02
spec:
  podSelector:
    matchLabels:
      deployment: mwitapp
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network: mwnet03
      podSelector:
        matchLabels:
          role: mwinternal-apps
    ports:
    - port: 9443
      protocol: TCP
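Apply and verify the policy as usual (file name from the example above):
[admin@rocp ~]$ oc apply -f mwnet02-mwnet03-netpolicy_mwitapp.yaml
networkpolicy.networking.k8s.io/mwnet02-policy created
[admin@rocp ~]$ oc get networkpolicy -n mwnet02
[admin@rocp ~]$ oc describe networkpolicy mwnet02-policy -n mwnet02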
c2. Allow all traffic from the mwnet03 network into the mwnet02 namespace:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: mwnet02-policy
  namespace: mwnet02
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network: mwnet03
c3. Allow traffic into the mwdev project from pods labeled app=mobile that run in namespaces labeled network=mwintdev:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: mwdev-policy
  namespace: mwdev
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network: mwintdev
      podSelector:
        matchLabels:
          app: mobile
c4a. Deny all ingress traffic to all pods in the namespace (default deny-all):
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny
spec:
  podSelector: {}
c4b. After the deny-all policy above, the policies below re-allow traffic from the OpenShift ingress routers and the monitoring stack:
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-ingress
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          policy-group.network.openshift.io/ingress: ""
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-monitoring
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network.openshift.io/policy-group: monitoring
Zero-Trust Environments:
- Assume every interaction begins in an untrusted state where users can only access files and objects specifically allowed
- All communication must be encrypted (TLS)
- Client applications must verify authenticity of servers
- A trusted CA (typically internal) signs the certificates that are used to encrypt traffic
- - Apps trust each other's authenticity because their certificates are signed by the same parent CA
Important:
- Native K8s does not encrypt internal traffic by default
- OCP encrypts network traffic between the nodes and the control plane themselves
- OCP does not automatically encrypt all traffic between applications
OCP Certificate Authority (CA):
- Uses the service-ca controller to generate and sign service certificates for internal traffic
- The service-ca controller creates a secret for the deployment and populates it with a signed certificate and key, which the deployment mounts as a volume (see service-ca further above)
Review: Deployment TLS Set-up Steps:
a. Create the secret
b. Mount the secret in the deployment
a. Create an app secret:
- login:
[admin@rocp ~]$ oc login -u myadminid -p <mypassword> https://api.rocp.mindwatering.net:6443
Login successful ...
- switch to the project/namespace:
[admin@rocp ~]$ oc project app-helloworld
Now using project "app-helloworld" on server ...
- create the secret:
[admin@rocp ~]$ oc annotate service app-helloworld service.beta.openshift.io/serving-cert-secret-name=app-helloworld-secret
- validate created okay:
[admin@rocp ~]$ oc describe service app-helloworld
<review and verify the following lines>
...
Annotations: service.beta.openshift.io/serving-cert-secret-name: app-helloworld-secret
service.beta.openshift.io/serving-cert-signed-by: openshift-service-serving-signer@1234567899
...
[admin@rocp ~]$ oc describe secret app-helloworld-secret
<review and verify the following lines>
Name: app-helloworld-secret
Namespace: app-helloworld
...output omitted...
Type: kubernetes.io/tls
Data
====
tls.key: 1234 bytes
tls.crt: 2345 bytes
...
b. Mount in the app-helloworld deployment:
- Export the current deployment to a yaml file:
[admin@rocp ~]$ oc get deployment app-helloworld -o yaml > app-helloworld.yaml
- Edit app-helloworld.yaml and update the volume mounts section:
[admin@rocp ~]$ vi app-helloworld.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: app-helloworld
  annotations:
    ...
  creationTimestamp: null
  labels:
    app: app-helloworld-pod
  name: app-helloworld
spec:
  template:
    spec:
      containers:
      - name: helloworld
        ports:
        - containerPort: 9443
        volumeMounts:
        - name: app-helloworld-volume
          mountPath: /etc/pki/nginx/
      volumes:
      - name: app-helloworld-volume
        secret:
          defaultMode: 420
          secretName: app-helloworld-secret
          items:
          - key: tls.crt
            path: server.crt
          - key: tls.key
            path: private/server.key
[admin@rocp ~]$ oc patch deployment app-helloworld --type merge --patch-file app-helloworld.yaml
- or -
[admin@rocp ~]$ oc apply -f app-helloworld.yaml
<view and optionally watch the redeploy>
c. Verify using a web browser or openssl client:
[admin@rocp ~]$ oc exec no-ca-bundle -- openssl s_client -connect app-helloworld.app-helloworld.svc:443
<verify that the certificate chain is signed by the service CA>
depth=1 CN = openshift-service-serving-signer@1234567899
CONNECTED(00000004)
---
Certificate chain
0 s:CN = server.network-svccerts.svc
i:CN = openshift-service-serving-signer@1234567899
1 s:CN = openshift-service-serving-signer@1234567899
i:CN = openshift-service-serving-signer@1234567899
---
...output omitted...
verify error:num=19:self signed certificate in certificate chain
DONE
d. Swap the previous configuration, by reconfiguring the app deployment (see step b above) to use a config-map. Create ca-bundle configmap:
[admin@rocp ~]$ oc create configmap ca-bundle
configmap/ca-bundle created
[admin@rocp ~]$ oc annotate configmap ca-bundle service.beta.openshift.io/inject-cabundle=true
configmap/ca-bundle annotated
[admin@rocp ~]$ oc get configmap ca-bundle -o yaml
<confirm that the configmap contains the CERTIFICATE>
...
data:
service-ca.crt: |
-----BEGIN CERTIFICATE-----
...
e. Swap out the end of the spec volume section, and add the configmap to the deployment:
[admin@rocp ~]$ vi app-helloworld.yaml
...
spec:
  ...
      volumes:
      - name: trusted-ca
        configMap:
          defaultMode: 420
          name: ca-bundle
          items:
          - key: service-ca.crt
            path: tls-ca-bundle.pem
<esc>:wq (to save)
f. Apply the deployment again to use the new way of delivering the certs:
[admin@rocp ~]$ oc apply -f app-helloworld.yaml
<view and optionally watch the redeploy>
g. Again, verify using a web browser or openssl client:
[admin@rocp ~]$ oc exec no-ca-bundle -- openssl s_client -connect app-helloworld.app-helloworld.svc:443
<verify that the certificate chain is signed by the service CA>
depth=1 CN = openshift-service-serving-signer@1234567899
CONNECTED(00000004)
---
Certificate chain
0 s:CN = server.network-svccerts.svc
i:CN = openshift-service-serving-signer@1234567899
1 s:CN = openshift-service-serving-signer@1234567899
i:CN = openshift-service-serving-signer@1234567899
---
...output omitted...
verify error:num=19:self signed certificate in certificate chain
DONE
CA Certificates - Client Service Application Configuration:
- For a client service application to verify the validity of a certificate, the application needs the CA bundle that signed that certificate.
- Use the service-ca controller to inject the CA bundle
- - Apply the service.beta.openshift.io/inject-cabundle=true annotation to an object
- - Apply this annotation to configuration maps, API services, custom resource definitions (CRD), mutating webhooks, and validating webhooks
- Certificates are valid for 26 months by default and are automatically rotated after 13 months
- - A restart of the pod's service automatically injects the newly rotated CA bundle
- - After rotation, there is a 13-month grace period during which the original CA certificate remains valid while awaiting a service restart
- - Each pod that is configured to trust the original CA certificate must be restarted in some way
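A simple way to pick up a rotated CA bundle is to restart the affected workloads; for example, to restart every deployment in a project (namespace name is illustrative):
[admin@rocp ~]$ oc rollout restart deployment -n app-helloworld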
Alternatives to Service Certificates:
- service mesh to encrypt service-to-service communication (e.g. Red Hat OpenShift Service Mesh)
- cert-manager Operator to delegate the certificate signing process to a trusted external service and to renew those certificates
Applying to a Configuration Map:
- Apply the above annotation to a configuration map to inject the CA bundle into the data: { service-ca.crt } field
- The service-ca controller replaces all data in the selected configuration map with the CA bundle
- - Use a dedicated configuration map to prevent overwriting existing data
Example:
[admin@rocp ~]$ oc annotate configmap ca-bundle service.beta.openshift.io/inject-cabundle=true
configmap/ca-bundle annotated
Applying to an API Service:
Apply the above annotation to an API service to inject the CA bundle into the spec.caBundle field.
Applying to a Custom Resource Definition (CRD):
Apply the above annotation to a CRD to inject the CA bundle into the spec.conversion.webhook.clientConfig.caBundle field.
Applying to a Mutating or Validating Webhook:
Apply the above annotation to a mutating or validating webhook to inject the CA bundle into the clientConfig.caBundle field.
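The annotation is applied the same way for each of these object types; a sketch with placeholder resource names:
[admin@rocp ~]$ oc annotate apiservice <apiservice-name> service.beta.openshift.io/inject-cabundle=true
[admin@rocp ~]$ oc annotate crd <crd-name> service.beta.openshift.io/inject-cabundle=true
[admin@rocp ~]$ oc annotate mutatingwebhookconfiguration <webhook-name> service.beta.openshift.io/inject-cabundle=true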
Manual Key Rotation:
- Process immediately invalidates the former service CA certificate
- Immediately restart all pods after performing a manual rollover
[admin@rocp ~]$ oc delete secret mycertificate-secret
secret/mycertificate-secret deleted
[admin@rocp ~]$ oc delete secret/mysigning-key -n openshift-service-ca
secret/mysigning-key deleted
[admin@rocp ~]$ oc rollout restart deployment myapp -n myappnamespace
<confirm deployment pods restart with watch as desired>
Adding Liveness and Readiness Probes for Passthrough TLS:
Notes:
- Reusing the app-helloworld example from earlier
- Adding a secret that contains the TLS certificate files (in the ~/certs subdirectory) to the deployment
a. Create the secret:
[admin@rocp ~]$ oc create secret tls app-helloworld-cert --cert certs/helloworld.pem --key certs/helloworld.key
secret/app-helloworld-cert created
b. Update the app to use this secret and add the additional secret volume mount:
[admin@rocp ~]$ vi app-helloworld.yaml
...
spec:
  template:
    spec:
      containers:
      - name: helloworld
        ports:
        - containerPort: 9443
        volumeMounts:
        - name: app-helloworld-cert
          mountPath: /etc/pki/nginx/helloworld
        - name: trusted-ca
          mountPath: /etc/pki/ca-trust/extracted/pem
      volumes:
      - name: app-helloworld-cert
        secret:
          defaultMode: 420
          secretName: app-helloworld-cert
      - name: trusted-ca
        configMap:
          defaultMode: 420
          name: trusted-ca
          items:
          - key: service-ca.crt
            path: tls-ca-bundle.pem
<esc>:wq (to save)
c. Update the app again with the liveness and readiness probe additions:
...
spec:
  template:
    spec:
      containers:
      - name: helloworld
        ports:
        - containerPort: 9443
        readinessProbe:
          httpGet:
            port: 9443
            path: /readyz
            scheme: HTTPS
        livenessProbe:
          httpGet:
            port: 9443
            path: /livez
            scheme: HTTPS
        env:
        - name: TLS_ENABLED
          value: "true"
        - name: PAGE_URL
          value: "https://<page_url_target>:9443"
        volumeMounts:
        - name: app-helloworld-cert
          mountPath: /etc/pki/nginx/helloworld
        - name: trusted-ca
          mountPath: /etc/pki/ca-trust/extracted/pem
      volumes:
      - name: app-helloworld-cert
        secret:
          defaultMode: 420
          secretName: app-helloworld-cert
      - name: trusted-ca
        configMap:
          defaultMode: 420
          name: trusted-ca
          items:
          - key: service-ca.crt
            path: tls-ca-bundle.pem
<esc>:wq (to save)
d. Apply to update/rolling redeployment of the deployment:
[admin@rocp ~]$ oc apply -f app-helloworld.yaml
<view and optionally watch the redeploy>
e. Expose the passthrough route for the helloworld service outside the cluster:
[admin@rocp ~]$ oc create route passthrough app-helloworld-https --service app-helloworld --port 9443 --hostname helloworld.apps.ocp4.mindwatering.net
route.route.openshift.io/app-helloworld-https created
f. Again, verify using a web browser or openssl client:
[admin@rocp ~]$ oc exec no-ca-bundle -- openssl s_client -connect helloworld.apps.ocp4.mindwatering.net:443
<verify that the certificate chain is served as expected>
depth=1 CN = openshift-service-serving-signer@1234567899
CONNECTED(00000004)
---
Certificate chain
0 s:CN = server.network-svccerts.svc
i:CN = openshift-service-serving-signer@1234567899
1 s:CN = openshift-service-serving-signer@1234567899
i:CN = openshift-service-serving-signer@1234567899
---
...output omitted...
verify error:num=19:self signed certificate in certificate chain
DONE
Non-HTTP/HTTPS TCP Applications (no SNI):
- Routes and ingresses multiplex many HTTP/HTTPS services on shared IPs by using the Host header or TLS SNI; plain TCP services typically require a dedicated IP address or port per service because that multiplexing is not available
Review HTTP/HTTPS TLS:
- Expose Services with Ingresses and Routes when HTTP/HTTPS
- - ClusterIP (available to pods within the cluster)
- - ClusterIP with the external IP feature (reachable from outside the cluster)
- - Ingresses and Routes, or NodePort and LoadBalancer services, for external cluster access
MetalLB Component:
- Provides LoadBalancer services for clusters that do not run on a cloud provider (on-prem / bare-metal clusters, or cluster nodes running on VMs)
- Use ping and traceroute commands for testing external cluster IPs/ports and internal cluster IPs/ports
- MetalLB modes:
- - Layer 2
- - Border Gateway Protocol (BGP)
- Installed with Operator Lifecycle Manager:
- - Install the operator
- - Configure using custom resource definitions (typically includes IP address range)
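After the operator install, the address range is typically defined with an IPAddressPool custom resource and advertised with an L2Advertisement custom resource; a minimal Layer 2 sketch (API version, pool name, and address range are assumptions - verify against the installed operator's documentation):
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: websvr-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.22.30-192.168.22.50
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: websvr-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - websvr-pool
Note:
- On OCP, the MetalLB Operator also expects a MetalLB resource in the metallb-system namespace before it deploys the controller and speaker pods.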
Example Generic LB YAML web server:
[admin@rocp ~]$ cat ./websvr/websvr-lb.yaml
apiVersion: v1
kind: Service
metadata:
  name: websvr-service
  namespace: websvr
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    name: websvr
  type: LoadBalancer
Example Generic LB imperative web server:
[admin@rocp ~]$ kubectl expose deployment websvr --port=8080 --target-port=8080 --name=websvr-service --type=LoadBalancer
-or-
[admin@rocp ~]$ oc expose deployment/websvr --port=8080 --target-port=8080 --name=websvr-service --type=LoadBalancer
To view run one of the following commands:
[admin@rocp ~]$ kubectl describe services websvr-service
<view IPs, Port, TargetPort, NodePort, and Endpoints>
-or-
[admin@rocp ~]$ kubectl get service -n websvr
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
websvr-service LoadBalancer 172.99.11.77 192.168.22.33 8080:31234/TCP
-or-
[admin@rocp ~]$ oc get service websvr-service -o jsonpath="{.status.loadBalancer.ingress}"
[{"ip":"192.168.22.33"}]
.spec.externalTrafficPolicy:
- Preserving the client source IP is a routing policy that can be specified for applications that need to maintain per-client state
- Changes the default "Cluster" routing policy to "Local"
- Cluster obscures the client source IP and may cause a second hop to another node, but typically has good overall load-spreading
- Local preserves the client source IP and avoids a second hop for LoadBalancer and NodePort type Services, but risks potentially imbalanced traffic spreading
[admin@rocp ~]$ cat ./websvr/websvr-lb-local.yaml
apiVersion: v1
kind: Service
metadata:
  name: websvr-service
  namespace: websvr
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    name: websvr
  externalTrafficPolicy: Local
  type: LoadBalancer
Delete a Service with:
[admin@rocp ~]$ oc delete service/websvr-service
service "websvr-service" deleted
Note:
- When a service is deleted, the IP(s) it was using are available to other services.
Multus CNI (Network) Secondary Networks:
- Uses the Multus K8s CNI meta-plugin
- Allows application pods to use custom internal networks rather than the default one, for performance or isolation reasons (or to use both the default and an additional CNI network)
- Allows application pods to be exposed externally using a secondary network (overlay)
Network Attachment Custom Resource Types:
- Host device: Attaches a network interface to a single pod.
- Bridge: Uses an existing bridge interface on the node, or configures a new bridge interface. The pods that are attached to this network can communicate with each other through the bridge, and to any other networks that are attached to the bridge.
- IPVLAN: Creates an IPVLAN-based network that is attached to a network interface.
- MACVLAN: Creates a MACVLAN-based network that is attached to a network interface.
Bridges Notes:
- Network interfaces that can forward packets between different network interfaces that are attached to the bridge
- Virtualization environments often use bridges to connect the network interfaces of virtual machines to the network.
VLAN Notes:
- IPVLAN and MACVLAN are Linux network drivers that are designed for container environments.
- Container environments often use these network drivers to connect pods to the network.
- Although bridge interfaces, IPVLAN, and MACVLAN have similar purposes, they have different characteristics:
- - Including different usage of MAC addresses, filtering capabilities, and other features
- - For example, use IPVLAN instead of MACVLAN in networks with a limited number of available MAC addresses, because IPVLAN uses fewer MAC addresses.
Configuring Secondary Networks:
- Make the network available on the cluster nodes
- Use operators to customize node network configuration
- - Kubernetes NMState network operator, or
- - SR-IOV (Single Root I/O Virtualization) network operator
- - - The SR-IOV network operator configures SR-IOV network devices for improved bandwidth and latency on certain platforms and devices
- With the operators, you define custom resources to describe the specified network configuration, and the operator applies the configuration
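For example, the Kubernetes NMState operator uses a NodeNetworkConfigurationPolicy resource to create a bridge on selected nodes; a minimal sketch (the bridge name, port device, and node selector are assumptions for illustration):
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br-websvr-policy
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
    - name: br-websvr
      description: Bridge for the websvr secondary network
      type: linux-bridge
      state: up
      ipv4:
        enabled: false
      bridge:
        options:
          stp:
            enabled: false
        port:
        - name: ens4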
Example Multus CNI NetworkAttachmentDefinition:
[admin@rocp ~]$ cat ./websvr/multis-vlan-networkattachmentdefinition.yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: macvlan
namespace: websvr
spec:
config: '{
"cniVersion": "0.3.0",
"type": "macvlan",
"master": "eth0",
"mode": "bridge",
"ipam": {
"type": "kube-ovn",
"server_socket": "/run/openvswitch/kube-ovn-daemon.sock",
"provider": "macvlan.websvr"
}
}'
Example NetworkAttachmentDefinition using host-device:
[admin@rocp ~]$ cat ./websvr/hostdev-websvr-networkattachmentdefinition.yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: hostdev.websvr
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "hostdev.websvr",
      "type": "host-device",
      "device": "ens4",
      "ipam": {
        "type": "dhcp"
      }
    }
Note:
- The metadata name and the config name should match.
Example Network adding the NetworkAttachmentDefinition with the spec:
[admin@rocp ~]$ cat ./websvr/hostdev-websvr-network-spec-nad.yaml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  ...
  additionalNetworks:
  - name: websvr
    namespace: websvr
    rawCNIConfig: |-
      {
        "cniVersion": "0.3.1",
        "name": "websvr",
        "type": "host-device",
        "device": "ens4",
        "ipam": {
          "type": "dhcp"
        }
      }
    type: Raw
Note:
- To assign static addresses, change the ipam type from dhcp to static, like:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  ...
  additionalNetworks:
  - name: websvr
    namespace: websvr
    rawCNIConfig: |-
      {
        "cniVersion": "0.3.1",
        "name": "websvr",
        "type": "host-device",
        "device": "ens4",
        "ipam": {
          "type": "static",
          "addresses": [
            {"address": "192.168.123.0/24"}
          ]
        }
      }
    type: Raw
Attaching Secondary Networks to Pods:
- To configure secondary networks, create either:
- - A NetworkAttachmentDefinition resource, or
- - Update the configuration of the cluster network operator to add a secondary network
- Network attachment definitions can create and manage virtual network devices, including virtual bridges
- Network attachment definitions can also perform additional network configuration
- - Virtual network devices attach to existing networks that are configured and managed outside OCP, or
- - Other network attachment definitions use existing network interfaces on the cluster nodes
- Network attachment resources are namespaced, and available only to deployment pods using that namespace
Deployment/pods using a network on the cluster nodes:
[admin@rocp ~]$ cat ./websvr/websvr-lb-cni-clusternetwork.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: websvr
  namespace: websvr
spec:
  selector:
    matchLabels:
      app: websvr
      name: websvr
  template:
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: websvr
      labels:
        app: websvr
        name: websvr
    spec:
      ...
Deployment using both the default K8s network and an additional one:
[admin@rocp ~]$ cat ./websvr/websvr-lb-cni-clusterandadditionalnetwork.yaml
apiVersion: apps/v1
...
spec:
  ...
  template:
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: websvr
        ovn.kubernetes.io/ip_address: 10.12.13.14
        ovn.kubernetes.io/mac_address: 00:00:00:34:6A:B6
        macvlan.websvr.kubernetes.io/ip_address: 172.15.0.115
        macvlan.websvr.kubernetes.io/mac_address: 00:00:00:15:5A:B5
      labels:
        app: websvr
        name: websvr
    spec:
      ...
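To confirm that a pod actually received the secondary interface, inspect the network-status annotation that Multus adds to the pod (pod name is illustrative):
[admin@rocp ~]$ oc get pod <websvr-pod> -n websvr -o jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}'
<review the list of attached networks, interface names, and assigned IPs>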