Overview:
TKGI deployments on AWS, Azure, or vSphere without NSX can be configured with load balancers (LBs) for the following 3 uses:
- Workload LB: Configure one per application workload to enable external access to the inbound routing/services configured on the cluster
- Kubernetes Cluster LB: Configure one per new cluster to run kubectl commands against that cluster
- TKGI API LB: Use to run TKGI CLI commands from a developer or administrator workstation
Notes:
- TKGI with NSX creates the cluster load balancer for CLI access automatically during cluster creation, and workload LBs are configured automatically as workloads are deployed on the Tanzu K8s clusters.
- Without NSX, create the AWS/Azure LB for CLI access before creating the cluster -- that way the external hostname/IP is ready when you create the cluster.
- The TKGI CLI version must match the TKGI cluster version.
- If the K8s control plane node VMs are recreated for any reason, the AWS/Azure cluster load balancers must be updated (Instances section) to point to the new control plane VMs.
- For this example, the cluster is my-cluster, the domain is api.tkgi.mindwatering.net, and the admin id is myadminid.
- As of 2025/10, Azure and vSphere w/NSX support metadata tagging during cluster creation; for AWS, add the tags after creating the LBs and the cluster.
- Tags are case-sensitive.
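Per the AWS tagging note above, a minimal sketch for tagging the Classic LB after creation with the AWS CLI (the LB name and tag key/value are illustrative assumptions from this article's examples):
$ aws elb add-tags --load-balancer-names k8s-master-my-cluster --tags Key=cluster,Value=my-cluster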
A. Create the AWS CLI Load Balancer:
1. Create the Cluster CLI LB in the AWS Management console:
AWS Management Console --> Dots (top menu)
--> Compute (drop-down menu) --> EC2 (secondary menu) --> Pick Region (top-right, if not defaulted)
or
--> EC2 (recents drop-down menu option) --> Pick Region (top-right, if not defaulted)
--> Load Balancing (twistie, left menu) --> Load Balancers (menu option, under heading) --> Create Load Balancer (button)
--> Classic Load Balancer (twistie, open), click Create (button)
- On the Define Load Balancer page, complete:
- - Load Balancer name: <enter a unique name> (e.g. k8s-master-<name-of-cluster>)
- - Under Scheme: Internet-facing (selected - used to be labeled: Create an internal load balancer)
- - Under Network mapping,
- - - VPC: <select the VPC with the Ops Manager> (Used to be Create LB Inside field)
- - - Availability Zones and subnets: <select at least one availability zone, and then one subnet underneath>
- - Under Security Groups, Security groups: <select an existing security group that allows TCP 8443> (firewall zone rules)
- - - If not existing:
- - - - Create a new security group (button/option):
- - - - - Security Group name: <unique name> (e.g. k8s-LB-master-<name-of-cluster>)
- - - - - Protocol: TCP
- - - - - Ports: 8443
- - Configure Security Settings:
- - - SSL warning: <Ignore> (SSL termination is not performed on the LB; TCP is passed through to the Kubernetes API)
- - Configure Health Check:
- - - Ping Port: 8443
- - Under Listeners and routing:
- - - Listener protocol: TCP
- - - Listener port: 8443
- - - Instance protocol: TCP
- - - Instance port: 8443
- - Under Instances, click Add Instances (button) <skip for now -- the control plane VMs do not exist yet; they are registered in step B.1.f>
- - Under Attributes:
- - - Enable cross-zone load balancing: <checked>
- - - Enable connection draining: <checked>
- - - Timeout (draining interval): 300 (seconds)
- - Under Load balancer tags (optional - click twistie to open)
- - - Add new tag (button)
- - - - Enter key and value tags for categorization.
- - Under Review:
- - - Review choices
- - Click Create load balancer (button)
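For scripted setups, the console steps above map roughly to the AWS CLI below -- a minimal sketch, assuming placeholder subnet and security-group IDs and this article's example LB name (the health-check interval/threshold values are illustrative):
$ aws elb create-load-balancer \
    --load-balancer-name k8s-master-my-cluster \
    --listeners "Protocol=TCP,LoadBalancerPort=8443,InstanceProtocol=TCP,InstancePort=8443" \
    --subnets subnet-0123456789abcdef0 \
    --security-groups sg-0123456789abcdef0
$ aws elb configure-health-check \
    --load-balancer-name k8s-master-my-cluster \
    --health-check Target=TCP:8443,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=10
$ aws elb modify-load-balancer-attributes \
    --load-balancer-name k8s-master-my-cluster \
    --load-balancer-attributes '{"CrossZoneLoadBalancing":{"Enabled":true},"ConnectionDraining":{"Enabled":true,"Timeout":300}}'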
B. Create the Tanzu K8s Cluster:
1. Create the K8s cluster and note the control-plane nodes:
a. Retrieve your cluster API domain name:
- Ops Manager --> Tanzu Kubernetes Grid Integrated Edition (tile) --> API Hostname (FQDN)
- Note the hostname for the next step. (e.g. api.tkgi.mindwatering.net)
b. Create cluster:
$ tkgi login -a api.tkgi.mindwatering.net -u myadminid -k
$ tkgi create-cluster my-cluster --external-hostname abcdef134567890a12345b54321c12345.us-east-1.elb.amazonaws.com --plan small --num-nodes 10
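The --external-hostname value is the DNS name of the CLI LB created in section A. A hedged AWS CLI sketch to look it up (the LB name is this article's example):
$ aws elb describe-load-balancers --load-balancer-names k8s-master-my-cluster --query 'LoadBalancerDescriptions[0].DNSName' --output text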
c. Get the cluster IPs for the LB:
$ tkgi cluster my-cluster
<view and note the UUID and Kubernetes Master IP(s) column values>
(e.g. ab1234567cd8de9f0a1b )
d. Get the cluster deployment name (starts with service-instance) using the BOSH CLI:
- SSH to the Ops Manager VM
- Get the tkgi deployments:
$ bosh -e tkgi deployments
<view output, locate the name that begins with service-instance and contains the UUID from step c, get that name for the next step>
(e.g. service-instance-ab1234567cd8de9f0a1b )
e. Using the service instance deployment, get the VM IDs:
$ bosh -e tkgi -d service-instance-ab1234567cd8de9f0a1b vms
<view output, note the VM IDs under the VM CID column>
f. Register the control plane VMs with the cluster CLI LB from section A:
- AWS Management Console --> EC2 --> Load Balancers --> <select the LB> --> Instances --> Edit instances --> <add the control plane VM IDs from step e>
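A hedged AWS CLI equivalent for step f, plus fetching the cluster's kubeconfig before the kubectl steps in section C (the instance IDs are placeholders; the LB name is this article's example):
$ aws elb register-instances-with-load-balancer \
    --load-balancer-name k8s-master-my-cluster \
    --instances i-0aaa1111bbb2222cc i-0ddd3333eee4444ff
$ tkgi get-credentials my-cluster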
C. Deploy a workload:
(VMware Cloud Foundry Nginx deployment example):
1. Verify the deployment manifest:
$ cd ~/nginxsample/
$ cat nginx-lb.yaml
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginx
  name: nginx
spec:
  ports:
  - port: 80
  selector:
    app: nginx
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: nginx
        image: gcr.io/cf-pks-golf/nginx:1.13-alpine
        ports:
        - containerPort: 80
Note:
- Ensure the service type is set to LoadBalancer, and note the name of the service (e.g. nginx) and its port (e.g. 80).
- Note this example deploys 2 pod replicas, with pod anti-affinity so each replica lands on a different worker node.
2. Deploy the Nginx application:
$ kubectl apply -f nginx-lb.yaml
<verify the pod deployments>
3. Get the service using its name:
$ kubectl get svc nginx
<view the external IP address the service was assigned>
4. Verify the deployment service's external IP is working:
$ curl http://<externalnginxip>:80
<confirm page loads>
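To confirm the two replicas landed on separate worker nodes (per the manifest's anti-affinity rule), a quick check with standard kubectl commands; the label selector comes from the example manifest:
$ kubectl rollout status deployment/nginx
$ kubectl get pods -l app=nginx -o wide
<the NODE column should show two different worker nodes>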
D-Option1. Re-deploy the nginx workload with the AWS LB extension:
1. Update the service configuration section of the YAML manifest file and add the AWS LB annotation:
$ vi nginx-lb.yaml
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
  name: nginx
spec:
  ports:
  - port: 80
  selector:
    app: nginx
  type: LoadBalancer
---
...
2. Apply the updated nginx-lb.yaml, which updates the existing service in place and provisions an AWS load balancer for it (the unchanged Deployment's pods are not recreated):
$ kubectl apply -f nginx-lb.yaml
<verify the service updates and an AWS LB external address is assigned>
3. Get the service using its name:
$ kubectl get svc nginx
<view the external LB address the service was assigned>
4. Verify the external IP is working through the new AWS load balancer:
$ curl http://<externalnginxip>:80
<confirm page loads>
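Note that on AWS the provisioned Classic LB surfaces in the service status as a DNS hostname rather than an IP; a hedged sketch to read it directly (standard Kubernetes service status fields):
$ kubectl get svc nginx -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'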
D-Option2. Re-deploy the nginx workload with a generic external LB such as F5:
1. Update the service configuration section of the YAML manifest file and change the type to NodePort:
$ vi nginx-lb.yaml
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginx
  name: nginx
spec:
  ports:
  - port: 80
  selector:
    app: nginx
  type: NodePort
---
...
2. Apply the updated nginx-lb.yaml, which updates the existing service in place, changing it from type LoadBalancer to NodePort (the unchanged Deployment's pods are not recreated):
$ kubectl apply -f nginx-lb.yaml
<verify the service type changes to NodePort>
3. Get the service's NodePort using its name:
$ kubectl get svc nginx
<view the assigned NodePort in the PORT(S) column (e.g. 80:3nnnn/TCP); a NodePort service is not assigned an external LB IP by Kubernetes>
- Then get the worker node IPs with either:
$ bosh vms
<view output, note the worker node IPs>
- or -
$ kubectl get nodes -L spec.ip
<view output, note the node IPs>
Note: In any case above, the NodePort is in Kubernetes' default 30000-32767 range (see the one-liner below).
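A hedged one-liner to pull just the assigned NodePort (standard kubectl jsonpath; service name from this example):
$ kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'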
4. Give the F5 team the worker node IPs and the high-range NodePort (and, in the real world, the DNS name), along with the instruction that the translated external port will be standard HTTP/80, and wait.
5. Once the F5 team has updated the F5 LB's configuration, verify the external IP is working through the F5 load balancer:
$ curl http://<externalnginxip>:80
<confirm page loads>