k8s cluster deployment with vmware aria
before we begin⌗
Before we move on to the really interesting bits, there are a few things we need to prepare (which are not in scope of this guide…):
- VMware vSphere and Aria Automation + Orchestrator
- Rocky Linux 9 VM template (base OS install)
  - single NIC, set to DHCP
  - `perl` and `cloud-init` packages installed (see the sketch right after this list)
- DHCP server on the management network (any kind)
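Prepping the template itself boils down to a couple of commands inside the guest before converting it to a template; a minimal sketch:

```bash
# inside the Rocky Linux 9 guest, prior to conversion to a template
dnf install -y perl cloud-init
systemctl enable cloud-init   # usually enabled by the package already
```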
prepare orchestrator ssh key⌗
While Orchestrator can help with installing and configuring SSH keys for interacting with the guest OS on virtual machines, we are going to ignore that functionality (to simplify some things in this guide) and instead save its public key as a secret in Aria Automation.
To do so, open up a connection to Orchestrator and copy its pubkey, saved next to the private key at /var/lib/vco/app-server/conf/vco_key. Next, navigate to Assembler -> Infrastructure -> Administration -> Secrets and store it as a secret called vro-ssh-key.
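A quick way to grab it straight off the appliance (assuming the default vco_key / vco_key.pub pair; verify the filename on your install):

```bash
# on the Orchestrator appliance; print the public half of the key pair
cat /var/lib/vco/app-server/conf/vco_key.pub
```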
blueprint⌗
Before going back to Orchestrator, create a new Assembler blueprint for the k8s cluster. The code below deploys a user-defined set of VMs to act as master/worker nodes and configures some essential system settings via cloud-init:
```yaml
formatVersion: 1
inputs:
  master-count:
    type: integer
    title: Enter the number of master nodes to deploy
    default: 1
    minimum: 1
  worker-count:
    type: integer
    title: Enter the number of worker nodes to deploy
    default: 1
    minimum: 1
  mgmt-net:
    type: string
    title: Enter the name of cluster's management network
    description: Name of the port group to be used for cluster management
    readOnly: false
    default: main
  worker-data:
    type: integer
    title: Enter the size of data disk for each worker node
    description: Storage space for persistent volume claims (PVCs) or other pod data
    default: 32
resources:
  mgmt_net:
    type: Cloud.vSphere.Network
    properties:
      networkType: existing
      name: ${input.mgmt-net}
  k8sm:
    type: Cloud.vSphere.Machine
    properties:
      count: ${input.master-count}
      image: rock000
      flavor: cpu2-ram4
      networks:
        - network: ${resource.mgmt_net.id}
      cloudConfig: |
        #cloud-config
        fqdn: k8sm00${count.index}
        runcmd:
          - dnf update -y
          - dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
          - echo -e '[kubernetes]\nname=Kubernetes\nbaseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64\nenabled=1\ngpgcheck=1\nrepo_gpgcheck=1\ngpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg' > /etc/yum.repos.d/k8s.repo
          - dnf makecache -y
          - dnf install -y containerd.io ca-certificates curl gnupg
          - containerd config default > config.toml
          - sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' config.toml
          - mv -f config.toml /etc/containerd/config.toml
          - systemctl enable --now containerd.service
          - echo -e 'overlay\nbr_netfilter' > /etc/modules-load.d/k8s.conf
          - modprobe overlay
          - modprobe br_netfilter
          - setenforce 0
          - sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/sysconfig/selinux
          - firewall-cmd --permanent --add-port=6443/tcp
          - firewall-cmd --permanent --add-port=2379-2380/tcp
          - firewall-cmd --permanent --add-port=10250/tcp
          - firewall-cmd --permanent --add-port=10251/tcp
          - firewall-cmd --permanent --add-port=10259/tcp
          - firewall-cmd --permanent --add-port=10257/tcp
          - firewall-cmd --permanent --add-port=179/tcp
          - firewall-cmd --permanent --add-port=4789/udp
          - firewall-cmd --reload
          - echo -e 'net.ipv4.ip_forward = 1\nnet.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1' > /etc/sysctl.d/k8s.conf
          - sysctl --system
          - swapoff -a
          - sed -e '/swap/s/^/#/g' -i /etc/fstab
          - dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
          - echo -e 'Include /etc/ssh/sshd_config.d/*.conf\nPort 22\nAddressFamily any\nListenAddress 0.0.0.0\nPermitRootLogin yes\nPubkeyAuthentication yes\nAuthorizedKeysFile .ssh/authorized_keys\nSubsystem sftp /usr/libexec/openssh/sftp-server' > /etc/ssh/sshd_config
          - mkdir /root/.ssh
          - echo "ecdsa-sha2-nistp521 AAAAE2VjZHNhLXNoYTItbmlzdHA1MjEAAAAIbmlzdHA1MjEAAACFBAGoZ4YstscMlpmX+sy0VHwS6P8TSwaZcZtczF4B5bdzABFR1N5L7pNRY1IcPGw5dLNZ8TqWWaPnFSxgtB+Bj0HKKgG+M+CoXxSTolNwpP1V+7VXkqS5IqtpnAIJvOcWHMDtlzjk/g1nu57/uAXCNVru92XqTbo6SD2dUHziBUyi64w0YA==" > /root/.ssh/authorized_keys
          - systemctl restart sshd
          - poweroff
  k8sw:
    type: Cloud.vSphere.Machine
    properties:
      count: ${input.worker-count}
      image: rock000
      flavor: cpu2-ram4
      networks:
        - network: ${resource.mgmt_net.id}
      cloudConfig: |
        #cloud-config
        fqdn: k8sw00${count.index}
        runcmd:
          - dnf update -y
          - dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
          - echo -e '[kubernetes]\nname=Kubernetes\nbaseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64\nenabled=1\ngpgcheck=1\nrepo_gpgcheck=1\ngpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg' > /etc/yum.repos.d/k8s.repo
          - dnf makecache -y
          - dnf install -y containerd.io ca-certificates curl gnupg
          - containerd config default > config.toml
          - sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' config.toml
          - mv -f config.toml /etc/containerd/config.toml
          - systemctl enable --now containerd.service
          - echo -e 'overlay\nbr_netfilter' > /etc/modules-load.d/k8s.conf
          - modprobe overlay
          - modprobe br_netfilter
          - setenforce 0
          - sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/sysconfig/selinux
          - firewall-cmd --permanent --add-port=179/tcp
          - firewall-cmd --permanent --add-port=10250/tcp
          - firewall-cmd --permanent --add-port=30000-32767/tcp
          - firewall-cmd --permanent --add-port=4789/udp
          - firewall-cmd --reload
          - echo -e 'net.ipv4.ip_forward = 1\nnet.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1' > /etc/sysctl.d/k8s.conf
          - sysctl --system
          - swapoff -a
          - sed -e '/swap/s/^/#/g' -i /etc/fstab
          - dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
          - mkdir /root/.ssh
          - echo "ecdsa-sha2-nistp521 AAAAE2VjZHNhLXNoYTItbmlzdHA1MjEAAAAIbmlzdHA1MjEAAACFBAGoZ4YstscMlpmX+sy0VHwS6P8TSwaZcZtczF4B5bdzABFR1N5L7pNRY1IcPGw5dLNZ8TqWWaPnFSxgtB+Bj0HKKgG+M+CoXxSTolNwpP1V+7VXkqS5IqtpnAIJvOcWHMDtlzjk/g1nu57/uAXCNVru92XqTbo6SD2dUHziBUyi64w0YA==" > /root/.ssh/authorized_keys
          - systemctl restart sshd
          - poweroff
```
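Note that each node powers itself off at the end of its cloud-init run; the workflow below uses that as its completion signal. If a VM never powers off, the run most likely failed somewhere, and the cloud-init logs on the guest are the first place to look:

```bash
# on a misbehaving node: inspect the state and output of the cloud-init run
cloud-init status --long
less /var/log/cloud-init-output.log
```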
orchestrator workflow⌗
Now, back in Orchestrator, create a new workflow (e.g. `configure k8s cluster`) and configure it as follows:
- Variables (name | value | type)
  - vRA | [object VRA:Host] | VRA:Host
  - workers | Array[0] | Array/Array
  - masters | Array[0] | Array/Array
  - vms | Array[0] | Array/VC:VirtualMachine
  - uuids | Array[0] | Array/string
- Inputs (name | type | direction)
  - inputProperties | Properties | Input
- Schema
  - 3 JavaScript scripting objects in sequence
schema details⌗
- scripting object 1 - get vm info from deployment data
  - inputs (name | type | binding):
    - inputProperties | Properties | inputProperties
    - masters | Array/Array | masters
    - workers | Array/Array | workers
    - vRA | VRA:Host | vRA
    - vms | Array/VC:VirtualMachine | vms
    - uuids | Array/string | uuids
  - outputs (name | type | binding):
    - masters | Array/Array | masters
    - workers | Array/Array | workers
    - vms | Array/VC:VirtualMachine | vms
  - code:
```javascript
// query the Aria deployment API for all resources in this deployment
var restClient = vRA.createRestClient()
var pathUri = "/deployment/api/deployments/" + inputProperties.deploymentId + "/resources"
var request = restClient.createRequest("GET", pathUri)
var response = restClient.execute(request)
var jsonResponse = JSON.parse(response.contentAsString)
var content = jsonResponse['content']

// collect [ip, name] pairs plus vSphere MoRef IDs for master and worker nodes
for (var i = 0; i < content.length; i++) {
    if (content[i]['properties']['name'] == 'k8sm') {
        var id = content[i]['properties']['moref'].split(':')[1]
        var dn = content[i]['properties']['resourceName']
        var ip = content[i]['properties']['address']
        masters.push([ip, dn])
        uuids.push(id)
    }
    if (content[i]['properties']['name'] == 'k8sw') {
        var id = content[i]['properties']['moref'].split(':')[1]
        var dn = content[i]['properties']['resourceName']
        var ip = content[i]['properties']['address']
        workers.push([ip, dn])
        uuids.push(id)
    }
}

// resolve the collected MoRef IDs to VC:VirtualMachine objects
var all_vms = VcPlugin.getAllVirtualMachines()
for (var i = 0; i < all_vms.length; i++) {
    var vm = all_vms[i]
    for (var j = 0; j < uuids.length; j++) {
        if (vm.vimId == uuids[j]) {
            vms.push(vm)
        }
    }
}
```
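If you want to sanity-check what that call returns before wiring up the workflow, the same endpoint can be queried from any shell; a hypothetical example (host, deployment ID, and bearer token are placeholders you would supply):

```bash
# list all resources of a deployment via the Aria Automation deployment API
curl -sk -H "Authorization: Bearer $TOKEN" \
  "https://aria.example.com/deployment/api/deployments/$DEPLOYMENT_ID/resources"
```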
- scripting object 2 - wait for cloud-init to complete before proceeding
  - inputs (name | type | binding):
    - vms | Array/VC:VirtualMachine | vms
  - code:
```javascript
// wait for each VM to power off, then power it back on; this ensures the
// cloud-init run is complete, as the last command in runcmd is poweroff
for (var i = 0; i < vms.length; i++) {
    var vm = vms[i]
    while (vm.runtime.powerState.value == 'poweredOn') {
        System.sleep(2000)
    }
    vm.powerOnVM_Task()
}

// wait for all VMs to boot and come back online
for (var i = 0; i < vms.length; i++) {
    while (vms[i].ipAddress == null) {
        System.sleep(1000)
    }
}
```
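As a side note, if relying on the power-off signal ever becomes awkward, cloud-init also ships a blocking status command that could serve as a completion check over SSH instead (assuming SSH is already reachable at that point):

```bash
# blocks until the cloud-init run finishes; exit code reflects success or failure
cloud-init status --wait
```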
- scripting object 3 - setup kubernetes cluster
  - inputs (name | type | binding):
    - masters | Array/Array | masters
    - workers | Array/Array | workers
  - code:
```javascript
// bootstrap the first master node and deploy the flannel CNI
// (admin.conf is copied into place before kubectl is called)
mssh(masters[0][0], [
    'kubeadm init --pod-network-cidr=10.244.0.0/16',
    'mkdir /root/.kube',
    'cp /etc/kubernetes/admin.conf /root/.kube/config',
    'kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml'
])

// gather everything the workers need to join: token, CA cert hash, and API server IP
var token = ssh(masters[0][0], "kubeadm token list | tail -1 | awk '{ print $1 }' | tr -d '\n'")
var cert = ssh(masters[0][0], "openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //' | tr -d '\n'")
var ip = ssh(masters[0][0], "ip addr show dev ens32 | grep inet | head -1 | awk '{ print $2 }' | cut -d/ -f1 | tr -d '\n'")

// TBD: add support for HA in the control plane
//for (var i = 0; i < masters.length; i++){
//    ssh(masters[i][0], 'magic')
//}

// join each worker node to the cluster
for (var i = 0; i < workers.length; i++) {
    var out = ssh(workers[i][0], 'kubeadm join ' + ip + ':6443 --node-name ' + workers[i][1] + ' --token ' + token + ' --discovery-token-ca-cert-hash sha256:' + cert)
}

// run a single command over SSH using the Orchestrator identity key
function ssh(host, cmd) {
    var session = new SSHSession(host, 'root')
    var path = '/var/lib/vco/app-server/conf/vco_key'
    session.connectWithPasswordOrIdentity(false, '', path)
    session.executeCommand(cmd, true)
    var output = session.getOutput()
    session.disconnect()
    return output
}

// run a list of commands over a single SSH session
function mssh(host, cmds) {
    var session = new SSHSession(host, 'root')
    var path = '/var/lib/vco/app-server/conf/vco_key'
    session.connectWithPasswordOrIdentity(false, '', path)
    for (var i = 0; i < cmds.length; i++) {
        session.executeCommand(cmds[i], true)
    }
    session.disconnect()
}
```
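For reference, the token/cert-hash plumbing above mirrors what kubeadm can generate directly; this one-liner is handy when joining a node by hand:

```bash
# on the first master: prints a ready-made kubeadm join command for new nodes
kubeadm token create --print-join-command
```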
subscription⌗
The last thing to do is to attach the new workflow to the previously created blueprint. This way, the newly deployed VMs will be configured as a Kubernetes cluster as soon as the deployment is complete. To do so, navigate to Assembler -> Extensibility -> Subscriptions and create a new subscription like so:
- name: k8s cluster config
- event topic: Deployment completed
- filter event in topic: event.data.blueprintId == 'replace-me-with-uuid-of-your-blueprint'
- action/workflow: configure k8s cluster
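Once a test deployment completes and the subscription fires, the result can be verified from any host holding the matching private key (master IP as handed out by your DHCP server):

```bash
# should list one master plus each worker in Ready state once flannel is up
ssh root@<master-ip> kubectl get nodes -o wide
```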