YAK2 (Yet Another K8S Kickstart)

Did we really need Yet Another Kubernetes Kickstart (YAKK -> YAK2) script to nearly automatically deploy a K8S cluster? Didn't we already have enough alternatives (you can search on Google to find tons of examples) to set up a demo/lab/test/POC environment? Probably yes, nevertheless…
I created mine. It all started as self-practice to learn how to set up Kubernetes using `kubeadm`, then it evolved into something more structured and – in the end – I decided to share it with anyone who may benefit from it.
The starting point was a Linux distro that we – when dealing with VMware products – know very well: Photon OS, “a Linux based, open source, security-hardened, enterprise grade appliance operating system that is purpose built for Cloud and Edge applications” (the description was promising).
Topology & Components
The typical, recommended topology consists of:
- 1x K8S Control Plane Node
- 3x K8S Worker Nodes (but they can be as many as you like)
- 1x NFS Server to support dynamic K8S Persistent Volumes delivery based on a Storage Class
Main components installed will include:
- containerd, runc and the CNI network plugins
- kubelet, kubeadm and kubectl
- Open vSwitch and Antrea (CNI)
- Helm
- NFS Subdirectory External Provisioner (default Storage Class)
- MetalLB (LoadBalancer Services)
- Kubeapps (sample application)

VM Template Preparation
- Get the latest copy of VMware Photon OS 5 (.ova format).
- Deploy the template with VMware Workstation, VMware Fusion or VMware vSphere, and configure its minimum resources as follows:
- CPU: 2 vCPUs (1 vCPU is the default) or more.
- MEMORY: 2 GB (the default) or more.
- STORAGE: 16 GB (the default) or more, thin provisioned.
- NETWORK: 1 vNIC (the default).
- CD-ROM/FLOPPY: you can safely remove them.
- Install the operating system (it will boot using DHCP), log in as user `root` with password `changeme`, and customize the root password.
- Assuming you can now connect to the Internet, update the operating system with `tdnf makecache && tdnf -y update && reboot`
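If you prefer to deploy the OVA from the command line rather than through the Workstation/Fusion/vSphere UI, here is a minimal sketch using govc (the tool choice, vCenter address, credentials, VM name and OVA file name are illustrative assumptions, not part of YAK2):

```bash
# Hypothetical example: deploying the Photon OS 5 OVA to vSphere with govc.
# All names below (vCenter, credentials, template name, OVA path) are placeholders.
export GOVC_URL="vcenter.example.local"
export GOVC_USERNAME="administrator@vsphere.local"
export GOVC_PASSWORD="********"
export GOVC_INSECURE=1

# Import the OVA and give the resulting VM a recognizable name.
govc import.ova -name photon5-template ./photon-5.0.ova

# Bump resources to the recommended minimums (2 vCPUs, 2 GB RAM) before cloning it.
govc vm.change -vm photon5-template -c 2 -m 2048
```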
Get Ready to Start
- According to the topology above, deploy 5 VMs and power them on.
- Get a copy of `yakk.sh` from GitHub, save it to the `/root` folder and set permissions with `chmod 744 ~/yakk.sh` (see the sketch after this list).
- [Optional] If you want the script to start automatically as soon as you log in to your VM, use `echo ~/yakk.sh >> ~/.bash_login`. The script checks for the `~/.bash_login` file and – if found – deletes it, so it won't start again at the next login.
- Launch the script with `./yakk.sh`
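For reference, the fetch-and-run steps above boil down to a few commands; the repository path below is a placeholder, use the actual GitHub location of `yakk.sh`:

```bash
# Placeholder URL: substitute the real repository owner/name hosting yakk.sh.
curl -fsSL -o ~/yakk.sh https://raw.githubusercontent.com/<owner>/<repo>/main/yakk.sh
chmod 744 ~/yakk.sh

# Optional: start the script automatically at the next login
# (yakk.sh deletes ~/.bash_login itself, so it only auto-starts once).
echo ~/yakk.sh >> ~/.bash_login

# Or launch it right away.
./yakk.sh
```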

Script Variables: Default Values and Customization
At the beginning of the script, you will find a list of variables used during node deployment: you can customize them to speed up the process (see the example after the table).
VARIABLE | DESCRIPTION |
---|---|
GENERAL SETTINGS | |
LOG_FILE_NAME="yakk-"$(date +%Y.%m.%d-%H.%M.%S-%Z)".log" | Deployment log file name |
NODE_HOSTNAME="" | Current node hostname |
NODE_DOMAIN="fdlsistemi.local" | Current node domain |
NODE_IP="" | Current node IP address |
NODE_NETMASK="/24" | Current node netmask in /XX format |
NODE_GATEWAY="172.20.10.2" | Current node gateway IP address |
NODE_DNS1="8.8.8.8" | Current node first DNS server |
NODE_DNS2="8.8.4.4" | Current node second DNS server |
CP_NODE_IP="172.20.10.41" | Control Plane Node IP Address |
CP_NODE_PWD="VMware1!VMware1!" | Control Plane Node root password |
GITHUB_API_TOKEN_VAR="" | [Optional] used by lastversion to increase GitHub API rate limits – See https://github.com/settings/tokens |
HELM_TIMEOUT="30m0s" | A Go duration value for how long Helm waits for all Pods to be ready, PVCs to be bound, Deployments to have their minimum (Desired minus maxUnavailable) Pods ready and Services to have an IP address (and Ingress, if a LoadBalancer) before marking the release as successful. If the timeout is reached, the release is marked as FAILED. |
NFS SERVER VM AND NFS SUBDIR K8S DEPLOYMENT | |
NFS_BASEPATH="/nfs-storage" | Basepath of the mount point to be used (both as the NFS Server export and as the NFS Subdir Helm Chart deployment parameter) |
NFS_NAMESPACE="nfs-subdir" | K8S Cluster namespace to be used for deploying the NFS subdir external provisioner |
NFS_IP="172.20.10.40" | NFS Server IP Address |
NFS_SC_NAME="nfs-client" | K8S Cluster storageClass name ('nfs-client' is the NFS Subdir Project default storageClass name) |
NFS_SC_DEFAULT=true | Shall this K8S Cluster storageClass be the default? (true|false) |
NFS_SC_RP="Delete" | Method used to reclaim an obsoleted volume (Retain|Recycle|Delete) |
NFS_SC_ARCONDEL=false | Archive PVC when deleting |
METALLB K8S DEPLOYMENT | |
METALLB_REL_NAME="metallb" | Helm release name for deploying MetalLB |
METALLB_NAMESPACE="metallb-system" | K8S Namespace for deploying MetalLB |
METALLB_RANGE_FIRST_IP="172.20.10.150" | First IP in the Address Pool for MetalLB-backed K8S Services of type LoadBalancer |
METALLB_RANGE_LAST_IP="172.20.10.199" | Last IP in the Address Pool for MetalLB-backed K8S Services of type LoadBalancer |
METALLB_IP_POOL="metallb-ip-pool" | K8S IPAddressPool name for deploying MetalLB – built as '$METALLB_RANGE_FIRST_IP-$METALLB_RANGE_LAST_IP' |
METALLB_L2_ADVERT="metallb-l2-advert" | K8S L2Advertisement name for deploying MetalLB |
KUBEAPPS K8S DEPLOYMENT | |
KUBEAPPS_REL_NAME="kubeapps" | Helm release name for deploying Kubeapps |
KUBEAPPS_NAMESPACE="kubeapps" | K8S Namespace for deploying Kubeapps |
KUBEAPPS_SERV_ACC="kubeapps-operator" | K8S ServiceAccount for deploying Kubeapps |
KUBEAPPS_SERV_ACC_SECRET="kubeapps-operator-token" | K8S Secret for deploying Kubeapps |
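For example, a lab on the 172.20.10.0/24 network could override just the defaults it needs near the top of `yakk.sh` before launching it. The values below simply mirror or tweak the defaults listed above (the domain is an illustrative placeholder):

```bash
# Edit these assignments at the top of yakk.sh; anything left untouched keeps its default.
NODE_DOMAIN="lab.example.local"        # illustrative; replace with your own domain
NODE_GATEWAY="172.20.10.2"
NODE_DNS1="8.8.8.8"
NODE_DNS2="8.8.4.4"
CP_NODE_IP="172.20.10.41"              # Control Plane Node
NFS_IP="172.20.10.40"                  # NFS Server
METALLB_RANGE_FIRST_IP="172.20.10.150" # LoadBalancer address pool
METALLB_RANGE_LAST_IP="172.20.10.199"
```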

Basic Settings (all VMs)
Provide or confirm the following inputs to configure node networking:
- Hostname
- Domain Name
- IP Address
- Netmask (in /XX format)
- Default Gateway
- Primary and Secondary DNS
- [Optional] GitHub API token for lastversion: a tiny command line utility to retrieve the latest stable version of a GitHub project. The token is recommended to increase GitHub API rate limits, especially when you test the deployment multiple times in a short time range. Not required when you just deploy the K8S cluster once or twice.
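If you do provide a token, lastversion reads it from an environment variable according to its documentation; a minimal sketch follows (the `GITHUB_API_TOKEN` variable name and the `helm/helm` repository are examples for illustration, not taken from yakk.sh):

```bash
# A personal access token raises the GitHub API rate limits that lastversion hits
# when it looks up latest stable releases (useful for repeated test deployments).
export GITHUB_API_TOKEN="<your-token>"   # create one at https://github.com/settings/tokens
lastversion helm/helm                    # e.g. prints the latest stable Helm release
```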
Choose Node Roles
The script now prompts you to choose which role you want to configure on each node. The three options provided will guide you through distinct setup processes, requiring different parameters.
Configure your nodes in the following order:
- NFS Server
- Control Plane Node
- Worker Nodes

NFS Server
After providing details about:
- Control Plane Node IP Address.
- Control Plane Node Password.
The script will:
- Wait for the Control Plane Node to be reachable via the network.
- Install `nfs-utils`.
- Configure the NFS shared storage path (based on the customizable script variables) in `/etc/exports` (a rough equivalent is sketched after this list).
- Enable the `nfs-server` daemon.
- Complete execution. If any error or warning was encountered, it will be reported in the completion screen. (In the example, the warning is a reminder that `iptables` is disabled.)
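For reference, the NFS Server role roughly automates the following. This is a sketch only: the export options shown are common choices, not necessarily the exact ones yakk.sh writes:

```bash
# Install the NFS server bits on Photon OS.
tdnf install -y nfs-utils

# Export the base path used by the NFS Subdir External Provisioner (NFS_BASEPATH default).
mkdir -p /nfs-storage
echo "/nfs-storage *(rw,sync,no_root_squash,no_subtree_check)" >> /etc/exports

# Enable and start the NFS server, then verify the export.
systemctl enable --now nfs-server
exportfs -ra
showmount -e localhost
```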
Control Plane Node
First Stage
After selecting Control Plane Node, the script will:
- Install some prerequisite packages and utilities.
- Report the latest package releases and give you the option to customize which versions to install.
- Prompt for an IP Range for MetalLB-backed K8S LoadBalancer Service.
- Configure iptables, yet leave it disabled.
- Install and configure containerd, runc and CNI network plugins.
- Install and configure kubelet, kubeadm and kubectl.
- Install openvswitch.
- Run `kubeadm init` (pulling the K8S images requires at least 3 minutes) to initialize the K8S Cluster (a sketch of this stage follows the list).
- Install Antrea.
- Start waiting for the Worker Nodes to become Ready before installing helm.
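The core of this first stage is the cluster bootstrap itself. The sketch below shows the typical shape of a kubeadm plus Antrea bring-up; the flags, the pod CIDR and the Antrea manifest URL are common upstream defaults and assumptions, not necessarily the exact commands yakk.sh runs:

```bash
# Initialize the Control Plane (CP_NODE_IP is this node's own address).
kubeadm init \
  --apiserver-advertise-address "$CP_NODE_IP" \
  --pod-network-cidr 10.244.0.0/16

# Point kubectl at the new cluster.
export KUBECONFIG=/etc/kubernetes/admin.conf

# Install Antrea as the CNI from its published release manifest.
kubectl apply -f https://github.com/antrea-io/antrea/releases/latest/download/antrea.yml

# Watch the nodes; Workers appear and turn Ready after they join and the Antrea agents start.
kubectl get nodes -w
```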
Worker Nodes
After selecting Worker Node, the script will:
- Install some prerequisite packages and utilities.
- Prompt for Control Plane Node IP Address and Password.
- Fetch the software versions config from the Control Plane Node.
- Configure iptables, yet leave it disabled.
- Install and configure containerd, runc and CNI network plugins.
- Install and configure kubelet, kubeadm and kubectl.
- Install openvswitch.
- Wait for the K8S API Server to start listening on the Control Plane, then retrieve the token and discovery token needed to join.
- Run `kubeadm join` to join the K8S Cluster (sketched after this list). After the Antrea agents are installed by the Control Plane, the Worker Nodes become Ready in the Cluster.
- Install `nfs-utils` for the NFS client.
- Complete execution. If any error or warning was encountered, it will be reported in the completion screen. (In the example, the warning is a reminder that `iptables` is disabled.)
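The join step itself looks roughly like the following; the token and hash are placeholders that yakk.sh retrieves from the Control Plane for you (on the Control Plane, an equivalent command can be printed with `kubeadm token create --print-join-command`):

```bash
# Join the cluster; 172.20.10.41 is the default CP_NODE_IP, 6443 is the K8S API Server port.
kubeadm join 172.20.10.41:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```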
Control Plane Node
Second Stage
Once all Worker Nodes have been verified Ready, the script will:
- Install helm.
- Install `nfs-utils` for the NFS client.
- Retrieve the current number of Worker Nodes to calculate the required number of replicas for the NFS Subdirectory External Provisioner K8S Pod.
- Use helm to deploy and configure the NFS Subdirectory External Provisioner that will be used for the K8S Persistent Volumes (the three Helm deployments are sketched after this list).
- Use helm to deploy and configure MetalLB, which will be used for K8S Services of type LoadBalancer.
- Use helm to deploy and configure the Kubeapps sample application.
- Complete execution. If any error or warning was encountered, it will be reported in the completion screen. (In the example, the warning is a reminder that `iptables` is disabled.)
- The Setup Complete page reports the Kubeapps IP Address and the location of the token required to log in to the HTML portal UI.
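For reference, the three Helm deployments roughly correspond to the following. The chart repositories and value names are the upstream defaults for these projects, and the values mirror the script variables; they are not yakk.sh's exact commands:

```bash
# NFS Subdirectory External Provisioner -> default StorageClass "nfs-client".
helm repo add nfs-subdir-external-provisioner \
  https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --namespace nfs-subdir --create-namespace \
  --set nfs.server=172.20.10.40 \
  --set nfs.path=/nfs-storage \
  --set storageClass.defaultClass=true

# MetalLB, plus the IPAddressPool / L2Advertisement objects matching the METALLB_* variables.
helm repo add metallb https://metallb.github.io/metallb
helm install metallb metallb/metallb --namespace metallb-system --create-namespace
kubectl apply -f - <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: metallb-ip-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.20.10.150-172.20.10.199
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: metallb-l2-advert
  namespace: metallb-system
EOF

# Kubeapps from the Bitnami chart, exposed through a MetalLB LoadBalancer Service.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install kubeapps bitnami/kubeapps \
  --namespace kubeapps --create-namespace \
  --set frontend.service.type=LoadBalancer
```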
Deployment Completed
You can now log in to the Control Plane Node and use `kubectl` to work with your Kubernetes Cluster.
For example, using `kubectl get pods -A -o wide`, you can list all the pods running in your cluster; a few more verification commands are sketched below.
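As a quick verification pass, something like the following can be used; the namespace and Secret names follow the KUBEAPPS_* defaults from the variables table:

```bash
# Cluster health at a glance.
kubectl get nodes -o wide
kubectl get pods -A -o wide

# Kubeapps: note the EXTERNAL-IP assigned by MetalLB, then extract the login token
# from the kubeapps-operator-token Secret (KUBEAPPS_SERV_ACC_SECRET default).
kubectl -n kubeapps get svc
kubectl -n kubeapps get secret kubeapps-operator-token \
  -o jsonpath='{.data.token}' | base64 -d; echo
```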
