6.20 Kubernetes Deployment Using Ubuntu
Kubernetes is a tool for automating deployment, scaling, and management of containerized applications. It allows you to easily launch applications in containers and manage their lifecycle.
To test and deploy applications, we need a configured Kubernetes cluster. In this article, we will look at deploying a cluster on a single server (single-node cluster) and on several nodes (multi-node cluster), explain the nuances of installation and configuration, and show how to run a test application.
In this guide, our server will act as the master node, to which other nodes will later connect.
Server requirements
Before you begin, you need to make sure that your server meets the minimum requirements:
- OS: Ubuntu 22.04 or CentOS 8 (Ubuntu is preferable, as it has better Kubernetes support);
- Processor: 4 cores or higher;
- RAM: minimum 8 GB (for multi-node clusters, 16 GB or more is recommended);
- Disk: minimum 50 GB of free space;
- Network connection: static IP and open ports (6443, 10250, 10255, 30000-32767, and others).
- Kubernetes requires swap to be disabled, otherwise kubelet will not work. This is implemented by default on our VPS.
For stable operation of the Kubernetes master node, we recommend servers with a tariff of at least Cloud-8.
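On a stock Ubuntu server, the last two requirements can typically be satisfied like this (a sketch assuming ufw is used as the firewall; adjust the port list to your network plugin):

```shell
# Open the ports Kubernetes needs (API server, kubelet, NodePort range)
sudo ufw allow 6443/tcp
sudo ufw allow 10250/tcp
sudo ufw allow 10255/tcp
sudo ufw allow 30000:32767/tcp

# Disable swap now, and comment out swap entries so it stays off after reboot
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
```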
What are a master node, a worker node, and a pod? A review of the related terminology
Here are some key concepts to know before you begin:
- Container — an isolated environment containing an application and all its dependencies. Works the same on any server.
- Pod — the smallest unit in Kubernetes, containing one or more containers.
- Manifest — a YAML file describing a Kubernetes object (such as a Deployment or Service).
- Node — a physical or virtual machine in a cluster.
- Master — the main node that manages the cluster.
- Worker — a node that runs containers (pods).
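To make the pod concept concrete, here is a minimal manifest describing a single-container pod (the name nginx-pod is a hypothetical example); the Deployment we create later generates pods like this one automatically:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod        # hypothetical name, for illustration only
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest  # the container image the pod runs
    ports:
    - containerPort: 80
```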
Step-by-step instructions
Kubernetes runs containers through a container runtime. Here we install Docker; note that recent Kubernetes versions (1.24+) talk to a CRI-compatible runtime such as containerd, which the docker.io package installs alongside Docker.
Tip: we have separate, more detailed material on installing and using Docker.
Installing Docker
- Update packages and install Docker:
sudo apt update && sudo apt install -y docker.io
- Check Docker version:
docker --version
- Add Docker to autostart:
sudo systemctl enable --now docker
- Add current user to Docker group (to avoid using sudo when running containers):
sudo usermod -aG docker $USER
newgrp docker
- Check Docker operation:
docker run hello-world
If it prints the "Hello from Docker!" message confirming the installation, Docker is working.
Installing Kubernetes
- Add the Kubernetes repository. Note: the legacy apt.kubernetes.io repository hosted by Google has been deprecated and frozen; packages are now published at pkgs.k8s.io (replace v1.30 below with the minor version you want to track):
sudo apt update && sudo apt install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
- Install Kubernetes components:
sudo apt update && sudo apt install -y kubelet kubeadm kubectl
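Because cluster components should be upgraded deliberately (with kubeadm upgrade) rather than by routine apt upgrades, the official documentation also recommends pinning the packages:

```shell
# Prevent unattended upgrades of the Kubernetes components
sudo apt-mark hold kubelet kubeadm kubectl
```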
- Add to startup:
sudo systemctl enable --now kubelet
- Check the Kubernetes version:
kubectl version --client
Cluster setup
- Initialize the cluster (only for master node):
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
Please note: the --pod-network-cidr parameter specifies the IP range for future pods. Different network plugins may require different values: Flannel expects 10.244.0.0/16 by default, while Calico commonly uses 192.168.0.0/16.
After successful initialization, a kubeadm join command for connecting worker nodes will appear in the output. Save it.
- To work with the cluster, configure kubectl:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
- Check functionality:
kubectl get nodes
The master node should appear in the output (its status will show Ready only after the network plugin is installed).
Installing the network plugin
To enable pods to communicate across nodes, install a network plugin. For example, Flannel (the project has moved from coreos/flannel to flannel-io/flannel):
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
After a minute, check the nodes:
kubectl get nodes
The master node should now be in the Ready state.
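If the node stays NotReady, it is worth checking that the Flannel pods themselves started. Note: depending on the manifest version, they land in the kube-flannel or kube-system namespace:

```shell
kubectl get pods -n kube-flannel   # newer manifests deploy here
kubectl get pods -n kube-system    # older manifests deploy flannel here
```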
Adding worker nodes
On each worker node, install Docker and Kubernetes as described above, then run the kubeadm join command obtained during master node initialization:
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash <hash>
Check that the nodes have connected:
kubectl get nodes
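The join token expires after 24 hours by default. If you lost the command or the token expired, you can regenerate it on the master node:

```shell
# Prints a fresh, ready-to-run kubeadm join command
sudo kubeadm token create --print-join-command
```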
Deploying a test application
Create a deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
Apply the manifest:
kubectl apply -f deployment.yaml
Check the pods:
kubectl get pods
If you see running pods, then everything is configured correctly.
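So far the deployment is only reachable inside the cluster. As a sketch, you could expose it externally with a NodePort service; the file name service.yaml and the port 30080 are example values (NodePorts come from the 30000-32767 range mentioned in the requirements):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx       # must match the labels of the deployment's pods
  ports:
  - port: 80         # service port inside the cluster
    targetPort: 80   # container port
    nodePort: 30080  # external port opened on every node
```

Apply it with kubectl apply -f service.yaml, then open http://<node-ip>:30080 in a browser.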
Conclusion
Now you have a working cluster for testing and deploying applications on Kubernetes. We covered installing and configuring the cluster and running a test application.
Next steps may include:
- Set up a load balancer (ingress-nginx).
- Add monitoring (Prometheus, Grafana).
- Explore configuration management (ConfigMaps, Secrets).
- Set up CI/CD for automated deployment.