If you're stuck managing hundreds of containers, Kubernetes is your buddy. And if you need to install Kubernetes on Ubuntu, this guide is your best buddy.
Kubernetes is an open-source technology for automating containerized application deployment, scaling, and management. This article will walk you through installing Kubernetes on an Ubuntu computer and launching your first container.
Let’s get started!
Prerequisites for the tutorial
This article will be a step-by-step guide. To follow along, make sure you have two machines running Ubuntu 14.04.4 LTS or later with Docker installed. Each computer in this tutorial runs Ubuntu 18.04.5 LTS with Docker 19.03.8.
Related: How to Install and Use Docker on Ubuntu (In the Real World)
Although Kubernetes can be installed on a single node, doing so is not advised. Separating the nodes provides fault tolerance and high availability.
The Ubuntu hosts used in this tutorial will be designated MASTER for the master node and WORKER for the worker node. The master node has an IP address of 10.0.0.200, and the worker node has its own address on the same network.
Prerequisites for deploying Kubernetes
Before installing Kubernetes on Ubuntu, you need to do a few preliminary steps to guarantee a smooth installation.
To begin, connect to MASTER with your preferred SSH client and follow the instructions below.
Related: Setting up SSH in Linux (A Windows Guy in a Linux World)
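If you are connecting over SSH, a command like the one below gets you onto the master node. The ubuntu username here is only an example; substitute whichever account exists on your machines.

# Connecting to the master node (the 'ubuntu' user is illustrative)
ssh ubuntu@10.0.0.200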
1. (Optional) Use the hostnamectl command to assign the hostname master-node to MASTER. You don't have to rename the nodes, but doing so gives each of them a distinct hostname, making it simpler to identify them while working with the cluster.
# Assigning the hostname master-node to the first Ubuntu machine
sudo hostnamectl set-hostname master-node
Likewise, assign the hostname worker-node to WORKER, as shown below.
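A quick sketch of that command, run on WORKER:

# Assigning the hostname worker-node to the second Ubuntu machine
sudo hostnamectl set-hostname worker-node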
2. (Optional) Next, use the hostname command to check whether the hostname was changed correctly. You're good to go if hostname returns the expected hostname.
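For example, running the check on the master node:

# Displaying the current hostname; it should now return master-node
hostname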
Changes to the hostname
3. Finally, run sudo apt update so that Ubuntu has the most recent package lists available. You should do this so that apt can find all of the appropriate package repositories when the time comes.
# Updating package repositories with the most up-to-date version
sudo apt update
Keeping Package Repositories Up to Date with the Latest Version
Although the root user is used in this article, it is usually advisable to use a less-privileged account that belongs to the sudoers group.
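If you would rather follow along from such an account, a minimal sketch of creating one is below; the k8sadmin username is purely illustrative.

# Creating an example less-privileged account and adding it to the sudo group (username is illustrative)
sudo adduser k8sadmin
sudo usermod -aG sudo k8sadmin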
4. Now, use apt-get install to install the apt-transport-https and curl packages, which Kubernetes depends on. You'll need these packages later to download the Kubernetes packages.
# Installing the apt-transport-https and curl packages
sudo apt-get install -y apt-transport-https curl
On each Ubuntu machine, install the apt-transport-https and curl packages.
5. Use curl to retrieve the needed GPG security key, which apt uses to authenticate the Kubernetes package repository, and add it with apt-key. You should see an OK response in your terminal if the command was successful.
# Installing the Kubernetes GPG key on each Ubuntu machine
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
6. To add the Kubernetes package repository to Ubuntu, use the apt-add-repository command. This step is required since the Kubernetes repository is not included in the /etc/apt/sources.list file by default.
# Installing the Kubernetes repository
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
On each Ubuntu machine, add the Kubernetes repository.
7. Finally, run sudo apt update to force apt to read the updated package repository list and verify that all of the most recent packages are available for installation, as shown below.
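This is the same command you ran earlier; running it again picks up the newly added repository.

# Refreshing the package list so apt picks up the newly added Kubernetes repository
sudo apt update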
8. Repeat steps 1–7 on WORKER as well.
Kubernetes Installation on the Master and Worker Nodes
It’s time to set up Kubernetes now that you’ve loaded the required software on both MASTER and WORKER. kubeadm, kubelet, and kubectl are the three packages/tools that make up Kubernetes. Each of these packages includes all of the binaries and settings needed to get a Kubernetes cluster up and running.
Kubernetes: An Architecture Diagram of How Components Fit Together
Assuming you’re still using SSH to connect to the MASTER:
1. Install the kubeadm, kubectl, and kubelet packages using the apt-get install command.
- Kubelet is a container management agent that runs on each worker node and handles all pod containers.
- Kubeadm is a utility that helps with the setup and management of Kubernetes clusters.
- Kubectl is the command-line tool you use to run commands against Kubernetes clusters.
# On each Ubuntu computer, install the kubeadm, kubelet, and kubectl packages/tools
sudo apt-get install kubeadm kubelet kubectl
On each Ubuntu computer, install the kubeadm kubelet kubectl package/tool.
2. Run each previously installed tool with its version argument to confirm it installed correctly. For convenience, the code sample below combines all of the commands on a single line.
# On each system, check the version of each tool
kubeadm version && kubelet --version && kubectl version
If everything works properly, these commands should produce a kubeadm version, a kubelet line that reads Kubernetes vX.XX.X, and a Client Version for kubectl.
On each system, check the version of each tool.
3. Repeat steps one and two on WORKER as well.
Setting up a Kubernetes Cluster
Kubernetes should now be installed on both the master (MASTER) and worker (WORKER) nodes. However, Kubernetes on Ubuntu is useless if it isn't running. The cluster must now be initialized on MASTER.
Assuming you’re still using SSH to connect to the MASTER:
1. To start the Kubernetes cluster, use the kubeadm init command. The --apiserver-advertise-address argument in the command below tells Kubernetes where its kube-apiserver is located; in this case, that is the master node's IP address.
The command below also uses the --pod-network-cidr argument to define the range of IP addresses for the pod network. The pod network is what lets pods communicate with one another. Configuring the pod network this way instructs the master node to issue pod IP addresses to each node automatically.
# Initializing the Kubernetes cluster on the master node
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.0.0.200
If everything went well, you should see something like the screenshot below.
Be sure to copy the two blue-highlighted commands. You'll need them later to connect the worker node to the cluster.
Initialized Kubernetes
2. Now, on the master node, run the commands from the first highlighted box in step 1. For security reasons, these commands set up kubectl to run as a non-root account.
# Creating a directory to store settings such as the admin key file (needed to connect to the cluster) and the cluster's API address
mkdir -p $HOME/.kube
# Copying the cluster admin configuration into that directory
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# Changing the ownership of the config file from root to the non-root account
sudo chown $(id -u):$(id -g) $HOME/.kube/config
3. Now SSH to the worker node and run the kubeadm join command shown in the second highlighted box from step 1.
This command adds the worker node to the cluster. The master node generated both the --token and --discovery-token-ca-cert-hash values in step 1; they let the worker node connect securely to the master node and confirm that the root CA public key matches the master node's.
# To join the Kubernetes cluster, run the following command on the worker node
kubeadm join 10.0.0.200:6443 --discovery-token-ca-cert-hash sha256:7c12e833b38f210625301b31a030ed20172227cbc625c04a71e0256c6e86b555 --token ismq95.dtz5nn9rej9w5m30
The cluster now contains a Kubernetes worker node.
4. Run the kubectl get nodes command on the master node to ensure the worker node has been added to the cluster successfully. If everything went well, both the master and worker nodes should have a STATUS of NotReady.
The nodes have a state of NotReady because you haven't yet set up the network they use to communicate.
# Listing the nodes in the cluster
kubectl get nodes
Verifying the Kubernetes cluster’s nodes
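As a rough illustration only (node names, ages, and versions will differ on your machines), the output looks something like this:

# Illustrative output of kubectl get nodes; exact values will differ
NAME          STATUS     ROLES    AGE   VERSION
master-node   NotReady   master   2m    v1.18.x
worker-node   NotReady   <none>   60s   v1.18.x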
5. Use the kubectl apply command to download and install a popular pod network YAML configuration from the master node. Creating a pod network is what lets pods on different nodes communicate with each other.
Pod networks in Kubernetes are set up with YAML configuration files. Flannel is one of the most popular pod networks; it is in charge of assigning each node a lease of IP addresses for its pods.
The different configurations required for setting up the pod network are included in the Flannel YAML file.
# Establishing the pod network between MASTER and WORKER
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Creating a network connection between the MASTER and WORKER computers.
6. On the master node, run kubectl get nodes again to ensure that both nodes now have a STATUS of Ready.
double-check both nodes
Running Your First Kubernetes Application
You should now have a Kubernetes environment up and running. You did an excellent job! It’s nice to have a working Kubernetes cluster, but it’s useless until you start deploying containers on it.
Let’s take a quick look at how to set up a Kubernetes deployment and service to host a small NGINX container in this last segment.
Assuming you’re still using SSH to connect to the MASTER:
1. To create a deployment configuration, use the kubectl create deployment command. Kubernetes uses deployments to generate and update application instances. A deployment also lets Kubernetes continuously monitor the application and, using a self-healing mechanism, repair any failures.
The command below establishes the nginx deployment settings, gets the nginx Docker image from Docker Hub, and builds a pod (and container inside) on the worker node.
# Creating the nginx deployment from the nginx Docker Hub image
kubectl create deployment nginx --image=nginx
2. Use kubectl get deployments to verify that Kubernetes has built the deployment.
# Verifying that Kubernetes created the nginx deployment
kubectl get deployments
3. To create a service, run the kubectl create service command. A service is how Kubernetes exposes an application running on one or more pods over the network.
The command below establishes a network service that exposes the TCP port 80 of the nginx deployment (and pod) to the rest of the cluster.
# Creating a NodePort service that exposes port 80 of the nginx deployment
kubectl create service nodeport nginx --tcp=80:80
4. Verify that Kubernetes built the service using the kubectl get svc command. The kubectl get svc command is a wonderful way to see all of your Kubernetes services in one place.
You may also run kubectl describe services nginx to get additional information about the service.
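Both commands are shown below; the node port you see will almost certainly differ from the 31414 used in this tutorial.

# Listing all Kubernetes services in one place
kubectl get svc
# Showing additional information about the nginx service
kubectl describe services nginx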
As seen below, Kubernetes has successfully created the NGINX service, which listens on port 80 inside the cluster and is exposed on node port 31414.
Inspection of the service
5. The pod running the application is now reachable through port 31414 via the service, so run curl to check that you can access the default NGINX web page.
# Testing the NGINX service on the MASTER node, which is listening on port 31414
curl localhost:31414
Testing the NGINX service listening on port 31414 on the MASTER node.
Conclusion
You should now be able to install Kubernetes on Ubuntu successfully. In this tutorial, you went through each step of setting up a Kubernetes cluster and deploying your first application. You did an excellent job!
Now that your Kubernetes cluster is up and running, what apps do you intend to deploy on it next?