Linode, LLC is an American privately owned cloud hosting company that provides virtual private servers.

The Linode Kubernetes Engine (LKE) is a fully-managed container orchestration engine for deploying and managing containerized applications and workloads.

LKE combines Linode’s ease of use and simple pricing with the infrastructure efficiency of Kubernetes.

When you deploy an LKE cluster, you receive a Kubernetes Master at no additional cost; you only pay for the Linodes (worker nodes), NodeBalancers (load balancers), and Block Storage Volumes.

Your LKE cluster’s Master node runs the Kubernetes control plane processes including the API, scheduler, and resource controllers.

Additional LKE features:

etcd Backups: A snapshot of your cluster’s metadata is backed up continuously, so your cluster is automatically restored in the event of a failure.

High Availability: All of your control plane components are monitored and automatically recover if they fail.

Kubernetes Dashboard: All LKE installations include access to a Kubernetes Dashboard installation.

You need to install the kubectl client on your computer before proceeding.

Follow the steps corresponding to your computer’s operating system.

macOS:

If you have Homebrew installed, you can install kubectl by running brew install kubectl; otherwise, visit the Homebrew home page for installation instructions, or use the curl commands below.

Curl on Intel Mac

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl"

Curl on Apple Silicon

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl"

Note:

To download a specific version, replace the $(curl -L -s https://dl.k8s.io/release/stable.txt) portion of the command with the specific version.

For example, to download version v1.23.0 on Intel macOS, type:

curl -LO "https://dl.k8s.io/release/v1.23.0/bin/darwin/amd64/kubectl"

Linux:

Download the latest kubectl release:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl

Make the downloaded file executable:

chmod +x ./kubectl

Move the command into your PATH:

sudo mv ./kubectl /usr/local/bin/kubectl
sudo chown root: /usr/local/bin/kubectl

Note: Make sure /usr/local/bin is in your PATH environment variable.
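To verify this, you can scan the $PATH list with a small helper (a sketch; the on_path function is illustrative, and this works in any POSIX shell):

```shell
# Report whether a directory appears in the colon-separated PATH list
on_path() {
  case ":$PATH:" in
    *":$1:"*) echo "yes" ;;
    *)        echo "no"  ;;
  esac
}

on_path /usr/local/bin
```

If this prints "no", add export PATH="/usr/local/bin:$PATH" to your shell's configuration file.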

Windows:

curl -LO "https://dl.k8s.io/release/v1.23.0/bin/windows/amd64/kubectl.exe"

Test to ensure the version you installed is up-to-date:

kubectl version --client

Note: You can also install kubectl via your package manager.

Append or prepend the kubectl binary folder to your PATH environment variable.

Log into your Linode Cloud Manager account.

From the Linode dashboard, click the Create button at the top of the page and select Kubernetes from the dropdown menu.

image

The Create a Kubernetes Cluster page will appear.

At the top of the page, you’ll be required to select the following options:

In the Cluster Label field, provide a name for your cluster. The name must be unique among all of the clusters on your account.

This name will be how you identify your cluster in the Cloud Manager’s Dashboard.

From the Region dropdown menu, select the Region where you would like your cluster to reside.

From the Version dropdown menu, select a Kubernetes version to deploy to your cluster.

In the Add Node Pools section, select the hardware resources for the Linode worker node(s) that make up your LKE cluster.

To the right of each plan, select the plus (+) and minus (-) buttons to add or remove Linodes from a node pool, one at a time.

Once you’re satisfied with the number of nodes in a node pool, select Add to include it in your configuration.

If you decide that you need more or fewer hardware resources after you deploy your cluster, you can always edit your Node Pool.

For this lab, select 2 nodes of the Linode 2 GB plan under the Shared CPU tab.

image

Once a pool has been added to your configuration, you will see it listed in the Cluster Summary on the right-hand side of the Cloud Manager detailing your cluster’s hardware resources and monthly cost.

Additional pools can be added before finalizing the cluster creation process by repeating the previous step for each additional pool.

When you are satisfied with the configuration of your cluster, click the Create Cluster button on the right-hand side of the screen.

Your cluster’s details page will appear next, where you will see your Node Pools listed.

From this page, you can edit your existing Node Pools, access your Kubeconfig file, and view an overview of your cluster’s resource details.

After you’ve created your LKE cluster using the Cloud Manager, you can begin interacting with and managing your cluster.

You connect to it using the kubectl client on your computer. To configure kubectl, download your cluster’s kubeconfig file.

Access and Download your kubeconfig

Anytime after your cluster is created you can download its kubeconfig.

The kubeconfig is a YAML file that will allow you to use kubectl to communicate with your cluster.

This configuration file defines your cluster, users, and contexts.
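The overall shape of a kubeconfig looks like the following skeleton (the names mirror the example cluster used later in this guide; all bracketed values are illustrative placeholders, not real credentials):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: lke1234
  cluster:
    server: https://<cluster-endpoint>:443       # placeholder API endpoint
    certificate-authority-data: <base64-ca-data> # placeholder
users:
- name: lke1234-admin
  user:
    token: <base64-token>                        # placeholder
contexts:
- name: lke1234-ctx
  context:
    cluster: lke1234
    user: lke1234-admin
    namespace: default
current-context: lke1234-ctx
```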

To access your cluster’s kubeconfig, log into your Cloud Manager account and navigate to the Kubernetes section.

From the Kubernetes listing page, click on your cluster’s more options ellipsis and select Download kubeconfig. The file will be saved to your computer’s Downloads folder.

image

You can also download the kubeconfig from the Kubernetes cluster’s details page.

When viewing the Kubernetes listing page, click on the cluster for which you’d like to download a kubeconfig file.

On the cluster’s details page, under the kubeconfig section, click the Download icon.

The file will be saved to your Downloads folder.

To view the contents of your kubeconfig file, click on the View icon.

A pane will appear with the contents of your cluster’s kubeconfig file.

image

To improve security, change the kubeconfig.yaml file's permissions so that it is only accessible by the current user:

chmod go-r ~/Downloads/kubeconfig.yaml
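The effect of go-r can be demonstrated on a throwaway file (a sketch; the mktemp file stands in for your real kubeconfig.yaml):

```shell
# Demonstrate 'chmod go-r' on a scratch file
f="$(mktemp)"
chmod 644 "$f"     # start world-readable: -rw-r--r--
chmod go-r "$f"    # strip read from group and others: -rw-------
ls -l "$f" | cut -c1-10
rm -f "$f"
```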

Open a terminal shell and save your kubeconfig file’s path to the $KUBECONFIG environment variable.

In the example command, the kubeconfig file is located in the Downloads folder; adjust the path to match the file's location on your computer:

export KUBECONFIG=~/Downloads/kubeconfig.yaml

Note: It is common practice to store your kubeconfig files in the ~/.kube directory.

By default, kubectl will search for a kubeconfig file named config that is located in the ~/.kube directory.

You can specify other kubeconfig files by setting the $KUBECONFIG environment variable, as done in the step above.
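This selection logic can be sketched in plain shell: the exported $KUBECONFIG takes precedence when set, and the default path applies otherwise (paths are illustrative):

```shell
# With KUBECONFIG unset, the default location applies
unset KUBECONFIG
echo "${KUBECONFIG:-$HOME/.kube/config}"

# After exporting, the specified file takes precedence
export KUBECONFIG=~/Downloads/kubeconfig.yaml
echo "${KUBECONFIG:-$HOME/.kube/config}"
```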

View your cluster’s nodes using kubectl.

kubectl get nodes

Note

If your kubectl commands are not returning the resources and information you expect, then your client may be assigned to the wrong cluster context.

You are now ready to manage your cluster using kubectl.

If you create a new terminal window, it does not have access to the context that you specified using the previous instructions.

This context information can be made persistent between new terminals by setting the KUBECONFIG environment variable in your shell’s configuration file.

These instructions persist the context for users of the Bash shell.

The steps are similar for other shells:

Navigate to the $HOME/.kube directory:

cd $HOME/.kube

Create a directory called configs within $HOME/.kube.

You can use this directory to store your kubeconfig files.

mkdir configs

Copy your kubeconfig.yaml file to the $HOME/.kube/configs directory.

cp ~/Downloads/kubeconfig.yaml $HOME/.kube/configs/kubeconfig.yaml

Note: Adjust the path above to match the location of the Downloads folder on your computer.

Optionally, you can give the copied file a different name to help distinguish it from other files in the configs directory.

Open your Bash profile (e.g. ~/.bash_profile) in the text editor of your choice and add your configuration file's path to the $KUBECONFIG environment variable.

If an export KUBECONFIG line is already present in the file, append to the end of this line as follows; if it is not present, add this line to the end of your file:

export KUBECONFIG=$KUBECONFIG:$HOME/.kube/config:$HOME/.kube/configs/kubeconfig.yaml
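Like PATH, $KUBECONFIG is a colon-separated list, and you can inspect it entry by entry (a sketch using the same illustrative paths):

```shell
# Build the list and print one entry per line
export KUBECONFIG="$HOME/.kube/config:$HOME/.kube/configs/kubeconfig.yaml"
echo "$KUBECONFIG" | tr ':' '\n'
```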

Close your terminal window and open a new one so that the changes to the $KUBECONFIG variable take effect.

Use kubectl's config get-contexts command to view the available cluster contexts:

kubectl config get-contexts

You should see output similar to the following:

CURRENT   NAME          CLUSTER   AUTHINFO        NAMESPACE
*         lke1234-ctx   lke1234   lke1234-admin   default

If your context is not already selected (denoted by an asterisk in the CURRENT column), switch to it using the config use-context command. Supply the full name of the context, which includes the cluster and authorized user:

kubectl config use-context lke1234-ctx

You should see output like the following:

Switched to context "lke1234-ctx".

You are now ready to interact with your cluster using kubectl.

You can test the ability to interact with the cluster by retrieving a list of Pods.

Use the get pods command with the -A flag to see all pods running across all namespaces:

kubectl get pods -A

You should see output like the following:

NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-dc6cb64cb-4gqf4   1/1     Running   0          11d
kube-system   calico-node-bx2bj                         1/1     Running   0          11d
kube-system   calico-node-fg29m                         1/1     Running   0          11d
kube-system   calico-node-qvvxj                         1/1     Running   0          11d
kube-system   calico-node-xzvpr                         1/1     Running   0          11d
kube-system   coredns-6955765f44-r8b79                  1/1     Running   0          11d
kube-system   coredns-6955765f44-xr5wb                  1/1     Running   0          11d
kube-system   csi-linode-controller-0                   3/3     Running   0          11d
kube-system   csi-linode-node-75lts                     2/2     Running   0          11d
kube-system   csi-linode-node-9qbbh                     2/2     Running   0          11d
kube-system   csi-linode-node-d7bvc                     2/2     Running   0          11d
kube-system   csi-linode-node-h4r6b                     2/2     Running   0          11d
kube-system   kube-proxy-7nk8t                          1/1     Running   0          11d
kube-system   kube-proxy-cq6jk                          1/1     Running   0          11d
kube-system   kube-proxy-gz4dc                          1/1     Running   0          11d
kube-system   kube-proxy-qcjg9                          1/1     Running   0          11d

You can use the Linode Cloud Manager to modify a cluster’s existing node pools by adding or removing nodes.

You can also recycle your node pools to replace all of their nodes with new ones that are upgraded to the most recent patch of your cluster’s Kubernetes version, or remove entire node pools from your cluster.

For an automated approach, you can also enable cluster autoscaling to automatically create and remove nodes as needed.

This section covers completing those tasks.

For any other changes to your LKE cluster, you should use kubectl.

Access your Cluster’s Details Page

Click the Kubernetes link in the sidebar.

The Kubernetes listing page appears and you see all of your clusters listed.

image

Click the cluster that you wish to modify.

The Kubernetes cluster’s details page appears.

Select the Add a Node Pool option to the right of the Node Pools section.

image

In the new window that appears, you may select the hardware resources that you’d like to add to your new Node Pool.

To the right of each plan, select the plus (+) and minus (-) buttons to add or remove Linodes from a node pool, one at a time.

Once you’re satisfied with the number of nodes in a node pool, select Add Pool to include it in your configuration.

If you decide that you need more or fewer hardware resources after you deploy your cluster, you can always edit your Node Pool.

For this lab, select one Linode 2 GB node in Shared CPU and click Add Pool.

image

Resize or Remove Existing Node Pools

On your cluster’s details page, click the Resize Pool option at the top-right of each entry in the Node Pools section.

Using the sidebar that appears to the right of the page, you can now remove - or add + Linodes to the pool, and the total cost of your new resources will be displayed.

To accept these changes, select the Save Changes button to continue.

Caution: Shrinking a node pool will result in deletion of Linodes.

Any local storage on deleted Linodes (such as “hostPath” and “emptyDir” volumes, or “local” PersistentVolumes) will be erased.

For this lab, click Cancel to discard these changes.

image

To remove a node pool from the cluster’s details page, click the Delete Pool option at the top-right of each entry in the Node Pools section.

image

A pop-up message will then appear, asking you to confirm that you'd like to proceed with deletion.

image

Select the Delete option, and your Node Pool will be deleted.

Note: Your cluster must always have at least one active node pool.

In Kubernetes, cluster autoscaling refers to a method by which users can configure their cluster to automatically scale the number of nodes in a node pool up and down as the hardware needs of the pool increase or decrease.

While this feature can be applied manually using tools like the Cluster Autoscaler provided by Kubernetes, LKE can manage it automatically through the Cloud Manager and the Linode API.

The LKE autoscaler will only apply changes when the following conditions are met:

If Pods are unschedulable due to an insufficient number of Nodes in the Node Pool, the auto-scaler will increase the number of physical nodes to the amount required.

If Pods can be scheduled on fewer Nodes than are currently available in the Node Pool, Nodes will be drained and removed automatically.

Pods on drained nodes will be immediately rescheduled on pre-existing nodes.

The Node Pool will be decreased to match only the needs of the current workload.

LKE Autoscaling is configured for individual Node Pools directly through the Linode Cloud Manager.

To enable cluster autoscaling, access the cluster’s details page.

Click the Autoscale Pool option at the top-left of each entry in the Node Pools section. The Autoscaling menu will appear.

image

Select the Autoscaler toggle to turn the feature on.

Once the Autoscaler is enabled, the Minimum (Min) and Maximum (Max) fields can be set.

Both the Minimum and Maximum fields accept any number between 1 and 99; each represents a count of nodes in the node pool.

A minimum of 10, for example, allows no fewer than ten nodes in the node pool, while a maximum of 10 allows no more than ten.

Select the Save Changes button to complete the process, and officially activate the autoscaling feature.

For this lab, set 2 as the minimum and 4 as the maximum, then click Save Changes.

image
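The same settings can also be applied programmatically: the Linode API's node pool update endpoint accepts an autoscaler object (a hedged sketch; the cluster ID 12345, pool ID 456, and the $TOKEN variable are placeholders you must replace with your own values):

```shell
# JSON body enabling the autoscaler with min 2 / max 4, as set in this lab
body='{"autoscaler": {"enabled": true, "min": 2, "max": 4}}'
echo "$body"

# To apply it, send a PUT to the node pool endpoint (uncomment to run;
# requires a Linode API token in $TOKEN and your real cluster/pool IDs):
# curl -H "Authorization: Bearer $TOKEN" \
#      -H "Content-Type: application/json" \
#      -X PUT -d "$body" \
#      https://api.linode.com/v4/lke/clusters/12345/pools/456
```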

You can delete an entire cluster using the Linode Cloud Manager.

These changes cannot be reverted once completed.

Click the Kubernetes link in the sidebar.

The Kubernetes listing page will appear and you will see all your clusters listed.

image

You can also delete a cluster from its details page.

image

Click on Delete.

A confirmation pop-up will appear. Enter your cluster’s name and click the Delete button to confirm.

image

The Kubernetes listing page will appear and you will no longer see your deleted cluster.

Summary

In this lab, you learned how to deploy a Kubernetes cluster using the Linode Kubernetes Engine (LKE) in the Cloud Manager.

You then connected to the cluster with kubectl using its kubeconfig file, and managed its node pools, autoscaling, and deletion.