Categories
Kubernetes

MetalLB Deployment and Monitoring on K3S Cluster

To replicate the load balancing services you get in AWS or Azure, I prefer to run a load balancer of my own. In my lab I run an AVI load balancer, but my Pi cluster lives in my home network, where I do not have the resources to deploy a dedicated external load balancer. For this, MetalLB is perfect.

The MetalLB installation is a two-step process: first we deploy all the resources, then we apply the configuration.

MetalLB Deployment

The first part can be done as described in the documentation from MetalLB.

sudo kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
sudo kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml

Awesome. Now we have a new namespace with MetalLB deployed.
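Before moving on it is worth a quick sanity check (not part of the official steps) that the MetalLB pods came up:

```shell
# The controller handles IP assignment; a speaker runs on each node
# to answer ARP requests for the assigned addresses.
sudo kubectl get pods -n metallb-system -o wide
```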

Configuration

For the configuration we need to create a ConfigMap telling MetalLB which IP range it may use. Create a new file called config.yaml with the code below.

Modify the addresses to match a part of your network which MetalLB can control. This should not overlap any DHCP scope.

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.30.0.30-10.30.0.50

and we can apply it with

sudo kubectl apply -f config.yaml

Now we have a functioning load balancer.

If we have a look at the services, we can see that before the deployment the external IP for Traefik was pending. This is because we did not use the serviceLB option when deploying K3S.

Before MetalLB

Once MetalLB is deployed and configured, we can see that Traefik got an IP from the range we supplied in our ConfigMap.
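You can confirm this from the command line; on a default K3S install Traefik lives in the kube-system namespace:

```shell
# EXTERNAL-IP should now show an address from the MetalLB pool
# instead of <pending>.
sudo kubectl get svc -n kube-system traefik
```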

Up next: Monitoring

Monitoring

While doing my research for this project I watched a lot of videos made by Jeff Geerling, and his video series on Pi clusters was very helpful. As with most open source projects there is a wealth of information out there; the trick is finding the right combination for your project. As part of his project he piggybacked on some work done by Carlos Eduardo. I use this in all my K3S projects as it just works. So below is the short of it.

Install the Prerequisites

sudo apt update && sudo apt install -y build-essential golang

Clone the Project

git clone https://github.com/carlosedp/cluster-monitoring.git
cd cluster-monitoring

In the vars.jsonnet file we need to set the K3S options and also update our URL for monitoring.

Here I set the K3S master node URL as well as the Traefik ingress URL.

I also enabled the armExporter to get stats for my Raspberry Pis.
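For illustration, the relevant parts of my vars.jsonnet looked roughly like the fragment below. The exact key names vary between versions of the cluster-monitoring project, so treat this as a sketch and check the comments in the file itself; the IPs and domain shown are from my own network.

```jsonnet
{
  // Enable the K3S customisations and point them at the master node
  k3s: {
    enabled: true,
    master_ip: ['10.30.0.10'],
  },
  // Ingress URL suffix used for the Grafana/Prometheus routes
  suffixDomain: '10.30.0.30.nip.io',
  // Collect temperature/clock stats from the Raspberry Pis
  armExporter: true,
}
```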

Now we can build and deploy the solution

make vendor && make && sudo make deploy
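The deployment takes a while on the Pis; you can keep an eye on it until everything reports Running:

```shell
# All monitoring components land in the monitoring namespace
sudo kubectl get pods -n monitoring
```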

Once the deployment is done you should have access to your dashboard on the URL specified in the vars file.

The default username and password are admin and admin.


K3S Deployment

Setting up the Raspberry Pi’s

This guide is great for setting up the Raspberry Pis and also includes the K3S and MetalLB deployment described here.

Once the OS is installed and the nodes are accessible using SSH, we can begin the deployment process.

As part of the OS installation with the Raspberry Pi Imager I already set my hostnames as required.

I also used my DHCP server to assign fixed IPs to each Pi:

SHomeK3SM01 – 10.30.0.10 – Master Node

SHomeK3SW01 – 10.30.0.11 – Worker Node 1

SHomeK3SW02 – 10.30.0.12 – Worker Node 2

SHomeK3SW03 – 10.30.0.13 – Worker Node 3

You can use hostname and ip a to confirm this

ubuntu@SHomeK3SM01:~$ hostname
SHomeK3SM01
ubuntu@SHomeK3SM01:~$ ip a show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether e4:5f:01:0a:5e:1c brd ff:ff:ff:ff:ff:ff
    inet 10.30.0.10/24 brd 10.30.0.255 scope global dynamic eth0
       valid_lft 82137sec preferred_lft 82137sec
    inet6 fe80::e65f:1ff:fe0a:5e1c/64 scope link
       valid_lft forever preferred_lft forever

Preparing OS

Our first task is to get the OS ready to run containers.

The below needs to be run on all nodes. As per the guide mentioned, this can be done using Ansible, but I used MTPuTTY to just run it on all 4 nodes at once.

sudo sed -i '$ s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1 swapaccount=1/' /boot/firmware/cmdline.txt
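The sed above appends the cgroup flags to the end of the kernel command line. cmdline.txt must stay a single line, so it is worth checking the result:

```shell
# The file should still be one line, now ending in the cgroup flags
cat /boot/firmware/cmdline.txt
grep -q 'cgroup_memory=1' /boot/firmware/cmdline.txt && echo "cgroup flags present"
```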

Fixing iptables bridging can be done by creating a k3s.conf file with the config:

echo "net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1" > ~/k3s.conf

Now this needs to be moved to the correct path, with the correct permissions and owner set:

sudo mv k3s.conf /etc/sysctl.d/k3s.conf
sudo chown root:root /etc/sysctl.d/k3s.conf
sudo chmod 0644 /etc/sysctl.d/k3s.conf
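These settings only take effect after a reboot or once sysctl reloads them; to apply them immediately you can run:

```shell
# Reload all config from /etc/sysctl.d, including the new k3s.conf.
# If sysctl complains about the net.bridge keys, the br_netfilter
# module is not loaded yet; K3S loads it when it starts.
sudo sysctl --system
```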

If you also dislike IPv6 then append the below to your /etc/sysctl.d/99-sysctl.conf file

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv6.conf.eth0.disable_ipv6 = 1

Time for a Reboot before we continue.

Installing K3S

Time to start the K3S setup. This is done easily with an install script and some parameters.

I will start with the single master node.

sudo curl -sfL https://get.k3s.io | K3S_TOKEN="Your Super Awesome Password" sh -s - --cluster-init --disable servicelb

The --cluster-init flag initializes the first node. We also use --disable servicelb as we will be using MetalLB for load balancing.

Give it about a minute to settle. You can check that the node is up using sudo kubectl get nodes.

Next up, the worker nodes. The K3S_URL should be set to the IP of the master node you deployed earlier; the FQDN can also be used.

sudo curl -sfL https://get.k3s.io | K3S_URL="https://10.30.0.10:6443" K3S_TOKEN="Your Super Awesome Password" sh -

Now I like to run a watch and just check that all the nodes come up.

Great, now we have a nearly functioning Kubernetes cluster (it still needs a load balancer).

To make it look a bit prettier I like to label my worker nodes.

Labelling Workers
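A sketch of the labelling, run on the master node and assuming the node names match the lowercased hostnames (Kubernetes lowercases them on registration):

```shell
# Give each worker a role so it shows up nicely in kubectl get nodes
sudo kubectl label node shomek3sw01 node-role.kubernetes.io/worker=worker
sudo kubectl label node shomek3sw02 node-role.kubernetes.io/worker=worker
sudo kubectl label node shomek3sw03 node-role.kubernetes.io/worker=worker
```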

In the next Post we will deploy MetalLB and the Monitoring.


Building a Raspberry Pi Kubernetes Cluster for my Home

Overview

Photo by Asya Vlasova: https://www.pexels.com/photo/raspberry-pie-3065576/

I use a few self-hosted services in my home, and because I tend to break my lab environment weekly, these services run on 2 Raspberry Pis and my NAS on the home side of my network. But in the past few months I have started using more services from my lab in my home, and I would like to move these to a “Production” Kubernetes environment.

The aim here is to build a small, scalable Raspberry Pi cluster to host my dashboards, DNS, monitoring and some other services on the home side of my network.

Bill of Materials

2 X Raspberry Pi 4 8GB

2 X Raspberry Pi 4 2GB

Unifi Switch Flex mini

8 Port USB Power supply

Transparent Case

4 X 64GB Kingston Canvas Select Plus microSD

Hardware Considerations

I installed some small heatsinks on the Pi’s just to lower the temperature a little. They did not overheat but were running close to 80°C and I did not like that.

The cases and cabling are purely aesthetic; as long as the USB cables can supply the power and the Ethernet cables are Cat 5e, it should work.

The Unifi Flex mini was cheapish and sized just right for my initial build.

Software Considerations

I opted to run 64-bit Ubuntu 20.04 on the Pis. I am comfortable using Ubuntu, and this also keeps the OS the same as all the servers in my lab environment. For my Kubernetes environment I will be using K3S for the ease of setup, with MetalLB as my load balancer. Storage will be NFS shares from my Synology NAS or local storage.

To start with, the services I will be running on the cluster include:

Central Grafana Instance for Dashboards

PiHole

OpenSpeedtest for internal Performance Testing

Uptime Kuma for Network and Internet Monitoring

Linktree for Bookmark Management

Network Considerations

I opted to put my Cluster in its own VLAN with Firewall rules allowing only traffic out to the Internet.

To allow Uptime Kuma to monitor devices in my own network, I allowed this traffic on a per-service/IP basis.

There is also a rule to allow for my NFS storage to be accessible from the cluster.

The cluster has its own Class C network with Nodes in the range .10 to .19

MetalLB will be able to manage the .30 – .50 space for load balancer IPs.

