A Four Node Kubernetes Cluster for Your Home Lab

Introduction

While I had this on my mind, I decided to build a MicroK8s cluster from a bunch of Raspberry Pis that I have lying around. Why? In practice it is more useful than minikube, which is also great for local development.

What we will be using

  1. Ubuntu 22.04.3 Linux distribution
    • 32-bit (armhf) for the Pi 3B control node
    • 64-bit (arm64) for the four cluster nodes
  2. Ansible
  3. MicroK8s
  4. Prometheus, Grafana, node_exporter

We will be using five Raspberry Pis: one as an Ansible control node to deploy software and four for our MicroK8s cluster.

Hardware Setup

IP Allocation

I added address reservations on my router to automatically assign an IP to a known MAC address on each Raspberry Pi. This ensures that the Pi will have the same IP address each time it boots, so there is no messing with the network configuration in the operating system every time I re-install the OS. Since this is technically a test environment where I will burn-and-build at will, I will not go through this exercise here; there are many ways to do this on a DHCP server.

To obtain the MAC address, I first installed an OS on each Pi and noted down the MAC.

OS Installation

Use the Raspberry Pi Imager. You can download the software here: https://www.raspberrypi.com/software/

Ansible

Ansible is used for automation.

Copy the Ansible SSH public key to the other hosts from the control node (cloudletcmd):

ssh-copy-id -i ansible_rsa.pub pi@192.168.1.200
ssh-copy-id -i ansible_rsa.pub pi@192.168.1.201
ssh-copy-id -i ansible_rsa.pub pi@192.168.1.202
ssh-copy-id -i ansible_rsa.pub pi@192.168.1.203

ansible-playbook update.yml --ask-become-pass
ansible all -m ping
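
For reference, here is a minimal sketch of the inventory and the update.yml playbook used above. The group name and playbook contents are assumptions; the hostnames and IPs come from the address reservations described earlier.

# inventory (e.g. /etc/ansible/hosts) -- assumed layout
[cloudlets]
cloudlet01 ansible_host=192.168.1.200
cloudlet02 ansible_host=192.168.1.201
cloudlet03 ansible_host=192.168.1.202
cloudlet04 ansible_host=192.168.1.203

[cloudlets:vars]
ansible_user=pi

# update.yml -- apply pending updates on all hosts
---
- hosts: all
  become: true
  tasks:
    - name: Update apt cache and upgrade all packages
      apt:
        update_cache: yes
        upgrade: dist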

For microk8s

sudo nano /boot/firmware/cmdline.txt

append the following to the end of the existing line (cmdline.txt must remain a single line):
cgroup_enable=memory cgroup_memory=1

then reboot
sudo reboot
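
After the reboot, a quick sanity check that the options took effect:

cat /proc/cmdline              # should now include cgroup_enable=memory cgroup_memory=1
grep memory /proc/cgroups      # the memory controller should be listed as enabled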

MicroK8s

sudo snap install microk8s --classic
sudo apt install linux-modules-extra-raspi   # extra kernel modules the MicroK8s Raspberry Pi docs recommend
sudo microk8s stop; sudo microk8s start
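
Before adding nodes, it is worth confirming MicroK8s reports ready on each Pi:

sudo microk8s status --wait-ready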

sudo microk8s add-node    # run this on the main node; repeat it for each node you want to join

The command prints join instructions like the following:

From the node you wish to join to this cluster, run the following:
microk8s join 192.168.1.200:25000/cfd6f5addae6e0959a4c7f1de1f47a47/583d9c2f5994

Use the '--worker' flag to join a node as a worker not running the control plane, eg:
microk8s join 192.168.1.200:25000/cfd6f5addae6e0959a4c7f1de1f47a47/583d9c2f5994 --worker

If the node you are adding is not reachable through the default interface you can use one of the following:
microk8s join 192.168.1.200:25000/cfd6f5addae6e0959a4c7f1de1f47a47/583d9c2f5994

My node list, which also serves as /etc/hosts entries so the Pis can resolve each other by name:

192.168.1.200 cloudlet01
192.168.1.201 cloudlet02
192.168.1.202 cloudlet03
192.168.1.203 cloudlet04

Each run of add-node generates a fresh token, so subsequent joins look like:

microk8s join 192.168.1.200:25000/0c4812f17a41a0a6c9e0a931d263d0eb/583d9c2f5994

Finally, enable HA clustering on the main node:

microk8s enable ha-cluster
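
Once all four nodes have joined, verify the cluster from the main node:

sudo microk8s kubectl get nodes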

kube config

To populate ~/.kube/config, run sudo microk8s config and add the output to your config file.
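
A minimal sketch, assuming ~/.kube/config does not exist yet (otherwise merge the output by hand):

mkdir -p ~/.kube
sudo microk8s config > ~/.kube/config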

sudo usermod -a -G microk8s pi   # lets the pi user run microk8s without sudo (log out and back in to take effect)

Prometheus

Set up

Manually setting up Prometheus node exporter

sudo groupadd --system prometheus
sudo useradd -s /sbin/nologin --system -g prometheus prometheus
sudo apt update
sudo apt -y install wget curl vim
sudo apt autoremove
# grab the latest arm64 release (the version number below may differ; the 32-bit control Pi would need linux-armv7 instead)
curl -s https://api.github.com/repos/prometheus/node_exporter/releases/latest | grep browser_download_url | grep linux-arm64 | cut -d '"' -f 4 | wget -qi -
tar xvf node_exporter-1.5.0.linux-arm64.tar.gz
cd node_exporter-1.5.0.linux-arm64/
sudo mv node_exporter /usr/local/bin
node_exporter --version

A basic service config for node_exporter:
sudo vim /etc/systemd/system/node_exporter.service

[Unit]
Description=Prometheus Node Exporter
Documentation=https://github.com/prometheus/node_exporter
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
User=prometheus
Group=prometheus
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/usr/local/bin/node_exporter \
--collector.cpu \
--collector.diskstats \
--collector.filesystem \
--collector.loadavg \
--collector.meminfo \
--collector.filefd \
--collector.netdev \
--collector.stat \
--collector.netstat \
--collector.systemd \
--collector.uname \
--collector.vmstat \
--collector.time \
--collector.mdadm \
--collector.zfs \
--collector.tcpstat \
--collector.bonding \
--collector.hwmon \
--collector.arp \
--web.listen-address=:9100 \
--web.telemetry-path="/metrics"

[Install]
WantedBy=multi-user.target

sudo systemctl daemon-reload; sudo systemctl start node_exporter; sudo systemctl enable node_exporter; sudo systemctl status node_exporter
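
With node_exporter running on each node, Prometheus just needs a scrape job pointing at port 9100. A minimal sketch for prometheus.yml, using the cluster IPs from earlier (the job name is an assumption, and the file location depends on how you run Prometheus):

scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets:
          - '192.168.1.200:9100'
          - '192.168.1.201:9100'
          - '192.168.1.202:9100'
          - '192.168.1.203:9100'

You can also confirm an exporter is answering with curl http://192.168.1.200:9100/metrics.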

Remove microk8s

sudo microk8s reset

sudo snap remove microk8s

Resources

https://microk8s.io/docs/install-raspberry-pi

Up and running chef-solo instance using Vagrant

This post describes a quick and easy way to get a chef-solo instance up and running using Vagrant. It is intended for learning Chef, not for production use.

I am using Mac OS X for this tutorial. I am sure it is similar for Windows and Linux.

Prerequisites

  • Oracle VirtualBox
  • Vagrant
  • Working knowledge of Vagrant

Setting It Up

Create a directory for the Vagrantfile and bootstrap.sh. I named mine chef_solo. Change into that directory and create a Vagrantfile with the following:

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.provision :shell, path: "bootstrap.sh"
end

This will spin up an Ubuntu server 14.04 64 bit instance and run bootstrap.sh.

Now edit bootstrap.sh and add the following:

#!/usr/bin/env bash

apt-get update
apt-get install -y curl
curl -L https://www.opscode.com/chef/install.sh | bash
chef-solo -v

Once the VM is up and running, bootstrap.sh will apply updates, install curl, download and install chef-solo using the Omnibus installer, and print out the version of Chef.

Chef-solo up and running

Now simply run vagrant up and your chef-solo instance will be installed.
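
To actually converge something with it, chef-solo needs a solo.rb and a run list. Here is a minimal sketch to try inside the VM (vagrant ssh); the hello cookbook and the paths are hypothetical, purely for experimenting:

mkdir -p ~/chef/cookbooks/hello/recipes
# a one-resource recipe that creates an empty file
echo 'file "/tmp/hello"' > ~/chef/cookbooks/hello/recipes/default.rb
echo 'cookbook_path "/home/vagrant/chef/cookbooks"' > ~/chef/solo.rb
echo '{ "run_list": [ "recipe[hello]" ] }' > ~/chef/node.json
sudo chef-solo -c ~/chef/solo.rb -j ~/chef/node.json

After the run, /tmp/hello should exist.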

Icinga Monitoring System

After many installations of Icinga, I finally decided to cobble together the documentation that I use and add it to a central location. Links to the original documents are found throughout this post. This document provides instructions on setting up the Icinga monitoring system. It will go over the following:

  • Icinga and IDOUtils
  • Icinga Web
  • PNP4Nagios
  • Icinga Mobile

Pre-Requisites

  • Ubuntu Server 14.04 LTS 64 Bit
  • 2GB RAM
  • 2 CPU
  • 60 GB HDD

Icinga and IDOUtils

The installation of the following software will be covered in this section:

  • Icinga Core
  • Icinga Classic UI (the “CGIs”)
  • Icinga IDOUtils
    • MySQL or PostgreSQL as the database
  • Icinga Docs

Icinga Core

This document describes the procedure for installing Icinga core using the official packages. There is also documentation for installing the package maintainer version (PPA) which may be more up to date.

The following command will install all required packages:

sudo apt-get install icinga icinga-doc icinga-idoutils mysql-server libdbd-mysql mysql-client

This installation will use MySQL as the database server. PostgreSQL can also be used, but will not be covered in this document.

Plugins

The following command will install the Nagios plugins.

sudo apt-get install nagios-plugins

Configuration

Enable IDO2DB

Edit the following file:

/etc/default/icinga

change the following line:

IDO2DB=no

to

IDO2DB=yes

Then start the ido2db service:

sudo service ido2db start

check to see if it is enabled on startup:

sh -x /etc/init.d/ido2db start | grep IDO2DB

Enable IDOMOD

Copy the example configuration file into the Icinga configuration directory:

sudo cp /usr/share/doc/icinga-idoutils/examples/idoutils.cfg-sample /etc/icinga/modules/idoutils.cfg

Restart Icinga with:

sudo service icinga restart

Check the logfile (/var/log/icinga/icinga.log) to see if the module has been loaded and connected to IDO2DB:

[1399482608] idomod: IDOMOD 1.10.3 (02-11-2014) Copyright(c) 2005-2008 Ethan Galstad, Copyright(c) 2009-2013 Icinga Development Team (https://www.icinga.org)
[1399482608] idomod: Successfully connected to data sink. 0 queued items to flush.
[1399482608] Event broker module 'IDOMOD' version '1.10.3' from '/usr/lib/icinga/idomod.so' initialized successfully.

Enable External Commands

This may have been enabled during the package installation. To check, see if:

check_external_commands=1

exists in /etc/icinga/icinga.cfg.

If it is set to 0, change it to 1 as above and do the following:

sudo service icinga stop
sudo dpkg-statoverride --update --add nagios www-data 2710 /var/lib/icinga/rw
sudo dpkg-statoverride --update --add nagios nagios 751 /var/lib/icinga
sudo service icinga start

Classic UI Authentication

Users can be added to access the portal with the following command:

sudo htpasswd /etc/icinga/htpasswd.users <username>
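
If the htpasswd file does not exist yet, create it with -c; icingaadmin is the account the Classic UI authorizes by default in cgi.cfg:

sudo htpasswd -c /etc/icinga/htpasswd.users icingaadmin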

PNP4Nagios

“PNP is an addon for the Nagios Network Monitoring System. PNP provides easy to use, easy to configure RRDTools based performance charts feeded by the performance data output of the Nagios Plugins.” – http://docs.pnp4nagios.org/pnp-0.6/start

This document covers installing PNP for Icinga.

Requirements

  • Icinga running
  • Icinga Classic UI or Icinga Web running

Setup

First, refresh the package lists with:

sudo apt-get update

Then install the pnp4nagios package:

sudo apt-get install --no-install-recommends pnp4nagios

Without --no-install-recommends, Icinga would be installed as a dependency if not already installed. Icinga installation was covered in the previous section.

Enable RRDCached in PNP4Nagios

“rrdcached is a daemon that receives updates to existing RRD files, accumulates them and, if enough have been received or a defined time has passed, writes the updates to the RRD file. A flush command may be used to force writing of values to disk, so that graphing facilities and similar can work with up-to-date data.” – http://oss.oetiker.ch/rrdtool/doc/rrdcached.en.html

The general purpose is to decrease I/O and increase performance.

Installation

Install the software with the following command:

sudo apt-get install rrdcached

Now, add it to system init:

sudo update-rc.d rrdcached defaults

RRDCACHED Configuration

Create the cache directory:

sudo mkdir -p /var/cache/rrdcached

Add the nagios user to the www-data group with:

sudo usermod -aG www-data nagios

Now the nagios user account can write to web directories.

Edit the configuration file, /etc/default/rrdcached, and add the following line:

OPTS="-w 1800 -z 1800 -j /var/cache/rrdcached -s nagios -m 0660 -l unix:/var/run/rrdcached.sock"

Now, restart the service:

sudo service rrdcached restart

PNP4Nagios Configuration for RRDCACHED

Add the following line to /etc/pnp4nagios/process_perfdata.cfg

RRD_DAEMON_OPTS = unix:/var/run/rrdcached.sock

Then modify the following line in /etc/pnp4nagios/config.php

$conf['RRD_DAEMON_OPTS'] = 'unix:/var/run/rrdcached.sock';

Also, change the nagios_base setting in config.php to the following:

$conf['nagios_base'] = "/cgi-bin/icinga";

Restart Services

sudo service rrdcached restart

and

sudo service apache2 restart

Enable PNP4Nagios in Icinga

Configuration

Add performance data in icinga.cfg. Edit /etc/icinga/icinga.cfg and add the following lines:

# performance data
process_performance_data=1

service_perfdata_file_template=DATATYPE::SERVICEPERFDATA\tTIMET::$TIMET$\tHOSTNAME::$HOSTNAME$\tSERVICEDESC::$SERVICEDESC$\tSERVICEPERFDATA::$SERVICEPERFDATA$\tSERVICECHECKCOMMAND::$SERVICECHECKCOMMAND$\tHOSTSTATE::$HOSTSTATE$\tHOSTSTATETYPE::$HOSTSTATETYPE$\tSERVICESTATE::$SERVICESTATE$\tSERVICESTATETYPE::$SERVICESTATETYPE$

host_perfdata_file_template=DATATYPE::HOSTPERFDATA\tTIMET::$TIMET$\tHOSTNAME::$HOSTNAME$\tHOSTPERFDATA::$HOSTPERFDATA$\tHOSTCHECKCOMMAND::$HOSTCHECKCOMMAND$\tHOSTSTATE::$HOSTSTATE$\tHOSTSTATETYPE::$HOSTSTATETYPE$

service_perfdata_file_mode=a
host_perfdata_file_mode=a

service_perfdata_file_processing_interval=30
host_perfdata_file_processing_interval=30

service_perfdata_file=/var/spool/pnp4nagios/nagios/service-perfdata
host_perfdata_file=/var/spool/pnp4nagios/nagios/host-perfdata

service_perfdata_file_processing_command=pnp-bulknpcd-service
host_perfdata_file_processing_command=pnp-bulknpcd-host

Add commands to /etc/icinga/commands.cfg:

# pnp4nagios
define command{
        command_name    pnp-bulknpcd-service
        command_line    /bin/mv /var/spool/pnp4nagios/nagios/service-perfdata /var/spool/pnp4nagios/npcd/service-perfdata.$TIMET$
}

define command{
        command_name    pnp-bulknpcd-host
        command_line    /bin/mv /var/spool/pnp4nagios/nagios/host-perfdata /var/spool/pnp4nagios/npcd/host-perfdata.$TIMET$
}

Web Configuration

Edit the /etc/apache2/conf-available/pnp4nagios.conf file and change the authentication directives to the following (reusing the Icinga htpasswd file):

        AuthName "Icinga Access"
        AuthType Basic
        AuthUserFile /etc/icinga/htpasswd.users

Then reload apache with sudo service apache2 reload.

Enable NPCD

Edit /etc/default/npcd and change the following line to look like:

Run="yes"

Then start the service with:

sudo service npcd start

Integration Into Icinga

Create the file /etc/icinga/objects/pnptemplates.cfg

define host {
        name       pnp-hst
        register   0
        action_url /pnp4nagios/graph?host=$HOSTNAME$' class='tips' rel='/pnp4nagios/popup?host=$HOSTNAME$&srv=_HOST_
}

define service {
        name       pnp-svc
        register   0
        action_url /pnp4nagios/graph?host=$HOSTNAME$&srv=$SERVICEDESC$' class='tips' rel='/pnp4nagios/popup?host=$HOSTNAME$&srv=$SERVICEDESC$
}

Then use them in host and service configurations:

define host{
        name                            generic-host
        use                             pnp-hst
...

define service{
        name                            generic-service
        use                             pnp-svc
...

Copy the ssi file to the web UI:

sudo cp /usr/share/doc/pnp4nagios/examples/ssi/status-header.ssi /usr/share/icinga/htdocs/ssi/status-header.ssi
sudo chmod 644 /usr/share/icinga/htdocs/ssi/status-header.ssi

then reload icinga for the changes to take effect:

sudo service icinga reload

Apache2 on Ubuntu 14.04 LTS

I am just poking around the latest LTS from Canonical. It looks like the Apache configs have a new directory hierarchy. There are now conf-available and conf-enabled directories. A small intellectual jump, before looking things up, led me to the command a2enconf. How nice! I wonder if this will complicate things…
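
For example, dropping a snippet into conf-available and enabling it looks something like this (myconf is a hypothetical name):

sudo vi /etc/apache2/conf-available/myconf.conf
sudo a2enconf myconf
sudo service apache2 reload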

How-to Install Linux Integration Components on Ubuntu 12.04 LTS

I am using Ubuntu Server 12.04.04 LTS that I have migrated from VMWare ESXi to Hyper-V. Microsoft has supplied the Linux community with Linux Integration Components (LIC) that provide additional support for Ubuntu Linux.

Integration services are built in modules that need to be added to startup. To add them edit the modules file:

sudo vi /etc/initramfs-tools/modules

append the following modules to the file:

hv_vmbus
hv_storvsc
hv_blkvsc
hv_netvsc

Now, update initramfs:

sudo update-initramfs -u

and now reboot.

Upon reboot, you should see the modules with lsmod.
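
For example, the hv_ modules should appear in the output of:

lsmod | grep hv_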

Thanks to:

http://baudlabs.com/how-to-install-hyper-v-integration-services-in-ubuntu-12-04-lts/

Java on Ubuntu 13.04 (Raring Ringtail)

Installing Oracle Java on Ubuntu can be a pretty daunting task. After reading the Ubuntu community Java documentation, I came across this nice GitHub project: oab-java6.

Simply run

git clone https://github.com/flexiondotorg/oab-java6.git

Then cd into oab-java6 and run

sudo ./oab-java.sh

This will set up a local apt repository for Sun Java 6 and Oracle Java 7. Now run

sudo apt-get update

to add the packages from the repository. As the documentation states, you can now run

sudo apt-get install sun-java6-jre

To install Java 6 runtime environment.

One small thing to note:

I had to run

sudo update-alternatives --config java

and select the Sun JRE since the system defaulted to IcedTea.

Now do

java -version

to see what you have!

Install Oracle JDK 8 on Ubuntu Server

I am trying to evaluate the MySQL plugin for New Relic, so I have to install a JRE on the MySQL server. Gut instinct tells me to go with the Oracle JRE rather than the alternatives. So, here is what I did to install JRE 8.

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer

If you receive an error while adding the ppa, something like this:

sudo: add-apt-repository: command not found

then

sudo apt-get install python-software-properties

Add Your own Script to Default Run Levels in Ubuntu

This uses the old Sys V init.d scripts, not Upstart. I may write a post on Upstart at a later date. So, you write a cool script that you want to run at startup. Place it in /etc/init.d and make it executable. Next, run the following command:

sudo update-rc.d my_cool_script defaults

This will set the script to run on all the default run levels.
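
For reference, here is a minimal sketch of such a script (my_cool_script is a hypothetical name); update-rc.d reads the LSB header block to decide the run levels:

#!/bin/sh
### BEGIN INIT INFO
# Provides:          my_cool_script
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Runs my cool script at boot
### END INIT INFO

case "$1" in
  start)
    echo "Starting my_cool_script"
    # launch your process or one-shot task here
    ;;
  stop)
    echo "Stopping my_cool_script"
    # stop the process here
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac

exit 0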

Change the Envelope From Header for apache, php

On an Ubuntu installation, the PHP sendmail function will send out email as the Apache user, www-data@servername.domain. This can cause the email to be flagged as spam by some mail servers. To get around this, edit your php.ini and restart Apache.

Edit the php.ini

sudo vi /etc/php5/apache2/php.ini

Edit the following setting:

sendmail_path = "/usr/sbin/sendmail -t -i -fno-reply@yourdomain.com -Fno-reply"

Now restart apache:

sudo service apache2 restart

Install vmware tools on Ubuntu 13.04 (Raring Ringtail)

I am doing this on server edition.
First do an apt-get update, then apt-get upgrade to bring the entire system current. Then install build tools and kernel headers with the following command:

sudo apt-get install build-essential linux-headers-$(uname -r)

Now click on the Install VMware Tools option for your guest. Then, on the VM, run the following command:

sudo mount /dev/sr0 /mnt

Then copy the tools image to the Guest:

cp /mnt/VMwareTools-8.6.10-913593.tar.gz /tmp

You may have a different version of VMware Tools. Now run the following commands:

cd /tmp
tar -xzvf VMwareTools-8.6.10-913593.tar.gz
cd vmware-tools-distrib

Now start the install:

sudo ./vmware-install.pl

I chose all the default values. I did not install any options marked [EXPERIMENTAL]. When the installer was searching for valid kernel headers, I received an error that they could not be found in "". A little searching revealed that some of the headers had moved, so:

sudo ln -s /usr/src/linux-headers-$(uname -r)/include/generated/uapi/linux/version.h /usr/src/linux-headers-$(uname -r)/include/linux/version.h