Overly Complex Home Server
My new overly complex home server setup, because why not
- small form factor, quiet and low power
- dual gigabit ethernet
- a CPU powerful enough to run a few virtual machines and containers for home infrastructure
- 16GB of RAM
- mirrored data disks
The Hardware
- Shuttle DS81
- Intel Core i5-4690K Processor
- Crucial 16GB Kit (8GBx2) DDR3/DDR3L-1600 MHz
- Transcend TS16GMSA300 16GB mSATA SSD
- ORICO 9528U3 Aluminum Tool Free 2 bay 3.5" SATA to USB 3.0 External Hard Drive Enclosure
- 2x WD Red 4TB
Installed Fedora 22 on the internal 16GB mSATA drive. Base install with no swap (adding that later on the data disks). Nothing fancy yet. Bond and bridge the network interfaces: I once again tried building a bridge on top of the teaming driver; it works when you do it manually, but NetworkManager still cannot handle it. So... traditional bridge on bond it is.
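For reference, the bond + bridge can be stood up with nmcli along these lines. This is a rough sketch: the NIC names (enp1s0, enp2s0), connection names, and default bond mode are placeholders to adapt, not necessarily what I used.
nmcli con add type bridge ifname br0 con-name br0
# the bond is a port of the bridge
nmcli con add type bond ifname bond0 con-name bond0 master br0
# both physical NICs are ports of the bond
nmcli con add type ethernet ifname enp1s0 con-name bond0-port1 master bond0
nmcli con add type ethernet ifname enp2s0 con-name bond0-port2 master bond0
Then put the host's IP configuration on br0 and activate it.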
- Use gdisk to create a RAID auto-detect partition (type fd00) on each disk
- Make a RAID10 array out of the two disks
mdadm --create /dev/md0 --level=10 --metadata=0.90 --raid-devices=2 --layout=f2 /dev/sdb1 /dev/sdc1
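Quick check that the array assembled and is syncing:
cat /proc/mdstat
mdadm --detail /dev/md0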
- Create an encrypted volume from /dev/md0 with a keyfile and detached header. Both live on removable media, mounted at /mnt during setup and at /sdcard in the units below.
truncate -s 2M /mnt/header.img
dd bs=512 count=4 if=/dev/urandom of=/mnt/keyfile01 iflag=fullblock
cryptsetup -v --cipher aes-xts-plain64 --key-size 512 --hash sha1 --iter-time 1000 --use-urandom luksFormat /dev/md0 --header /mnt/header.img /mnt/keyfile01
cryptsetup --key-file /mnt/keyfile01 luksOpen /dev/md0 --header /mnt/header.img crypt01
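With a detached header there is no recovery if header.img or the keyfile is lost, so it's worth stashing copies somewhere that isn't the array itself. The destination below is just a placeholder:
# placeholder destination; keep these copies off the encrypted disks
mkdir -p /root/luks-backup
cp /mnt/header.img /mnt/keyfile01 /root/luks-backup/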
- Create a systemd.service for the encrypted volume
vi /etc/systemd/system/systemd-cryptsetup@crypt01.service
[Unit]
Description=Cryptography Setup for %I
Documentation=man:crypttab(5) man:systemd-cryptsetup-generator(8) man:systemd-cryptsetup@.service(8)
SourcePath=/etc/crypttab
DefaultDependencies=no
Conflicts=umount.target
BindsTo=dev-mapper-%i.device
IgnoreOnIsolate=true
After=cryptsetup-pre.target
Before=cryptsetup.target
RequiresMountsFor=/sdcard/keyfile01
BindsTo=dev-md0.device
After=dev-md0.device
Before=umount.target
[Service]
Type=oneshot
RemainAfterExit=yes
TimeoutSec=0
ExecStart=/usr/lib/systemd/systemd-cryptsetup attach 'crypt01' '/dev/md0' '/sdcard/keyfile01' 'luks,header=/sdcard/header.img'
ExecStop=/usr/lib/systemd/systemd-cryptsetup detach 'crypt01'
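To test the unit by hand (assuming the removable media is mounted at /sdcard):
systemctl daemon-reload
systemctl start systemd-cryptsetup@crypt01.service
# the mapping should now exist
ls -l /dev/mapper/crypt01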
- Create PV and VG on the encrypted volume
pvcreate /dev/mapper/crypt01
vgcreate --dataalignment 1024K --physicalextentsize 262144K pearl_vg /dev/mapper/crypt01
- Create an LV and filesystem
lvcreate -L50g -n virt_lv pearl_vg
mkfs.xfs /dev/pearl_vg/virt_lv
mkdir /srv/virt
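The mount unit below references the filesystem by UUID; grab it with blkid:
blkid -s UUID -o value /dev/pearl_vg/virt_lv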
- Create a systemd.mount unit for any mount points that exist on the encrypted drive, so that they mount after the drive is opened
vi /etc/systemd/system/srv-virt.mount
[Unit]
SourcePath=/etc/fstab
Documentation=man:fstab(5) man:systemd-fstab-generator(8)
Before=local-fs.target
Requires=systemd-cryptsetup@crypt01.service
After=systemd-cryptsetup@crypt01.service
[Mount]
What=/dev/disk/by-uuid/7989f864-fce8-42c7-8a95-582044a4ff7a
Where=/srv/virt
Type=xfs
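Test the mount unit by hand:
systemctl daemon-reload
systemctl start srv-virt.mount
df -h /srv/virt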
- Create a systemd.target to group all the stuff I want to load once the LUKS header and key are available. “pearl.target” because that's the hostname.
vi /etc/systemd/system/pearl.target
[Unit]
Description=Operational Target
Documentation=man:systemd.special(7)
Requires=default.target
Conflicts=rescue.service rescue.target
After=basic.target rescue.service rescue.target
AllowIsolate=yes
mkdir /etc/systemd/system/pearl.target.wants
cd /etc/systemd/system/pearl.target.wants
ln -s /etc/systemd/system/systemd-cryptsetup@crypt01.service
ln -s /usr/lib/systemd/system/docker.service
ln -s /usr/lib/systemd/system/libvirtd.service
ln -s /etc/systemd/system/srv-virt.mount
- Disable services that I don't want to start until after the encrypted volume is open
systemctl disable docker.service
systemctl disable libvirtd.service
- Modify services so they require cryptsetup to run first. Note the drop-ins need a [Unit] section header, and an After= line so the ordering actually happens (Requires= alone doesn't order).
systemctl edit docker.service
[Unit]
Requires=systemd-cryptsetup@crypt01.service
After=systemd-cryptsetup@crypt01.service
systemctl edit libvirtd.service
[Unit]
Requires=systemd-cryptsetup@crypt01.service
After=srv-virt.mount
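With all of that in place, bringing the encrypted half of the box up after a boot should be roughly: mount the media holding the key material, then start the target (AllowIsolate=yes also makes systemctl isolate pearl.target an option):
systemctl daemon-reload
systemctl start pearl.target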
Docker
LVM Thin Pool
After some reading I decided to go with LVM thin pools for my Docker storage.
- Create a thin pool logical volume
lvcreate -L50g -T pearl_vg/docker-pool
- Edit the docker-storage config
vi /etc/sysconfig/docker-storage
DOCKER_STORAGE_OPTIONS=--storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/pearl_vg-docker--pool
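Restart Docker, and assuming the devicemapper driver picks up the new options, docker info should report the pool:
systemctl restart docker
docker info | grep -i pool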
Kubernetes
I will create a VM to act as the Kubernetes master and use the existing server as the minion.
pearl 192.168.1.22 (minion)
kuber 192.168.1.19 (master, vm)
- Once created, set the master VM to autostart
virsh autostart kuber
Minion
dnf install kubernetes-node
vi /etc/kubernetes/kubelet
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
# KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname_override=pearl.zews-home.org"
# location of the api-server
KUBELET_API_SERVER="--api_servers=http://kuber.zews-home.org:8080"
# Add your own!
KUBELET_ARGS=""
vi /etc/kubernetes/config
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow_privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://kuber.zews-home.org:8080"
systemctl restart kube-proxy
systemctl restart kubelet
cd /etc/systemd/system/pearl.target.wants
ln -s /usr/lib/systemd/system/kubelet.service
ln -s /usr/lib/systemd/system/kube-proxy.service
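Quick sanity check that both node services came up:
systemctl status kubelet kube-proxy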
Master
dnf install kubernetes-master etcd
vi /etc/kubernetes/config
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow_privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://kuber.zews-home.org:8080"
systemctl disable firewalld
systemctl stop firewalld
vi /etc/kubernetes/apiserver
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
# The port on the local server to listen on.
# KUBE_API_PORT="--port=8080"
# Port minions listen on
# KUBELET_PORT="--kubelet_port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:4001"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
# Add your own!
KUBE_API_ARGS=""
vi /etc/etcd/etcd.conf
- Change the following
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"
systemctl restart etcd
systemctl restart kube-apiserver
systemctl restart kube-controller-manager
systemctl restart kube-scheduler
systemctl enable etcd
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
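Quick checks that etcd and the apiserver are answering; both expose a /version endpoint on their client ports:
curl http://127.0.0.1:4001/version
curl http://kuber.zews-home.org:8080/version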
- Add pearl as a node
vi /tmp/node.json
{
  "apiVersion": "v1",
  "kind": "Node",
  "metadata": {
    "name": "pearl",
    "labels": { "name": "pearl-node-label" }
  },
  "spec": {
    "externalID": "pearl.zews-home.org"
  }
}
kubectl create -f /tmp/node.json
kubectl get nodes
- If the status says "Ready" then we're good
Rancher
docker run -d --restart=always -p 8080:8080 rancher/server
- Open a browser and go to http://$HOSTNAME:8080
- Add a host, click custom and copy/paste the command into your terminal
- Now the Rancher server and agent are running
Atomic
- Go here and download the atomic host qcow2 image https://getfedora.org/cloud/download/atomic.html
- Copy the image to /srv/virt/images twice, as atomic01.qcow2 and atomic02.qcow2
- Create two new virtual machines with virt-manager, "Import existing disk image", using the two qcow2 copies
- Create and add a second 10GB disk to each new virtual machine
- Follow the steps here http://www.projectatomic.io/docs/quickstart/ for virt-manager
- Then follow these steps http://www.projectatomic.io/docs/gettingstarted/
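If you'd rather script the VM creation than click through virt-manager, virt-install can import the images. A sketch, where the sizes, os-variant, the br0 bridge name, and the cloud-init ISO (which the Atomic quickstart has you build) are all assumptions to adapt:
virt-install --name atomic01 --ram 2048 --vcpus 2 \
  --import --os-variant fedora22 \
  --disk path=/srv/virt/images/atomic01.qcow2,format=qcow2 \
  --disk size=10 \
  --disk path=/srv/virt/images/init.iso,device=cdrom \
  --network bridge=br0 --noautoconsole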
More to come as soon as I do it...