OCP on libvirt

Requirements

  • A libvirt host configured as explained here

https://github.com/openshift/installer/blob/master/docs/dev/libvirt/README.md

  • An HAproxy virtual machine with one interface on your LAN and another on a private network. Once the cluster install has started we'll change the second interface over to the private network created by the installer. Here's a working example haproxy configuration (a minimal sketch follows the link). Change it to match your LAN subnet.

https://raw.githubusercontent.com/rhocpws/notes/master/haproxy.cfg
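
The linked file is a complete configuration. As a rough sketch, the important part is TCP passthrough of ports 80 and 443 from the LAN to the worker nodes that run the ingress routers; the worker addresses below are only examples (check virsh net-dhcp-leases on the installer's network for the real ones):

# TLS passthrough to the ingress routers on the worker nodes (example IPs)
frontend ingress-https
    bind *:443
    mode tcp
    default_backend ingress-https

backend ingress-https
    mode tcp
    balance roundrobin
    server worker-0 192.168.126.51:443 check
    server worker-1 192.168.126.52:443 check
    server worker-2 192.168.126.53:443 check

Port 80 is handled the same way.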

  • A wildcard DNS record on your DNS server pointing to the IP address of the HAproxy VM's LAN interface (an example record is shown below).

Example wildcard: *.apps.example.com
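
How you publish the record depends on your DNS server. With dnsmasq, for example, a single line resolves everything under the wildcard to the HAproxy VM's LAN address (192.168.1.240 here is just a placeholder):

address=/apps.example.com/192.168.1.240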

  • A pull secret from cloud.redhat.com

https://cloud.redhat.com/openshift/install/aws/installer-provisioned

Install golang

sudo dnf install golang -y

Create the golang build dir and cd to it

mkdir -p ~/go/src/github.com/openshift/
cd ~/go/src/github.com/openshift/

Clone the installer and cd to it

  • Change the branch to the version you're planning to install
git clone -b release-4.3 https://github.com/openshift/installer.git
cd installer

Build the installer with libvirt support

TAGS=libvirt hack/build.sh

Copy the installer to a bin dir

sudo cp bin/openshift-install /usr/local/bin/
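
To confirm you're running the binary you just built (and which commit it came from), check the version:

openshift-install version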

Create a cluster install directory

mkdir ~/cluster-install
cd ~/cluster-install

Create an install-config

openshift-install create install-config
  • select an SSH key
  • select libvirt as the platform
  • libvirt connection URI: qemu+tcp://192.168.124.1/system
  • base domain: tt.testing
  • cluster name: ocp
  • copy/paste your pull secret
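
If you want to verify the libvirt connection URI before the install (assuming the libvirt client tools are installed), you can list domains over it:

virsh -c qemu+tcp://192.168.124.1/system list --all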

Edit the install-config

  • Edit the newly created install-config.yaml and change the master and worker replicas to 3.
apiVersion: v1
baseDomain: tt.testing
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 3
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform: {}
  replicas: 3
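
The next step consumes install-config.yaml, so keep a copy if you expect to reinstall:

cp install-config.yaml install-config.yaml.bak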

Create manifests and edit the ingress

openshift-install create manifests
  • edit the ingress definition to match your wildcard domain
vi manifests/cluster-ingress-02-config.yml
apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
  creationTimestamp: null
  name: cluster
spec:
  domain: apps.example.com
status: {}

Start the cluster install

  • Select a release from the same version as the installer branch.

https://openshift-release.svc.ci.openshift.org/

Example: quay.io/openshift-release-dev/ocp-release:4.3.10-x86_64

OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE=quay.io/openshift-release-dev/ocp-release:4.3.10-x86_64 openshift-install create cluster
  • When the installer creates the new private network, change the second interface on your HAproxy VM to that network (a virsh sketch follows below).
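
A sketch of that interface change with virsh; the VM name (haproxy), the MAC address, and the installer's network name are placeholders here, and virsh net-list will show the network the installer created (its name should include the cluster name):

# list libvirt networks and find the one the installer just created
virsh net-list
# find the MAC address of the HAproxy VM's second interface
virsh domiflist haproxy
# detach it from its current network and attach it to the installer's network
virsh detach-interface haproxy network --mac 52:54:00:xx:xx:xx --live --config
virsh attach-interface haproxy network ocp-xxxxx --model virtio --live --config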

Monitor the install process

  • Once the bootstrap node is created, you can SSH into it if you added an SSH key to the install-config
ssh core@192.168.126.10
journalctl -b -f -u bootkube.service
  • Once the API service is up, you can export the kubeconfig and use oc commands to monitor the install from the libvirt host
export KUBECONFIG=~/cluster-install/auth/kubeconfig
  • Check pod creation
oc get pods -A
  • Check cluster operator status
oc get co
  • The install will most likely take longer than the 30 minute timeout (see the note below).
  • Once the console and authentication pods are running you should be able to log in
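
If openshift-install exits with a timeout while the cluster is still converging, you don't need to start over; re-enter the wait loop with the wait-for subcommand, pointing at the same install directory:

openshift-install wait-for install-complete --dir ~/cluster-install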

Log into the console

  • get the console URL
oc get routes -n openshift-console
  • get the kubeadmin password
cat ~/cluster-install/auth/kubeadmin-password
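
You can also log in from the CLI. Assuming the cluster name (ocp) and base domain (tt.testing) used above, the API endpoint would be api.ocp.tt.testing:6443:

oc login -u kubeadmin -p "$(cat ~/cluster-install/auth/kubeadmin-password)" https://api.ocp.tt.testing:6443

oc will prompt you to accept the cluster's untrusted certificate on first login.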