OCP 3.2 on Atomic with Gluster

Hostnames

192.168.122.130    atomic00.local (installer)
192.168.122.131    atomic01.local (master + node)
192.168.122.132    atomic02.local (node)
192.168.122.133    atomic03.local (node)

I'm using libvirt on a Fedora 24 laptop to do all of this.

DNS Wildcard with libvirt

Warning: this is much more of a pain in the ass than it should be.

  • enable a local dnsmasq instance on the host you're creating your VMs on

    vi /etc/dnsmasq.conf
    

    set these fields, creating them if they don't exist:

    bind-interfaces
    interface=lo
    address=/apps.local/192.168.122.131
    

    the address line contains your wildcard domain name and the IP of the node the router will be deployed on; in my case that's atomic01.local (192.168.122.131)

    systemctl restart dnsmasq.service
    
  • set first dns server to localhost

I'm using my laptop, so I'm always connecting to different wifi networks; I just set dhclient to prepend the local dnsmasq instance to my nameservers.

    vi /etc/dhcp/dhclient.conf

add this

    prepend domain-name-servers 127.0.0.1;

restart NetworkManager

    systemctl restart NetworkManager.service

Now you can resolve your wildcard domain from the VMs you create. This would be much easier if you could set custom dnsmasq options via libvirt.
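
A quick way to sanity-check the wildcard from the laptop itself (dig comes from the bind-utils package; foo is just an arbitrary test name):

# Any name under apps.local should come back as the router node's IP
dig +short foo.apps.local @127.0.0.1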

Create virtual machines

  • create 3 Atomic Host virtual machines using rhel-atomic-cloud-7.2.5.x86_64.qcow2, available at access.redhat.com
  • add a second disk to each VM to expand the default volume group
  • attach a cloud-init ISO to each VM (created below)
  • create a 4th full RHEL virtual machine to use as the installer node (a virt-install sketch follows this list)
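
Since I'm on libvirt, here's a minimal virt-install sketch for one of the Atomic VMs, assuming the qcow2 has been copied into /var/lib/libvirt/images and the cloud-init ISO from the next section is already built; all names, sizes, and paths are illustrative:

# Clone the downloaded image and create a blank second disk for atomic01
cp rhel-atomic-cloud-7.2.5.x86_64.qcow2 /var/lib/libvirt/images/atomic01.qcow2
qemu-img create -f qcow2 /var/lib/libvirt/images/atomic01-extra.qcow2 20G

# Import the image as a VM on the default libvirt network, with the cloud-init ISO attached as a cdrom
virt-install --name atomic01 --memory 4096 --vcpus 2 --import \
  --os-variant rhel7.2 \
  --disk path=/var/lib/libvirt/images/atomic01.qcow2,format=qcow2 \
  --disk path=/var/lib/libvirt/images/atomic01-extra.qcow2,format=qcow2 \
  --disk path=/var/lib/libvirt/images/atomic01.iso,device=cdrom \
  --network network=default \
  --noautoconsole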

Creating the cloud-init iso

For each VM we have to create a cloud-init ISO to set the network parameters and password. For some good reading, check out the cloud-init documentation.

mkdir -p cloud-init/atomic01

cd cloud-init/atomic01

vi meta-data

instance-id: atomic01
local-hostname: atomic01.local
network-interfaces: |
  iface eth0 inet static
  address 192.168.122.131
  network 192.168.122.0
  netmask 255.255.255.0
  broadcast 192.168.122.255
  gateway 192.168.122.1
  dns-nameservers 192.168.122.1
bootcmd:
  - ifdown eth0
  - ifup eth0

vi user-data

#cloud-config
password: shift
chpasswd: {expire: False}
ssh_pwauth: True
ssh_authorized_keys:
  - ssh-rsa AAAAB3N......
  • use your id_rsa.pub for the ssh-rsa line

genisoimage -output atomic01.iso -volid cidata -joliet -rock user-data meta-data

Repeat for each OpenShift VM (adjusting the hostname and IP in meta-data) and attach the ISO to the appropriate VM; a loop that builds all three ISOs is sketched below.
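
If you keep each VM's meta-data and user-data under cloud-init/<hostname>/ as above, a small loop can build all three ISOs in one pass (directory layout assumed from the steps above):

# Build a cidata ISO per VM from its own meta-data/user-data pair
cd cloud-init
for vm in atomic01 atomic02 atomic03; do
  ( cd "$vm" && genisoimage -output "../${vm}.iso" -volid cidata -joliet -rock user-data meta-data )
done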

Once booted, you can log in to each VM with the username cloud-user and the password or SSH key you set.

OSE entitlements

  • master and nodes

sudo subscription-manager register

sudo subscription-manager list --available

find the pool ID for the entitlement you want to use

sudo subscription-manager attach --pool=your-pool-id
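
The listing is long; a quick way to see just the subscription names and their pool IDs (the grep pattern is only a convenience):

# Show subscription names alongside their pool IDs
sudo subscription-manager list --available | grep -E 'Subscription Name|Pool ID'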

  • installer

Install, update, and get a RHEL 7 host up on the network; I'm not going to explain that part, go consult the google.

sudo subscription-manager register

sudo subscription-manager list --available

find the pool ID for the entitlement you want to use

sudo subscription-manager attach --pool=your-pool-id

sudo subscription-manager repos --disable="*"

sudo subscription-manager repos \
--enable="rhel-7-server-rpms" \
--enable="rhel-7-server-extras-rpms" \
--enable="rhel-7-server-ose-3.2-rpms"

sudo yum install atomic-openshift-utils

Configure docker and storage
  • on the atomic hosts
  • your disk names may differ if you're on a different hypervisor; a quick verification follows below
    sudo pvcreate /dev/vdb
    sudo vgextend atomicos /dev/vdb
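
To confirm the volume group actually picked up the new disk (just a verification step):

    # atomicos should now show a second PV and more free space
    sudo pvs
    sudo vgs atomicos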
    
Set up SSH keys from the installer to the master and nodes

on the installer run

ssh-keygen

accept the defaults and enter a blank passphrase

for host in atomic01.local \
  atomic02.local \
  atomic03.local; \
  do ssh-copy-id -i ~/.ssh/id_rsa.pub cloud-user@$host; \
  done

enter "yes" and the password for each host

Install OpenShift with Ansible
  • on the installer

sudo vi /etc/ansible/hosts

change hostnames and such to match your environment

[OSEv3:children]
masters
nodes

[OSEv3:vars]

openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

openshift_master_default_subdomain=apps.local

ansible_ssh_user=cloud-user
ansible_sudo=true
containerized=true
deployment_type=openshift-enterprise

[masters]
atomic01.local

[nodes]
atomic01.local openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_schedulable=true
atomic02.local openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
atomic03.local openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
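
Before running the full playbook, it's worth checking that Ansible can reach every host with the inventory settings above; the ping module only verifies SSH connectivity and Python on the remote end:

# Uses ansible_ssh_user and friends from /etc/ansible/hosts
ansible nodes -m ping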

run the ansible playbook

ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
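
Once the playbook finishes, a quick sanity check from the master should show all three nodes Ready; this assumes the installer set up root's kubeconfig on the master, which is the default behavior as far as I know:

# On atomic01.local
sudo su -
oc get nodes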

Users

I'm using the htpasswd plugin because it's quick and simple for a demo environment. Consult the documentation if you'd like to use a different auth method. We have to create the htpasswd file on the installer node because we cannot install rpms on the atomic nodes. In this example I create a user and an admin account; you can of course create as many as you'd like.

  • on the installer node

sudo yum -y install httpd-tools

htpasswd -c htpasswd user

Enter a password twice

htpasswd htpasswd admin

Enter a password twice

scp htpasswd cloud-user@atomic01:

  • on the master

sudo cp htpasswd /etc/origin/master/htpasswd

Using a web browser, log in to the master as both users so OpenShift can create the users in its own database: https://atomic01.local:8443

  • on the master

sudo su -

oadm policy add-cluster-role-to-user cluster-admin admin

You can now delete the installer node and use any machine with the OpenShift client tools to administer the cluster as the admin user.

https://docs.openshift.com/enterprise/latest/cli_reference/get_started_cli.html
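
For example, from any machine with the client tools (the password is whatever you put in the htpasswd file; accept the self-signed certificate when prompted):

# Log in as the cluster admin and list the nodes
oc login https://atomic01.local:8443 -u admin
oc get nodes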

Deploy the registry

For this example I'm using a local file system on the master node to host the registry; consult the OpenShift installation documentation at docs.redhat.com if you want to use a different method.

  • on the master

sudo mkdir /var/mnt/registry

sudo lvcreate -L 5G -n registry atomicos

sudo mkfs.xfs /dev/atomicos/registry

sudo vi /etc/fstab

add the following line

/dev/mapper/atomicos-registry /var/mnt/registry xfs defaults 0 0

sudo mount /var/mnt/registry

sudo chown 1001:root /var/mnt/registry/

sudo chcon -t svirt_sandbox_file_t /var/mnt/registry/

  • on the installer
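
The registry deployment command itself isn't captured above. On OCP 3.2 it would look roughly like this, run while logged in as the cluster-admin user created earlier; the flags are from memory of the 3.2-era docs, so verify them against the installation guide:

# Pin the registry to the infra node and back it with the host path prepared on the master
oadm registry --service-account=registry \
  --selector='region=infra' \
  --mount-host=/var/mnt/registry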

Converged Gluster Storage

  • Create and add 3 additional disks to each node you're going to run gluster on
  • Modify the firewall rules on all gluster nodes

sudo vi /etc/sysconfig/iptables

-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 24007 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 24008 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 2222 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m multiport --dports 49152:49664 -j ACCEPT

sudo systemctl reload iptables

  • create project

oc login

oc new-project storage-project

oadm policy add-scc-to-user privileged -z router

oadm policy add-scc-to-user privileged -z default

  • Install Templates

First install the heketi-templates package on the host you're running commands from. Use the installer node if needed. Then install the templates.

oc create -f /usr/share/heketi/templates

oc get templates

  • Deploy Containers

Run the glusterfs template once for each gluster node (repeat for atomic02.local and atomic03.local), then deploy heketi once:

oc process glusterfs -v GLUSTERFS_NODE=atomic01.local | oc create -f -

oc process deploy-heketi -v \
    HEKETI_KUBE_NAMESPACE=storage-project \
    HEKETI_KUBE_APIHOST='https://atomic01.local:8443' \
    HEKETI_KUBE_INSECURE=y \
    HEKETI_KUBE_USER=admin \
    HEKETI_KUBE_PASSWORD=admin | oc create -f -
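
Before moving on, wait for the glusterfs pods and the deploy-heketi pod to reach Running; -w keeps watching until you interrupt it:

# Everything in storage-project should eventually show Running
oc get pods -n storage-project -w
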
  • Set up the Heketi server

vi topology.json

{
    "clusters": [
        {
            "nodes": [
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "atomic01.local"
                            ],
                            "storage": [
                                "192.168.122.121"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        "/dev/vdc",
                        "/dev/vdd",
                        "/dev/vde"
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "atomic02.local"
                            ],
                            "storage": [
                                "192.168.122.122"
                            ]
                        },
                        "zone": 2
                    },
                    "devices": [
                        "/dev/vdc",
                        "/dev/vdd",
                        "/dev/vde"

                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "atomic03.local"
                            ],
                            "storage": [
                                "192.168.122.123"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        "/dev/vdc",
                        "/dev/vdd",
                        "/dev/vde"

                    ]
                }
            ]
        }
    ]
}

export HEKETI_CLI_SERVER=http://deploy-heketi-storage-project.apps.local

heketi-cli topology load --json=topology.json

Verify it loaded properly

heketi-cli topology info

Create heketi volume

heketi-cli setup-openshift-heketi-storage
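
If I remember the flow correctly, that last command writes out a heketi-storage.json describing the long-lived heketi database volume, which then gets created in OpenShift; treat the file name as an assumption and check what the command actually produced:

# Create the persistent heketi storage objects generated above
oc create -f heketi-storage.json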