Origin 1.5 on CentOS

This is a basic setup of a two-VM OpenShift environment, with metrics, logging, and the registry on an NFS share.

Requirements
  • 2 VMs using CentOS 7 Atomic Host
  • 1 installation host to run git and ansible commands from
  • 1 NFS server (not documented here)
  • Basic Linux administration knowledge

1. Atomic Hosts

Download CentOS 7 Atomic

https://wiki.centos.org/SpecialInterestGroup/Atomic/Download/

I tested with the qcow2 image on a Fedora 25 host running libvirt.
Create two VMs using the CentOS Atomic image of your choice.
For an initial test I used only 2 VMs, a master and a node, each with 2 vCPUs and 4 GB of RAM. This is a very small POC use case, but you can scale the environment as you wish.

In this example I created atomic01 and atomic02

1.1 Create cloud-init images

Basic documentation here
http://www.projectatomic.io/docs/quickstart/

mkdir cloud-init
cd cloud-init
vi meta-data
instance-id: atomic01
local-hostname: atomic01.example.org
vi user-data
#cloud-config
password: put.default.password.here
chpasswd: {expire: False}
ssh_pwauth: True
ssh_authorized_keys:
  - ssh-rsa AAAA.......

Change "put.default.password.here" to your password of choice and put your installation host's pub ssh key at the bottom.

Create ISO image

genisoimage -output atomic01.iso -volid cidata -joliet -rock user-data meta-data

Create an ISO for each atomic host and attach each to the appropriate VM.
Boot the VMs, log in via the console, and set static IP addresses for each VM or use static DHCP assignments.
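If you haven't created the VMs yet, one way to create each VM with its cloud-init ISO already attached is virt-install. A rough sketch for my libvirt setup (paths, sizes and the network name are assumptions, adjust to taste):

# import a copy of the downloaded Atomic qcow2 and attach the cloud-init ISO as a CD-ROM
sudo virt-install --name atomic01 \
  --memory 4096 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/atomic01.qcow2 \
  --disk path=/var/lib/libvirt/images/atomic01.iso,device=cdrom \
  --os-variant centos7.0 \
  --network network=default \
  --import --noautoconsole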

Note: the default login name is centos
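Once logged in at the console, a static address can be set with nmcli, for example (the connection name, addresses and DNS server here are assumptions; adjust for your network):

# show the active connection name, e.g. "eth0" or "Wired connection 1"
nmcli con show
# assign a static IPv4 address, gateway and DNS to that connection
sudo nmcli con mod "eth0" ipv4.method manual \
  ipv4.addresses 192.168.1.11/24 ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.1
sudo nmcli con up "eth0"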

1.2 Expand Atomic Host Storage - optional

Add a second virtual disk to each Atomic Host VM.

Log in to each host.

vi /etc/sysconfig/docker-storage-setup
GROWPART=true
DEVS="/dev/vdb"
ROOT_SIZE=4G

Replace /dev/vdb with the name of the disk device you just added.

sudo docker-storage-setup

sudo xfs_growfs /
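To confirm the new disk was picked up, you can check the LVM layout and the root filesystem, for example:

# the docker volume group / thin pool should now include the second disk
sudo lvs
# the root filesystem should reflect ROOT_SIZE after xfs_growfs
df -h /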

2. Installation host

I'm using a Fedora 25 host to run the installation from. Whatever you're using, make sure you have git and ansible installed.
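On Fedora that's roughly:

# install git and ansible on the installation host
sudo dnf install -y git ansible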

clone openshift-ansible

git clone https://github.com/openshift/openshift-ansible.git
cd openshift-ansible
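Since this deploys Origin 1.5, you may want to use a matching release branch of openshift-ansible rather than master; assuming the repository has a release-1.5 branch:

git checkout release-1.5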

create an inventory file

vi openshift-inventory
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
# ansible_python_interpreter=/usr/bin/python3 # only needed on Fedora
ansible_ssh_user=centos
ansible_become=yes
deployment_type=origin
openshift_deployment_type=origin
debug_level=2
containerized=true
openshift_release=v1.5.0
openshift_image_tag=v1.5.0

openshift_docker_disable_push_dockerhub=True

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_htpasswd_users={'admin': 'HTPASSWD_HASH', 'user': 'HTPASSWD_HASH'}

osm_use_cockpit=true
openshift_master_default_subdomain=apps.FQDN
openshift_hosted_router_selector='region=infra'
openshift_hosted_registry_selector='region=infra'

# REGISTRY
openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_host=NFS_SERVER.FQDN
openshift_hosted_registry_storage_nfs_directory=NFS_PATH
openshift_hosted_registry_storage_volume_name=REGISTRY_DIR_IN_NFS_PATH
openshift_hosted_registry_storage_volume_size=10Gi

# METRICS
openshift_hosted_metrics_deploy=true
openshift_hosted_metrics_storage_kind=nfs
openshift_hosted_metrics_storage_access_modes=['ReadWriteOnce']
openshift_hosted_metrics_storage_host=NFS_SERVER.FQDN
openshift_hosted_metrics_storage_nfs_directory=NFS_PATH
openshift_hosted_metrics_storage_volume_name=METRICS_DIR_IN_NFS_PATH
openshift_hosted_metrics_storage_volume_size=10Gi

# LOGGING
openshift_hosted_logging_deploy=true
openshift_hosted_logging_storage_kind=nfs
openshift_hosted_logging_storage_access_modes=['ReadWriteOnce']
openshift_hosted_logging_storage_host=NFS_SERVER.FQDN
openshift_hosted_logging_storage_nfs_directory=NFS_PATH
openshift_hosted_logging_storage_volume_name=LOGGING_DIR_IN_NFS_PATH
openshift_hosted_logging_storage_volume_size=10Gi

# host group for masters
[masters]
atomic01.FQDN containerized=true

# host group for etcd
[etcd]
atomic01.FQDN containerized=true

# host group for nodes, includes region info
[nodes]
atomic01.FQDN openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_schedulable=true containerized=true
atomic02.FQDN openshift_node_labels="{'region': 'primary', 'zone': 'default'}" openshift_schedulable=true containerized=true

Note: Replace the fields in CAPS with your information.
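The HTPASSWD_HASH values in the inventory can be generated with the htpasswd utility (from the httpd-tools package), for example:

# prints "admin:<hash>"; copy only the hash part into openshift_master_htpasswd_users
htpasswd -nb admin put.admin.password.here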

run the ansible installer

ansible-playbook -vvv -i openshift-inventory playbooks/byo/config.yml

If the playbook fails and says that you have to install a specific package on your controller host, install that package on the system you're running the playbook from.

If all goes well you'll have an OpenShift Origin environment up and running in about 20 minutes, depending on your hardware. The metrics and logging installations will continue running for a while after the playbook completes.

Point your browser to https://atomic01.FQDN:8443 to connect to the web console.
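You can also log in to the master and confirm that both hosts registered as nodes:

# both atomic hosts should be listed with STATUS Ready
oc get nodes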

3. Adding Storage

On your NFS server create some shares for persistent volumes to be used by containers.

I used NFSv4 with subdirectory mounts to make this simpler.

3.1 Setup NFS Server

vi /etc/exports
/srv/nfs    192.168.1.0/255.255.255.0(rw,sync,no_subtree_check,crossmnt)
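After editing /etc/exports, make sure the NFS server is running and the shares are exported; on a CentOS 7 NFS server that is roughly:

# start the NFS server (if it isn't already) and re-read /etc/exports
sudo systemctl enable nfs-server
sudo systemctl start nfs-server
sudo exportfs -ra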

make the persistent volume subdirectories in your NFS root (here /srv/nfs)

cd /srv/nfs
for i in `seq -w 00 99`; do mkdir pv$i; done
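OpenShift pods (and the recycler) run with arbitrary UIDs, so the export directories usually need to be writable by the NFS clients. One common, permissive approach, assuming /srv/nfs is your NFS root as above:

# make the PV directories owned by nfsnobody and world-writable
sudo chown -R nfsnobody:nfsnobody /srv/nfs/pv*
sudo chmod -R 777 /srv/nfs/pv*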

3.1.1 Add storage to OpenShift

Log in to your OpenShift master and make sure you're logged in as system:admin

[root@atomic01 ~] oc whoami
system:admin

Create a template JSON file

vi pv.json
{
    "kind": "PersistentVolume",
    "spec": {
        "accessModes": [
            "ReadWriteOnce"
        ],
        "capacity": {
            "storage": "10Gi"
        },
        "nfs": {
            "path": "${nfspath}${pvname}",
            "server": "${nfsserver}"
        },
        "persistentVolumeReclaimPolicy": "Recycle"
    },
    "apiVersion": "v1",
    "metadata": {
        "name": "${pvname}"
    }
}

Now let's do another for loop, pass it the variables, and pipe the output to oc create to define the PVs in OpenShift.
Make sure to replace the CAPS with your NFS root and server name.

for i in `seq -w 00 99`; do pvname=pv${i} nfspath=/NFS/ROOT/ nfsserver=NFS-SERVER.FQDN envsubst < pv.json | oc create -f - ; done

You should get output like

persistentvolume "pv00" created
persistentvolume "pv01" created
persistentvolume "pv02" created
persistentvolume "pv03" created
persistentvolume "pv04" created
persistentvolume "pv05" created
persistentvolume "pv06" created
persistentvolume "pv07" created
persistentvolume "pv08" created
persistentvolume "pv09" created
...

Verify PV creation with

[root@atomic01 ~] oc get pv
NAME                CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM                         REASON    AGE
pv00                10Gi       RWO           Recycle         Available                                           27s
pv01                10Gi       RWO           Recycle         Available                                           27s
pv02                10Gi       RWO           Recycle         Available                                           27s
pv03                10Gi       RWO           Recycle         Available                                           27s
pv04                10Gi       RWO           Recycle         Available                                           27s
pv05                10Gi       RWO           Recycle         Available                                           26s
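To actually consume one of these volumes, a project creates a PersistentVolumeClaim, which binds to a matching Available PV. A minimal sketch (the claim name is just an example):

vi claim.json
{
    "kind": "PersistentVolumeClaim",
    "apiVersion": "v1",
    "metadata": {
        "name": "my-claim"
    },
    "spec": {
        "accessModes": [
            "ReadWriteOnce"
        ],
        "resources": {
            "requests": {
                "storage": "10Gi"
            }
        }
    }
}

oc create -f claim.json
oc get pvc

After a moment, oc get pv should show one of the volumes as Bound to the claim.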