Tutorial: how to install OpenShift on anything
14 December 2022
Jeroen Vermeylen
Red Hat’s OpenShift container platform has several automated installs available for a wide range of platforms. However, if your platform of choice is not supported, or if you want more configuration options, you can also install OpenShift manually. In this tutorial, we will discuss CoreOS and how to set up your own OpenShift install. Let’s get started!
CoreOS
OpenShift 4 nodes run on the Red Hat Enterprise Linux CoreOS operating system. This is an immutable OS: most of it is read-only, and you are not meant to change a lot on these machines manually.
If nothing were changed, every machine running CoreOS would be identical. However, we do not want every node in our cluster to be the same, which is why we configure CoreOS using Ignition files. If one of our machines fails, we can quickly boot a replacement with the same configuration: just boot from a CoreOS image with the Ignition file of the failed machine.
Networking
OpenShift 4 has a few networking requirements when setting up your cluster, which we will go over in this part.
DHCP
When booting, CoreOS uses DHCP to receive an IP address and other information like gateway and DNS servers. Although there are ways to manually set the IP in CoreOS using the installation ISO instead of the CoreOS image, it is easier for our example to set up static DHCP leases for the nodes.
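As a sketch of what such a static lease could look like with ISC dhcpd (one possible DHCP server; the MAC addresses and IPs below are placeholders, not values from this setup):

```
# /etc/dhcp/dhcpd.conf fragment - static leases for two example nodes
# (MAC addresses and IP addresses are placeholders)
host master01 {
  hardware ethernet 52:54:00:aa:bb:01;
  fixed-address 192.168.1.11;
}
host worker01 {
  hardware ethernet 52:54:00:aa:bb:21;
  fixed-address 192.168.1.21;
}
```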
CoreOS nodes get their hostname from a DNS lookup, so it is best to assign FQDNs to the IPs of the nodes. The FQDN will be `<NODE_NAME>.<CLUSTER_NAME>.<BASE_DOMAIN>`. The BASE_DOMAIN and CLUSTER_NAME will also be used in the config file for the `openshift-install` tool.
We will also have to create records for the (internal) API and the ingress routes. Ingress routes have `.apps.<CLUSTER_NAME>.<BASE_DOMAIN>` as a suffix. Summarized, your DNS will have the following records (replace the second column with the corresponding IPs):
| FQDN | Destination | Record |
|-----------------------------------------|------------------|--------|
| bootstrap.<CLUSTER_NAME>.<BASE_DOMAIN> | bootstrap server | A, PTR |
| master01.<CLUSTER_NAME>.<BASE_DOMAIN> | master01 server | A, PTR |
| master02.<CLUSTER_NAME>.<BASE_DOMAIN> | master02 server | A, PTR |
| master03.<CLUSTER_NAME>.<BASE_DOMAIN> | master03 server | A, PTR |
| worker01.<CLUSTER_NAME>.<BASE_DOMAIN> | worker01 server | A, PTR |
| worker02.<CLUSTER_NAME>.<BASE_DOMAIN> | worker02 server | A, PTR |
| worker03.<CLUSTER_NAME>.<BASE_DOMAIN> | worker03 server | A, PTR |
| api-int.<CLUSTER_NAME>.<BASE_DOMAIN> | Load balancer | A, PTR |
| api.<CLUSTER_NAME>.<BASE_DOMAIN> | Load balancer | A, PTR |
| *.apps.<CLUSTER_NAME>.<BASE_DOMAIN> | Load balancer | A, PTR |
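As an illustration, a fragment of a BIND zone file for `<CLUSTER_NAME>.<BASE_DOMAIN>` could look like the sketch below (the IPs are placeholders, and the matching PTR records still need to be added in the reverse zone):

```
; zone <CLUSTER_NAME>.<BASE_DOMAIN> - example A records (placeholder IPs)
api        IN  A  192.168.1.5
api-int    IN  A  192.168.1.5
*.apps     IN  A  192.168.1.5
bootstrap  IN  A  192.168.1.10
master01   IN  A  192.168.1.11
worker01   IN  A  192.168.1.21
```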
Load Balancer
Your layer 4 load balancer should be configured to distribute traffic over the worker or master nodes depending on the port it’s trying to reach.
– **6443** is the port used for the Kubernetes API and should be balanced across your **master nodes**.
– Port **22623** serves the machineconfig (Ignition) files for other nodes and has to be balanced across the **master nodes**.
– **80** and **443** are the default HTTP(S) ports and should point to the **worker nodes** that contain the ingress controller pods (in our case, all worker nodes).
– Metrics are served over port **1936**, and are balanced across the same **worker nodes** that run the ingress controller pods.
During installation, the **6443** and **22623** ports will also have to point to the bootstrap node. After installation, the bootstrap node and DNS record can be deleted together with the load balancing rules for the node.
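As an example, a minimal HAProxy snippet for the API port could look like the sketch below. HAProxy is just one possible load balancer, and the frontend/backend names and hostnames are placeholders; the same pattern would be repeated for port 22623 (masters) and ports 80, 443 and 1936 (workers):

```
# Sketch of an HAProxy TCP frontend/backend for the Kubernetes API (port 6443)
# Hostnames and backend names are placeholders.
frontend openshift-api
    bind *:6443
    mode tcp
    default_backend openshift-api-backend

backend openshift-api-backend
    mode tcp
    balance roundrobin
    server bootstrap bootstrap.<CLUSTER_NAME>.<BASE_DOMAIN>:6443 check   # remove after installation
    server master01  master01.<CLUSTER_NAME>.<BASE_DOMAIN>:6443 check
    server master02  master02.<CLUSTER_NAME>.<BASE_DOMAIN>:6443 check
    server master03  master03.<CLUSTER_NAME>.<BASE_DOMAIN>:6443 check
```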
Configuration
Before we install the cluster, we have to write the configuration. This config file will be used by the OpenShift installer to generate the Ignition files for CoreOS.
```
apiVersion: v1
baseDomain: '<BASE_DOMAIN>'
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
metadata:
  name: '<CLUSTER_NAME>'
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
fips: false
pullSecret: '<PULL_SECRET>'
sshKey: '<SSH_KEY>'
```
Fill in these variables:
– **BASE_DOMAIN** is the domain in which the cluster will be placed.
– **CLUSTER_NAME** will be the name of the cluster. URLs to the environment will end with CLUSTER_NAME.BASE_DOMAIN.
– **PULL_SECRET** can be generated on the Red Hat console.
– **SSH_KEY** is your public key, used for authenticating to cluster nodes.
The worker `replicas` count has to be set to 0. When setting up a cluster manually, you add the worker nodes yourself once the master nodes are up and running.
More information about the config can be found in the official documentation.
Generate Ignition files
We saved the filled-in config file in a new directory called `ocp4demo`. Now we can use the `openshift-install` CLI tool to generate Ignition files based on the config file we just wrote. The tool transforms the config file into Ignition files, and the original config file is consumed in the process.
Since the original config file is gone after this step, we recommend keeping a backup of it somewhere for when you want to start over.
~ openshift-install create ignition-configs --dir /home/flowfactor/ocp4demo
With the Ignition files generated, we can get started installing the cluster.
Installation
The nodes will have to be booted in a specific order:
1. First, we boot the **bootstrap node**. This node will serve files and configuration for the master nodes.
2. Next, we boot the **master nodes**. Once these are booted, the cluster is ready.
3. Finally, we boot the **worker nodes** and join them in the cluster to run workloads.
Bootstrap
Create a machine that boots from a CoreOS image and provide the Ignition file. The way to serve this file is specific to each platform; for an overview, check the official documentation.
This bootstrap Ignition file can get very large, and not all platforms support such a big file size. To get around this, you could host your Ignition file on a web-accessible file share. Then, you write a small Ignition file that fetches the original file from that share:
{ "ignition": { "version": "3.0.0", "config": { "replace": { "source": "http://example.com/config.json", "verification": { "hash": "sha512-0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef" } } } } }
Masters
The generated Ignition file for the master nodes is just a small Ignition file that refers to the (internal) API of the cluster. When you boot up those nodes, the load balancer should refer them to the bootstrap node, as that is the only one responsive on that port. Once the bootstrap node is up and running, the master nodes will start receiving their complete configuration.
Once all masters have parsed their config and set themselves up, the cluster is ready. You can monitor the bootstrap process using the OpenShift install tool:
~ ./openshift-install --dir /home/flowfactor/ocp4demo wait-for bootstrap-complete --log-level=info
Once all masters are up, you can delete the bootstrap from the load balancer. You may also delete the node itself and the DNS records.
In the directory with the install files, there is also a generated `auth` folder. This contains the `kubeconfig` file, which lets you connect to the cluster using the `kubectl` (or `oc`) CLI tool.
```
~ KUBECONFIG=/home/flowfactor/ocp4demo/auth/kubeconfig oc get nodes
NAME       STATUS   ROLES    AGE   VERSION
master01   Ready    master   13m   v1.23.5+9ce5071
master02   Ready    master   13m   v1.23.5+9ce5071
master03   Ready    master   13m   v1.23.5+9ce5071
```
Workers
When the cluster is ready, we still have to add the worker nodes, which will run the workloads. The generated Ignition file for the workers also just points to the cluster, since they will get their configuration from the master nodes. The nodes will boot and get their configuration, but some actions on the cluster are necessary to allow those nodes into the cluster.
Using the `oc` CLI tool, list the pending Certificate Signing Requests (CSRs) on the cluster with `oc get csr`. Next, approve them using `oc adm certificate approve <csr_name>`.
You will have to do this twice: the first round is for the client CSRs. Once these are approved, server CSRs will appear and need to be approved as well.
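If there are many pending requests, approving them one by one gets tedious. A one-liner like the sketch below could help; it assumes GNU `xargs` is available and simply approves every CSR that does not have a status yet:

```
~ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
```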
Console
OpenShift comes with a handy web-based console to manage the cluster and projects. The OpenShift console will become accessible once worker nodes are added to the cluster. You can reach it on `console-openshift-console.apps.<CLUSTER_NAME>.<BASE_DOMAIN>`. The password for the admin account can be found in the `auth` folder in the installation directory.
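For example, assuming the same installation directory as before, the password for the default `kubeadmin` user can be read like this:

```
~ cat /home/flowfactor/ocp4demo/auth/kubeadmin-password
```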
Do you have any remaining questions about installing OpenShift? Don’t hesitate to contact us, we’d love to help you out!
Inspecting images not in repositories
The previous commands have always used remote container images by specifying the `docker://` prefix in front of the image name. However, the `skopeo inspect` command also supports other sources through the following transports: containers-storage, dir, docker, docker-archive, docker-daemon, oci, oci-archive, ostree, sif, and tarball.
If you’d like to inspect a **local image**, you could use the following:
```
~ skopeo inspect docker-daemon:alpine:3.16.2
{
    "Name": "docker.io/library/alpine",
    "Digest": "sha256:6c43f7b4d8c89005a55bdd3bfd15daa1c36f81d880bdda5593921bc6d428a24a",
    "RepoTags": [],
    "Created": "2022-08-09T17:39:42.400443113Z",
    "DockerVersion": "20.10.12",
    "Labels": null,
    "Architecture": "arm64",
    "Os": "linux",
    "Layers": [
        "sha256:5d3e392a13a0fdfbf8806cb4a5e4b0a92b5021103a146249d8a2c999f06a9772"
    ],
    "LayersData": [
        {
            "MIMEType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
            "Digest": "sha256:5d3e392a13a0fdfbf8806cb4a5e4b0a92b5021103a146249d8a2c999f06a9772",
            "Size": 5572608,
            "Annotations": null
        }
    ],
    "Env": [
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
    ]
}
```

If you saved an image as a **tar file**, you can inspect the tar like this:

```
~ skopeo inspect tarball:alpine-3_16_2.tar
{
    "Digest": "sha256:460b0247ac095501f5643fe0c1307ce8bb305167eed76d9e5dabc894454d9e2f",
    "RepoTags": [],
    "Created": "2022-11-02T11:54:02.052991701+01:00",
    "DockerVersion": "",
    "Labels": null,
    "Architecture": "arm64",
    "Os": "darwin",
    "Layers": [
        "sha256:66b0f10e840c41b23d5a7dfceab0ac000e1f627530786010416165d3930092e9"
    ],
    "LayersData": [
        {
            "MIMEType": "application/vnd.oci.image.layer.v1.tar",
            "Digest": "sha256:66b0f10e840c41b23d5a7dfceab0ac000e1f627530786010416165d3930092e9",
            "Size": 5581824,
            "Annotations": null
        }
    ],
    "Env": null
}
```
Manage images
Now we know how to use Skopeo to get information about images. The next step is to use Skopeo to transfer images between repositories. To get started with this, we’ll also have to log in to the second repository:
```
~ skopeo login containers.flowfactor.be
Username: *****
Password:
Login Succeeded!
```
When we check which tags are available for the Alpine image, we can see that only 3.16.1 has been uploaded to the registry:
```
~ skopeo list-tags docker://containers.flowfactor.be/alpine
{
    "Repository": "containers.flowfactor.be/alpine",
    "Tags": [
        "3.16.1"
    ]
}
```
Since version 3.16.2 is available by now, we would like to push this to the repository. Skopeo can do so by using the `skopeo copy` command:
```
~ skopeo copy docker://alpine:3.16.2 docker://containers.flowfactor.be/alpine:3.16.2
Getting image source signatures
Copying blob 213ec9aee27d done
Copying config 9c6f072447 done
Writing manifest to image destination
Storing signatures
~ skopeo list-tags docker://containers.flowfactor.be/alpine
{
    "Repository": "containers.flowfactor.be/alpine",
    "Tags": [
        "3.16.1",
        "3.16.2"
    ]
}
```
The `skopeo delete` command is available if we want to delete the older image. This could be convenient if the image contains a bug or security issue and we want to make sure the developers can no longer use it.
```
~ skopeo delete docker://containers.flowfactor.be/alpine:3.16.1
~ skopeo list-tags docker://containers.flowfactor.be/alpine
{
    "Repository": "containers.flowfactor.be/alpine",
    "Tags": [
        "3.16.2"
    ]
}
```
Syncing between repositories
If we ever want to sync all tags of an image to another repository, we can use the `skopeo sync` command. This is convenient if you want to create a mirror for a specific image. You could even use `--preserve-digests` so the digests of the images don’t get changed. This is required when something tries to pull an image with a specific digest through your mirror.
```
~ skopeo sync --src docker --dest docker alpine containers.flowfactor.be
Getting image source signatures
Copying blob sha256:2a3ebcb7fbcc29bf40c4f62863008bb573acdea963454834d9483b3e5300c45d
Writing manifest to image destination
Storing signatures
Getting image source signatures
Copying blob sha256:4dea34575ff3c97f7f897fcb8dbbceb88b791840971ada8b373f427c92843b97
Writing manifest to image destination
Storing signatures
Getting image source signatures
Copying blob sha256:9107ff4def222271fd4da41e2ec13d0cb34049c29102476ac8547579a60cd9b1
...
Getting image source signatures
Copying blob sha256:88ecf269dec31566a8e6b05147732fe34d32bc608de0d636dffaba659230a515
Copying config sha256:49b6d04814d5d56f2c3d3cfdaddb0ff16437d6089dbb624433aa4515da9f667e
Writing manifest to image destination
Storing signatures
Getting image source signatures
Copying blob sha256:213ec9aee27d8be045c6a92b7eac22c9a64b44558193775a1a7f626352392b49
Copying config sha256:9c6f0724472873bb50a2ae67a9e7adcb57673a183cea8b06eb778dca859181b5
Writing manifest to image destination
Storing signatures
```
After we’ve run the sync command, we can see that the Alpine image is now available with all of its tags:
```
~ skopeo list-tags docker://containers.flowfactor.be/alpine
{
    "Repository": "containers.flowfactor.be/alpine",
    "Tags": [
        "2.6",
        "2.7",
        ...
        "3.9.6",
        "edge",
        "latest"
    ]
}
```
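If the mirror also has to serve images requested by digest, the same sync could be run with the `--preserve-digests` flag mentioned earlier, for example:

```
~ skopeo sync --preserve-digests --src docker --dest docker alpine containers.flowfactor.be
```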
Remarks
We wrote this article using a MacBook with an M1 chip. For this CPU architecture and OS combination, relatively few images have been built, so Skopeo threw errors when we tried to inspect certain images. We can use CLI flags to tell Skopeo to look for a specific architecture and OS:
```
--override-arch amd64 --override-os linux
```
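For example, combined with the inspect command from earlier:

```
~ skopeo inspect --override-arch amd64 --override-os linux docker://alpine:3.16.2
```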
Conclusion
In disconnected environments, Skopeo can be a lifesaver. It allows you to quickly move container images between repositories. The sync module can be especially handy when you want to create a mirror for certain images. Skopeo also lets you host your images on a registry where you can implement scanning for vulnerabilities.
Do you have any remaining questions about using Skopeo? Don’t hesitate to contact us, we’d love to help you out!