Installing OpenShift 4.6 on VMware vSphere (2023)

The following table provides an overview of the significant changes up to this current version. The table does not provide an exhaustive list of all changes or new features up to this release.

  • Release version of the Cisco ACI CNI plugin.

  • Compatibility with Red Hat OpenShift 4.6 on VMware vSphere 7 User-Provisioned Infrastructure (UPI).

  • Added the OpenShift Decommissioning section.

OpenShift 4.6 on VMware vSphere

Cisco ACI supports Red Hat OpenShift 4.6 on VMware vSphere 7 User-Provisioned Infrastructure (UPI). This document provides instructions on using Ansible Playbooks to provision OpenShift 4.6 on VMware vSphere with the Container Network Interface (CNI) plugin.

The Ansible playbooks provision the virtual machines (VMs) with the required interface configuration and generate the ignition configuration files. You must provision your own DHCP, DNS, and load balancing infrastructure, following high availability best practices.

The Ansible playbooks are available on GitHub.

The following are the Ansible playbooks:

  • asserts.yml: This playbook performs basic validation of the variable declarations in the all.yml file.

  • setup.yml: This playbook performs the following tasks:

    • Configures the orchestrator node:

      • Installs Terraform, the OpenShift client, and the OpenShift installer. Creates the following: the Terraform variables for the bootstrap, master, and worker nodes; the master and worker machine config operator manifests; the OpenShift install configuration file.

    • Configures the load balancer node:

      • Disables Security-Enhanced Linux (SELinux), configures HAProxy, and configures DHCP and DNS if selected.

        This optional step configures these three components only if you set the provision_dhcp, provision_dns, and provision_lb variables to true.

  • oshift_prep.yml:

    • Sets up the installation and bootstrap directories.

    • Generate manifests using openshift-install.

    • Adds the additional machine configuration operator manifests.


    • Adds the Cisco ACI-CNI manifests.

    • Backs up the manifests.

    • Generates the bootstrap, master, and worker node ignition files.

    • Copies the bootstrap ignition file to the load balancer node.

  • create_nodes.yml:

    • Provisions the bootstrap, master, and worker nodes using Terraform.

    • Configures a cron job to approve OpenShift node Certificate Signing Requests (CSRs), if selected.

  • delete_nodes.yml: Removes all master and worker nodes.
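The provision_dhcp, provision_dns, and provision_lb switches referenced above are set in the Ansible group_vars/all.yml file. A minimal illustrative excerpt (the values shown here are examples only):

```yaml
# group_vars/all.yml (excerpt) -- illustrative values
provision_dhcp: true   # configure the load balancer VM as a DHCP server
provision_dns: true    # configure the load balancer VM as a DNS server
provision_lb: true     # configure HAProxy on the load balancer VM
```

Set any of the three to false to skip configuring that component on the load balancer node.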

Prerequisites for installing OpenShift 4.6 on VMware vSphere

To install OpenShift Container Platform (OCP) 4.6 on VMware vSphere, you must meet the following prerequisites:

Cisco ACI

  • Download the required version (or later) of the acc-provision tool.

    Specify the value of the --flavor option as openshift-4.6-esx, and use the -z option. The tool creates a .tar archive file as specified by the value of the -z option. You need this archive file during installation.

    Ensure that the Cisco ACI container images that are provided as input to the acc-provision tool are the required version or later.

VMware vSphere

Obtain privileged user credentials to create virtual machines (VMs).


Get the following from the Red Hat website:

  • OCP4 Open Virtualization Appliance (OVA): Make sure that you download the appropriate version of the OVA image. Navigate to the mirror page on the OpenShift website where all the Red Hat Enterprise Linux CoreOS (RHCOS) versions are listed, select the desired version, and download the rhcos-vmware.x86_64.ova file.

  • OCP4 client tools: Navigate to the mirror page on the OpenShift website where the client and installation tool versions are listed, select the desired version, and download the openshift-client-linux.tar.gz and openshift-install-linux.tar.gz files.


  • Pull secret.

Installing OpenShift 4.6 on VMware vSphere


See the Sample files for installing OpenShift 4.6 on VMware vSphere section.

Before you start

Complete the tasks in the Prerequisites section.

We recommend that you see the Red Hat OpenShift documentation for prerequisites and other details about installing a cluster on vSphere.


Step 1

Provision the Cisco ACI fabric using the acc-provision tool:

acc-provision -a -c acc_provision_input.yaml -u admin -p ### -f openshift-4.6-esx -z manifests.tar.gz

See the Sample acc-provision-input file section.


Because the required Python 3 dependencies are currently available only on RHEL 8, the acc-provision tool is supported only on the RHEL 8 operating system.

Step 2

After provisioning the Cisco ACI fabric, verify that a port group named system_id_vlan_kubeapi_vlan has been created under the distributed switch.

This document refers to this port group as api-vlan-portgroup.


The api-vlan-portgroup port group on the VMware distributed virtual switch is created using the custom VLAN ID that is specified as kubeapi_vlan in the acc_provision_input file.


The kube_api VLAN is added to the dynamic VLAN pool associated with the VMware VMM domain. The mapping mode is set to Static.


Step 3

In VMware vSphere, import the Open Virtual Appliance (OVA) image from OpenShift Container Platform 4 (OCP4).

Specify api-vlan-portgroup as the port group for the network interface.

Step 4

Provision a Red Hat Enterprise Linux load balancer virtual machine (VM) with the network interface connected to api-vlan-portgroup.

The Ansible playbooks optionally configure this VM as a load balancer, DNS server, and DHCP server for the OpenShift cluster.

Step 5

Provision a Red Hat Enterprise Linux orchestrator VM with the network interface connected to api-vlan-portgroup.

The Ansible playbooks are run from the orchestrator VM.

Step 6

Perform the following tasks on the orchestrator virtual machine:

  1. Clone the Git repository on GitHub:

  2. Check out the ocp46 branch.

  3. Generate Secure Shell (SSH) keys and copy them to the load balancer virtual machine.

  4. Enable the ansible-2.9-for-rhel-8-x86_64-rpms repository.

  5. Update the Ansible package to the latest version.

  6. Change directory to the cloned Git directory.

  7. Install the Ansible module requirements:

    ansible-galaxy install -r requirements.yaml
  8. Edit the group_vars/all.yml and hosts.ini files, and set the values that your site requires.


    See the Sample hosts.ini file section and the Sample Ansible group_vars/all.yml file section.

  9. Perform basic validation of the variable values using the asserts.yml playbook:

    ansible-playbook asserts.yml
  10. Copy the compressed file created by the acc-provision utility to the archives directory, and name it aci_manifests.tar.gz.

  11. Run the Ansible setup playbook:

    ansible-playbook setup.yml

    Running the setup playbook configures the orchestrator and load balancer virtual machines.

  12. Run the Ansible oshift_prep playbook:

    ansible-playbook oshift_prep.yml

    Running the oshift_prep playbook generates the OpenShift manifests and ignition files.

  13. Run the Ansible create_nodes playbook:

    ansible-playbook create_nodes.yml

    The create_nodes playbook creates the virtual machines. After the virtual machines are created, the OCP4 installation process starts in the background. At this stage you should be able to access the cluster APIs using the kubeconfig file created by the installer.

    The kubeconfig file is in the base_dir/bootstrap/auth directory. base_dir is set in the group_vars/all.yml file, and the default is /root/ocpinstall.

What to do next

You can use the openshift-install wait-for bootstrap-complete and openshift-install wait-for install-complete commands to check the installation progress. Run the commands from the bootstrap directory.

Updating the default Ingress Controller

To update the default publishing strategy of the Ingress Controller to use the ACI load balancer, log in as a user with cluster-admin privileges and run the following command:

oc replace --force --wait --filename - <

For more details, see the "Setting the default Ingress Controller for your cluster to be internal" section in the Ingress Operator in OpenShift Container Platform 4.6 Red Hat guide.
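The body of the oc replace heredoc is elided in the source. For illustration only, an IngressController manifest that switches the default controller's endpoint publishing strategy to a LoadBalancerService typically looks like the sketch below; the exact spec values (in particular the scope) depend on your environment, so verify them against the Red Hat guide cited above:

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: External   # assumption: choose Internal or External for your site
```

Piping a manifest like this to oc replace --force --wait --filename - recreates the default Ingress Controller with the new strategy.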

Configuring MachineSets with ACI CNI

The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources.

For OpenShift Container Platform 4.6 clusters, the Machine API performs all node-host provisioning operations after the cluster installation is complete. OpenShift Container Platform 4.6 offers a dynamic and elastic provisioning method on top of public or private cloud infrastructure thanks to the Machine API.

The two main resources are:

  • Machines: A fundamental unit that describes the host of a node. A machine has a provider specification that describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a worker node on Amazon Web Services (AWS) might define a specific machine type and required metadata.


  • MachineSets: MachineSet resources are groups of machines. MachineSets are to machines what replica sets are to pods. If you need more machines or need to scale them down, change the replicas field on the MachineSet to meet your compute needs.

Changes to operating systems on OpenShift Container Platform nodes can be made by creating MachineConfig objects that are managed by the machine configuration operator.

Creating a machine configuration file

Use this procedure to create a MachineConfig file that configures the network interfaces for the new nodes with:

  • ACI infra interface (ens224)

  • ACI infra VLAN subinterface (ens224.{InfraVlanID})

  • Opflex route for BUM traffic replication (224.0.0.0/4)

The required (example) settings are shown in the procedure below, but should be adapted to match your specific environment. In general, the following changes are required:

  • Replace each occurrence of {InfraVLAN} with the ACI infrastructure VLAN for your fabric.

  • Replace each occurrence of {MTU} with the MTU you have chosen for your cluster.


Make sure that your network interface is named ens224 (not another name).
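The placeholder substitution can be scripted. A minimal sketch, assuming a hypothetical infra VLAN of 3901 and an MTU of 9000 (replace these with your fabric's values); it uses a tiny stand-in template for demonstration, so apply the same sed call to your real files from the steps below:

```shell
# Substitute {InfraVLAN} and {MTU} placeholders before base64-encoding the files.
INFRA_VLAN=3901   # assumption: your ACI infra VLAN
MTU=9000          # assumption: your chosen MTU

# Stand-in template with the two placeholders used in this document.
printf '%s\n' 'mtu={MTU}' 'id={InfraVLAN}' > ens224.nmconnection.tmpl

sed -e "s/{InfraVLAN}/${INFRA_VLAN}/g" -e "s/{MTU}/${MTU}/g" \
    ens224.nmconnection.tmpl > ens224.nmconnection.out

cat ens224.nmconnection.out
```

The output file contains mtu=9000 and id=3901 in place of the placeholders.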


Step 1

Create the 80-opflex-route dispatcher script.

#!/bin/bash
if [ "$1" == "ens224.{InfraVLAN}" ] && [ "$2" == "up" ]; then
    route add -net 224.0.0.0 netmask 240.0.0.0 dev ens224.{InfraVLAN}
fi
Step 2

Create the ens224.nmconnection file.

[connection]
id=ens224
type=ethernet
interface-name=ens224
multi-connect=1
permissions=

[ethernet]
mac-address-blacklist=
mtu={MTU}

[ipv4]
dns-search=
method=disabled

[ipv6]
addr-gen-mode=eui64
dns-search=
method=disabled

[proxy]
Step 3

Create the ens224.{InfraVLAN}.nmconnection file.

[connection]
id=ens224.{InfraVLAN}
type=vlan
interface-name=ens224.{InfraVLAN}
multi-connect=1
permissions=

[ethernet]
mac-address-blacklist=

[vlan]
egress-priority-map=
flags=1
id={InfraVLAN}
ingress-priority-map=
parent=ens224

[ipv4]
dns-search=
method=auto

[ipv6]
addr-gen-mode=eui64
dns-search=
method=auto

[proxy]
Step 4

Convert the above three configurations (steps 1, 2, and 3) to base64-encoded strings and use them in the MachineConfig template, as shown below. The base64 payloads are abbreviated here; substitute the strings that you generate.

storage:
  files:
    - contents:
        source: data:text/plain;charset=utf-8;base64,<base64 of the 80-opflex-route script>
      mode: 0755
      overwrite: true
      path: /etc/NetworkManager/dispatcher.d/80-opflex-route
    - contents:
        source: data:text/plain;charset=utf-8;base64,<base64 of ens224.nmconnection>
      mode: 0600
      overwrite: true
      path: /etc/NetworkManager/system-connections/ens224.nmconnection
    - contents:
        source: data:text/plain;charset=utf-8;base64,<base64 of ens224.{InfraVLAN}.nmconnection>
      mode: 0600
      overwrite: true
      path: /etc/NetworkManager/system-connections/ens224.{InfraVLAN}.nmconnection

Replace {InfraVLAN} with your ACI infra VLAN ID in the path /etc/NetworkManager/system-connections/ens224.{InfraVLAN}.nmconnection (the last line in the example above). You can also customize the name of the MachineConfig; in this example, the name is 00-worker-cni.
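The encoding can also be scripted. A sketch that recreates the step-1 dispatcher script inline (with a hypothetical InfraVLAN of 3901, so the example is self-contained) and encodes it the way the MachineConfig source field expects:

```shell
# Recreate the step-1 dispatcher script (hypothetical InfraVLAN 3901) and
# base64-encode it without line wrapping, as required for the MachineConfig
# "source: data:..." field.
cat > 80-opflex-route <<'EOF'
#!/bin/bash
if [ "$1" == "ens224.3901" ] && [ "$2" == "up" ]; then
    route add -net 224.0.0.0 netmask 240.0.0.0 dev ens224.3901
fi
EOF

encoded=$(base64 -w0 80-opflex-route)   # -w0 disables line wrapping (GNU coreutils)
echo "source: data:text/plain;charset=utf-8;base64,${encoded}"

# Verify that the encoding round-trips to the original script.
printf '%s' "${encoded}" | base64 -d | diff -q - 80-opflex-route
```

Repeat the base64 step for the two nmconnection files from steps 2 and 3.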

Step 5

Use the oc create -f command to create the MachineConfig for your cluster.

The MachineConfig is applied to the (two) already existing worker nodes, replacing the three existing files with identical copies. On new nodes, the three files are created by the MachineConfig. The existing worker nodes are restarted one at a time.

Step 6

Get your cluster ID using the oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster command.

Step 7

Create the MachineSet.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: ocp4aci-jlff9-worker
  namespace: openshift-machine-api
  labels:
    machine.openshift.io/cluster-api-cluster: ocp4aci-jlff9
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: ocp4aci-jlff9
      machine.openshift.io/cluster-api-machineset: ocp4aci-jlff9-worker
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: ocp4aci-jlff9
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: ocp4aci-jlff9-worker
    spec:
      metadata: {}
      providerSpec:
        value:
          apiVersion: vsphereprovider.openshift.io/v1beta1
          kind: VSphereMachineProviderSpec
          metadata:
            creationTimestamp: null
          numCPUs: 2
          numCoresPerSocket: 1
          memoryMiB: 8192
          diskGiB: 120
          snapshot: ''
          userDataSecret:
            name: worker-user-data
          credentialsSecret:
            name: vsphere-cloud-credentials
          network:
            devices:
              - networkName: ocp4aci_vlan_35
              - networkName: ocp4aci
          template: <rhcos-template-name>
          workspace:
            datacenter: my-dc
            datastore: mydatastore
            folder: /mydatastore/vm/ocp4aci
            server: myvsphere.local.lab

The above is an example configuration; you might need to modify it as follows:

  • The cluster ID is ocp4aci-jlff9; replace it with your cluster ID.

  • Substitute ocp4aci_vlan_35 and ocp4aci with the names of your port groups (the order is important; make sure that you do not change it).

  • Set the replicas value to the number of new workers that you want beyond the ones created during cluster bringup.

  • Edit all the workspace parameters (such as datacenter and datastore) to match your vSphere configuration.

  • Modify the virtual machine specifications if necessary (memoryMiB, numCPUs, and diskGiB).


The MachineSet does not manage the original workers, and you can choose to delete them. If you delete a worker, you must wait for the OCP routers to start on the new node.

Scaling compute nodes using MachineSet

Use this procedure to scale compute nodes using MachineSet.


Step 1

Create a virtual machine folder, /<datacenter_name>/vm/<cluster_name>, in vCenter, where <cluster_name> is the OpenShift cluster ID. In this example, the folder is named /mydatastore/vm/ocp4aci.

Step 2

Create a template in vCenter.

Make sure that the template name matches the RHCOS_template_name value (RHCOS46 in the sample group_vars/all.yml file).

Step 3

Create a tag category and tag in vCenter that are named after the cluster ID.

Step 4

Configure the DHCP server to assign IP addresses to the nodes.
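With these prerequisites in place, scaling itself comes down to changing the replicas count on the MachineSet, for example with `oc scale machineset ocp4aci-jlff9-worker --replicas=3 -n openshift-machine-api` (names taken from the earlier example). The corresponding resource fragment:

```yaml
# MachineSet excerpt: scale the worker pool to 3 machines
spec:
  replicas: 3
```

The Machine API then clones the vCenter template, and the DHCP server configured above assigns addresses to the new nodes.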

Sample files for installing OpenShift 4.6 on VMware vSphere

This section contains sample files that you need to install OpenShift 4.6 on VMware vSphere.

Sample acc-provision-input file

The following is an example acc-provision-input.yaml file. The highlighted or bold values are the ones that you must change to meet your site requirements.


#
# Configuration for ACI Fabric
#
aci_config:
  system_id: ocp4aci
  #apic_refreshtime: 1200
  apic_hosts:
    - <apic-host-or-ip>
  vmm_domain:
    encap_type: vxlan
    mcast_range:                  # Every opflex VMM must use a distinct range
      start: <mcast-range-start>
      end: <mcast-range-end>
    nested_inside:
      type: vmware
      name: my-vswitch
      elag_name: <elag-name>      # Beginning with Cisco APIC 5.0(1), you can configure
                                  # a VMware teaming policy when using Link Aggregation
                                  # Groups (LAGs).
      installer_provisioned_lb_ip: <lb-ip>

  # The following resources must already exist in the APIC.
  # They are used, but not created, by the provisioning tool.
  aep: my-aep
  vrf:                            # This VRF is used to create all Kubernetes EPs
    name: myl3out_vrf
    tenant: common
  l3out:
    name: myl3out
    external_networks:
      - myl3out_net

#
# Networks used by ACI containers
#
net_config:
  node_subnet: <node-subnet>      # Subnet to use for nodes
  pod_subnet: <pod-subnet>        # Subnet to use for Kubernetes Pods
  extern_dynamic: <subnet>        # Subnet to use for dynamic external IPs
  extern_static: <subnet>         # Subnet to use for static external IPs
  node_svc_subnet: 10.5.0.1/24    # Subnet to use for service graph
  kubeapi_vlan: 35
  service_vlan: 36
  infra_vlan: 3901
  #interface_mtu: 1600
  #service_monitor_interval: 5    # IPSLA interval probe time for PBR tracking
                                  # Default is 0, set > 0 to enable, max: 65535
  pbr_tracking_non_snat: true     # Default is false; set to true for IPSLA to
                                  # be effective with non-snat services

#
# Configuration for the container registry
# Update if a custom container registry has been set up
#
registry:
  image_prefix: quay.io/noiro
kube_config:
  image_pull_policy: Always
  ovs_memory_limit: 1Gi

Sample Ansible group_vars/all.yml file

The following is an example group_vars/all.yml file. The highlighted or bold values are the ones that you must change to meet your site requirements.

#domainname
# type: string, base dns domain name; the cluster metaname is added as a subdomain to this
# required: yes
domainname: ocplab.local

#provision_dns
# type: boolean, true or false
# required: yes
# notes: if set to true, the load balancer is configured as a dns server.
#        If false, the dns server is assumed to already exist.
provision_dns: true

#dns_forwarder
# type: IP address
# required: yes
# notes: this value is used when configuring a dhcp service and also for the
#        'forwarders' value in the dns configuration.
dns_forwarder: <dns-forwarder-ip>

#loadbalancer_ip
# type: resolvable IP address or hostname
# required: yes
# notes: this host is configured as a load balancer for the cluster, and also as a
#        dhcp and dns server if required. This IP address is the same as the one
#        that you configure as installer_provisioned_lb_ip in the acc-provision config.
loadbalancer_ip: <lb-ip>

#auto_approve_csr
# type: boolean
# required: yes
# notes: when set to true, configures a cron job to automatically approve openshift csr
auto_approve_csr: true

#proxy_env
proxy_env:
  # do not remove the dummy field, irrespective of whether the setup requires a proxy or not.
  dummy: dummy
  # set the http/https proxy server; if the setup does not need a proxy, comment out the values below.
  # these values are used for ansible tasks and also passed on to the openshift installer
  http_proxy: <proxy-url>
  https_proxy: <proxy-url>
  no_proxy: <comma-separated-list>

#packages
# defines the URLs to download terraform, the openshift client, and the openshift installer from.
packages:
  validate_certs: False
  terraform_url: <terraform-zip-url>
  openshift_client_linux_url: <mirror>/pub/openshift-v4/clients/ocp/4.6.21/openshift-client-linux-4.6.21.tar.gz
  openshift_install_linux_url: <mirror>/pub/openshift-v4/clients/ocp/4.6.21/openshift-install-linux-4.6.21.tar.gz

#default_aci_manifests_archive
# default filename to look for in the archives directory.
# this can be overridden by passing the extra parameter aci_manifests_archive on the ansible command line
default_aci_manifests_archive: aci_manifests.tar.gz

#opflex_interface_mtu
# required: yes
# MTU size for the interface attached to the fabric; must be greater than 1564
opflex_interface_mtu: 1800

#vsphere
vsphere:
  server: myvsphere.local.lab
  user: administrator@vsphere.local
  password: xxxx
  allow_unverified_ssl: true
  datacenter_name: my-dc
  cluster_name: my-cluster
  datastore_name: my-datastore
  RHCOS_template_name: RHCOS46

#base_dir
# type: directory path
# required: yes
# notes: all installation files and directories are created under this directory
base_dir: /root/ocpinstall

#node network details. This is common to the bootstrap, master, and worker nodes.

bootstrap_vars:
  cpu_count: 4        #optional: defaults to 4
  memory_KB: 16384    #optional: defaults to 8192
  disk_size_MB: 40    #optional: defaults to 40

masters_vars:
  cpu_count: 8        #optional: defaults to 4
  memory_KB: 16384    #optional: defaults to 16384
  disk_size_MB: 40    #optional: defaults to 40
  nodes:
    #mac address and ip address are required for each node
    - master-1:
        mac: <mac-address>
        ip: <ip-address>
    - master-2:
        mac: <mac-address>
        ip: <ip-address>
    - master-3:
        mac: <mac-address>
        ip: <ip-address>

workers_vars:
  cpu_count: 8        #optional: defaults to 4
  memory_KB: 16384    #optional: defaults to 16384
  disk_size_MB: 40    #optional: defaults to 40
  nodes:
    #mac address and ip address are required for each node
    - worker-1:
        mac: <mac-address>
        ip: <ip-address>
    - worker-2:
        mac: <mac-address>
        ip: <ip-address>

#user_ssh_key
# required: no
# notes: if specified, this key is configured on the nodes; otherwise the current
#        user's ssh key is used.
user_ssh_key: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD...

#additional_trust_bundle
# required: no
# notes: use this field to add a certificate for a private registry
#
# example:
#additional_trust_bundle: |
#  -----BEGIN CERTIFICATE-----
#  ...
#  -----END CERTIFICATE-----

#openshift_pullsecret
# required: yes
# example:
# openshift_pullsecret: {"auths": {...}}
openshift_pullsecret: xxx


Sample hosts.ini file

The following is an example hosts.ini file. The highlighted or bold values are the ones that you must change to meet your site requirements.


Decommissioning OpenShift

Use this procedure to decommission OpenShift and remove the ACI-provisioned configuration from ACI.


Starting with Cisco APIC release 5.2, VMM domains for OpenShift cannot be removed from the APIC GUI; it is possible only through the REST API. Therefore, it is convenient to use the acc-provision tool to remove the VMM domain and the other related objects used by the decommissioned OpenShift cluster. Ensure that you have the acc-input-config.yaml file and the certificates that the acc-provision tool uses to access the APIC.

Before you start

When an OpenShift cluster is destroyed or deleted, the ACI configuration that was provisioned for that cluster must be deleted from ACI. The acc-provision tool can be used to delete that configuration.


From the machine and directory that were used to provision the ACI infrastructure, use the following command to remove the pre-provisioned configurations and the VMM domain:

acc-provision -d -f openshift-4.6-esx -c acc-input-file -u user -p password



For example:

acc-provision -d -f openshift-4.6-esx -c acc-input-config.yaml -u admin -p password


What are the requirements for OpenShift on VMware?

vSphere 6.5U3 or vSphere 6.7U2+ is required for OpenShift Container Platform. VMware's NSX Container Plugin (NCP) is certified with OpenShift Container Platform 4.6 and NSX-T 3.x+. If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OpenShift Container Platform.

Can you run OpenShift on VMware?

OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only.

Can we install OpenShift on VMware Workstation?

Here is the step-by-step process to start the OpenShift single-node cluster on your workstation. 1. Install VMware Workstation (licensed) or VMware Workstation Player (free for personal use). If you are familiar with VirtualBox, you can use that as well.

How to install OpenShift on premises?

Install OpenShift Container Platform
  1. Prerequisites.
  2. Attach OpenShift Container Platform Subscription.
  3. Set Up Repositories.
  4. Install the OpenShift Container Platform Package.
  5. Set up Password-less SSH Access.
  6. Run the Installation Playbooks.

What are the minimum requirements for OpenShift Local?

Depending on the desired container runtime, Red Hat OpenShift Local requires the following system resources:
  • For OpenShift Container Platform: 4 physical CPU cores, 9 GB of free memory, 35 GB of storage space.
  • For the Podman container runtime: 2 physical CPU cores, 2 GB of free memory, 35 GB of storage space.

What are the production requirements for OpenShift?

2.1.1. For OpenShift Container Platform
  • 4 physical CPU cores.
  • 9 GB of free memory.
  • 35 GB of storage space.

How to install an OpenShift cluster on VMware?

Obtain the installation and client binaries
  1. Log in to the OpenShift Cluster Manager.
  2. Click Create an OpenShift cluster.
  3. Choose Red Hat OpenShift Container Platform.
  4. On Select an infrastructure provider, select Run on VMware vSphere.
  5. Click Download pull secret or copy the pull secret and save it to a file.

Can OpenShift work without Kubernetes?

No. OpenShift and Kubernetes are different container orchestration platforms. OpenShift has a built-in Kubernetes platform, which makes the installation process easier, but it is limited to Red Hat Linux distributions.

What version of the operating system is required to install OpenShift Container Platform?

Base OS: RHEL 7.5 or later with the "Minimal" installation option, or RHEL Atomic Host 7.4.5 or later.

What is the difference between OpenShift and vSphere?

A new third-party study from Principled Technologies shows that vSphere supports double the number of active SQL Server VMs, requires less admin hands-on time and provides more functionality than the OpenShift Virtualization solution.

What hypervisor is used by OpenShift Virtualization?

OpenShift Virtualization uses the KVM hypervisor, a core part of the Red Hat Enterprise Linux kernel used by Red Hat Virtualization for more than a decade.

Can OpenShift run on premises?

A self-managed deployment option, Red Hat OpenShift Platform Plus can be installed on premise, cloud, managed cloud, or at the edge providing consistent user experience, management, and security across hybrid infrastructure.

Is OpenShift cloud or on premises?

Introduced in 2011, OpenShift Online is for individual developers or teams that access OpenShift as a public cloud service. OpenShift Online is implemented as an on-demand consumption model hosted on public cloud platforms, including Amazon Web Services (AWS), Microsoft Azure and Google Cloud.

How to install the OpenShift CLI (oc)?

Installing the OpenShift CLI on macOS using the web console
  1. From the web console, click ?.
  2. Click Command Line Tools.
  3. Select the oc binary for macOS platform, and then click Download oc for Mac for x86_64.
  4. Save the file.
  5. Unpack and unzip the archive.
  6. Move the oc binary to a directory on your PATH.

How many master nodes are required in OpenShift?

Three master nodes are required to control the operation of a Kubernetes cluster. In OpenShift Container Platform, the master nodes are responsible for all control plane operations.

What is the best storage for OpenShift?

The preferred storage technology is object storage. The storage technology must support RWX access mode and must ensure read-after-write consistency. File storage and block storage are not recommended for a scaled/HA OpenShift Container Platform registry cluster deployment with production workloads.

What is the minimum OpenShift cluster size?


Red Hat OpenShift Container Storage 4.2 supports a minimum of 3 nodes and a maximum of 9 nodes. Expand the cluster in sets of three nodes to ensure that your storage is replicated, and to ensure you can use at least three availability zones.

What are the requirements for a cluster in VMware?

VMware Cluster Requirements
  • a datacenter in the vCenter inventory.
  • sufficient vSphere permissions to create a cluster.
  • vSAN enabled before creating a vSphere cluster if you want to use VMware vSAN.

What are the minimum requirements for Azure OpenShift?

Azure Red Hat OpenShift requires a minimum of 40 cores to create and run an OpenShift cluster. The default Azure resource quota for a new Azure subscription doesn't meet this requirement. To request an increase in your resource limit, see Standard quota: Increase limits by VM series.

What are the requirements for VM storage?

Minimum Requirements

The minimum hardware requirements for running virtual machines depend on the type of virtual machine you are running. Generally speaking, you will need a processor with at least two cores, at least 4GB of RAM, and at least 20GB of hard drive space.

Can you run VMs on OpenShift?

The OpenShift console provides access to multiple pre-configured templates to run VMs. You can also create custom templates; see the KubeVirt documentation.




Article information

Author: Jeremiah Abshire

Last Updated: 07/07/2023