The following table provides an overview of the significant changes up to the current release. It is not an exhaustive list of all changes or new features.
| Release version of the Cisco ACI CNI plugin | Feature |
|---|---|
| 5.2(1) | Support for Red Hat OpenShift 4.6 on VMware vSphere 7 User-Provisioned Infrastructure (UPI). |
| 5.2(3) | Added the OpenShift Decommissioning section. |
OpenShift 4.6 on VMware vSphere
Cisco ACI supports Red Hat OpenShift 4.6 on VMware vSphere 7 User-Provisioned Infrastructure (UPI). This document provides instructions on using Ansible Playbooks to provision OpenShift 4.6 on VMware vSphere with the Container Network Interface (CNI) plugin.
The Ansible playbooks provision virtual machines (VMs) with the required interface configuration and generate the ignition configuration files. You must implement your own DHCP, DNS, and load balancing infrastructure, following high-availability best practices.
The Ansible playbooks are available on GitHub.
The following are the Ansible playbooks (a sketch of a typical run order follows this list):

- asserts.yml: This playbook performs basic validation of the variable declarations in the all.yml file.
- setup.yml: This playbook performs the following tasks:
  - Configures the orchestrator node: installs Terraform, the OpenShift client, and the OpenShift installer, and creates the following: the Terraform variables for the bootstrap, master, and worker nodes; the master and worker machine-config operator manifests; and the OpenShift install configuration file.
  - Configures the load balancer node: disables Security-Enhanced Linux (SELinux), configures HAProxy, and configures DHCP and DNS if selected. This optional step configures these three components only if you set the provision_dhcp, provision_dns, and provision_lb variables to true.
- oshift_prep.yml:
  - Sets up the install and bootstrap directories.
  - Generates the manifests using openshift-install.
  - Adds the additional machine-config operator manifests.
  - Adds the Cisco ACI-CNI manifests.
  - Creates a backup of the manifests.
  - Sets up the bootstrap, master, and worker node ignition files.
  - Copies the bootstrap ignition file to the load balancer node.
- create_nodes.yml:
  - Provisions the bootstrap, master, and worker nodes using Terraform.
  - Sets up a cron job to approve certificate signing requests (CSRs), if selected.
- delete_nodes.yml: Deletes all the master and worker nodes.
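A minimal sketch of a typical run order, assuming the playbook names above and an inventory file named hosts.ini (the exact invocation may differ in your environment):

```sh
# Validate the variable declarations in group_vars/all.yml
ansible-playbook -i hosts.ini asserts.yml

# Configure the orchestrator and load balancer nodes
ansible-playbook -i hosts.ini setup.yml

# Generate the manifests and ignition files
ansible-playbook -i hosts.ini oshift_prep.yml

# Provision the bootstrap, master, and worker nodes with Terraform
ansible-playbook -i hosts.ini create_nodes.yml
```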
Prerequisites for installing OpenShift 4.6 on VMware vSphere
To install OpenShift Container Platform (OCP) 4.6 on VMware vSphere, you must meet the following prerequisites:
Cisco ACI
- Download version 5.2.3.3 or later of the acc-provision tool. Specify the value of the "--flavor" option as "openshift-4.6-esx", and use the "-z" option. The tool creates a .tar archive file as specified by the value of the "-z" option. You need this archive file during installation.
- Ensure that the Cisco ACI container images that are specified as input to the acc-provision tool are version 5.2.3.3 or later.
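For example, a hypothetical invocation could look like the following; the input, output, and archive file names are placeholders, and the credentials are your APIC credentials:

```sh
acc-provision -a -f openshift-4.6-esx \
  -c acc-provision-input.yaml \
  -u admin -p password \
  -o aci_deployment.yaml \
  -z aci_manifests.tar.gz
```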
VMware vSphere
Obtain privileged user credentials to create virtual machines (VMs).
OpenShift
Obtain the following from the Red Hat website:

- OCP4 Open Virtualization Appliance (OVA): Make sure that you download the appropriate version of the OVA image. Navigate to the mirror on the OpenShift website where all the Red Hat Enterprise Linux CoreOS (RHCOS) versions are listed, select the desired version, and download the rhcos-vmware.x86_64.ova file.
- OCP4 client tools: Navigate to the mirror on the OpenShift website where the client and installation tool versions are listed, select the desired version, and download the openshift-client-linux.tar.gz and openshift-install-linux.tar.gz files.
- Pull secret
Installing OpenShift 4.6 on VMware vSphere
Note: For sample Ansible files, see the Sample files for installing OpenShift 4.6 on VMware vSphere section.
Before you start
Complete the tasks in the Prerequisites section.
We recommend that you see the Red Hat OpenShift documentation for prerequisites and other details about installing a cluster on vSphere.
Procedure
Step 1: Provision the Cisco ACI fabric using the acc-provision tool, with the "--flavor" and "-z" options described in the Prerequisites section; an example invocation is shown there.
Step 2: After provisioning the Cisco ACI fabric, verify that a port group named system_id_vlan_kubeapi_vlan has been created under the distributed switch. This document refers to this port group as api-vlan-portgroup.

Note: The kube_api VLAN is added to the dynamic VLAN pool associated with the VMware VMM domain. The mapping mode is set to Static.
Step 3: In VMware vSphere, import the OpenShift Container Platform 4 (OCP4) Open Virtual Appliance (OVA) image, and specify api-vlan-portgroup as the port group.
Step 4: Provision a Red Hat Enterprise Linux load balancer virtual machine (VM) with the network interface connected to api-vlan-portgroup. The Ansible playbooks optionally configure this VM as a load balancer, DNS server, and DHCP server for the OpenShift cluster.
Step 5: Provision a Red Hat Enterprise Linux orchestrator VM with the network interface connected to api-vlan-portgroup. The Ansible playbooks are run from the orchestrator VM.
Step 6: Perform the installation tasks on the orchestrator virtual machine, as sketched below.
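A minimal sketch of a plausible sequence, based on the playbook descriptions earlier in this document; the repository URL and all paths are placeholders:

```sh
# Install Ansible and Git (the package manager may differ on your distribution)
yum install -y ansible git

# Clone the Ansible playbook repository from GitHub (URL is a placeholder)
git clone https://github.com/<org>/<openshift-playbooks-repo>.git
cd <openshift-playbooks-repo>

# Copy the .tar archive that the acc-provision tool created (see Prerequisites)
cp /path/to/aci_manifests.tar.gz .

# Edit group_vars/all.yml and hosts.ini to match your environment (see the
# sample files later in this document), then run the playbooks in the order
# sketched earlier.
```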
What to do next

You can use the openshift-install wait-for bootstrap-complete and openshift-install wait-for install-complete commands to check the progress of the installation. Run the commands from the bootstrap directory.
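For example, assuming that the installation assets were generated in the current directory (the --dir value is a placeholder):

```sh
# Wait for the bootstrap phase to finish
openshift-install wait-for bootstrap-complete --dir=. --log-level=info

# Wait for the entire installation to finish
openshift-install wait-for install-complete --dir=. --log-level=info
```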
Updating the default Ingress Controller

To update the publish strategy of the default Ingress Controller to use the ACI load balancer, log in as a user with cluster-admin privileges and run the following:
oc replace --force --wait --filename - <
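The manifest piped to the command is not reproduced in full above; the following is a minimal sketch, assuming a LoadBalancerService endpoint publishing strategy for the default Ingress Controller (see the Red Hat guide referenced below for the authoritative manifest):

```sh
oc replace --force --wait --filename - <<EOF
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: External
EOF
```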
For more details, see the "Setting the default Ingress Controller for your cluster to be internal" section in the Ingress Operator in OpenShift Container Platform 4.6 Red Hat guide.
Configuring MachineSets with ACI CNI
The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources.
For OpenShift Container Platform 4.6 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of the Machine API, OpenShift Container Platform 4.6 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure.
The two primary resources are:

- Machines: A fundamental unit that describes the host for a node. A machine has a provider specification (providerSpec), which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a worker node on Amazon Web Services (AWS) might define a specific machine type and required metadata.
- MachineSets: MachineSet resources are groups of machines. Machine sets are to machines as replica sets are to pods. If you need more machines or must scale them down, you change the replicas field on the MachineSet resource to meet your compute needs.
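For example, scaling an existing machine set can be done by editing its replicas field or, as a sketch, with the CLI (the machine set name is a placeholder):

```sh
# Scale the hypothetical machine set ocp4-worker-0 to three machines
oc scale machineset ocp4-worker-0 --replicas=3 -n openshift-machine-api
```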
Changes to operating systems on OpenShift Container Platform nodes can be made by creating MachineConfig objects that are managed by the Machine Config Operator.
Creating a machine configuration file
Use this procedure to create a MachineConfig file that configures the network interfaces for the new nodes with the following:

- ACI infra interface (ens224)
- ACI infra subinterface (ens224.{InfraVLAN})
- Opflex route for BUM traffic replication (224.0.0.0/4)
The required (example) settings are shown in the procedure below, but you should adapt them to match your specific environment. In general, the following changes are required:

- Replace each occurrence of {InfraVLAN} with the ACI infra VLAN for your fabric.
- Replace each occurrence of {MTU} with the MTU that you chose for your cluster.
Note: Make sure that your network interface is ens224 (not another name).
Procedure
Step 1: Create an 80-opflex-route file.

Step 2: Create an ens224.nmconnection file.

Step 3: Create an ens224.{InfraVLAN}.nmconnection file.

Step 4: Convert the three configurations above (steps 1, 2, and 3) to base64-encoded strings, and use them in the MachineConfig template (after base64) as shown below.
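A minimal sketch of the encoding step and of how the encoded strings might be embedded in the MachineConfig template; the target file paths, the MachineConfig name, and the Ignition version are assumptions, not values confirmed by this document:

```sh
# Encode each of the files created in steps 1-3 (file names are placeholders)
B64_ROUTE=$(base64 -w0 80-opflex-route)
B64_ENS224=$(base64 -w0 ens224.nmconnection)
B64_ENS224_VLAN=$(base64 -w0 ens224.3901.nmconnection)   # 3901 = example {InfraVLAN}

# Write a MachineConfig template that embeds the encoded strings
cat > machineconfig-aci-net.yaml <<EOF
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 00-worker-cisco-aci-net
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
      - path: /etc/NetworkManager/system-connections/ens224.nmconnection
        mode: 384   # 0600
        contents:
          source: data:text/plain;charset=utf-8;base64,${B64_ENS224}
      - path: /etc/NetworkManager/system-connections/ens224.3901.nmconnection
        mode: 384   # 0600
        contents:
          source: data:text/plain;charset=utf-8;base64,${B64_ENS224_VLAN}
      - path: /etc/sysconfig/network-scripts/80-opflex-route
        mode: 420   # 0644
        contents:
          source: data:text/plain;charset=utf-8;base64,${B64_ROUTE}
EOF
```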
Step 5: Use the oc create -f command to create the MachineConfig for your cluster. The machine configuration is applied to the (two) already existing worker nodes, and it replaces the three existing files with identical copies. When you include the md5 configuration in the machine configuration, three files are created on each node. The existing worker nodes are restarted one at a time.
Step 6: Get your cluster ID by using the oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster command.
Step 7: Create the machine set, modifying the sample configuration as needed for your environment; a sketch follows.
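A minimal sketch of a vSphere MachineSet, assuming the cluster ID obtained in step 6; all values are placeholders to adapt, and the providerSpec fields follow the generic OpenShift vSphere schema rather than anything specific to this document:

```sh
oc create -f - <<'EOF'
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <cluster-id>-worker-1
  namespace: openshift-machine-api
  labels:
    machine.openshift.io/cluster-api-cluster: <cluster-id>
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <cluster-id>
      machine.openshift.io/cluster-api-machineset: <cluster-id>-worker-1
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <cluster-id>
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: <cluster-id>-worker-1
    spec:
      providerSpec:
        value:
          apiVersion: vsphereprovider.openshift.io/v1beta1
          kind: VSphereMachineProviderSpec
          numCPUs: 4
          memoryMiB: 16384
          diskGiB: 40
          network:
            devices:
            - networkName: <system_id>_vlan_<kubeapi_vlan>
          template: RHCOS46
          credentialsSecret:
            name: vsphere-cloud-credentials
          userDataSecret:
            name: worker-user-data
          workspace:
            server: <vcenter-server>
            datacenter: <datacenter-name>
            datastore: <datastore-name>
            folder: /<datacenter-name>/vm/<cluster-folder>
EOF
```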
Scaling compute nodes using MachineSet
Use this procedure to scale compute nodes using MachineSet.
Procedure
Step 1: Create a virtual machine folder.

Step 2: Create a template in vCenter. Make sure that the template name matches the cluster name in the RHCOS46 template.

Step 3: Create a tag category called ID in vCenter.

Step 4: Configure the DHCP server to assign IP addresses to the nodes.
Sample files for installing OpenShift 4.6 on VMware vSphere
This section contains sample files that you need to install OpenShift 4.6 on VMware vSphere.
Sample acc-provision-input file

The following is a sample acc-provision-input.yaml file. The highlighted or bold values are those that you must modify to meet your site requirements.
```yaml
#
# Configuration for ACI Fabric
#
aci_config:
  system_id: ocp4aci
  #apic-refreshtime: 1200
  apic_hosts:
  - 1.1.1.1
  vmm_domain:
    encap_type: vxlan
    mcast_range:                # Every opflex VMM must use a distinct range
      start: 225.2.1.1
      end: 225.2.255.255
    nested_inside:
      type: vmware
      name: my-vswitch
      elag_name:                # Beginning with Cisco APIC 5.0(1), you can
                                # configure a VMware teaming policy when using
                                # Link Aggregation Groups (LAGs).
      installer_provisioned_lb_ip: 10.213.0.201

  # The following resources must already exist on the APIC.
  # They are used, but not created, by the provisioning tool.
  aep: my-aep
  vrf:                          # This VRF is used to create all Kubernetes EPs
    name: myl3out_vrf
    tenant: common
  l3out:
    name: myl3out
    external_networks:
    - myl3out_net

#
# Networks used by ACI containers
#
net_config:
  node_subnet: 192.168.18.1/24
  pod_subnet: 10.128.0.1/16     # Subnet to use for Kubernetes
                                #   Pods/CloudFoundry containers
  extern_dynamic: 10.3.0.1/24   # Subnet to use for dynamic external IPs
  extern_static: 10.4.0.1/24    # Subnet to use for static external IPs
  node_svc_subnet: 10.5.0.1/24  # Subnet to use for service graph
  kubeapi_vlan: 35
  service_vlan: 36
  infra_vlan: 3901
  #interface_mtu: 1600
  #service_monitor_interval: 5  # IPSLA interval probe time for PBR tracking;
                                # default is 0, set to > 0 to enable, max: 65535
  pbr_tracking_non_snat: true   # Default is false; set to true for IPSLA to
                                # be effective with non-SNAT services

#
# Configuration for container registry
# Update if a custom container registry has been set up
#
kube_config:
  image_pull_policy: Always
  ovs_memory_limit: 1Gi

registry:
  image_prefix: quay.io/noiro
```
Sample Ansible group_vars/all.yml file

The following is a sample group_vars/all.yml file. The highlighted or bold values are those that you must modify to meet your site requirements.
```yaml
#domainname
# type: string, base dns domain name; the cluster name is added as a subdomain
# required: yes
domainname: ocplab.local

#provision_dns
# type: boolean, true or false
# required: yes
# notes: if set to true, the load balancer is configured as a dns server.
#        If false, the dns server is assumed to already exist.
provision_dns: true

#dns_forwarder:
# type: ip address
# required: yes
# notes: this value is used when setting up a dhcp service and also for the
#        'forwarders' value in the dns configuration.
dns_forwarder: 172.28.184.18

#loadbalancer_ip:
# type: ip address or resolvable hostname
# required: yes
# notes: this host is configured as the load balancer for the cluster, and
#        also as the dhcp and dns server if required. This IP address is the
#        same as the one that you configure in installer_provisioned_lb_ip in
#        the acc-provision configuration.
loadbalancer_ip: 192.168.18.201

#auto_approve_csr:
# type: boolean
# required: yes
# notes: when set to true, sets up a cron job to automatically approve the
#        openshift csr
auto_approve_csr: true

#proxy_env
proxy_env:
  # do not remove the dummy field, irrespective of whether the setup needs a
  # proxy or not
  dummy: dummy
  # set the http/https proxy server; if the setup does not need a proxy,
  # comment out the values below. These values are used for ansible tasks and
  # are also passed on to the openshift installer
  http_proxy: http://1.1.1.1:80
  https_proxy: http://1.1.1.1:80
  no_proxy: 1.2.1.1,1.2.1.2

#packages
# defines the urls to download terraform, the openshift client, and the
# openshift installer from
packages:
  validate_certs: false
  terraform_url: https://releases.hashicorp.com/terraform/0.13.6/terraform_0.13.6_linux_amd64.zip
  openshift_client_linux_url: https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.6.21/openshift-client-linux-4.6.21.tar.gz
  openshift_install_linux_url: https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.6.21/openshift-install-linux-4.6.21.tar.gz

#default_aci_manifests_archive:
# default filename to look for in the archive directory.
# this can be overridden by passing the extra parameter aci_manifests_archive
# on the ansible command line
default_aci_manifests_archive: aci_manifests.tar.gz

#opflex_interface_mtu:
# required: yes
# MTU size for the interface attached to the fabric; must be greater than 1564
opflex_interface_mtu: 1800

vsphere:
  server: myvsphere.local.lab
  user: administrator@vsphere.local
  password: xxxx
  allow_unverified_ssl: true
  datacenter_name: my-dc
  cluster_name: my-cluster
  data_store_name: my-datastore
  RHCOS_template_name: RHCOS46

#base_dir
# type: directory path
# required: yes
# notes: all install files and directories are created under this directory
base_dir: /root/ocpinstall

#node network details. This is common for bootstrap, master, and worker nodes.
bootstrap_vars:
  ip: 192.168.18.210
  cpu_count: 8        #optional: defaults to 4
  memory_KB: 16384    #optional: defaults to 8192
  disk_size_MB: 40    #optional: defaults to 40

masters_vars:
  cpu_count: 8        #optional: defaults to 4
  memory_KB: 16384    #optional: defaults to 16384
  disk_size_MB: 40    #optional: defaults to 40
  nodes:
    #mac address and ip address are required for each node
    - master-1:
        ip: 192.168.18.211
    - master-2:
        ip: 192.168.18.212
    - master-3:
        ip: 192.168.18.213

workers_vars:
  cpu_count: 8        #optional: defaults to 4
  memory_KB: 16384    #optional: defaults to 16384
  disk_size_MB: 40    #optional: defaults to 40
  nodes:
    #mac address and ip address are required for each node
    - worker-1:
        ip: 192.168.18.214
    - worker-2:
        ip: 192.168.18.215

#user_ssh_key:
# required: no
# notes: if specified, this key is set up on the nodes; otherwise the current
#        user's ssh key is used
user_ssh_key: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD...

#additional_trust_bundle:
# required: no
# notes: use this field to add a certificate to the private store
#
# example:
#additional_trust_bundle: |
#  -----BEGIN CERTIFICATE-----
#  MIIDDDCCAfQCCQDuOnV7XBjpODANBgkqhkiG9w0BA...
#  -----END CERTIFICATE-----

#openshift_pullsecret:
# required: yes
# notes: see https://cloud.redhat.com/openshift/install/pull-secret
# example:
# openshift_pullsecret: {"auths":{"cloud.openshift.com":{"auth":.........}}}
openshift_pullsecret: xxx
```
Sample hosts.ini file

The following is a sample hosts.ini file. The highlighted or bold values are those that you must modify to meet your site requirements.

```ini
[orchestrator]
192.168.18.200

[lb]
192.168.18.201
```
Decommissioning OpenShift

Use this procedure to decommission OpenShift and remove the ACI-provisioned configuration from ACI.

Note: Starting with Cisco APIC release 5.2, VMM domains for OpenShift cannot be removed from the APIC GUI; it is possible only through the REST API. Therefore, it is convenient to use the acc-provision tool to remove the VMM domain and the other related objects used by the decommissioned OpenShift cluster. Make sure that you have the acc-provision input file that was used to provision the cluster.
Before you start
If the OpenShift cluster is decommissioned or deleted, the ACI configuration that was provisioned for that cluster must be deleted from ACI. The acc-provision tool can be used to remove that configuration.
Procedure
Use the following command from the machine and directory that was used to provision the ACI infrastructure, to delete the pre-provisioned configuration and the VMM domain:

acc-provision -d -f openshift-4.6-esx -c acc-input-file -u user -p password

Example:

acc-provision -d -f openshift-4.6-esx -c acc-input-config.yaml -u admin -p password
FAQs
What are the requirements for OpenShift on VMware?

vSphere 6.5U3 or vSphere 6.7U2+ is required for OpenShift Container Platform. VMware's NSX Container Plugin (NCP) is certified with OpenShift Container Platform 4.6 and NSX-T 3.x+. If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OpenShift Container Platform.
Can you run OpenShift on VMware?

OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only.
Can we install OpenShift on VMware Workstation?

Here is the step-by-step process to start an OpenShift single-node cluster on your workstation: first, install VMware Workstation (licensed) or VMware Workstation Player (free for personal use). If you are familiar with VirtualBox, you can use that as well.
How do you install OpenShift on premises?

- Prerequisites.
- Attach the OpenShift Container Platform subscription.
- Set up repositories.
- Install the OpenShift Container Platform package.
- Set up password-less SSH access.
- Run the installation playbooks.
- For OpenShift Container Platform: 4 physical CPU cores, 9 GB of free memory, and 35 GB of storage space.
- For the Podman container runtime: 2 physical CPU cores, 2 GB of free memory, and 35 GB of storage space.
To download the pull secret for a vSphere installation:

- Log in to the OpenShift Cluster Manager.
- Click Create an OpenShift cluster.
- Choose Red Hat OpenShift Container Platform.
- On Select an infrastructure provider, select Run on VMware vSphere.
- Click Download pull secret, or copy the pull secret and save it to a file.
No; OpenShift and Kubernetes are different container orchestration platforms. OpenShift has a built-in Kubernetes platform, which makes the installation process easier, but it is limited to Red Hat Linux distributions.
What version of the operating system is required to install OpenShift Container Platform?

Base OS: RHEL 7.5 or later with the "Minimal" installation option, or RHEL Atomic Host 7.4.5 or later.
What is the difference between OpenShift and vSphere?

A new third-party study from Principled Technologies shows that vSphere supports double the number of active SQL Server VMs, requires less admin hands-on time, and provides more functionality than the OpenShift Virtualization solution.

What hypervisor is used by OpenShift Virtualization?

OpenShift Virtualization uses the KVM hypervisor, a core part of the Red Hat Enterprise Linux kernel used by Red Hat Virtualization for more than a decade.
Can OpenShift run on premises?

A self-managed deployment option, Red Hat OpenShift Platform Plus can be installed on premises, in the cloud, in a managed cloud, or at the edge, providing a consistent user experience, management, and security across hybrid infrastructure.

Is OpenShift cloud or on premises?

Introduced in 2011, OpenShift Online is for individual developers or teams that access OpenShift as a public cloud service. OpenShift Online is implemented as an on-demand consumption model hosted on public cloud platforms, including Amazon Web Services (AWS), Microsoft Azure, and Google Cloud.
How do you install the OpenShift oc client?

- From the web console, click ?.
- Click Command Line Tools.
- Select the oc binary for the macOS platform, and then click Download oc for Mac for x86_64.
- Save the file.
- Unpack and unzip the archive.
- Move the oc binary to a directory on your PATH.
Three master nodes are required to control the operation of a Kubernetes cluster. In OpenShift Container Platform, the master nodes are responsible for all control plane operations.
What is the best storage for OpenShift?

The preferred storage technology is object storage. The storage technology must support RWX access mode and must ensure read-after-write consistency. File storage and block storage are not recommended for a scaled/HA OpenShift Container Platform registry cluster deployment with production workloads.
What is the minimum OpenShift cluster size?

Red Hat OpenShift Container Storage 4.2 supports a minimum of 3 nodes and a maximum of 9 nodes. Expand the cluster in sets of three nodes to ensure that your storage is replicated and that you can use at least three availability zones.
To create a vSphere cluster, you need the following:

- a datacenter in the vCenter inventory
- sufficient vSphere permissions to create a cluster
- vSAN enabled before creating a vSphere cluster, if you want to use VMware vSAN
Azure Red Hat OpenShift requires a minimum of 40 cores to create and run an OpenShift cluster. The default Azure resource quota for a new Azure subscription doesn't meet this requirement. To request an increase in your resource limit, see Standard quota: Increase limits by VM series.
What are the requirements for VM storage?

The minimum hardware requirements for running virtual machines depend on the type of virtual machine you are running. Generally speaking, you need a processor with at least two cores, at least 4 GB of RAM, and at least 20 GB of hard drive space.
Can you run VMs on OpenShift?

The OpenShift console provides access to multiple preconfigured templates to run VMs. You can also create custom templates by using the KubeVirt documentation.