Setup

This section provides steps for a starter installation of Dynamic Cluster to enable dynamic virtual machine provisioning. Complete these steps to install LSF and Platform Cluster Manager and enable Dynamic Cluster on your chosen hosts.

Important:

Swap space on your KVM hypervisors should be at least double their physical memory for VM saving to work properly.
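
A quick way to check this on each KVM hypervisor is to compare SwapTotal with MemTotal in /proc/meminfo. The following one-liner is a sketch of such a check:

awk '/^MemTotal/ {mem=$2} /^SwapTotal/ {swap=$2} END {if (swap >= 2*mem) print "swap is sufficient"; else print "increase swap"}' /proc/meminfo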

Prepare for installation by making sure the following are available:
  • A dedicated host to use as an NFS server (when using KVM hypervisors)
  • A dedicated host to act as both Platform Cluster Manager management server and LSF master host
  • Hosts representing your hypervisors and physical server hosts
  • Platform Cluster Manager management server and agent installer binaries:
    • pcmae_3.2.0.2_mgr_linux2.6-x86_64.bin
    • pcmae_3.2.0.2_agent_linux2.6-x86_64.bin
    Note: The agent only runs on KVM hosts. There is no dedicated package for VMware hypervisors.
  • Platform Cluster Manager entitlement file:
    • pcmae_entitlement.dat
  • LSF distribution tar file:
    • lsf9.1.3_linux2.6-glibc2.3-x86_64.tar.Z
  • LSF installer (lsfinstall) tar file:
    • lsf9.1.3_lsfinstall.tar.Z
  • LSF entitlement file, which is one of the following files:
    • LSF Express Edition: platform_lsf_exp_entitlement.dat
    • LSF Standard Edition: platform_lsf_std_entitlement.dat
    • LSF Advanced Edition: platform_lsf_adv_entitlement.dat
  • Dynamic Cluster add-on distribution package:
    • lsf9.1.3_dc_lnx26-lib23-x64.tar.Z
  • Oracle database:

    Either use an existing Oracle database installation or download the required Oracle database packages.

    Copy the Oracle database packages to the directory where the Platform Cluster Manager management server package is located. The Platform Cluster Manager management server installer will install the Oracle packages automatically.

Restriction:

All hypervisor hosts must be running the same operating system type and version.

Install the Platform Cluster Manager management server

About this task

To install the Platform Cluster Manager master services in the default installation folder /opt/platform, complete the following installation steps on the intended management server:

Procedure

  1. Log into the management server as root.
  2. Navigate to the directory where the management server package is located.
  3. Set the installation environment variables.

    The following environment variables assume that the management server is named HostM and the license file is located in /pcc/software/license/pcmae_entitlement.dat:

    export MASTERHOST=HostM
    export LICENSEFILE=/pcc/software/license/pcmae_entitlement.dat
    export CLUSTERNAME=PCMAE_DC
    export CLUSTERADMIN=admin
    export BASEPORT=15937
    Note: CLUSTERNAME (the name of the Platform Cluster Manager cluster) must be different from the name of the LSF cluster.

    If you want to use management server failover, set the SHAREDDIR environment variable to the file path of the shared directory. For example:

    export SHAREDDIR=/usr/share/platform

    To configure management server failover for Platform Cluster Manager, refer to Configuring master management server failover in the Administering IBM Platform Cluster Manager Advanced Edition guide.

  4. Run the installer binary.

    ./pcmae_3.2.0.2_mgr_linux2.6-x86_64.bin

    The Platform Cluster Manager installer installs Oracle XE on the host if the Oracle database packages are located in the same directory as the management server package.
    • If the installer installs Oracle XE and the host already has an OS account named oracle, you must enable interactive login to the oracle account while installing Oracle XE.
    • If you reinstall Platform Cluster Manager and want to transfer the accumulated Oracle data to your new installation, you will be prompted for the credentials to access the database.

    When the Platform Cluster Manager installer displays the following prompt asking whether to install the provisioning engine, enter no:

    Do you want to install the provisioning engine on the same host as your management server?(yes/no)

    By default, the Platform Cluster Manager installer uses the following parameter values:

    • Username: isf
    • Password: isf
    • Port: 1521
    • Service name: XE
    The installer also creates /etc/init.d/ego.
  5. Source the Platform Cluster Manager environment.
    • csh or tcsh: source /opt/platform/cshrc.platform

    • sh, ksh, or bash: . /opt/platform/profile.platform

  6. Start the manager services.

    egosh ego start
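
    After the services start, you can optionally confirm that the manager services are up with a check such as the following (a sketch; you may need to log on with egosh user logon first, and the output varies by installation):

    egosh service list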

Install Platform Cluster Manager agents on KVM hypervisors (KVM only)

About this task

To install the Platform Cluster Manager agent (PVMO agent) on the KVM hypervisors, complete the following installation steps on each intended hypervisor:

Procedure

  1. Log into the hypervisor host as root.
  2. Prepare the hypervisor host for virtualization.
    1. Use yum to install the required virtualization files.
      yum groupinstall Virtualization*
    2. Ensure that VT-x or AMD-V is enabled in the BIOS.
    3. Ensure that the libvirtd service is enabled and running.
    4. Stop and disable the libvirt-guests service.

      chkconfig libvirt-guests off

      service libvirt-guests stop

    5. Configure a network bridge to give the virtual machines direct access to the network (see the sketch after this list).

      For more details, refer to Bridged network with libvirt in the RHEL product documentation: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Host_Configuration_and_Guest_Installation_Guide/sect-Virtualization_Host_Configuration_and_Guest_Installation_Guide-Network_Configuration-Network_Configuration-Bridged_networking_with_libvirt.html.

      Note: RHEL KVM supports two methods of connecting virtual machines to the physical network: software bridge and MacVTap. Use the software bridge because MacVTap has performance issues with Windows guest operating systems.
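
      A minimal sketch of such a bridge configuration on RHEL 6, assuming a single interface named eth0 and a bridge named br0 that uses DHCP; adapt the device names and addressing to your network:

      # /etc/sysconfig/network-scripts/ifcfg-eth0
      DEVICE=eth0
      ONBOOT=yes
      BRIDGE=br0

      # /etc/sysconfig/network-scripts/ifcfg-br0
      DEVICE=br0
      TYPE=Bridge
      ONBOOT=yes
      BOOTPROTO=dhcp

      After editing the files, restart the network service (service network restart) to activate the bridge.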
  3. Navigate to the directory where the agent package is located.
  4. Set the installation environment variables.

    The following environment variables assume that the management server is named HostM:

    export MASTERHOST=HostM
    export CLUSTERNAME=PCMAE_DC
    export CLUSTERADMIN=admin
    export BASEPORT=15937
    Important:
    • The values of these environment variables must match those you used when you installed the Platform Cluster Manager management server.
    • CLUSTERADMIN must be a valid OS user account on the host with the same user ID and group ID as the corresponding user account on the management server.
    • CLUSTERNAME (the name of the Platform Cluster Manager cluster) must be different from the name of the LSF cluster.
  5. Run the installer binary.

    ./pcmae_3.2.0.2_agent_linux2.6-x86_64.bin

  6. Source the Platform Cluster Manager environment.
    • csh or tcsh: source /opt/platform/cshrc.platform

    • sh, ksh, or bash: . /opt/platform/profile.platform

  7. Start the agent services.

    egosh ego start
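
    To confirm that the hypervisor registered with the management server, you can optionally run a check such as the following from the management server (a sketch; you may need to log on with egosh user logon first):

    egosh resource list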

Add your vCenter Server host to Platform Cluster Manager (VMware only)

Before you begin

  • Place the hypervisor hosts that will join the LSF cluster into their own VMware Data Center or VMware Cluster.
  • If you are using VMware Cluster, you must disable Distributed Resource Scheduler (DRS) and High Availability (HA) in the VMware Cluster.

Procedure

  1. Log into the Platform Cluster Manager web user interface (Portal) as an administrator.
  2. From the Resources tab, select Inventory > VMware.
  3. Click the vCenter Servers tab.
  4. Click the Add button to add a vCenter Server host.
  5. Specify the host name, user name, and password for your vCenter Server.

Results

It may take several minutes for Platform Cluster Manager to connect and load the vCenter inventory details.

Add IP addresses to the Platform Cluster Manager web user interface (Portal)

About this task

To complete the Platform Cluster Manager installation, add IP addresses to the IP pool.

When Dynamic Cluster powers on a VM, it must be assigned an IP address by the management server from its IP pool. VM IP addresses and host names must be DNS resolvable.
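
To confirm in advance that a planned VM host name and IP address resolve correctly, you can run a quick check such as the following from the management server (a sketch using one of the sample host names from the example file in step 4 below):

host myip101184.lsf.example.com
host 172.27.101.184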

Procedure

  1. Log into the Platform Cluster Manager management server.
  2. Source the Platform Cluster Manager environment:
    • C shell: source /opt/platform/cshrc.platform
    • Bourne shell: . /opt/platform/profile.platform
  3. Authenticate with Platform Cluster Manager.

    egosh user logon -u Admin -x Admin

    The default password (for -x) is Admin.

  4. Prepare a file with the IP address information. The file contains four whitespace-delimited columns, in this order: IP address, host name, subnet mask, and default gateway.

    For example,

    $ cat SAMPLE_INPUT_FILE 
    172.27.101.184 myip101184.lsf.example.com 255.255.0.0 172.27.232.2
    172.27.101.185 myip101185.lsf.example.com 255.255.0.0 172.27.232.2
    Note:
    • Make sure the host name/IP pairs are added to the DNS server on your network.
    • All the addresses added to the IP pool must be on the same subnet.
    • If a Windows guest operating system joins an Active Directory domain, make sure that the domain will not change the guest's fully-qualified domain name. In addition, the guest's fully-qualified domain name must exactly match the name in the IP pool.
  5. Load the IP addresses into Platform Cluster Manager with the following command:

    vsh ips add -f SAMPLE_INPUT_FILE

  6. View the list of available IP addresses.

    You can view the list of available IP addresses in the Portal for Platform Cluster Manager by logging into the Portal as an administrator and navigating to IP Pool > Dynamic Cluster IP Pool in the Resources tab.

    You can also view the list of available IP addresses from the command line using the following command:

    vsh ips list

Make a VM template in Platform Cluster Manager

About this task

Platform Cluster Manager creates new virtual machines based on a template which contains the guest operating system and the application stack. To create a VM template, you must first manually create a VM on one of your hypervisors, install the basic software components required for the LSF compute host, and convert it to a template using the Portal for Platform Cluster Manager.

Procedure

  1. Create a VM using the hypervisor tools.
    • For KVM hypervisors, refer to the Red Hat Virtualization Host Configuration and Guest Installation Guide:

      https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Host_Configuration_and_Guest_Installation_Guide/

      Note:
      • When selecting storage for the VM, select the Select managed or other existing storage option and create a new volume in the default storage pool. The volume must use the qcow2 format.

        Although the Red Hat Virtualization Host Configuration and Guest Installation Guide states that you must install VMs on shared network storage to support live and offline migrations, you must still create the VM on a local qcow2 disk. Dynamic Cluster copies the VM image onto the shared storage that it manages when the VM is converted to a template.

      • If you intend to migrate virtual machines with running jobs from one hypervisor host to another (live migration of VMs), set Cache mode to none (in Advanced options) when creating the VM.

    • For VMware hypervisors, use the VMware vSphere Client.

  2. Boot the VM with your desired guest operating system.
    Note:

    Certain combinations of RHEL guest operating systems and hardware architectures can cause timekeeping errors to appear in the VM console. These errors can usually be resolved by modifying the kernel parameters in the guest OS. For more information on configuring guest timing management, refer to the Red Hat virtualization documentation.

  3. Configure the DNS servers in your guest operating system.
  4. For VMware hypervisors, install VMware Tools into your guest operating system.
  5. Copy the VMTools installer to your VM.

    The installer is on your Platform Cluster Manager management server in the folder: /opt/platform/virtualization/4.1.

  6. Install VMTools from within your VM.
    • If you are making a template for a Linux guest operating system, extract and run the VMTools installer package that you copied to your VM.
    • If you are making a template for a Windows guest operating system:
      1. Log into the VM as the Windows OS Administrator account.
      2. For VMware hypervisors, add the C:\Program Files\VMware\VMware Tools directory to your system PATH environment variable.

        Click Start, right-click Computer, then select Advanced System Settings > Environment Variables. Select Path and click Edit.

      3. Extract the VMTools installer package that you copied to your VM.
      4. Open a command line window and run cscript install.vbs to install VMTools.
  7. Install the LSF slave host into your VM.
    • If you are making a template for a Linux guest operating system:

      1. Follow the steps in the LSF installation guide for Linux.

        Make sure to specify the following parameter in the slave.config configuration file:

        LSF_LOCAL_RESOURCES="[resource jobvm]"

      2. Run the hostsetup and chkconfig commands to configure the host to not boot LSF when the VM launches:
        # hostsetup --boot="n"
        # chkconfig lsf off
    • If you are making a template for a Windows guest operating system:

      1. Follow the steps in the LSF installation guide for Windows.
      2. Edit the lsf.conf file and specify the LSF_USER_DOMAIN parameter.
        • If all Windows accounts are local accounts, use "." as the domain:

          LSF_USER_DOMAIN="."

        • If the VM will join a Windows domain, specify the domain name:

          LSF_USER_DOMAIN="domain_name"

      3. Edit the lsf.conf file and specify the LSF_LOCAL_RESOURCES parameter:

        LSF_LOCAL_RESOURCES="[resource jobvm]"

      4. Create a Windows OS account with the same name as the LSF cluster administrator (for example, lsfadmin).
        Note: The same Windows user account in all templates must use the same password.
  8. For KVM hypervisors, if your VM has a mounted CD-ROM device and bound ISO file, use virt-manager to unmount the ISO file for the CD-ROM and remove the CD-ROM device from the virtual machine.
  9. Power off your VM.
  10. Log into the Portal for Platform Cluster Manager as an administrator.
  11. Add storage repositories to your VM.
    • For KVM hypervisors, Platform Cluster Manager manages storage repositories. Use the Platform Cluster Manager Portal to add a storage repository:

      Click the Resources tab, then navigate to Inventory > KVM > Storage List and click Add Storage.

    • For VMware hypervisors, VMware vCenter manages storage repositories. In a Dynamic Cluster-enabled LSF cluster, any VM can be started on any hypervisor host, so the VM datastore must make all VM disks available to all hypervisors.

    For more details on storage repositories, refer to Managing storage repositories in the Platform Cluster Manager Administration Guide.

  12. Convert the VM into a template.

    For VMware, Platform Cluster Manager supports two types of templates: standard templates and snapshot templates. With a standard template, the full VM disk must be copied before a new VM can be powered on. Depending on the size of the disk and the performance of your storage infrastructure, a full copy of a VM disk can take minutes or hours to complete. A snapshot template uses copy on write to instantly clone the VM disk from the template. Local disk intensive applications will have better performance with a standard template.

    • Create a KVM standard template:
      1. Log into the Platform Cluster Manager Portal as an administrator.
      2. Click the Resources tab, then navigate to Inventory > KVM in the navigation tree.
      3. Click the Machines tab in the main window.
      4. Select your VM from the list.
      5. Click Manage and select Convert to Template.
    • Create a VMware standard template:
      1. Log into the Platform Cluster Manager Portal as an administrator.
      2. Click the Resources tab, then navigate to Inventory > VMware > vCenter_host_name in the navigation tree.
      3. Click the Machines tab in the main window.
      4. Select your VM from the list.
      5. Click Manage and select Convert to Template.
    • Create a VMware snapshot template:
      1. In the VMware vSphere Client, create a snapshot of your VM.
      2. Log into the Platform Cluster Manager Portal as an administrator.
      3. Click the Resources tab, then navigate to Inventory > VMware in the navigation tree.
      4. Select your vCenter Server in the navigation tree.
      5. Click the VM Snapshots tab in the main window.
      6. Select your VM snapshot from the list.
      7. Click Set as template.
  13. Add a post-provisioning script to the template.

    For more details, refer to Create a post-provisioning script.

Install the LSF master host

Procedure

  1. Follow the steps in Installing LSF to install the LSF master host.
    The following install.config parameter is required for Dynamic Cluster to work:
    • ENABLE_DYNAMIC_HOSTS="Y"
  2. Source the LSF environment.
    • csh or tcsh: source /opt/lsf/conf/cshrc.lsf
    • sh, ksh, or bash: . /opt/lsf/conf/profile.lsf
  3. Extract the Dynamic Cluster add-on distribution package and run the setup script.
  4. If you are using Windows guest operating systems, use the lspasswd command to register the password for the account you created.

    Use the same domain name as the one you specified in the LSF_USER_DOMAIN parameter.

    For example, use ".\lsfadmin" for local accounts or "domain_name\lsfadmin" for a Windows domain.
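
    A sketch of the command for a local lsfadmin account (lspasswd prompts for the password):

    lspasswd -u ".\lsfadmin"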

Results

The rest of this guide assumes that you have used the directory /opt/lsf as your top-level LSF installation directory.

Enable Dynamic Cluster in LSF

About this task

Complete the following steps to make your LSF installation aware of the Dynamic Cluster functionality and mark hosts in your cluster as valid for dynamic provisioning.

Procedure

  1. Install the Dynamic Cluster add-on package (lsf9.1.3_dc_lnx26-lib23-x64.tar.Z).

    For installation instructions, extract the package and refer to the README file in the package.

  2. Create the dc_conf.lsf_cluster_name.xml file in $LSF_ENVDIR.

    A template for dc_conf.lsf_cluster_name.xml is provided for easy setup.

    1. Copy the template file TMPL.dc_conf.CLUSTER_NAME.xml from /opt/lsf/9.1/misc/conf_tmpl/ into $LSF_ENVDIR.
    2. Change the <Templates> section in the file to match the templates that you created and make other required changes for your configuration.
    3. If you set the SHAREDDIR environment variable when installing the Platform Cluster Manager management server (to enable management server failover), change the file path in the DC_CONNECT_STRING parameter from the default /opt/platform to the value that you set for the SHAREDDIR environment variable.

      For example, if you set SHAREDDIR to /usr/share/platform with the following command:

      export SHAREDDIR=/usr/share/platform

      then navigate to the DC_CONNECT_STRING parameter and change the file path as follows:

              <Parameter name="DC_CONNECT_STRING">
                  <Value>Admin::/usr/share/platform</Value>
              </Parameter>
    4. Rename the file to dc_conf.lsf_cluster_name.xml.

    For example, for KVM hosts:

    # cat /opt/lsf/conf/dc_conf.lsf_cluster_name.xml
    <?xml version="1.0" encoding="UTF-8"?>
    <dc_conf xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
        <ParametersConf>
            <Parameter name="DC_VM_MEMSIZE_DEFINED">
                <memsize>8192</memsize>
                <memsize>4096</memsize>
                <memsize>2048</memsize>
                <memsize>1024</memsize>
            </Parameter>
            <Parameter name="DC_VM_RESOURCE_GROUPS">
                <Value>KVMRedHat_Hosts</Value>
            </Parameter>
            <Parameter name="DC_CONNECT_STRING">
                <Value>Admin::/opt/platform</Value>
            </Parameter>
        </ParametersConf>
        <Templates>
            <Template>
                <Name>RH_VM_TMPL</Name>
                <PCMAE_TemplateName>isf_rhel56vm_tmpl</PCMAE_TemplateName>
                <PostProvisioningScriptName>start-lsf.sh</PostProvisioningScriptName>
                <PostProvisioningScriptArguments/>
                <Description>RHEL 5.6 Virtual Machine</Description>
                <RES_ATTRIBUTES>
                    <bigmem />
                    <hostType>linux</hostType>
                    <OSTYPE>linux2.6-glibc2.4</OSTYPE>
                </RES_ATTRIBUTES>
            </Template>
            <Template>
                <Name>RH_KVM</Name>
            </Template>
        </Templates>
        <ResourceGroupConf>
            <HypervisorResGrps>
                <ResourceGroup>
                    <Name>KVMRedHat_Hosts</Name>
                    <Template>RH_KVM</Template>
                    <MembersAreAlsoPhysicalHosts>Yes</MembersAreAlsoPhysicalHosts>
                </ResourceGroup>
            </HypervisorResGrps>
        </ResourceGroupConf>
    </dc_conf>

    For VMware hosts, make the following changes to the previous example file:

    1. Change the ParametersConf node:

      Change the DC_VM_RESOURCE_GROUPS parameter to match the name of your VMware resource group.

      To determine the name of the VMware resource group, use the Platform Cluster Manager Portal:

      1. Log into the Platform Cluster Manager Portal as an administrator.
      2. Click the Resources tab, then navigate to Inventory > VMware in the navigation tree.
      3. Click the Resource Group tab in the main window.
      4. Locate the name of your resource group in the Resource Group Name column of the table.
    2. Change the ResourceGroupConf node:
      1. Change the name of the hypervisor resource group to the same resource group name used in the ParametersConf node.

        The hypervisor resource group is found in the following XML path: HypervisorResGrps\ResourceGroup\Name.

      2. Delete the HypervisorResGrps\ResourceGroup\Template node.
      3. Change the value of the MembersAreAlsoPhysicalHosts parameter to No.

    The DC_VM_MEMSIZE_DEFINED parameter specifies the possible memory sizes for the virtual machines that Dynamic Cluster creates. Jobs whose resource requirements fall between these values run in VMs whose memory allocations are rounded up to the next highest value in the list.
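
    For example, with the memory sizes in the sample file above (1024, 2048, 4096, and 8192 MB), a job that requests 3000 MB of memory, such as the hypothetical submission below, is matched to a VM whose memory is rounded up to 4096 MB (additional Dynamic Cluster submission options may be required to request a VM; this only illustrates the rounding):

    bsub -R "rusage[mem=3000]" ./myjob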

    The <Templates> section defines all Dynamic Cluster templates. Each <Template> section has the following parameters:
    • <Name>: REQUIRED. Provide a unique name for the template.
    • <PCMAE_TemplateName>: OPTIONAL. The Platform Cluster Manager template name. The specified template is used for provisioning through Dynamic Cluster. The same Platform Cluster Manager template can be used by different Dynamic Cluster templates at the same time.
    • <PostProvisioningScriptName>: OPTIONAL. The script to be run once the provision is finished. You can upload post-provisioning script files through the Portal for Platform Cluster Manager.
    • <PostProvisioningScriptArguments>: OPTIONAL. Arguments to be passed to the post-provisioning script defined in <PostProvisioningScriptName> when the script is run.
    • <Description>: OPTIONAL. The description of the template.
    • <RES_ATTRIBUTES>: OPTIONAL. Defines resource names and initial values that are available as resource requirements in the template. Each resource is defined as an element with the same name as the resource name. Numeric or string resource values are defined within the element (between the opening and closing tags) while Boolean resources are specified without a defined value within the element. The resource names must be defined in lsf.shared, otherwise the resource is ignored in this element.

    The <ResourceGroupConf> section defines the resource groups.

    Hypervisor resource groups are defined in a <HypervisorResGrps> section containing one or more <ResourceGroup> sections. Each <ResourceGroup> section has the following parameters:
    • <ResourceGroup>: Contains the detailed configuration of a resource group.
    • <Name>: REQUIRED. The unique name of the resource group.
    • <Template>: REQUIRED. The Dynamic Cluster template name associated with the hypervisor hosts in the resource group.
    • <MembersAreAlsoPhysicalHosts>: OPTIONAL. The parameter has the following values:
      • Yes - The member hosts belonging to this resource group can accept physical machine jobs. When retrieving Dynamic Cluster host status, the type is displayed as PM_HV. This is the default value for KVM hosts.
      • No - The member hosts belonging to this resource group cannot run physical machine jobs, and the type is displayed as HV. This is the default value for VMware hosts because VMware hypervisor hosts do not support running jobs on the physical machine.

    See the Reference section of this guide for details about other dc_conf.lsf_cluster_name.xml parameters.

  3. Add the desired hypervisors and physical hosts to the lsf.cluster.lsf_cluster_name file and tag them with the dchost resource.
    # cat /opt/lsf/conf/lsf.cluster.lsf_cluster_name
    Begin Host
    HOSTNAME  model    type        server r1m  mem  swp  RESOURCES    #Keywords
    #apple    Sparc5S  SUNSOL       1     3.5  1    2   (sparc bsd)   
    #Example
    ...
    <Master>  !        !            1     3.5  ()   ()  (mg)
    <Host1>   !        !            1     3.5  ()   ()  (dchost)
    <Host2>   !        !            1     3.5  ()   ()  (dchost)
    ...
    <HostN>   !        !            1     3.5  ()   ()  (dchost)
    End Host
  4. Edit lsb.params and increase the value of PREEMPTION_WAIT_TIME.

    The default value in LSF is 5 minutes, but this is too short and can cause performance problems with Dynamic Cluster.

    # cat /opt/lsf/conf/lsbatch/lsf_cluster_name/configdir/lsb.params
    ...
    Begin Parameters
    DEFAULT_QUEUE  = normal   #default job queue name
    MBD_SLEEP_TIME = 20       #mbatchd scheduling interval (60 secs is default)
    SBD_SLEEP_TIME = 15       #sbatchd scheduling interval (30 secs is default)
    JOB_ACCEPT_INTERVAL = 1   #interval for any host to accept a job 
                              # (default is 1 (one-fold of MBD_SLEEP_TIME))
    ENABLE_EVENT_STREAM = n   #disable streaming of lsbatch system events
    PREEMPTION_WAIT_TIME=1800 #at least 30 minutes
    End Parameters
  5. Edit lsf.cluster.lsf_cluster_name and define LSF_HOST_ADDR_RANGE=*.*.*.* in the Parameters section to enable LSF to detect dynamic job VMs (see the sketch at the end of this procedure).
  6. If you want to use hyperthreading, edit lsf.conf and define LSF_DEFINE_NCPUS=threads.

    If you do not want to use hyperthreading, disable hyperthreading in the BIOS of your hypervisor hosts and reboot the hosts.
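
    A sketch of the configuration entries for steps 5 and 6, assuming the installation paths used in this guide:

    # In /opt/lsf/conf/lsf.cluster.lsf_cluster_name (Parameters section):
    Begin Parameters
    LSF_HOST_ADDR_RANGE=*.*.*.*
    End Parameters

    # In /opt/lsf/conf/lsf.conf (only if you want hyperthreading):
    LSF_DEFINE_NCPUS=threads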

Create a post-provisioning script

You can define a post-provisioning script in Dynamic Cluster to run when a virtual machine of a given template is first started. For example, this functionality allows you to further customize the VM instance configurations by adding users, installing applications, or running any other commands you need before running a VM job.

The name and arguments of the post-provisioning script for each template are specified in the template definition of the Dynamic Cluster configuration file. Enable your script by copying it to the post-provisioning scripts directory (/opt/platform/virtualization/conf/postProvisionScript) on your Platform Cluster Manager management server.

Note: If Platform Cluster Manager is already started, you must restart the VMOManager service after copying the script to the postProvisionScript directory:
egosh user logon -u Admin -x Admin
egosh service stop VMOManager
egosh service start VMOManager
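
A minimal sketch of a post-provisioning script, reusing the start-lsf.sh name from the dc_conf example earlier in this section; the contents below are an assumption and must be adapted to the LSF installation path inside your VM template:

#!/bin/sh
# start-lsf.sh - hypothetical post-provisioning script (sketch)
# Source the LSF environment that was installed in the VM template
. /opt/lsf/conf/profile.lsf
# Start the LSF daemons so the newly provisioned VM joins the cluster as a dynamic host
lsadmin limstartup
lsadmin resstartup
badmin hstartup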

Verify the installation

After completing the installation steps, start the LSF services on your master host and run the bdc host and bdc tmpl commands to verify that your cluster is up and running:

# bdc host
NAME         STATUS       TYPE    TEMPLATE  CPUS  MAXMEM    RESGROUP  NPMJOBS
dc-kvm2      on           HV_PM   RH_KVM    4     7998 MB   KVMRedHa  0 
dc-kvm1      on           HV_PM   RH_KVM    4     7998 MB   KVMRedHa  0 
# bdc tmpl
NAME                MACHINE_TYPE        RESGROUP
RH_KVM              VM                  KVMRedHat_Hosts

Update the Platform Cluster Manager entitlement file

If you update the Platform Cluster Manager entitlement file, restart the ego service by running egosh ego restart.

To verify that the entitlement file is working, run egosh resource list and verify that there are no error messages about entitlement files.