HP 1032 Manual

ClusterPack V2.4 Tutorial
Index of Tutorial Sections
Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary
Administrators Guide
1.0 ClusterPack Install QuickStart
1.1 ClusterPack General Overview
1.2 Comprehensive Install Instructions
1.3 Installation and Configuration of Optional Components
1.4 Software Upgrades and Reinstalls
1.5 Golden Image Tasks
1.6 System Maintenance Tasks
1.7 System Monitoring Tasks
1.8 Workload Management Tasks
1.9 System Troubleshooting Tasks
Users Guide
2.1 Job Management Tasks
2.2 File Transfer Tasks
2.3 Miscellaneous Tasks
Tool Overview
3.1 Cluster Management Utility Zone Overview
3.2 ServiceControl Manager (SCM) Overview
3.3 System Inventory Manager Overview
3.4 Application ReStart (AppRS) Overview
3.5 Cluster Management Utility (CMU) Overview
3.6 NAT/IPFilter Overview
3.7 Platform Computing Clusterware Pro V5.1 Overview
3.8 Management Processor (MP) Card Interface Overview
3.9 HP Systems Insight Manager (HPSIM) Overview
Related Documents
4.1 Related Documents
Summary of Contents for HP 1032

  • Page 1 ClusterPack Index of Tutorial Sections Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary Administrators Guide 1.0 ClusterPack Install QuickStart 1.1 ClusterPack General Overview 1.2 Comprehensive Install Instructions 1.3 Installation and Configuration of Optional Components 1.4 Software Upgrades and Reinstalls 1.5 Golden Image Tasks 1.6 System Maintenance Tasks...
  • Page 2 Dictionary of Cluster Terms Back to Top Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary Copyright 1994-2004 hewlett-packard company...
  • Page 3: Clusterpack Install Quickstart

    ClusterPack Install QuickStart ClusterPack ClusterPack Install QuickStart Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary 1.0.1 How Can I Get My HP-UX Cluster Running? Step Q1 Fill Out the ClusterPack Installation Worksheet Step Q2 Install Prerequisites Step Q3 Allocate File System Space Step Q4 Obtain a License File Step Q5 Prepare Hardware Access...
  • Page 4 Step Q1 Fill Out the ClusterPack Installation Worksheet Print out this form and fill out all information for each node in your cluster. Installation Worksheet (pdf) Note: You will not be able to complete the following steps if you have not collected all of this information.
  • Page 5 Back to Top Step Q4 Obtain a License File Get the Host ID number of the Management Server. Contact Hewlett-Packard Licensing Services to redeem your license certificates. If you purchased the ClusterPack Base Edition, redeem the Base Edition license certificate.
  • Page 6 Note: It may take up to 24 hours to receive the license file. Plan accordingly. For more information, see the Comprehensive Instructions for this step. References: Step 4 Obtain a License File Back to Top Step Q5 Prepare Hardware Access Get a serial console cable long enough to reach all the Compute Nodes from the Management Server.
  • Page 7 Step Q7 Configure the ProCurve Switch Select an IP address from the same IP subnet that will be used for the Compute Nodes. Connect a console to the switch. Log onto the switch through the console. Type 'set-up'. Select IP Config and select the "manual" option. Select the IP address field and enter the IP address to be used for the switch. For more information, see the Comprehensive Instructions for this step.
  • Page 8 References: Step 9 Install ClusterPack on the Management Server Back to Top Step Q10 Run manager_config on the Management Server Provide the following information to the manager_config program: The path to the license file(s), The DNS domain and optional NIS domain for the cluster, The host name of the manager and the name of the cluster, The management LAN interface on the Management Server, The IP address(es) of the Compute Node(s),...
  • Page 9 Step 11 Run mp_register on the Management Server Back to Top Step Q12 Power up the Compute Nodes Use the clbootnodes program to power up all Compute Nodes that have a connected Management Processor that you specified in the previous step. The clbootnodes program will provide the following information to the Compute Nodes: Language to use, Host name,...
  • Page 10 For more information, see the Comprehensive Instructions for this step. References: Step 14 Set up HyperFabric (optional) Back to Top Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary Copyright 1994-2004 hewlett-packard company...
  • Page 11: Clusterpack General Overview

    ClusterPack General Overview ClusterPack ClusterPack General Overview Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary 1.1.1 ClusterPack Overview 1.1.2 Who should use the material in this tutorial? 1.1.3 What is the best order to review the material in the tutorial? 1.1.4 Operating System and Operating Environment Requirements 1.1.5 System Requirements 1.1.1 ClusterPack Overview...
  • Page 12 of Gigabit Ethernet or Infiniband. The common components of a cluster are: Head Node - provides user access to the cluster. In smaller clusters, the Head Node may also serve as a Management Server. Management Server - a server that provides a single point of management for all system components in the cluster. Management LAN/switch - usually an Ethernet network used to monitor and control all the major system components.
  • Page 13: Installation And Configuration

    latency and higher bandwidth. A cluster LAN is also configured to separate the system management traffic from application message passing and file serving traffic. Management Software and Head Node The ability to manage and use a cluster as easily as a single compute system is critical to the success of any cluster solution.
  • Page 14 Version 2.0. The ClusterPack has a server component that runs on a Management Server, and client agents that run on the managed Integrity compute servers. NAS 8000 NAS 8000 High Availability Cluster was designed to significantly reduce downtime and maximize the availability of storage by providing heterogeneous file-sharing and file- serving functionality across a wide variety of application areas, including content delivery and distribution, consolidated storage management, technical computing, and Web serving.
  • Page 15 The Data Dictionary contains definitions for common terms that are used through the tutorial. Back to Top 1.1.3 What is the best order to review the material in the tutorial? System Administrators Initial installation and configuration of the cluster requires a complete understanding of the steps involved and the information required.
  • Page 16: System Requirements

    a link to the printable version at the bottom of the page. References: Printable Version Back to Top 1.1.4 Operating System and Operating Environment Requirements The key components of the HP Integrity Server Technical Cluster are: Management Server: HP Integrity server with HP-UX 11i Version 2.0 TCOE Compute Nodes: HP Integrity servers with HP-UX 11i Version 2.0 TCOE...
  • Page 17 Back to Top Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary Copyright 1994-2004 hewlett-packard company...
  • Page 18: Comprehensive Install Instructions

    Comprehensive Install Instructions ClusterPack Comprehensive Install Instructions Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary 1.2.1 Comprehensive Installation Overview Step 1 Fill Out the ClusterPack Installation Worksheet Step 2 Install Prerequisites Step 3 Allocate File System Space Step 4 Obtain a License File Step 5 Prepare Hardware Access Step 6 Power Up the Management Server...
  • Page 19 Processor. Verify the Management Server and the initial Compute Node. Configure the remaining Compute Nodes with a Golden Image. Create a Golden Image. Add nodes to the configuration that will receive the Golden Image. Distribute the Golden Image to remaining nodes. Install and configure the Compute Nodes that received the Golden Image.
  • Page 20 Note: You will not be able to complete the following steps if you have not collected all of this information. Details At various points during the configuration you will be queried for the following information: DNS Domain name [ex. domain.com] NIS Domain name [ex.
  • Page 21 HP-UX 11i Ignite-UX HP-UX 11i V2.0 TCOE ClusterPack depends on certain open source software which is normally installed as a part of the operating environment. The minimum release versions required are: MySQL Version 3.23.58 or higher, Perl Version 5.8 or higher. The Management Server requires a minimum of two LAN connections.
  • Page 22 /var - 4GB /share - 500MB (Clusterware edition only) Details Allocate space for these file systems when you do a fresh install of HP-UX on the Management Server. To resize /opt 1. Go to single user mode. % /usr/sbin/shutdown -r now 2.
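    The remaining resize steps are cut off in this excerpt. As a hedged sketch only: on HP-UX 11i with /opt on an LVM volume in vg00, the usual sequence after reaching single user mode is to extend the logical volume and then grow the file system. The volume name lvol5, the 4 GB size, and the VxFS file system type below are assumptions; check bdf /opt and lvdisplay for the actual values on your system.
    % /usr/sbin/lvextend -L 4096 /dev/vg00/lvol5
    % /usr/sbin/umount /opt
    % /usr/sbin/extendfs -F vxfs /dev/vg00/rlvol5
    % /usr/sbin/mount /opt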
  • Page 23 You will need to contact HP licensing to redeem your license certificates. You can call, E-mail, or fax your request to Hewlett-Packard Software Licensing Services. Refer to your Software License Certificate for contact information. Prior to installing ClusterPack V2.4, you can request a key by providing the Host ID number of the Management Server.
  • Page 24 Background This document does not cover hardware details. It is necessary, however, to make certain hardware preparations in order to run the software. Overview Get a serial console cable long enough to reach all the Compute Nodes from the Management Server. Details To allow the Management Server to aid in configuring the Management Processors, it is necessary to have a serial console cable to connect the serial port on the Management Server to the console port on the...
  • Page 25 % /opt/clusterpack/bin/manager_config Back to Top Step 7 Configure the ProCurve Switch Background The ProCurve Switch is used for the management network of the cluster. Overview The IP address for the ProCurve Switch should be selected from the same IP subnet that will be used for Compute Nodes.
  • Page 26 % > lcd /tmp % > get cpack.lic % > bye Back to Top Step 9 Install ClusterPack on the Management Server Background The ClusterPack software is delivered on a DVD. Overview Mount and register the ClusterPack DVD as a software depot. Install the ClusterPack Manager software (CPACK-MGR) using swinstall.
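    As a point of reference, a typical mount-and-register sequence on the Management Server might look like the following sketch; the DVD device file /dev/dsk/c1t2d0 is an assumption (use ioscan -fnC disk to find the actual device path):
    % /usr/sbin/mount -F cdfs /dev/dsk/c1t2d0 /mnt/dvdrom
    % /usr/sbin/swreg -l depot /mnt/dvdrom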
  • Page 27 Note: You cannot be in the /mnt/dvdrom directory when you try to unmount. You will get a file busy error. When you are finished, on the local machine: 6. Unmount the DVD file system. % /etc/umount /mnt/dvdrom On the remote system: 7.
  • Page 28 Using the ClusterPack DVD, mount and register the DVD as a software depot. Install the ClusterPack Manager software (CPACK-MGR) on the Management Server using swinstall. On the Management Server: % /usr/sbin/swinstall -s <source_machine>:/mnt/dvdrom CPACK-MGR The ClusterPack DVD will be referenced again in the installation process. Please leave it in the DVD drive until the "Invoke /opt/clusterpack/bin/manager_config on Management Server"...
  • Page 29 Cluster Management Software components after reboots. Configure Cluster Management Software tools. The Management Server components of HP System Management Tools (HP Systems Insight Manager) are also configured if selected. Print a PASS diagnostic message if all of the configuration steps are successful. Overview Provide the following information to the manager_config program: The path to the license file(s),...
  • Page 30 manager_config Invocation manager_config is an interactive tool that configures the Management Server based on some simple queries (most of the queries have default values assigned, and you just need to press RETURN to assign those default values). Back to Top Step 11 Run mp_register on the Management Server Background A Management Processor (MP) allows you to remotely monitor and control the state of a Compute Node...
  • Page 31 When you telnet to an MP, you will initially access the console of the associated server. Other options such as remote console access, power management, remote re-boot operations, and temperature monitoring are available by typing control-B from the console mode. It is also possible to access the MP as a web console. However, before it is possible to access the MP remotely it is first necessary to assign an IP address to each MP.
  • Page 32 console port on the MP card of each Compute Node. When you are ready to run mp_register, use this command: % /opt/clusterpack/bin/mp_register Back to Top Step 12 Power up the Compute Nodes Background The clbootnodes utility is intended to ease the task of booting Compute Nodes for the first time. To use clbootnodes, the nodes' MP cards must have been registered and/or configured with mp_register.
  • Page 33 When booting a node, clbootnodes will answer the first boot questions rather than having to answer them manually. The questions are answered using the following information: Language selection: All language selection options are set to English. Keyboard selection: The keyboard selection is US English. Timezone: The time zone information is determined based on the setting of the Management Server. Time: The current time is accepted.
  • Page 34 Background This tool is the driver that installs and configures appropriate components on every Compute Node. Registers Compute Nodes with HP Systems Insight Manager or SCM on the Management Server. Pushes agent components to all Compute Nodes. Sets up each Compute Node as NTP client, NIS client, and NFS client. Starts necessary agents in each of the Compute Nodes.
  • Page 35 Execute the following command. % /opt/clusterpack/bin/compute_config Back to Top Step 14 Set up HyperFabric (optional) Background The utility clnetworks assists in setting up a HyperFabric network within a cluster. For clnetworks to recognize the HyperFabric (clic) interface, it is necessary to first install the drivers and/or kernel patches that are needed.
  • Page 36: Known Issues

    ClusterPack can configure IP over InfiniBand (IPoIB) if the appropriate InfiniBand drivers are installed on the systems. Overview If the InfiniBand IPoIB drivers are installed prior to running compute_config, the InfiniBand HCA is detected and the administrator is given a chance to configure them. The administrator can also configure the InfiniBand HCA with IP addresses by invoking /opt/clusterpack/bin/clnetworks.
  • Page 37 The finalize_config tool can be run at any time to validate the cluster configuration and to determine if there are any errors in the ClusterPack software suite. Overview This program verifies the Cluster Management Software, and validates the installation of the single Compute Node.
  • Page 38 LSF jobs while the archive is being made: % badmin hclose <hostname> In addition, you should either wait until all running jobs complete, or suspend them: % bstop -a -u all -m <hostname> Execute sysimage_create on the Management Server and pass the name of the node from which you would like the image to be made.
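    A hypothetical invocation, assuming the Golden Image is to be taken from a Compute Node named node1 (see the sysimage_create man page for the exact argument syntax):
    % /opt/clusterpack/bin/sysimage_create node1
    The tool prints the full path of the resulting image; that path is needed by sysimage_register in the next step.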
  • Page 39 Overview Register the image. Distribute the image to selected nodes. Details To distribute a Golden Image to a set of Compute Nodes, you need to first register the image. To register the image, use the command: % /opt/clusterpack/bin/sysimage_register <full path of image> If the image was created with sysimage_create, the full path of the image was displayed by sysimage_create. Images are stored in the directory: /var/opt/ignite/archives/<hostname>
  • Page 40 Details Finalize and validate the installation and configuration of the ClusterPack software. % /opt/clusterpack/bin/finalize_config Back to Top Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary Copyright 1994-2004 hewlett-packard company...
  • Page 41: Installation And Configuration Of Optional Components

    Installation and Configuration of Optional Components ClusterPack Installation and Configuration of Optional Components Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary 1.3.1 HP-UX IPFilter 1.3.2 External /home File Server 1.3.3 Adding Head Nodes to a ClusterPack cluster 1.3.4 Set up TCP-CONTROL 1.3.1 HP-UX IPFilter Introduction to NAT (Network Address Translation)
  • Page 42 Nodes in a private IP sub-net (10.x.y.z range, 192.168.p.q range), which also alleviates the need for numerous public IP addresses. IP Aliasing or Network Address Translation (NAT) ClusterPack comes with HP-UX IPFilter, a software component with powerful packet filtering and firewalling capabilities. One of the features that it supports is Network Address Translation.
  • Page 43 HP-UX IPFilter Validation HP-UX IPFilter is installed with the default HP-UX 11i V2 TCOE bundle. To validate its installation, run the following command: % swverify B9901AA Automatic setup of HP-UX IPFilter rules ClusterPack V2.4 provides a utility called nat.server to automatically set up the NAT rules, based on the cluster configuration.
  • Page 44 % man 8 ipf List the input/output filter rules % ipfstat -hio Setup the NAT rules In this section, we will walk through the steps of setting up HP-UX IPFilter rules that translate the source IP addresses of all packets from the compute private subnet to the IP address of the gateway node. For adding more sophisticated NAT rules, please refer to the IPFilter documentation.
  • Page 45 map lan0 192.168.0.4/32 -> 15.99.84.23/32 portmap tcp/udp 40000:60000 map lan0 192.168.0.4/32 -> 15.99.84.23/32 More examples of NAT and other IPFilter rules are available at /opt/ipf/examples. 2. Enable NAT based on this rule set % ipnat -f /tmp/nat.rules Note: If there are existing NAT rules that you want to replace, you must flush and delete that rule set before loading the new rules: % ipnat -FC -f /tmp/nat.rules For more complicated manipulations of the rules, refer to ipnat man pages.
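    To confirm which NAT mappings are currently loaded, ipnat can list the active rule set:
    % ipnat -l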
  • Page 46 If there is no packet loss, then NAT is enabled. DISPLAY Server Interaction Test 1. On the Compute Node, set the DISPLAY variable to a display server that is not part of the cluster, for instance your local desktop. % setenv DISPLAY 15.99.22.42:0.0 (if it is csh) 2.
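    For POSIX-style shells (sh or ksh), the equivalent DISPLAY assignment would be:
    % export DISPLAY=15.99.22.42:0.0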
  • Page 47 The default use model of a ClusterPack cluster is that end users will submit jobs remotely through the ClusterWare GUI or by using the ClusterWare CLI from the Management Node. Cluster administrators generally discourage users from logging into the Compute Nodes directly. Users are encouraged to use the Management Server for accessing files and performing routine tasks.
  • Page 48 More information about the settings in hosts.deny and hosts.allow can be found in the man pages: % man tcpd % man hosts_access Back to Top Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary Copyright 1994-2004 hewlett-packard company...
  • Page 49: Software Upgrades And Reinstalls

    Software Upgrades and Reinstalls ClusterPack Software Upgrades and Reinstalls Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary 1.4.1 Software Upgrades and Reinstalls Overview 1.4.2 Prerequisites for Software Upgrades and Reinstalls 1.4.3 Reinstallation and Configuration Steps 1.4.4 Upgrading from Base Edition to Clusterware Edition Reinstall Step 1 Obtain New license key(s) Reinstall Step 2 Invoke /opt/clusterpack/bin/manager_config on Management Server...
  • Page 50 nature can only be accomplished by a complete re-configuration of the cluster (See Initial Installation and Setup). The reinstallation path is only meant to ensure that all of the ClusterPack software is correctly installed and the cluster layout described by earlier invocations of manager_config is configured correctly.
  • Page 51 1.4.4 Upgrading from Base Edition to Clusterware Edition Upgrading from Base Edition to Clusterware Edition is done using the "forced reinstall" path that is documented below. During manager_config you will be given an opportunity to provide a valid Clusterware License key. If you have a key, Clusterware will be installed and integrated into the remaining ClusterPack tools.
  • Page 52 This tool is the main installation and configuration driver. Invoke this tool with "force install" option -F: % /opt/clusterpack/bin/manager_config -F Note: manager_config will ask for the same software depot that was used the last time the cluster was installed. If you are using the ClusterPack V2.4 DVD as the source, please mount the DVD and have it accessible to the Management Server BEFORE invoking manager_config -F References:
  • Page 53 1.4.5 Upgrading from V2.2 to V2.4 ClusterPack V2.4 supports an upgrade path from ClusterPack V2.2. Customers that currently deploy ClusterPack V2.2 on HP Integrity servers use HP-UX 11i Version 2.0 TCOE. ClusterPack V2.4 provides a mechanism for the use of the majority of V2.2 configuration settings for the V2.4 configuration.
  • Page 54 % /opt/clusterpack/bin/compute_config -u Verify that everything is working as expected. % /opt/clusterpack/bin/finalize_config Back to Top 1.4.6 Upgrading from V2.3 to V2.4 ClusterPack V2.4 supports an upgrade path from ClusterPack V2.3. Customers that currently deploy ClusterPack V2.3 on HP Integrity servers use HP-UX 11i Version 2.0 TCOE. ClusterPack V2.4 provides a mechanism for the use of the majority of V2.3 configuration settings for the V2.4 configuration.
  • Page 55 Verify that everything is working as expected. % /opt/clusterpack/bin/finalize_config Back to Top Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary Copyright 1994-2004 hewlett-packard company...
  • Page 56: Golden Image Tasks

    Golden Image Tasks ClusterPack Golden Image Tasks Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary 1.5.1 Create a Golden Image of a Compute Node from the Management Server 1.5.2 Distribute Golden Image to a set of Compute Nodes 1.5.3 Managing system files on the compute nodes 1.5.4 Adding software bundles to Golden Images 1.5.1 Create a Golden Image of a Compute Node from the Management...
  • Page 57 Ensure that the system is not being used. It is advisable that the system stop accepting new LSF jobs while the archive is being made: % badmin hclose <hostname> In addition, you should either wait until all running jobs complete, or suspend them: % bstop -a -u all -m <hostname>...
  • Page 58 1.5.2 Distribute Golden Image to a set of Compute Nodes To distribute a golden image to a set of Compute Nodes, you need to first register the image. To register the image, use the command: % /opt/clusterpack/bin/sysimage_register <full path of image> If the image was created with sysimage_create, the full path of the image was displayed by sysimage_create.
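    A hypothetical distribution command, assuming the image path printed by sysimage_create and two target nodes named node2 and node3 (see the sysimage_distribute man page for the exact syntax):
    % /opt/clusterpack/bin/sysimage_distribute /var/opt/ignite/archives/node1/golden_image node2 node3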
  • Page 59 clsysfile creates an SD bundle CPACK-FILES. This bundle of files can be used to customize the files on the compute nodes. The revision number of the bundle is automatically incremented each time clsysfile is run. On the management server, clsysfile uses the working directory: /var/opt/clusterpack/sysfiles clsysfile builds the SD control files required to create an SD bundle of files.
  • Page 60 The bundle should include the full revision of the bundle (i.e. bundle,r=revision), to avoid conflicts during installation. Back to Top Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary Copyright 1994-2004 hewlett-packard company...
  • Page 61: System Maintenance Tasks

    System Maintenance Tasks ClusterPack System Maintenance Tasks Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary 1.6.1 Add Node(s) to the Cluster 1.6.2 Remove Node(s) from the Cluster 1.6.3 Install Software in Compute Nodes 1.6.4 Remove Software from Compute Nodes 1.6.5 Update Software in Compute Nodes 1.6.6 Add Users to Compute Nodes...
  • Page 62 The steps in this section have to be followed in the specified order to ensure that everything works correctly. Step 1 Invoke /opt/clusterpack/bin/manager_config on Management Server Invoke /opt/clusterpack/bin/manager_config with a "add node" option -a. % /opt/clusterpack/bin/manager_config -a <new_node_name>:<new_node_ip_addr> This command adds the new node with the specified hostname and IP address to the cluster. It also reconfigures all of the components of ClusterPack to accommodate the new addition.
  • Page 63: Remove Node(S) From The Cluster

    In the latter case, the utility will prompt you (for each node in the cluster) whether to boot it or skip it. To boot a compute node with a system image, use the "-i" option to clbootnodes and specify the image.
  • Page 64: Install Software In Compute Nodes

    Installation and configuration of the Management Server Installation and configuration of the Compute Nodes Verification of the Management Server and Compute Nodes The steps in this section must be followed in the specified order to ensure that everything works correctly. Step 1 Invoke /opt/clusterpack/bin/manager_config on Management Server Invoke /opt/clusterpack/bin/manager_config with a "remove node"...
  • Page 65: Using Cli

    Using CLI Software can also be installed on Compute Nodes using the /opt/clusterpack/bin/clsh tool to run the swinstall command. However, this may not work in a guarded cluster. To install product PROD1 on all Compute Nodes % /opt/clusterpack/bin/clsh /usr/sbin/swinstall -s <depot>...
  • Page 66: Update Software In Compute Nodes

    Using the CLI Software can also be removed from Compute Nodes using the /opt/clusterpack/bin/clsh tool to run the swremove command: To remove product PROD1 on all Compute Nodes % /opt/clusterpack/bin/clsh /usr/sbin/swremove PROD1 To remove product PROD1 on just the Compute Node group "cae" % /opt/clusterpack/bin/clsh -C cae /usr/sbin/swremove PROD1 Using the HPSIM GUI...
  • Page 67: Add Users To Compute Nodes

    The process for updating software is the same as for installing software. (See "Install Software in Compute Nodes"). swinstall will verify that the software you are installing is a newer version than what is already present. For patches, and software in non-depot format, it will be necessary to follow the specific directions given with the patch/update.
  • Page 68: Remove Users From Compute Nodes

    account parameters to use in creating the account. If NIS is configured in the cluster, all user accounts are administered from the Management Server. Any changes to a user's account will be pushed to all the Compute Nodes using NIS. References: 3.2.3 How to Run SCM Web-based GUI Back to Top...
  • Page 69: Change System Parameters In Compute Nodes

    account to remove. All user accounts are administered from the Management Server. Any changes to a user's account will be pushed to all the Compute Nodes using NIS. References: 3.2.3 How to Run SCM Web-based GUI Back to Top 1.6.8 Change System Parameters in Compute Nodes Using the HPSIM GUI To change System Parameters in Compute Nodes using HPSIM GUI, do the following: Select "Configure", "HP-UX Configuration", and then double-click on "Kernel
  • Page 70: Define Compute Node Inventory Data Collection For Consistency Checks

    Back to Top 1.6.9 Define Compute Node Inventory Data Collection for Consistency checks Scheduling Data Collection tasks is done using the HP System Management Tools: Using the HPSIM GUI To create a Data Collection task using HPSIM GUI, do the following: Select "Options", then click on "Data Collection".
  • Page 71: Define Consistency Check Timetables On Compute Node Inventories

    3.2.3 How to Run SCM Web-based GUI Back to Top 1.6.10 Define Consistency Check Timetables on Compute Node Inventories Scheduling Data Collection tasks is done using the HP System Management Tools: Using the HPSIM GUI To create a Data Collection task using HPSIM GUI, do the following: Select "Options", then click on "Data Collection".
  • Page 72: Compare The Inventories Of A Set Of Nodes

    3.2.3 How to Run SCM Web-based GUI Back to Top 1.6.11 Compare the Inventories of a Set of Nodes Comparing the results of Data Collection tasks is done using the HP System Management Tools: Using the HPSIM GUI To create a Data Collection task using HPSIM GUI, do the following: Select "Reports", then click on "Snapshot Comparison".
  • Page 73: Execute Remote Commands On One Or More Nodes

    3.2.3 How to Run SCM Web-based GUI Back to Top 1.6.12 Execute remote commands on one or more nodes A remote command can be executed on one or more nodes in the cluster from any node by using the 'clsh' command in /opt/clusterpack/bin. Some examples of clsh usage are: Invoke 'uname -a' on all cluster nodes % clsh uname -a...
  • Page 74: List A User's Process Status On One Or More Cluster Nodes

    Update /etc/checklist on node1, node3 and node5 with the local /etc/checklist % clcp -C node1+node3+node5 /etc/checklist %h:/etc/checklist Copy multiple local files to all nodes % clcp a.txt b.txt c.txt %h:/tmp Copy multiple remote files to multiple local files % clcp %h:/tmp/a.txt /tmp/a.%h.txt For more details on the usage of clcp, invoke: % man clcp Back to Top
  • Page 75: Create A Cluster Group

    using PIDs on a cluster is not feasible given there will be different PIDs on different hosts, clkill can kill processes by name. Some examples of clkill usage: Kill all processes belonging to user 'joeuser' % clkill -u joeuser Interactively kill all processes named 'view_server' % clkill -i -r view_server will result in a session like: node0 2260 user1 ? 0:00 view_server
  • Page 76: Add Nodes To A Cluster Group

    Groups of Compute Nodes can be removed from ClusterPack using /opt/clusterpack/bin/clgroup. The following example removes the node group "cae": % /opt/clusterpack/bin/clgroup -r cae Note that the above-mentioned command just removes the group; the nodes are still part of the cluster, and users can submit jobs to the nodes. For more details on the usage of clgroup, invoke the command: % man clgroup Back to Top...
  • Page 77: Add File Systems To Compute Nodes

    Back to Top 1.6.20 Add File Systems to Compute Nodes The file system for Compute Nodes can be defined using System Administration Manager (SAM). Invoke SAM from the command line or from within the HP System Management tools (HPSIM or SCM) and select "Disks and File Systems".
  • Page 78 Computing Clusterware Pro V5.1 Overview" References: 3.7.5 How do I start and stop the Clusterware Pro V5.1 daemons? Back to Top Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary Copyright 1994-2004 hewlett-packard company...
  • Page 79: System Monitoring Tasks

    System Monitoring Tasks ClusterPack System Monitoring Tasks Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary 1.7.1 Get an Overview of Cluster Health 1.7.2 Get an Overview of the Job Queue Status 1.7.3 Get details on health of specific Compute Nodes 1.7.4 View Usage of Resources in Compute Node(s) 1.7.5 Monitor Compute Nodes based on resource thresholds 1.7.1 Get an Overview of Cluster Health...
  • Page 80 State refers to the state of the host. Batch State refers to the state of the host, and the state of the daemons running on that host. A detailed list of batch states is shown below. For more information, select the online help: Select Help->Platform Help Select "View"...
  • Page 81 have exceeded their thresholds. closed_Excl - The host is not accepting jobs until the exclusive job running on it completes. closed_Full - The host is not accepting new jobs. The configured maximum number of jobs that can run on it has been reached. closed_Wind - The host is not accepting jobs.
  • Page 82 % bqueues -l <hostname> For more information, see the man page: % man bqueues Common Terms Both the Web interface and the CLI use the same terms for the health and status of the job submission queues. These terms are used to define the State of an individual queue. Open - The queue is able to accept jobs.
  • Page 83 Default status from each node is available using: % bhosts <hostname> STATUS shows the current status of the host and the SBD daemon. Batch jobs can only be dispatched to hosts with an ok status. A more detailed list of STATUS is available in the long report: % bhosts -l <hostname>...
  • Page 84 1.7.4 View Usage of Resources in Compute Node(s) Using the Clusterware Pro V5.1 Web Interface: From the Hosts Tab: Select the host to be monitored using the checkbox next to each host. More than one host can be selected. From the menu select Host->Monitor A new window will open that displays the current resource usage of one of the selected hosts.
  • Page 85 3.7.9 How do I access the Clusterware Pro V5.1 Command Line Interface? Back to Top 1.7.5 Monitor Compute Nodes based on resource thresholds Using the Clusterware Pro V5.1 Web Interface: From the Hosts Tab From the View menu select View->Choose Columns Add the Available Column resource to the Displayed Columns list.
  • Page 86 Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary Copyright 1994-2004 hewlett-packard company...
  • Page 87: Workload Management Tasks

    Workload Management Tasks ClusterPack Workload Management Tasks Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary 1.8.1 Add new Job Submission Queues 1.8.2 Remove Queues 1.8.3 Restrict user access to specific queues 1.8.4 Add resource constraints to specified queues 1.8.5 Change priority of specified queues 1.8.6 Add pre/post run scripts to specified queues 1.8.7 Kill a job in a queue...
  • Page 88 After adding, removing or modifying queues, it is necessary to reconfigure LSF to read the new queue information. This is done from the Management Server using the Clusterware Pro V5.1 CLI: % badmin reconfig Verify the queue has been added by using the Clusterware Pro V5.1 CLI: % bqueues -l <queue_name>...
  • Page 89 Back to Top 1.8.3 Restrict user access to specific queues Using the Clusterware Pro V5.1 CLI: The file /share/platform/clusterware/conf/lsbatch/<clustername>/configdir/lsb.queues controls which users can submit to a specific queue. The name of your cluster can be determined by using the Clusterware Pro V5.1 CLI: % lsid Edit the lsb.queues file and look for a USERS line for the queue you wish to restrict.
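    For illustration, a queue stanza in lsb.queues with a USERS restriction might look like the sketch below; the queue name and user list are hypothetical:
    Begin Queue
    QUEUE_NAME  = cae_queue
    PRIORITY    = 30
    USERS       = joeuser annlee
    DESCRIPTION = Queue restricted to the CAE group
    End Queue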
  • Page 90 % lsid Find the queue definition you wish to modify. The following entries for maximum resource usage can be modified or added for each queue definition: CPULIMIT = minutes on a host FILELIMIT = file size limit MEMLIMIT = bytes per job DATALIMIT = bytes for data segment STACKLIMIT = bytes for stack CORELIMIT = bytes for core files...
  • Page 91 PRIORITY = <integer value> to the queue definition. Queues with higher priority values are searched first during scheduling. After adding, removing or modifying queues, it is necessary to reconfigure LSF to read the new queue information. This is done from the Management Server using the Clusterware Pro V5.1 CLI: % badmin reconfig Verify the queue has been modified by using the Clusterware Pro V5.1 CLI:...
  • Page 92 to the queue definition. The command or tool should be accessible and runnable on all nodes that the queue services. After adding, removing or modifying queues, it is necessary to reconfigure LSF to read the new queue information. This is done from the Management Server using the Clusterware Pro V5.1 CLI: % badmin reconfig Verify the queue has been modified by using the Clusterware Pro V5.1 CLI:...
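    In standard LSF queue configuration, pre- and post-execution commands are given with the PRE_EXEC and POST_EXEC parameters in lsb.queues; a hypothetical fragment (script paths are assumptions):
    Begin Queue
    QUEUE_NAME = cae_queue
    PRE_EXEC   = /shared/scripts/setup_scratch
    POST_EXEC  = /shared/scripts/clean_scratch
    End Queue
    Both commands must exist and be runnable on every node the queue services.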
  • Page 93 Users can kill their own jobs. Queue administrators can kill jobs associated with a particular queue. References: 3.7.9 How do I access the Clusterware Pro V5.1 Command Line Interface? Back to Top 1.8.9 Kill all jobs in a queue Using the Clusterware Pro V5.1 CLI: All of the jobs in a queue can be killed by using the bkill command with the -q option: % bkill -q <queue name>...
  • Page 94 1.8.11 Suspend all jobs owned by a user Using the Clusterware Pro V5.1 CLI: All of a user's jobs can be suspended using the special 0 job id: % bstop -u <userid> 0 Users can suspend their own jobs. Queue administrators can suspend jobs associated with a particular queue.
  • Page 95 % bresume -q <queue name> -u all 0 References: 3.7.9 How do I access the Clusterware Pro V5.1 Command Line Interface? Back to Top Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary Copyright 1994-2004 hewlett-packard company...
  • Page 96: System Troubleshooting Tasks

    System Troubleshooting Tasks ClusterPack System Troubleshooting Tasks Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary 1.9.1 Locate a Compute Node that is down 1.9.2 Get to the console of a Compute Node that is down 1.9.3 Bring up a Compute Node with a recovery image 1.9.4 View system logs for cause of a crash 1.9.5 Bring up the Management Server from a crash...
  • Page 97 % lshosts -l <hostname> % bhosts -l <hostname> References: 1.7.1 Get an Overview of Cluster Health 1.7.3 Get details on health of specific Compute Nodes 3.7.8 How do I access the Clusterware Pro V5.1 Web Interface? 3.7.9 How do I access the Clusterware Pro V5.1 Command Line Interface? Back to Top 1.9.2 Get to the console of a Compute Node that is down If a Compute Node is unreachable using the Management Server LAN within the cluster, it...
  • Page 98 This will reboot the machine, <hostname>, and cause it to install from the golden image you specified. References: 1.5.2 Distribute Golden Image to a set of Compute Nodes Back to Top 1.9.4 View system logs for cause of a crash The system logs are located in /var/adm/syslog/syslog.log The crash logs are stored in /var/adm/crash The installation and configuration logs for ClusterPack are stored in /var/opt/clusterpack/log...
  • Page 99 Problem: When I try to add a node, I get "Properties file for <xyz> doesn't exist." Solution: Make sure that the hostname is fully qualified in /etc/hosts on both the Management Server and the managed node, if it exists in /etc/hosts, and that any shortened host names are aliases instead of primary names.
  • Page 100 be added to the cluster using the IP address and hostname of the failed node or can be added with a new name and IP address. Replacing with a new hostname and IP address In this case, the replacement node is handled simply by removing the failed node and adding the new node.
  • Page 101 Copyright 1994-2004 hewlett-packard company...
  • Page 102: Job Management Tasks

    Job Management Tasks ClusterPack Job Management Tasks Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary 2.1.1 Invoke the Workload Management Interface from the Management Server 2.1.2 Invoke the Workload Management Interface from the intranet 2.1.3 Prepare for job submission 2.1.4 Submit a job to a queue 2.1.5 Submit a job to a group...
  • Page 103: Invoke The Workload Management Interface From The Intranet

    Go to the following URL in the web browser: % /opt/netscape/netscape http://<management_server>:8080/Platform/login/Login.jsp Enter your Unix user name and password. This assumes that the gaadmin services have been started by the LSF Administrator. Note: The user submitting a job must have access to the Management Server and to all the Compute Nodes that will execute the job.
  • Page 104: Submit A Job To A Queue

    Using the Clusterware Pro V5.1 Web Interface: From the jobs tab: Select Job->Submit. Enter job data. Click Submit. Data files required for the job may be specified using the '-f' option to the bsub command. This optional information can be supplied on the "Advanced" tab within the Job Submission screen. For an explanation of the '-f' options please see "Transfer a file from intranet to specific Compute Nodes in the cluster".
  • Page 105: Set A Priority For A Submitted Job

    Using the Clusterware Pro V5.1 CLI: % bsub -q <queue_name> <cmd> Use bqueues to list available Queues. % bqueues References: 3.7.8 How do I access the Clusterware Pro V5.1 Web Interface? 3.7.9 How do I access the Clusterware Pro V5.1 Command Line Interface? Back to Top 2.1.5 Submit a job to a group Using the Clusterware Pro V5.1 Web Interface:...
  • Page 106: Check The Status Of A Submitted Job

    Using the Clusterware Pro V5.1 Web Interface: Set a priority at submission by: From the Jobs Tab, select Job->Submit. Using the Queue pull down menu, select a queue with a high priority. After submission: From the Jobs Tab, select the job from the current list of pending jobs. Select Job->Switch Queue.
  • Page 107: Register For Notification On Completion Of A Submitted Job

    References: 3.7.8 How do I access the Clusterware Pro V5.1 Web Interface? 3.7.9 How do I access the Clusterware Pro V5.1 Command Line Interface? Back to Top 2.1.8 Check the status of all submitted jobs Using the Clusterware Pro V5.1 Web Interface: From the Jobs tab: Review the Jobs table.
  • Page 108: Kill A Submitted Job In A Queue

    Using the Clusterware Pro V5.1 Web Interface: From the Jobs tab: Select Job->Submit. Click Advanced. Select "Send email notification when job is done". Enter the email address in the email to field. Using the Clusterware Pro V5.1 CLI: Using the CLI, users are automatically notified when a job completes. References: 3.7.8 How do I access the Clusterware Pro V5.1 Web Interface? 3.7.9 How do I access the Clusterware Pro V5.1 Command Line Interface?
  • Page 109: Kill All Jobs Submitted By The User In A Queue

    Using the Clusterware Pro V5.1 Web Interface: From the Jobs tab: Select Tools->Find. Select User from the Field list. Type the user name in the Value field. Click Find. Click Select All. Click Kill. Using the Clusterware Pro V5.1 CLI: % bkill -u <username>...
  • Page 110: Suspend All Jobs Submitted By The User

    3.7.8 How do I access the Clusterware Pro V5.1 Web Interface? 3.7.9 How do I access the Clusterware Pro V5.1 Command Line Interface? Back to Top 2.1.14 Suspend a submitted job in a queue Using the Clusterware Pro V5.1 Web Interface: From the Jobs tab: Select the job from the Jobs table.
  • Page 111 3.7.8 How do I access the Clusterware Pro V5.1 Web Interface? 3.7.9 How do I access the Clusterware Pro V5.1 Command Line Interface? Back to Top 2.1.16 Suspend all jobs submitted by the user in a queue Using the Clusterware Pro V5.1 Web Interface: From the Jobs tab: Select Tools->Find.
  • Page 112: Resume All Suspended Jobs Submitted By The User

    Using the Clusterware Pro V5.1 CLI: % bresume <job_ID> References: 3.7.8 How do I access the Clusterware Pro V5.1 Web Interface? 3.7.9 How do I access the Clusterware Pro V5.1 Command Line Interface? Back to Top 2.1.18 Resume all suspended jobs submitted by the user Using the Clusterware Pro V5.1 Web Interface: From the Jobs tab: Select Tools->Find.
  • Page 113: Submit A Mpi Job In A Queue

    From the Jobs tab: Select Tools->Find. Select the Advanced tab. Select User from the Field list in the Define Criteria section. Type the user name in the Value field. Click << Select Queue from the Field list. Select the queue from the Queue list. Click <<...
  • Page 114: Suspend A Submitted Mpi Job

    2.1.21 Suspend a submitted MPI job Using the Clusterware Pro V5.1 Web Interface: From the Jobs tab: Select the job from the Jobs table. Select Job->Suspend. Using the Clusterware Pro V5.1 CLI: % bstop <job_ID> References: 3.7.8 How do I access the Clusterware Pro V5.1 Web Interface? 3.7.9 How do I access the Clusterware Pro V5.1 Command Line Interface? Back to Top 2.1.22 Resume a suspended MPI job...
  • Page 115 Copyright 1994-2004 hewlett-packard company...
  • Page 116: File Transfer Tasks

    File Transfer Tasks ClusterPack File Transfer Tasks Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary 2.2.1 Transfer a file from intranet to the Management Server in the cluster 2.2.2 Transfer a file from intranet to all Compute Nodes in the cluster 2.2.3 Transfer a file from intranet to specific Compute Nodes in the cluster 2.2.4 Transfer a file from a Compute Node to a system outside the cluster 2.2.5 Transfer a file from a Compute Node to another Compute node in the cluster...
  • Page 117 Back to Top 2.2.2 Transfer a file from intranet to all Compute Nodes in the cluster If the cluster is a Guarded Cluster, this operation is done in two steps: FTP the file to the Management Server. Copy the file to all nodes in the cluster. % clcp /a/input.data %h:/date/input.data % clcp /a/input.data cluster:/date/input.data For more details on the usage of clcp, invoke the command:...
  • Page 118 < Copies the remote file to the local file after the job completes. Overwrites the local file if it exists. % bsub -f <local_file> < <remote_file> << Appends the remote file to the local file after the job completes. The local file must exist. % bsub -f <local_file>...
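    Putting the pieces together, a complete hypothetical submission that copies a local input file to the execution host before the job starts would be (assuming the '>' operator, which copies the local file to the remote file before the job runs, complementing the '<' operators described above):
    % bsub -f "/home/user/input.dat > /tmp/input.dat" my_application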
  • Page 119 FTP the file from the Head node to the external target. References: Guarded Cluster Back to Top 2.2.5 Transfer a file from a Compute Node to another Compute node in the cluster The 'clcp' command in /opt/clusterpack/bin is used to copy files between cluster nodes. This command can be invoked either from the Management Server or any Compute Node.
  • Page 120 For more details on the usage of clcp, invoke the command: % man clcp Back to Top Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary Copyright 1994-2004 hewlett-packard company...
  • Page 121: Miscellaneous Tasks

    Miscellaneous Tasks ClusterPack Miscellaneous Tasks Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary 2.3.1 Run a tool on a set of Compute Nodes 2.3.2 Check resource usage on a Compute Node 2.3.3 Check Queue status 2.3.4 Remove temporary files from Compute Nodes 2.3.5 Prepare application for checkpoint restart 2.3.6 Restart application from a checkpoint if a Compute Node crashes...
  • Page 122: Using The Cli

    Using the Clusterware Pro V5.1 Web Interface: From the Jobs tab: Select Jobs->Submit. Enter job information. Click Advanced. On the Advanced dialog, enter script details in the Pre-execution command field. Click OK. Click Submit. Using the CLI: % bsub -E 'pre_exec_cmd [args ...]' command References: 3.7.8 How do I access the Clusterware Pro V5.1 Web Interface? 3.7.9 How do I access the Clusterware Pro V5.1 Command Line Interface?
  • Page 123 2.3.3 Check Queue status Using the Clusterware Pro V5.1 Web Interface: From the Jobs tab: Review the Queues table. Use the Previous and Next buttons to view more Queues. Using the Clusterware Pro V5.1 CLI: % bqueues [<queue_name>] References: 3.7.8 How do I access the Clusterware Pro V5.1 Web Interface? 3.7.9 How do I access the Clusterware Pro V5.1 Command Line Interface? Back to Top 2.3.4 Remove temporary files from Compute Nodes...
  • Page 124 and should not be used while AppRS jobs are running. % apprs_clean all For jobs submitted to non-AppRS queues, the user's job submission script should include commands to remove files that are no longer needed when the job completes. In the event that the job fails to run to completion it may be necessary to remove these files manually.
  • Page 125 #APPRS TARGETUTIL 1.0 #APPRS TARGETTIME 10 #APPRS REDUNDANCY 4 # Your job goes here: if [ "$APPRS_RESTART" = "Y" ]; then # job as it is run under restart conditions else # job as it is run under normal conditions The names of all files that need to be present for the application to run from a restart should be listed with the HIGHLYAVAILABLE tag: #APPRS HIGHLYAVAILABLE <list of files>...
  • Page 126 2.3.6 Restart application from a checkpoint if a Compute Node crashes If a Compute Node crashes, jobs submitted to an AppRS queue will automatically be restarted on a new node or set of nodes as those resources become available. No user intervention is necessary.
  • Page 127 3.7.8 How do I access the Clusterware Pro V5.1 Web Interface? 3.7.9 How do I access the Clusterware Pro V5.1 Command Line Interface? Back to Top Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary Copyright 1994-2004 hewlett-packard company...
  • Page 128: Cluster Management Utility Zone Overview

    Cluster Management Utility Zone Overview ClusterPack Cluster Management Utility Zone Overview Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary 3.1.1 What is Cluster Management Utility Zone? 3.1.2 What are the Easy Install Tools? 3.1.3 What are the system imaging tools? 3.1.4 What are the Cluster Aware Tools? 3.1.5 clsh - Runs commands on one, some, or all nodes in the cluster.
  • Page 129 3.1.2 What are the Easy Install Tools? The ClusterPack suite includes a set of utilities for setting up a cluster of Itanium 2 nodes. The tools manager_config, mp_register, clbootnodes, compute_config and finalize_config are key components for establishing and administering an Itanium 2 cluster. In particular, these utilities provide: An easy step-by-step process for establishing a cluster Installation and configuration of ClusterPack software...
  • Page 130 sysimage_create sysimage_register sysimage_distribute These scripts use ClusterPack's knowledge of the cluster configuration to simplify the creation and distribution of system (golden) images. With the use of scripts, creating and distributing images is as simple as running these three tools and providing the name of a host and/or path of the image.
  • Page 131 new command will not begin until the previous one is finished, i.e. these do not run in parallel. Sending a SIGINT (usually a ^C) will cause the current host to be skipped, and sending a SIGQUIT (usually a ^\) will immediately abort the whole clsh command. Percent interpolation, as in clcp, is also supported.
  • Page 132 single local to single local % clcp src dst single local to multiple local % clcp src dst.%h single local to multiple remote % clcp src dst:%h or clcp src cluster-group:dst multiple local to multiple remote % clcp src dst.%h %h:dst multiple remote to multiple local % clcp %h:src dst.%h Examples...
  • Page 133 Make necessary changes. % clcp checklist.%c %h:/etc/checklist which maps to: % rcp host0:/etc/checklist checklist.0 % rcp host1:/etc/checklist checklist.1 % vi checklist.0 checklist.1 % rcp checklist.0 host0:/etc/checklist % rcp checklist.1 host1:/etc/checklist 3. The following is an example if log files are needed: % clcp %h:/usr/spool/mqueue/syslog %h/syslog.%Y%M%D.%T This would save the files in directories (which are the host names) with file...
  • Page 134 cluptime is used as follows: % cluptime [ [-C] cluster-group] For more details on the usage of cluptime, invoke the command: % man cluptime Back to Top 3.1.8 clps - Cluster-wide ps command clps and clkill are the same program with clps producing a "ps" output that includes the host name and clkill allowing processes to be killed.
  • Page 135 3.1.10 clinfo - Shows nodes and cluster information. The clinfo command lists which hosts make up a cluster. By default, with no arguments, the current cluster is listed. Non-flag arguments are interpreted as cluster names. Three different output modes are supported. Short format (enabled by the -s option) The short format lists the cluster (followed by a colon) and the hosts it contains;...
  • Page 136 core tools of ClusterPack, including PCC ClusterWare Pro™ and the HP Systems Insight Manager. Node groups are collections of nodes that are subsets of the entire node membership of the compute cluster. They may have overlapping memberships such that a single node may be a member of more than one group.
  • Page 137 % clgroup -l group1 For more details on the usage of clgroup, invoke the command: % man clgroup Back to Top 3.1.12 clbroadcast - Telnet and MP based broadcast commands on cluster nodes. The clbroadcast command is used to broadcast commands to various nodes in the cluster using the Management Processor (MP) interface or telnet interface.
  • Page 138 % clpower --uidon n1 For more details on the usage of clpower, invoke the command: % man clpower Back to Top Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary Copyright 1994-2004 hewlett-packard company...
  • Page 139: Service Controlmanager (Scm) Overview

    ServiceControl Manager (SCM) Overview ClusterPack ServiceControl Manager (SCM) Overview Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary 3.2.1 What is ServiceControl Manager? 3.2.2 How to install, configure, manage, and troubleshoot SCM: 3.2.3 How to Run SCM Web-based GUI 3.2.1 What is ServiceControl Manager? ServiceControl Manager (SCM) makes system administration more effective, by distributing the effects of existing tools efficiently across nodes.
  • Page 140 You must be using a recent version of Internet Explorer or Netscape in order to run the SCM GUI. Back to Top Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary Copyright 1994-2004 hewlett-packard company...
  • Page 141: System Inventory Manager Overview

    System Inventory Manager Overview ClusterPack System Inventory Manager Overview Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary 3.3.1 What is System Inventory Manager? 3.3.2 How to invoke Systems Inventory Manager 3.3.1 What is System Inventory Manager? The Systems Inventory Manager application is a tool that allows you to easily collect, store and manage inventory and configuration information for the Compute Nodes in the HP-UX Itanium 2 cluster.
  • Page 142 From your web browser at your desktop: Go to: http://<management_server>:1190/simgui The user name is the name that will appear on the GUI. Back to Top Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary Copyright 1994-2004 hewlett-packard company...
  • Page 143: Application Restart (Apprs) Overview

    Application ReStart (AppRS) Overview ClusterPack Application ReStart (AppRS) Overview Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary 3.4.1 What is AppRS? 3.4.1 What is AppRS? AppRS is a collection of software that works in conjunction with Platform Computing's Clusterware™...
  • Page 144 2.3.5 Prepare application for checkpoint restart 2.3.6 Restart application from a checkpoint if a Compute Node crashes AppRS Release Note AppRS User's Guide Back to Top Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary Copyright 1994-2004 hewlett-packard company...
  • Page 145: Cluster Management Utility (Cmu) Overview

    Cluster Management Utility (CMU) Overview ClusterPack Cluster Management Utility (CMU) Overview Index | Administrators Guide | Users Guide | Tool Overview | Related Documents | Dictionary 3.5.1 What is CMU? 3.5.2 Command line utilities 3.5.3 Nodes monitoring 3.5.4 Invoking CMU 3.5.5 Stopping CMU 3.5.6 CMU main window 3.5.7 Monitoring By Logical Group...
  • Page 146 3.5.3 Nodes monitoring Cluster monitoring Enhanced monitoring capabilities for up to 1024 nodes in a single window (with vertical scrollbars). Monitoring tools Provides tools to monitor remote node activities. Node Administration Allows execution of an action on several nodes with one command. The actions are: 1.
  • Page 147 window enabled. CMU will display the last monitored logical group. Note: When starting the CMU window for the first time, the monitoring action is performed with the “Default” Logical Group. Note: Some of the menus and functions within CMU will allow the user to act on more than one selected item at a time.
  • Page 148 Terminal Server Configuration PDU Configuration Network Topology Adaptation Node Management Event Handling Configuration Back to Top 3.5.7 Monitoring By Logical Group The following section describes the different actions that the user can perform in the "Monitoring By Logical Group" window. Select/Unselect one node Left click on the name of this node.
  • Page 149 A contextual menu window appears with a right click on a node displayed in the central frame of the main monitoring CMU window. The following menu options are available: Telnet Connection Launches a telnet session to this node. The telnet session is embedded in an Xterm window.
  • Page 150 Many management actions such as boot, reboot, halt, or monitoring will be applied to all of the selected nodes. Halt This sub-menu allows a system administrator to issue the halt command on all of the selected nodes. The halt command can be performed immediately (this is the default), or delayed for a given time (between 1 and 60 minutes).
  • Page 151 before booting a node.
Reboot: This sub-menu allows a system administrator to issue the reboot command on all of the selected nodes. The reboot command can be performed immediately (this is the default), or delayed for a given time (between 1 and 60 minutes).
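CMU drives these actions through its GUI. For comparison, the equivalent operations issued by hand on an HP-UX node would look something like the following sketch; the 300-second grace period is an arbitrary illustrative value, not a CMU default:
% shutdown -h -y 0      # halt the node immediately, without interactive prompts
% shutdown -r -y 300    # reboot after a 300-second grace period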
  • Page 152 Ethernet network to a machine listed in CMU. If the connection fails, you must press a key to destroy the window.
  • Page 153: NAT/IPFilter Overview

    NAT/IPFilter Overview
3.6.1 Introduction to NAT (Network Address Translation)
3.6.1 Introduction to NAT (Network Address Translation)
Network Address Translation (NAT) or IP Aliasing provides a mechanism to configure multiple IP addresses in the cluster to present a single image view with a single external IP address.
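As an illustration of the mechanism (a hand-written sketch, not a file generated by ClusterPack), an IPFilter ipnat rule set that hides a private cluster subnet behind one external address might look like this; the interface name, addresses, and configuration path are assumptions:
# /etc/opt/ipf/ipnat.conf (assumed HP-UX IPFilter location)
# Rewrite outbound TCP/UDP from the private cluster subnet to the external address
map lan1 192.168.1.0/24 -> 15.1.50.1/32 portmap tcp/udp 40000:60000
# Rewrite all remaining outbound traffic from the same subnet
map lan1 192.168.1.0/24 -> 15.1.50.1/32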
  • Page 154 No guarantee can be made about the correctness, completeness or applicability of this or any third party information. http://www.obfuscation.org/ipf/
  • Page 155: Platform Computing Clusterware Pro V5.1 Overview

    Platform Computing Clusterware Pro V5.1 Overview
3.7.1 What is Clusterware Pro?
3.7.2 How do I obtain and install the Clusterware Pro V5.1 license file?
3.7.3 Where is Clusterware Pro V5.1 installed on the system?
3.7.4 How can I tell if Clusterware Pro V5.1 is running?
3.7.5 How do I start and stop the Clusterware Pro V5.1 daemons?
  • Page 156 Obtain a License File
If you have purchased ClusterPack Clusterware Edition, you will need a license for Platform Computing's Clusterware Pro. You can call, email, or fax your request to Hewlett-Packard Software Licensing Services. Refer to your Software License Certificate for contact information.
  • Page 157 Setup and Configuration of a DEMO license The use of a DEMO license file (license.dat) for Clusterware Pro, as part of the ClusterPack V2.4 Clusterware Edition, requires some modification of installed configuration files. These modifications will have to be removed in order to use a purchased license key (LSF_license.oem). 1.
  • Page 158 The /etc/exports file on the Management Server and the /etc/fstab file on each Compute Node are updated automatically by ClusterPack.
3.7.4 How can I tell if Clusterware Pro V5.1 is running?
On the Management Server, several Clusterware Pro V5.1 services must be running in order to provide full functionality for the tool.
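One quick check is to look for the scheduler daemons in the process table. Clusterware Pro is built on Platform's LSF technology, so the daemon names below (lim, res, sbatchd) are an assumption based on that heritage rather than names this overview lists:
% ps -ef | grep -E 'lim|res|sbatchd' | grep -v grep   # assumed LSF-style daemon names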
  • Page 159 To START services on the Management Server, issue the following command on the Management Server as the super user (i.e. root):
% /share/platform/clusterware/lbin/cwmgr start
To STOP services on the Management Server, issue the following command on the Management Server as the super user (i.e. root):
% /share/platform/clusterware/lbin/cwmgr stop
To START services on ALL Compute Nodes, issue the following command on the Management Server as the super user (i.e.
  • Page 160 % /share/platform/clusterware/lbin/cwagent stop
References: 3.1.5 clsh - Runs commands on one, some, or all nodes in the cluster.
3.7.6 How do I start and stop the Clusterware Pro V5.1 Web GUI?
The Web GUI is started and stopped as part of the tools that are used to start and stop the other Clusterware Pro V5.1 services.
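Because clsh runs a command on one, some, or all nodes, the per-node cwagent stop and start can in principle be fanned out from the Management Server in one line. The invocation below is a sketch only, since the exact clsh option syntax is not shown in this overview:
% clsh /share/platform/clusterware/lbin/cwagent stop    # sketch: stop the agent on the cluster nodes
% clsh /share/platform/clusterware/lbin/cwagent start   # sketch: start it again everywhere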
  • Page 161 The username and password are the same as for any normal user account on the Management Server.
References: 3.7.6 How do I start and stop the Clusterware Pro V5.1 Web GUI?
3.7.9 How do I access the Clusterware Pro V5.1 Command Line Interface?
Before using the Clusterware Pro V5.1 CLI, you must set a number of environment variables.
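LSF-derived products usually ship shell profile scripts that set all of the required variables at once. The file names and directory below follow that convention and are assumptions, not paths given in this overview:
% . /share/platform/clusterware/conf/profile.lsf        # sh/ksh users (assumed path)
% source /share/platform/clusterware/conf/cshrc.lsf     # csh users (assumed path)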
  • Page 162 Clusterware Pro. These documents provide more detail on the commands that are part of the Online Tutorial:
Administering Platform Clusterware Pro (pdf)
Running Jobs with Platform Clusterware Pro (pdf)
  • Page 163: Management Processor (MP) Card Interface Overview

    Management Processor (MP) Card Interface Overview
3.8.1 Using the MP Card Interface
3.8.1 Using the MP Card Interface
The MP cards allow the Compute Nodes to be remotely powered up.
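A typical remote power-up session against a node's MP card looks something like the sketch below; the MP address placeholder and the CM/PC command names are assumptions drawn from common Integrity MP firmware, not details given in this overview:
% telnet <mp_ipaddress>   # connect to the Compute Node's MP card
MP> CM                    # enter the MP command menu
MP:CM> PC                 # power control: choose "on" to power up the node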
  • Page 164 Step 11 Run mp_register on the Management Server
  • Page 165: HP Systems Insight Manager (HPSIM) Overview

    HP Systems Insight Manager (HPSIM) Overview
3.9.1 What is HP Systems Insight Manager
3.9.2 What are the key features of HP Systems Insight Manager
3.9.3 How to install, configure, manage, and troubleshoot HP Systems Insight Manager
3.9.4 How to run HPSIM Web-based GUI
3.9.1 What is HP Systems Insight Manager...
  • Page 166 conditions automatically through automated event handling. Facilitates secure, scheduled execution of OS commands, batch files, and custom or off-the-shelf applications across groups of Windows, Linux, or HP-UX systems. Enables centralized updates of BIOS, drivers, and agents across multiple ProLiant servers with system software version control. Enables secure management through support for SSL, SSH, OS authentication, and role-based security.
  • Page 168: Related Documents

    Related Documents
4.1.1 HP-UX 11i Operating Environments
4.1.2 HP-UX ServiceControl Manager
4.1.3 HP Application ReStart
4.1.4 HP System Inventory Manager
4.1.5 HP-UX IPFilter
4.1.6 ClusterPack V2.3
4.1.7 HP Systems Insight Manager
4.1.1 HP-UX 11i Operating Environments...
  • Page 169 http://docs.hp.com/en/5990-8540/index.html
ServiceControl Manager Troubleshooting Guide: http://docs.hp.com/en/5187-4198/index.html
4.1.3 HP Application ReStart
HP Application ReStart Release Note: AppRS Release Notes (pdf)
HP Application Restart User's Guide: AppRS User's Guide (pdf)
4.1.4 HP System Inventory Manager
Systems Inventory Manager User's Guide: http://docs.hp.com/en/5187-4238/index.html
Systems Inventory Manager Troubleshooting Guide: http://docs.hp.com/en/5187-4239/index.html
  • Page 170 ClusterPack Dictionary of Cluster Terms
  • Page 171 Cluster LAN/Switch: A Cluster LAN/Switch is usually an Ethernet network used to monitor and control all the major system components. It may also handle traffic to the file server.
Cluster Management Software: The Cluster Management Software is the ClusterPack for system administrators and end-users.
  • Page 172: Management Server

    Interconnect Switch: An Interconnect Switch provides high-speed connectivity between Compute Nodes. It is used for message passing and remote memory access capabilities for parallel applications.
Management Processor (MP): The Management Processor (MP) controls the system console, reset, and power management functions.
