It does not document X9000 file system features or standard Linux administrative tools and commands. For information about configuring and using X9000 Software file system features, see the HP StorageWorks X9000 File Serving Software File System User Guide.
The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Contents
1 Product description...................11
  HP X9720 Network Storage System features................11
  System components.........................11
  HP X9000 Software features....................11
  High availability and redundancy.....................12
2 Getting started..................13
  Setting up the X9720 Network Storage System................13
  Installation steps........................13
  Additional configuration steps.....................13
  Logging in to the X9720 Network Storage System...............14
  Using the network......................14
  Manually failing over a file serving node................30
  Failing back a file serving node...................31
  Using network interface monitoring..................31
  Setting up HBA monitoring....................33
  Discovering HBAs......................33
  Identifying standby-paired HBA ports................34
  Turning HBA monitoring on or off..................34
  Deleting standby port pairings..................34
  Deleting HBAs from the configuration database..............34
  Displaying HBA information....................34
  Checking the High Availability configuration.................35
5 Configuring cluster event notification............37
  Monitoring the status of file serving nodes..................49
  Monitoring cluster events......................50
  Viewing events........................50
  Removing events from the events database table..............51
  Monitoring cluster health......................51
  Health checks........................51
  Health check reports......................51
  Viewing logs..........................54
  Viewing and clearing the Integrated Management Log (IML)............54
  Viewing operating statistics for file serving nodes................54
9 Maintaining the system................56
  Shutting down the system......................56
  Shutting down the X9000 Software..................56
  Upgrading Windows X9000 clients..................75
  Upgrading firmware on X9720 systems..................76
  Troubleshooting upgrade issues....................76
  Automatic upgrade......................76
  Manual upgrade.......................77
12 Licensing....................78
  Viewing license terms......................78
  Retrieving a license key......................78
  Using AutoPass to retrieve and install permanent license keys............78
13 Upgrading the X9720 Network Storage System hardware......79
  Adding new server blades.......................79
  Adding capacity blocks......................81
  Carton contents.........................81
  Degrade server blade/Power PIC..................108
  ibrix_fs -c failed with "Bad magic number in super-block"............108
  LUN status is failed......................109
  Apparent failure of HP P700m...................109
  X9700c enclosure front panel fault ID LED is amber..............110
  Spare disk drive not illuminated green when in use..............110
  Replacement disk drive LED is not illuminated green.............110
  Configuring the management console on the dedicated (non-agile) Management Server blade..139
  Completing the restore on the dedicated (non-agile) Management Server........147
  Troubleshooting........................147
  iLO remote console does not respond to keystrokes...............147
18 Support and other resources..............148
  Contacting HP........................148
  Related information.......................148
  HP websites.........................149
  Rack stability........................149
  Customer self repair......................149
  Product warranties........................150
  SAS switch cabling—Base cabinet..................163
  SAS switch cabling—Expansion cabinet................164
B Spare parts list ..................165
  AW548A—Base Rack......................165
  AW552A—X9700 Expansion Rack..................165
  AW549A—X9700 Server Chassis..................166
  AW550A—X9700 Blade Server ....................166
  AW551A—X9700 Capacity Block (X9700c and X9700cx) ............167
C Warnings and precautions..............168
  Electrostatic discharge information..................168
  Grounding methods......................168
  Equipment symbols.......................168
  Weight warning........................169
  Rack warnings and precautions....................169
  Hungarian notice......................179
  Italian notice........................179
  Latvian notice........................180
  Lithuanian notice......................180
  Polish notice........................180
  Portuguese notice......................180
  Slovakian notice......................181
  Slovenian notice......................181
  Spanish notice.........................181
  Swedish notice........................181
Glossary....................182
Index.......................184
1 Product description

HP StorageWorks X9720 Network Storage System is a scalable, network-attached storage (NAS) product. The system combines HP X9000 File Serving Software with HP server and storage hardware to create a cluster of file serving nodes.

HP X9720 Network Storage System features
failover of multiple components, and a centralized management interface. X9000 Software can be deployed in environments scaling to thousands of nodes. Based on a Segmented File System architecture, X9000 Software enables enterprises to integrate I/O and storage systems into a single clustered environment that can be shared across multiple applications and managed from a single central management console.
IMPORTANT: Do not modify any parameters of the operating system or kernel, or update any part of the X9720 Network Storage System unless instructed to do so by HP; otherwise, the X9720 Network Storage System could fail to operate properly.
File allocation. Use this feature to specify the manner in which segments are selected for storing new files and directories.
For more information about these file system features, see the HP StorageWorks X9000 File Serving Software File System User Guide.

Logging in to the X9720 Network Storage System

Using the network

Use ssh to log in remotely from another host.
To power on the remaining server blades, run the command:
ibrix_server -P on -h <hostname>
NOTE: Alternatively, press the power button on all of the remaining servers. There is no need to wait for the first server blade to boot.

Management interfaces

Cluster operations are managed through the X9000 Software management console, which provides both a GUI and a CLI.
The GUI dashboard opens in the same browser window. You can open multiple GUI windows as necessary. See the online help for information about all GUI displays and operations. The GUI dashboard enables you to monitor the entire cluster. There are three parts to the dashboard: System Status, Cluster Overview, and the Navigator.
System Status

The System Status section lists the number of cluster events that have occurred in the last 24 hours. There are three types of events:
Alerts. Disruptive events that can result in loss of access to file system data. Examples are a segment that is unavailable or a server that cannot be accessed.
Navigator

The Navigator appears on the left side of the window and displays the cluster hierarchy. You can use the Navigator to drill down in the cluster configuration to add, view, or change cluster objects such as file systems or storage, and to initiate or view tasks such as snapshots or replication. When you select an object, a details page shows a summary for that object.
The administrative commands described in this guide must be executed on the management console host and require root privileges. The commands are located in $IBRIXHOME/bin. For complete information about the commands, see the HP StorageWorks X9000 File Serving Software CLI Reference Guide.
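If you prefer not to type full paths for each command, the directory can be put on root's search path for the session. This is a minimal sketch only; /usr/local/ibrix is an assumed install location based on paths that appear later in this guide, so substitute your site's value:

```shell
# Sketch: put the X9000 command directory on root's search path for this
# session. /usr/local/ibrix is an assumed install location; substitute yours.
export IBRIXHOME=/usr/local/ibrix
export PATH="$PATH:$IBRIXHOME/bin"
echo "$PATH"
```

With this in place, commands shown as <installdirectory>/bin/ibrix_nic can be invoked simply as ibrix_nic.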
Status. Shows the client’s management console registration status and mounted file systems, and provides access to the IAD log for troubleshooting. Registration. Registers the client with the management console, as described in the HP StorageWorks File Serving Software Installation Guide.
file. For example, you will need to lock specific ports for rpc.statd, rpc.lockd, rpc.mountd, and rpc.quotad. It is best to allow all ICMP types on all networks; however, you can limit ICMP to types 0, 3, 8, and 11 if necessary. Be sure to open the ports listed in the following table.
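On Red Hat-based nodes, one common way to lock these services to fixed ports is /etc/sysconfig/nfs. The entries below are a sketch only, and the port numbers are illustrative assumptions rather than required values:

```shell
# Sketch of /etc/sysconfig/nfs entries that pin the NFS helper daemons to
# fixed ports so that matching firewall rules can be written.
# All port numbers here are examples, not required values.
STATD_PORT=4000       # rpc.statd
LOCKD_TCPPORT=4001    # rpc.lockd over TCP
LOCKD_UDPPORT=4001    # rpc.lockd over UDP
MOUNTD_PORT=4002      # rpc.mountd
RQUOTAD_PORT=4003     # rpc.rquotad
```

After editing the file, restart the NFS services so the daemons bind to the fixed ports, and open the same ports in the firewall.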
HP, which will initiate a fast and accurate resolution, based on your product’s service level. Notifications may be sent to your authorized HP Channel Partner for on-site service, if configured and available in your country.
Although the cluster network interface can carry traffic between file serving nodes and clients, HP recommends that you configure one or more user network interfaces for this purpose. Typically, bond1 is created for the first user network when the cluster is configured.
Identify the VIF:
# ibrix_nic -a -n bond1:2 -h node1,node2,node3,node4
Set up a standby server for each VIF:
# ibrix_nic -b -H node1/bond1:1,node2/bond1:2
# ibrix_nic -b -H node2/bond1:1,node1/bond1:2
# ibrix_nic -b -H node3/bond1:1,node4/bond1:2
# ibrix_nic -b -H node4/bond1:1,node3/bond1:2

Configuring NIC failover

NIC monitoring should be configured on VIFs that will be used by NFS, CIFS, FTP, or HTTP.
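The standby assignments above follow a simple pairwise pattern (node1 with node2, node3 with node4). For larger clusters the commands can be generated rather than typed; the sketch below only prints the commands for review, and the node names and the bond1:1/bond1:2 interface naming are placeholders taken from the example above:

```shell
#!/bin/sh
# Print ibrix_nic standby-pairing commands for consecutive node pairs.
# This generates the commands for review; it does not execute them.
set -- node1 node2 node3 node4   # placeholder node names
while [ "$#" -ge 2 ]; do
  a="$1"; b="$2"; shift 2
  echo "ibrix_nic -b -H $a/bond1:1,$b/bond1:2"
  echo "ibrix_nic -b -H $b/bond1:1,$a/bond1:2"
done
```

Run with the four placeholder names, this prints exactly the four pairing commands shown above; review the output before executing it on the management console.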
FTP. When you add an FTP share on the Add FTP Shares dialog box or with the ibrix_ftpshare command, specify the VIF as the IP address that clients should use to access the share. HTTP. When you create a virtual host on the Create Vhost dialog box or with the ibrix_httpvhost command, specify the VIF as the IP address that clients should use to access shares associated with the Vhost.
4 Configuring failover

This chapter describes how to configure failover for agile management consoles, file serving nodes, network interfaces, and HBAs.

Agile management consoles

The management console maintains the cluster configuration and provides graphical and command-line user interfaces for managing and monitoring the cluster. Typically, one active management console and one passive management console are installed when the cluster is installed.
The failed-over management console remains in maintenance mode until it is moved to passive mode using the following command:
ibrix_fm -m passive
A management console cannot be moved from maintenance mode to active mode.

Viewing information about management consoles

To view mode information, use the following command:
ibrix_fm -i
NOTE: If the management console was not installed in an agile configuration, the output will...
If your cluster includes one or more user network interfaces carrying NFS/CIFS client traffic, HP recommends that you identify standby network interfaces and set up network interface monitoring. If your file serving nodes are connected to storage via HBAs, HP recommends that you set up HBA monitoring.
Use the following command:
<installdirectory>/bin/ibrix_hostpower -a -i SLOTID -s POWERSOURCE -h HOSTNAME
For example, to identify that node s1.hp.com is connected to slot 1 on APC power source ps1:
<installdirectory>/bin/ibrix_hostpower -a -i 1 -s ps1 -h s1.hp.com

Updating the configuration database with power source changes
For example, to identify that node s1.hp.com has been moved from slot 3 to slot 4 on APC power source ps1: <installdirectory>/bin/ibrix_hostpower -m -i 3,4 -s ps1 -h s1.hp.com Dissociating a file serving node from a power source You can dissociate a file serving node from an integrated power source by dissociating it from slot 1 (its default association) on the power source.
A failback might not succeed if the time period between the failover and the failback is too short, and the primary server has not fully recovered. HP recommends ensuring that both servers are up and running and then waiting 60 seconds before starting the failback. Use the ibrix_server -l command to verify that the primary server is up and running.
To set up a network interface monitor, use the following command: <installdirectory>/bin/ibrix_nic -m -h MONHOST -A DESTHOST/IFNAME For example, to set up file serving node s2.hp.com to monitor file serving node s1.hp.com over user network interface eth1: <installdirectory>/bin/ibrix_nic -m -h s2.hp.com -A s1.hp.com/eth1 To delete network interface monitoring, use the following command: <installdirectory>/bin/ibrix_nic -m -h MONHOST -D DESTHOST/IFNAME...
For example, to delete the standby that was assigned to interface eth2 on file serving node s1.hp.com: <installdirectory>/bin/ibrix_nic -b -U s1.hp.com/eth2 Setting up HBA monitoring You can configure High Availability to initiate automated failover upon detection of a failed HBA.
HBA failure. Use the following command: <installdirectory>/bin/ibrix_hba -m -h HOSTNAME -p PORT For example, to turn on HBA monitoring for port 20.00.12.34.56.78.9a.bc on node s1.hp.com: <installdirectory>/bin/ibrix_hba -m -h s1.hp.com -p 20.00.12.34.56.78.9a.bc To turn off HBA monitoring for an HBA port, include the -U option: <installdirectory>/bin/ibrix_hba -m -U -h HOSTNAME -p PORT...
-b argument. To view results only for file serving nodes that failed a check, include the -f argument. <installdirectory>/bin/ibrix_haconfig -l [-h HOSTLIST] [-f] [-b] For example, to view a summary report for file serving nodes xs01.hp.com and xs02.hp.com: <installdirectory>/bin/ibrix_haconfig -l -h xs01.hp.com,xs02.hp.com Host...
The -v option produces detailed information about configuration checks that received a Passed result. For example, to view a detailed report for file serving nodes xs01.hp.com: <installdirectory>/bin/ibrix_haconfig -i -h xs01.hp.com...
SMTP server will reject the email. <installdirectory>/bin/ibrix_event -m on|off -s SMTP -f from [-r reply-to] [-t subject] The following command configures email settings to use the mail.hp.com SMTP server and to turn on notifications: <installdirectory>/bin/ibrix_event -m on -s mail.hp.com -f FM@hp.com -r MIS@hp.com -t Cluster1 Notification...
To turn off all Alert notifications for admin@hp.com: <installdirectory>/bin/ibrix_event -d -e ALERT -m admin@hp.com To turn off the server.registered and filesystem.created notifications for admin1@hp.com and admin2@hp.com: <installdirectory>/bin/ibrix_event -d -e server.registered,filesystem.created -m admin1@hp.com,admin2@hp.com Testing email addresses To test an email address with a test message, notifications must be turned on. If the address is valid, the command signals success and sends an email containing the settings to the recipient.
Associating event notifications with trapsinks (all SNMP versions)
View definition (V3 only)
Group and user configuration (V3 only)
X9000 Software implements an SNMP agent on the management console that supports the private X9000 Software MIB. The agent can be polled and can send SNMP traps to configured trapsinks. Setting up SNMP notifications is similar to setting up email notifications.
on and off. The default is on. For example, to create a v2 trapsink with a new community name, enter: ibrix_snmptrap -c -h lab13-116 -v 2 -m private For a v3 trapsink, additional options define security settings. USERNAME is a v3 user defined on the trapsink host and is required.
The subtree is added in the named view. For example, to add the X9000 Software private MIB to the view named hp, enter:
ibrix_snmpview -a -v hp -o .1.3.6.1.4.1.18997 -m .1.1.1.1.1.1.1

Configuring groups and users

A group defines the access control policy on managed objects for one or more users. All users must belong to a group.
6 Configuring system backups

Backing up the management console configuration

The management console configuration is automatically backed up whenever the cluster configuration changes. The backup takes place on the node hosting the active management console (or on the Management Server, if a dedicated management console is configured). The backup file is stored at <ibrixhome>/tmp/fmbackup.zip on the machine where it was created.
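Because each new backup overwrites fmbackup.zip in place, it is worth copying the file aside with a timestamp. The following is a minimal sketch; the <ibrixhome> value and the destination directory are site-specific assumptions:

```shell
#!/bin/sh
# Copy the latest management console backup aside with a timestamp.
# IBRIXHOME and BACKUP_DIR are site-specific assumptions; adjust as needed.
IBRIXHOME="${IBRIXHOME:-/usr/local/ibrix}"
BACKUP_DIR="${BACKUP_DIR:-/var/backups/x9000}"
mkdir -p "$BACKUP_DIR"
if [ -f "$IBRIXHOME/tmp/fmbackup.zip" ]; then
  cp "$IBRIXHOME/tmp/fmbackup.zip" \
     "$BACKUP_DIR/fmbackup_$(date +%Y%m%d_%H%M%S).zip"
fi
```

In practice the copy would go to external media or another host (for example, with scp); the local directory here simply keeps the sketch self-contained.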
Configuring NDMP parameters on the cluster Certain NDMP parameters must be configured to enable communications between the DMA and the NDMP Servers in the cluster. To configure the parameters on the management console GUI, select Cluster Configuration from the Navigator, and then select NDMP Backup. The NDMP Configuration Summary shows the default values for the parameters.
To cancel a session, select that session and click Cancel Session. Canceling a session kills all spawned session processes and frees their resources if necessary.
To see similar information for completed sessions, select NDMP Backup > Session History.
To view active sessions from the CLI, use the following command:
ibrix_ndmpsession -l
To view completed sessions, use the following command.
NDMP events

An NDMP Server can generate three types of events: INFO, WARN, and ALERT. These events are displayed on the management console GUI and can be viewed with the ibrix_event command.
INFO events. These events specify when major NDMP operations start and finish, and also report progress.
7 Creating hostgroups for X9000 clients

A hostgroup is a named set of X9000 clients. Hostgroups provide a convenient way to centrally manage clients using the management console. You can put different sets of clients into hostgroups and then perform the following operations on all members of the group:
Create and delete mountpoints
Mount file systems
Prefer a network interface
<installdirectory>/bin/ibrix_hostgroup -m -g GROUP -h MEMBER For example, to add the specified host to the finance group: <installdirectory>/bin/ibrix_hostgroup -m -g finance -h cl01.hp.com Adding a domain rule to a hostgroup To set up automatic hostgroup assignments, define a domain rule for hostgroups. A domain rule restricts hostgroup membership to clients on a particular cluster subnet.
Additional hostgroup operations are described in the following locations:
Creating or deleting a mountpoint, and mounting or unmounting a file system (see “Creating and mounting file systems” in the HP StorageWorks X9000 File Serving Software File System User Guide)
Changing host tuning parameters (see “Tuning file serving nodes and X9000 clients”...
8 Monitoring cluster operations

Monitoring the X9720 Network Storage System status

The X9720 storage monitoring function gathers X9720 system status information and generates a monitoring report. The X9000 management console displays status information on the dashboard. This section describes how to use the CLI to view this information.

Monitoring intervals

The monitoring interval is set by default to 15 minutes (900 seconds).
Events are written to an events table in the configuration database as they are generated. To maintain the size of the file, HP recommends that you periodically remove the oldest events. See “Removing events from the events database table” (page 51) for more information.
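One way to follow this recommendation is to schedule the pruning command (ibrix_event -p, described under "Removing events from the events database table") from cron. The entry below is an illustrative sketch only; the install path and the schedule are assumptions to adapt for your site:

```
# Illustrative root crontab entry (site-specific path and schedule):
# prune the oldest cluster events every Sunday at 02:00.
0 2 * * 0 /usr/local/ibrix/bin/ibrix_event -p
```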
<installdirectory>/bin/ibrix_event -l
View a designated number of events. The command displays the 100 most recent messages by default. Use the -n EVENTS_COUNT option to increase or decrease the number of events displayed.
<installdirectory>/bin/ibrix_event -l [-n EVENTS_COUNT]
The following command displays the 25 most recent events:
<installdirectory>/bin/ibrix_event -l -n 25

Removing events from the events database table

The ibrix_event -p command removes events from the events table, starting with the oldest...
Nondefault host tunings Results of the health checks By default, the Result Information field in a detailed report provides data only for health checks that received a Failed or a Warning result. Optionally, you can expand a detailed report to provide data about checks that received a Passed result, as well as details about the file system and segments.
----------------- ----------- --------- -------------------------------------------- ---------- -------------- ------------ ---------
5.3.468(internal) 5.3.446 GNU/Linux Red Hat Enterprise Linux Server release 5.2 (Tikanga) 2.6.18-92.el5 i386 i686

Remote Hosts
============
Host     Type   Network      Protocol Connection State
-------- ------ ------------ -------- ----------------
lab15-61 Server 99.126.39.71 true     S_SET S_READY S_SENDHB
lab15-62 Server...
Viewing logs

Logs are provided for the management console, file serving nodes, and X9000 clients. Contact HP Support for assistance in interpreting log files. You might be asked to tar the logs and email them to HP.

Viewing and clearing the Integrated Management Log (IML)

The IML logs hardware errors that have occurred on a blade.
HOST            Link Readdir Readdirplus Fsstat Fsinfo Pathconf Commit
lab12-10.hp.com

Viewing operating statistics for file serving nodes
9 Maintaining the system

Shutting down the system

To shut down the system completely, first shut down the X9000 Software, and then power off the X9720 hardware.

Shutting down the X9000 Software

Use the following procedure to shut down the X9000 Software. Unless noted otherwise, run the commands from the dedicated Management Console or from the node hosting the active agile management console.
Starting up the system

To start an X9720 system, first power on the hardware components, and then start the X9000 Software.

Powering on the X9720 system hardware

To power on the X9720 hardware, complete the following steps:
Power on the X9700cx disk capacity block(s).
Power on the X9700c controllers.
/etc/init.d/ibrix_client [start | stop | restart | status]

Tuning file serving nodes and X9000 clients

The default host tuning settings are adequate for most cluster environments. However, HP Support may recommend that you change certain file serving node or X9000 client tuning settings to improve performance.
To tune host parameters on nodes or hostgroups: <installdirectory>/bin/ibrix_host_tune -S {-h HOSTLIST|-g GROUPLIST} -o OPTIONLIST Contact HP Support to obtain the values for OPTIONLIST. List the options as option=value pairs, separated by commas. To set host tunings on all clients, include the -g clients option.
HOSTNAME1 to HOSTNAME2 and update the source host: <installdirectory>/bin/ibrix_fs -m -f FSNAME -H HOSTNAME1,HOSTNAME2 [-M] [-F] [-N] For example, to migrate ownership of all segments in file system ifs1 that reside on s1.hp.com to s2.hp.com: <installdirectory>/bin/ibrix_fs -m -f ifs1 -H s1.hp.com,s2.hp.com...
Locate other segments on the file system that can accommodate the data being evacuated from the affected segment. Select the file system on the management console GUI and then select Segments from the lower Navigator. If segments with adequate space are not available, add segments to the file system.
HP recommends that the default network be routed through the base User Network interface. For a highly available cluster, HP recommends that you put NFS traffic on a dedicated user network and then set up automated failover for it (see “Setting up automated failover”...
For example, to set netmask 255.255.0.0 and broadcast address 10.0.0.4 for interface eth3 on file serving node s4.hp.com: <installdirectory>/bin/ibrix_nic -c -n eth3 -h s4.hp.com -M 255.255.0.0 -B 10.0.0.4 Preferring network interfaces After creating a user network interface for file serving nodes or X9000 clients, you will need to prefer the interface for those nodes and clients.
<installdirectory>/bin/ibrix_hostgroup -n -g HOSTGROUP -A DESTHOST/IFNAME The destination host (DESTHOST) cannot be a hostgroup. For example, to prefer network interface eth3 for traffic from all X9000 clients (the clients hostgroup) to file serving node s2.hp.com: <installdirectory>/bin/ibrix_hostgroup -n -g clients -A s2.hp.com/eth3...
The following command adds a route for virtual interface eth2:232 on file serving node s2.hp.com, sending all traffic through gateway gw.hp.com: <installdirectory>/bin/ibrix_nic -r -n eth2:232 -h s2.hp.com -A -R gw.hp.com Deleting a routing table entry If you delete a routing table entry, it is not replaced with a default entry. A new replacement route must be added manually.
“Changing the cluster interface” (page 65).
To delete a network interface, use the following command:
<installdirectory>/bin/ibrix_nic -d -n IFNAME -h HOSTLIST
The following command deletes interface eth3 from file serving nodes s1.hp.com and s2.hp.com:
<installdirectory>/bin/ibrix_nic -d -n eth3 -h s1.hp.com,s2.hp.com

Viewing network interface information

Executing the ibrix_nic command with no arguments lists all interfaces on all file serving nodes.
/tmp/X9720/ibrix. If this directory no longer exists, download the installation code from the HP support website for your storage system. IMPORTANT: The migration procedure can be used only on clusters running HP X9000 File Serving Software 5.4 or later. Backing up the configuration...
In the command, <cluster_VIF_addr> is the old cluster IP address for the original management console and <local_cluster_IP_addr> is the new IP address you acquired. For example: [root@x109s1 ~]# ibrix_fm -c 172.16.3.1 -d bond0:1 -n 255.255.248.0 -v cluster -I 172.16.3.100 Command succeeded! The original cluster IP address is now configured to the newly created cluster VIF device (bond0:1).
11. Verify that there is only one management console in this cluster:
ibrix_fm -f
For example:
[root@x109s1 ~]# ibrix_fm -f
NAME IP ADDRESS
------ ----------
X109s1 172.16.3.100
Command succeeded!
Install a passive agile management console on a second file serving node. In the command, the -F option forces the overwrite of the new_lvm2_uuid file that was installed with the X9000 Software.
NOTE: If iLO was not previously configured on the server, the command will fail with the following error: com.ibrix.ias.model.BusinessException: x467s2 is not associated with any power sources Use the following command to define the iLO parameters into the X9000 cluster database: ibrix_powersrc -a -t ilo -h HOSTNAME -I IPADDR [-u USERNAME -p PASSWORD] See the installation guide for more information about configuring iLO.
11 Upgrading the X9000 Software

This chapter describes how to upgrade to the latest X9000 File Serving Software release. The management console and all file serving nodes must be upgraded to the new release at the same time. Note the following:
Upgrades to the X9000 Software 5.6 release are supported for systems currently running X9000 Software 5.5.x.
“Upgrading Linux X9000 clients” (page 75) “Upgrading Windows X9000 clients” (page 75). If you received a new license from HP, install it as described in the “Licensing” chapter in this guide. Upgrade firmware on X9720 systems. See “Upgrading firmware on X9720 systems” (page 76).
Save the <hostname>_cluster_config.tgz file, which is located in /tmp, to the external storage media.

Performing the upgrade

Complete the following steps on each node:
Obtain the latest Quick Restore image from the HP kiosk at http://www.software.hp.com/kiosk (you will need your HP-provided login credentials).
When the following screen appears, enter qr to install the X9000 software on the file serving node. The server reboots automatically after the software is installed.
Remove the DVD from the DVD-ROM drive.

Restoring the node configuration

Complete the following steps on each node, starting with the previous active management console:
Log in to the node.
/etc/init.d/ibrix_client status
IBRIX Filesystem Drivers loaded
IBRIX IAD Server (pid 3208) running...
The IAD service should be running, as shown in the previous sample output. If it is not, contact HP Support.

Upgrading Windows X9000 clients

Complete the following steps on each client:
The README file describes the firmware updates and explains how to install them.

Troubleshooting upgrade issues

If the upgrade does not complete successfully, check the following items. For additional assistance, contact HP Support.

Automatic upgrade

Check the following:
If the initial execution of /usr/local/ibrix/setup/upgrade fails, check /usr/local/ibrix/setup/upgrade.log for errors.
feature restore failed. Look at the specific feature log file under /usr/local/ibrix/setup/logs/ for more detailed information.
To retry copying the configuration, use the command appropriate for your server:
◦ A dedicated management console:
/usr/local/ibrix/autocfg/bin/ibrixapp upgrade -f
◦ A file serving node:
Fax the Password Request Form that came with your License Entitlement Certificate. See the certificate for fax numbers in your area. Call or email the HP Password Center. See the certificate for telephone numbers in your area or email addresses.
13 Upgrading the X9720 Network Storage System hardware

WARNING! Before performing any of the procedures in this chapter, read the important warnings, precautions, and safety information in “Warnings and precautions” (page 168) and “Regulatory compliance and safety” (page 172).

Adding new server blades

NOTE: This requires the use of the Quick Restore DVD.
“Recovering the X9720 Network Storage System” (page 125) for more information.
Set up failover. For more information, see the HP StorageWorks X9000 File Serving Software User Guide.
Enable high availability (automated failover) by running the following command on server 1:
# ibrix_server -m
Use two people to lift, move, and install the HP StorageWorks X9700c component. Use an appropriate lifting device to lift, move, and install the HP StorageWorks X9700cx component. Always extend only one component at a time. A cabinet could become unstable if more than one component is extended for any reason.
In an expansion cabinet, you must add capacity blocks in the order shown in the following illustration. For example, when adding a fifth capacity block to your HP StorageWorks X9720 Network Storage System, the X9700c 5 component goes in slots U31 through U32 (see callout 4), and the X9700cx 5 goes in slots U1 through U5 (see callout 8).
1 X9700c 8    5 X9700cx 8
2 X9700c 7    6 X9700cx 7
3 X9700c 6    7 X9700cx 6
4 X9700c 5    8 X9700cx 5

Installation procedure

Add the capacity blocks one at a time, until the system contains the maximum it can hold. The factory pre-provisions the additional capacity blocks with the standard LUN layout and capacity block settings (for example, rebuild priority).
Insert the X9700c into the cabinet. Use the thumbscrews on the front of the chassis to secure it to the cabinet.

Step 2—Install X9700cx in the cabinet

WARNING! Do not remove the disk drives before inserting the X9700cx into the cabinet. The X9700cx is heavy;...
Step 4—Cable the X9700c to SAS switches Using the two 4-meter cables, cable the X9700c to the SAS switch ports in the c-Class Blade Enclosure, as shown in the following illustrations for cabling the base or expansion cabinet. Base cabinet Callouts 1 through 3 indicate additional X9700c components.
Expansion cabinet
X9700c 8
X9700c 7
X9700c 6
X9700c 5
Used by base cabinet.
SAS switch ports 5 through 8 (in interconnect bay 3 of the c-Class Blade Enclosure).
Used by base cabinet.
SAS switch ports 5 through 8 (in interconnect bay 4 of the c-Class Blade Enclosure).

Step 5—Connect the power cords

WARNING! To reduce the risk of electric shock or damage to the equipment:
X9700c enclosure. Wait for the seven-segment display on the rear of the X9700c to read on. This can take a few minutes. If necessary, update the firmware of the new capacity block. See the HP StorageWorks X9720 Network Storage System Administrator Guide for more information about updating the firmware.
IMPORTANT: The X9720 system is shipped with the correct firmware and drivers. Do not upgrade firmware or drivers unless the upgrade is recommended by HP Support or is part of an X9720 patch provided on the HP web site. Firmware update summary When the X9720 Network Storage System software is first loaded, it automatically updates the firmware for some components.
Run the following command: # exds_update_oa_firmware The command prompts for IP address, username and password. Use the data from step 2: HP Onboard Administrator Firmware Flash Utility v1.0.5 Copyright (c) 2009 Hewlett Packard Development Company, L.P. OA network address:192.172.1.1 Username:exds...
X9700c chassis are online. If the path to any controller is "none," the controller might not be updated. Run the update utility (or utilities) located in /opt/hp/mxso/firmware. If you are updating several components, run each update utility one at a time. The update utility depends on the...
Shut down all servers except for the first server. Shut down the first server to single user mode. Run the update utility (or utilities) located in /opt/hp/mxso/firmware. If you are updating several components, run each update utility one at a time. The update utility depends on the...
Start the FTP service: # service vsftpd start Download the HP 3Gb SAS BL Switch Firmware from the HP Support website or install the mxso-firmware file onto the Management Server. Copy the firmware file to the /var/ftp/pub directory. For example: # cp /opt/hp/mxso/firmware/S-2_3_2_13.img /var/ftp/pub...
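The copy step above can be sketched and verified as follows. The temporary directories here are illustrative stand-ins so the commands are safe to run anywhere; on the Management Server the real source is /opt/hp/mxso/firmware and the destination is /var/ftp/pub.

```shell
# Create stand-in source and destination directories (illustrative only;
# substitute /opt/hp/mxso/firmware and /var/ftp/pub on a real system).
src=$(mktemp -d); dst=$(mktemp -d)
echo "firmware-image" > "$src/S-2_3_2_13.img"

# Copy the firmware image, then verify the copy is byte-identical before
# serving it over FTP.
cp "$src/S-2_3_2_13.img" "$dst/"
if cmp -s "$src/S-2_3_2_13.img" "$dst/S-2_3_2_13.img"; then
    echo "copy verified"
fi
```

Verifying the copy before flashing guards against a truncated or corrupted image reaching the switch.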
The collected information is collated into a tar file and placed in the directory /admin/platform/diag/support/tickets/ on the active management console. Send this tar file to HP Support for analysis. The name of the tar file is ticket_<name>.tgz, where <name> is a number, for example, ticket_0002.tgz.
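The naming convention above can be parsed with ordinary shell parameter expansion, for example to pull the ticket number out of a file name when scripting around the tickets directory. The file name used here is the example from the text; the script itself is an illustrative sketch.

```shell
# Extract the ticket number from a support-ticket file name of the form
# ticket_<name>.tgz.
ticket="ticket_0002.tgz"
name="${ticket#ticket_}"   # strip the ticket_ prefix
name="${name%.tgz}"        # strip the .tgz suffix
echo "$name"               # prints 0002
```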
Support ticket states
Support tickets are in one of the following states:

Ticket state       Description
COLLECTING_LOGS    The data collection operation is collecting logs and command output.
COLLECTED_LOGS     The data collection operation has completed on all nodes in the cluster.
CREATING           The data collected from each node is being copied to the active management console.
The X9720 Network Storage System escalate tool produces a report on the state of the system. When you report a problem to HP technical support, you will always be asked for an escalate report, so it saves time if you include the report up front.
Each OA has a service port (this is the right-most Ethernet port on the OA). This allows you to use a laptop to access the OA command line interface. See HP BladeSystem c7000 Enclosure Setup and Installation Guide for instructions on how to connect a laptop to the service port.
# exds_stdiag [--raw=<filename>] The --raw=<filename> option saves the raw data gathered by the tool into the specified file in a format suitable for offline analysis, for example by HP support personnel. Following is a typical example of output from this command:...
The exds_netdiag utility performs tests on and retrieves data from the networking components in an X9720 Network Storage System. It performs the following functions: Reports failed Ethernet Interconnects (failed as reported by the HP Blade Chassis Onboard Administrator) Reports missing, failed, or degraded site uplinks...
2x 1 GB LUNs—These were used by the X9100 for membership partitions, and remain in the X9720 for backwards compatibility. Customers may use them as they see fit, but HP does not recommend their use for normal data storage, due to performance limitations.
multiple events if they fail. Failed components will be reported in the output of ibrix_vs -i, and failed storage components will be reported in the output of ibrix_health -V -i. Identifying failed I/O modules on an X9700cx chassis When an X9700cx I/O module (or the SAS cable connected to it) fails, the X9700c controller attached to the I/O module reboots and if the I/O module does not immediately recover, the X9700c controller stays halted.
Identifying the failed component IMPORTANT: A replacement X9700cx I/O module could have the wrong version of firmware pre-installed. The X9700cx I/O module cannot operate with mixed versions of firmware. Plan for system downtime before inserting a new X9700cx I/O module. Verify that SAS cables are connected to the correct controller and I/O module.
The fault is in the X9700c controller, not in the X9700cx or the SAS cables connecting the controller to the I/O modules. Re-seat the controller as described later in this document. If the fault does not clear, report to HP Support to obtain a replacement controller.
10. If the fault has not cleared at this stage, there could be a double fault (that is, failure of two I/O modules). Alternatively, one of the SAS cables could be faulty. Contact HP Support to help identify the fault or faults. Run the exds_escalate command to generate an escalate...
NOTE: If you reply Y to the wrong array, let the command finish normally. This can do no harm since I/O has been suspended as described above (and the I/O modules should already be at the level included in the X9720 Network Storage System). After the array has been flashed, you can exit the update utility by entering q to quit.
The file system and IAD/FS output fields should show matching version numbers unless you have installed special releases or patches. If the output fields show mismatched version numbers and you do not know of any reason for the mismatch, contact HP Support. A mismatch might affect the operation of your cluster.
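The version check described above amounts to a simple string comparison. The sketch below uses illustrative version strings in place of the real file system and IAD/FS fields; on a live node these values would be parsed from the version command's output rather than hard-coded.

```shell
# Stand-in values for the file system and IAD/FS version fields
# (illustrative; parse these from real command output in practice).
fs_version="6.0.1"
iad_version="6.0.1"

# The two fields should match unless a special release or patch is installed.
if [ "$fs_version" = "$iad_version" ]; then
    echo "versions match"
else
    echo "MISMATCH: contact HP Support"
fi
```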
Automatic. X9000 Software reinstall failed If you need to restore the X9000 Software on an X9720 server blade, HP recommends using the X9720 QuickRestore DVD. If, for some reason, you need to uninstall and reinstall the X9000 Software using the ibrixinit command, use the -F option to ibrixinit during reinstallation.
Err: RPC call to host=wodao6 failed, error=-651, func=IDE_FSYNC_prepacked If you see these messages persistently, contact HP Services as soon as possible. The messages could indicate possible data loss and can cause I/O errors for applications that access X9000 file systems.
Apparent failure of HP P700m Sometimes when a server is booted, the HP P700m cannot access the SAS fabric. This is more common when a new blade has just been inserted into the blade chassis, but can occur on other occasions.
Power on all enclosures. Wait until all seven-segment displays show "on", then power on all server blades. If the HP P700m still cannot access the fabric, replace it on affected server blades and run exds_stdiag again. X9700c enclosure front panel fault ID LED is amber If the X9700c enclosure fault ID LED is amber, check to see if the power supplies and controllers are amber.
In this situation, try swapping out each component one at a time, checking the GSI light after each replacement. See Replacing components in the HP ExDS9100 Storage System for replacement instructions. X9700cx drive LEDs are amber after firmware is flashed If the X9700cx drive LEDs are amber after the firmware is flashed, try power cycling the X9700cx again.
To maintain access to a file system, file serving nodes must have current information about the file system. HP recommends that you execute ibrix_health on a regular basis to monitor the health of this information. If the information becomes outdated on a file serving node, execute ibrix_dbck -o to resynchronize the server’s information with the configuration database.
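One way to follow the recommendation above is to schedule the health check from cron. The sketch below writes the crontab entry to a temporary path so it can be inspected first; the schedule and log location are assumptions, and the ibrix_health -l invocation is the one used elsewhere in this guide.

```shell
# Draft a root crontab entry that runs the recommended health check nightly
# (written to /tmp for review; install it under /etc/cron.d when satisfied).
cat > /tmp/ibrix_health.cron <<'EOF'
# Run ibrix_health every night at 02:00 and append the report to a log
0 2 * * * root ibrix_health -l >> /var/log/ibrix_health.log 2>&1
EOF
grep -c 'ibrix_health' /tmp/ibrix_health.cron
```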
HP (or HP service providers or partners) identifies that the repair can be accomplished by the use of a CSR part, HP will ship that part directly to you for replacement. There are two categories of CSR parts: Mandatory—Parts for which customer self repair is mandatory.
In the materials shipped with a CSR component, HP specifies whether the defective component must be returned to HP. In cases where it is required, you must ship the defective part back to HP within a defined period of time, normally five business days. The defective part must be returned with the associated documentation in the provided shipping material.
Note the server blade bay number, then remove the server blade from the blade enclosure. If you are replacing the system board, replace with a new system board. See HP ProLiant BL460c Server Blade Maintenance and Service Guide for instructions.
Doing so could compromise data integrity. To replace an OA module: Using the HP BladeSystem Insight Display, check if the standby OA has an IP address. If one of the OA addresses is 0.0.0.0, then review your records to determine what the IP address should be.
Disconnect the network connections into the Ethernet Virtual Connect module (the module in bay 1 or bay 2). Remove the VC module. Replace the VC module. Reconnect the cable that was disconnected in step 1. Remove and then reconnect the uplink to the customer network for bay 2. NOTE: Clients lose connectivity during this procedure unless you are using a bonded network.
LUN and set it to the failed state. To replace a drive: NOTE: For best results, HP recommends that you replace disk drives with both the X9700c and X9700cx powered on. Identify the drive in need of replacement either by running the exds_stdiag command, or by visually checking the LED on the disk in the X9700c or examining the panel on the front of the X9700cx drawer.
The working controller flashes the replacement controller with the correct firmware. If you do not follow this procedure, the firmware version may be incompatible with the HP ExDS system software. If you need to replace both controllers at the same time, contact HP Support for instructions.
See the HP StorageWorks Disk Enclosure Fan Module Replacement Instructions for more information. Replacing the X9700c chassis You cannot replace the X9700c chassis while the system is in operation. HP recommends that you perform this operation only during a scheduled maintenance window.
X9700cx to normal operation. To replace the X9700cx I/O module: Unmount the file systems. For more information, see the HP StorageWorks X9000 File Serving Software User Guide. Ensure that the disk drawer is fully pushed in and locked.
8 above until the firmware levels are correct. 10. Mount the file systems that were unmounted in step 1. For more information, see the HP StorageWorks X9000 File Serving Software User Guide.
If you disconnect the SAS cable connecting an X9700c controller and an X9700cx I/O module, the X9700c controller will halt. After replacing the SAS cable, you need to re-seat the X9700c controller to reboot it. For more information, see “Replacing the X9700cx I/O module”...
CAUTION: Recovering the management console node can result in data loss if improperly performed. Contact HP Support for assistance in performing the recovery procedure. Starting the recovery To recover a failed blade, follow these steps: If a NIC monitor is configured on the user network, remove the monitor.
The server reboots automatically after the installation is complete. Remove the DVD from the USB DVD drive. The Configuration Wizard starts automatically. Use the appropriate configuration procedure: To configure a file serving node, select one of the following: ◦ When your cluster was configured initially, the installer may have created a template for configuring file serving nodes.
Log into the system as user root (the default password is hpinvent). When the System Deployment Menu appears, select Join an existing cluster. The Configuration Wizard attempts to discover management consoles on the network and then displays the results. Select the appropriate management console for this cluster. NOTE: If the list does not include the appropriate management console, or you want to customize the cluster configuration for the file serving node, select Cancel.
The Verify Configuration window shows the configuration received from the management console. Select Accept to apply the configuration to the server and register the server with the management console. NOTE: If you select Reject, the wizard will exit and the shell prompt will be displayed. You can restart the Wizard by entering the command /usr/local/ibrix/autocfg/bin/menu_ss_wizard or logging in to the server again.
IMPORTANT: Configure a passive agile management console only if the agile management console is enabled and an active agile management console is configured. If you have configured a user network, enter a VIF IP address and netmask. If you configured a passive management console, enter the following command to verify the status of the console: ibrix_fm -i Next, complete the restore on the file serving node.
11. If Insight Remote Support was previously enabled on this file serving node, run the following command to start Insight Remote Support services each time the node is rebooted: chkconfig hp-snmp-agents on To start Insight Remote Support services now, run the following commands:...
12. Run ibrix_health -l from the X9000 management console to verify that no errors are being reported. NOTE: If the ibrix_health command reports that the restored node failed, run the following command: ibrix_health -i -h <hostname> If this command reports failures for volume groups, run the following command: ibrix_pv -a -h <Hostname of restored node>
The Configuration Wizard attempts to discover management consoles on the network and then displays the results. Select Cancel to configure the node manually. (If the wizard cannot locate a management console, the screen shown in step 4 will appear.) The file serving node Configuration Menu appears.
The Cluster Configuration Menu lists the configuration parameters that you will need to set. Use the Up and Down arrow keys to select an item in the list. When you have made your selection, press Tab to move to the buttons at the bottom of the dialog box, and press Space to go to the next dialog box.
Select Time Zone from the menu, and then use Up or Down to select your time zone. Select Default Gateway from the menu, and enter the IP Address of the host that will be used as the default gateway.
Select DNS Settings from the menu, and enter the IP addresses for the primary and secondary DNS servers that will be used to resolve domain names. Also enter the DNS domain name. 11. Select NTP Servers from the menu, and enter the IP addresses or hostnames for the primary and secondary NTP servers.
Select Networks from the menu. Select <add device> to create a bond for the cluster network. You are creating a bonded interface for the cluster network; select Ok on the Select Interface Type dialog box. Enter a name for the interface (bond0 for the cluster interface) and specify the appropriate options and slave devices.
When the Configure Network dialog box reappears, select bond0.
To complete the bond0 configuration, enter a space to select the Cluster Network role. Then enter the IP address and netmask information that the network will use. Repeat this procedure to create a bonded user network (typically bond1 with eth1 and eth2) and any custom networks as required.
IMPORTANT: Configure a passive agile management console only if the agile management console is enabled and an active agile management console is configured. If you configured a user network, enter a VIF IP address and netmask for the network. If you configured a passive management console, enter the following command to verify the status of the console: ibrix_fm -i IMPORTANT:...
Log into the system as user root (the default password is hpinvent). The Management Console Configuration Wizard starts automatically. (You can also launch the Wizard manually by entering the command /usr/local/ibrix/autocfg/bin/menu_wizard.) When using the menu, use the Up and Down arrow keys to select an item in the list. When you have made your selection, press Tab to go to the buttons at the bottom of the dialog box, and then press Space to activate your selection.
Select Hostname from the menu, and enter the hostname of this server. Select Time Zone from the menu, and then use Up or Down to select your time zone.
Select Default Gateway from the menu, and enter the IP Address of the host that will be used as the default gateway. Select DNS Settings from the menu, and enter the IP addresses for your DNS servers. Also enter the DNS domain name.
Select NTP Servers from the menu, and enter the IP addresses or hostnames for the primary and secondary NTP servers. Select Networks from the menu. You will need to create one cluster network interface, which will be used for intracluster communication. Typically this interface is configured as bond0. You may also need to create a user network, which is used for server to client communication.
You are creating a bonded interface for the cluster network; select Ok on the Select Interface Type dialog box. Enter a name for the interface (bond0 for the cluster interface) and specify the appropriate options and slave devices.
11. When the Configure Network dialog box reappears, select bond0.
To complete the bond0 configuration, enter a space to select the Cluster Network role. Then enter the IP address and netmask information that the network will use. Repeat this procedure to create a bonded user network (typically bond1) and any custom networks as required.
Use the AutoPass GUI to reinstall your license. For more information, see the “Licensing” chapter in the HP StorageWorks X9000 File Serving Software User Guide. Ensure that you have root access to the management console. The restore process sets the root password to hpinvent, the factory default.
HP StorageWorks X9720 Network Storage System Controller User Guide (Describes how to install, administer, and troubleshoot the HP StorageWorks X9700c) On the Manuals page, select storage > NAS Systems > NAS/Storage Servers > HP StorageWorks X9000 Network Storage Systems.
HP customer self repair (CSR) programs allow you to repair your StorageWorks product. If a CSR part needs replacing, HP ships the part directly to you so that you can install it at your convenience. Some parts do not qualify for CSR. Your HP-authorized service provider will determine whether a repair can be accomplished by CSR.
For information about HP StorageWorks product warranties, see the warranty information website: http://www.hp.com/go/storagewarranty Subscription service HP recommends that you register your product at the Subscriber's Choice for Business website: http://www.hp.com/go/e-updates After registering, you will receive email notification of product enhancements, new driver versions, firmware updates, and other product resources.
A Component and cabling diagrams
Base and expansion cabinets
An X9720 Network Storage System base cabinet has from 3 to 16 performance blocks (that is, server blades) and from 1 to 4 capacity blocks. An expansion cabinet can support up to four more capacity blocks, bringing the system to a maximum of eight capacity blocks.
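The capacity-block counts above can be checked with simple arithmetic: a fully populated base cabinet holds 4 capacity blocks, and a fully populated expansion cabinet adds 4 more.

```shell
# Worked example of the maximum capacity-block count stated above.
base_blocks=4        # maximum capacity blocks in the base cabinet
expansion_blocks=4   # maximum additional blocks in the expansion cabinet
total=$((base_blocks + expansion_blocks))
echo "$total"        # prints 8
```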
Back view of a base cabinet with one capacity block
1. Management switch 2
2. Management switch 1
3. X9700c 1
4. TFT monitor and keyboard
5. c-Class Blade enclosure
6. X9700cx 1
Front view of an expansion cabinet
The optional X9700 expansion cabinet can contain from one to four capacity blocks. The following diagram shows a front view of an expansion cabinet with four capacity blocks.
1. X9700c 8    5. X9700cx 8
2. X9700c 7    6. X9700cx 7
3. X9700c 6    7. X9700cx 6
4. X9700c 5    8. X9700cx 5
Back view of an expansion cabinet with four capacity blocks
1. X9700c 8    5. X9700cx 8
2. X9700c 7    6. X9700cx 7
3. X9700c 6    7. X9700cx 6
4. X9700c 5    8. X9700cx 5
Performance blocks (c-Class Blade enclosure)
A performance block is a special server blade for the X9720. Server blades are numbered according to their bay number in the blade enclosure.
Ethernet module cabling—Base cabinet” (page 161). If you connect several ports to the same switch in your site network, all ports must use the same media type. In addition, HP recommends you use 10 links. The X9720 Network Storage System uses mode 1 (active/backup) for network bonds. No other bonding mode is supported.
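The mode 1 (active/backup) bonding requirement above corresponds to a bond definition like the following sketch. It is written to a temporary file here so it can be run safely; on a file serving node the file would live under /etc/sysconfig/network-scripts/ifcfg-bond0, and the IP address, netmask, and miimon interval are placeholders.

```shell
# Draft a mode 1 (active/backup) bond definition. All values below are
# illustrative; substitute your site's addressing before use.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
DEVICE=bond0
BONDING_OPTS="mode=1 miimon=100"
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
EOF
grep 'mode=1' "$cfg"
```

Only mode 1 is supported on the X9720, so confirming `mode=1` in the bonding options is a quick sanity check before bringing the interface up.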
X9700c (array controller with 12 disk drives) Front view of an X9700c 1. Bay 1 5. Power LED 2. Bay 2 6. System fault LED 3. Bay 3 7. UID LED 4. Bay 4 8. Bay 12 Rear view of an X9700c 1.
This component is also known as the HP StorageWorks 600 Modular Disk System. For an explanation of the LEDs and buttons on this component, see the HP StorageWorks 600 Modular Disk System User Guide at http://www.hp.com/support/manuals. Under Storage click Disk Storage Systems, then under Disk Enclosures click HP StorageWorks 600 Modular Disk System.
Cabling diagrams
Capacity block cabling—Base and expansion cabinets
A capacity block consists of the X9700c and X9700cx.
CAUTION: Correct cabling of the capacity block is critical for proper X9720 Network Storage System operation.
X9700c
X9700cx primary I/O module (drawer 2)
X9700cx secondary I/O module (drawer 2)
X9700cx primary I/O module (drawer 1)
X9700cx secondary I/O module (drawer 1)
Site network
Onboard Administrator
Available uplink port
Management switch 2
Management switch 1
Bay 1 (Virtual Connect Flex-10 10 Ethernet Module for connection to site network)
Bay 2 (Virtual Connect Flex-10 10 Ethernet Module for connection to site network)
Bay 5 (reserved for future use)
Bay 6 (reserved for future use)
Bay 7 (reserved for optional components)
Bay 8 (reserved for optional components)
SAS switch cabling—Base cabinet
NOTE: Callouts 1 through 3 indicate additional X9700c components.
X9700c 4
X9700c 3
X9700c 2
X9700c 1
SAS switch ports 1 through 4 (in interconnect bay 3 of the c-Class Blade Enclosure). Ports 2 through 4 are reserved for additional capacity blocks.
SAS switch cabling—Expansion cabinet
NOTE: Callouts 1 through 3 indicate additional X9700c components.
X9700c 8
SAS switch ports 1 through 4 (in interconnect bay 3 of the c-Class Blade Enclosure). Used by base cabinet.
X9700c 7
SAS switch ports 5 through 8 (in interconnect bay 3 of the c-Class Blade Enclosure).
X9720 Network Storage System components. Spare parts are categorized as follows: Mandatory. Parts for which customer self repair is mandatory. If you ask HP to replace these parts, you will be charged for the travel and labor costs of this service.
Use conductive field service tools. Use a portable field service kit with a folding static-dissipating work mat. If you do not have any of the suggested equipment for proper grounding, have an HP-authorized reseller install the part. NOTE: For more information on static electricity or assistance with product installation, contact your HP-authorized reseller.
WARNING! Power supplies or systems marked with these symbols indicate the presence of multiple sources of power. WARNING! Any product or assembly marked with these symbols indicates that the component exceeds the recommended weight for one individual to handle safely. Weight warning WARNING! The device can be very heavy.
WARNING! Verify that the AC power supply branch circuit that provides power to the rack is not overloaded. Overloading AC power to the rack power supply circuit increases the risk of personal injury, fire, or damage to the equipment. The total rack load should not exceed 80 percent of the branch circuit rating.
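The 80 percent rule above can be illustrated with a worked example. The 30 A branch circuit rating here is hypothetical; substitute the rating of your actual branch circuit.

```shell
# Worked example of the 80 percent branch-circuit rule: for a hypothetical
# 30 A branch circuit, the maximum total rack load is 30 * 0.80 = 24 A.
branch_rating_amps=30
max_load_amps=$(awk -v r="$branch_rating_amps" 'BEGIN { printf "%g", r * 0.80 }')
echo "$max_load_amps A"
```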
CAUTION: Protect the installed solution from power fluctuations and temporary interruptions with a regulating Uninterruptible Power Supply (UPS). This device protects the hardware from damage caused by power surges and voltage spikes, and keeps the system in operation during a power failure.
D Regulatory compliance and safety Regulatory compliance identification numbers For the purpose of regulatory compliance certifications and identification, this product has been assigned a unique regulatory model number. The regulatory model number can be found on the product nameplate label, along with all required approval markings and information. When requesting compliance information for this product, always refer to this regulatory model number.
Do not operate controls, make adjustments, or perform procedures to the laser device, other than those specified herein. Allow only HP-authorized service technicians to repair the unit. The Center for Devices and Radiological Health (CDRH) of the U.S. Food and Drug Administration implemented regulations for laser products on August 2, 1976.
International notices and statements Canadian notice (Avis Canadien) Class A equipment This Class A digital apparatus meets all requirements of the Canadian Interference-Causing Equipment Regulations. Cet appareil numérique de la classe A respecte toutes les exigences du Règlement sur le matériel brouilleur du Canada.
To forward them to recycling or proper disposal, please use the public collection system or return them to HP, an authorized HP Partner, or their agents. For more information about battery replacement or proper disposal, contact an authorized reseller or an authorized service provider.
1.00 mm² or 18 AWG, and the length of the cord must be between 1.8 m (6 ft) and 3.6 m (12 ft). If you have questions about the type of power cord to use, contact an HP-authorized service provider. NOTE: Route power cords so that they will not be walked on and cannot be pinched by items placed upon or against them.
NOTE: For more information on static electricity, or for assistance with product installation, contact your authorized reseller.
Waste Electrical and Electronic Equipment directive
Czechoslovakian notice
Danish notice
Dutch notice
DNS Domain name system.
FTP File Transfer Protocol.
GSI Global service indicator.
HA High availability.
HBA Host bus adapter.
HCA Host channel adapter.
HDD Hard disk drive.
IAD HP X9000 Software Administrative Daemon.
iLO Integrated Lights-Out.
Initial microcode load.
IOPS I/Os per second.
IPMI Intelligent Platform Management Interface.
JBOD Just a bunch of disks.
UFM Voltaire's Unified Fabric Manager client software.
UID Unit identification.
USM SNMP User Security Model.
VACM SNMP View Access Control Model.
VC HP Virtual Connect.
VIF Virtual interface.
WINS Windows Internet Naming Service.
WWN World Wide Name. A unique identifier assigned to a Fibre Channel device.