Setup of an AX100 on a Red Hat ES 4 server

Jehan Procaccia MCI INT-EVRY- jehan.procaccia@int-evry.fr

12 June 2006


Abstract: This document is a practical setup of an AX100 storage system on a Linux Red Hat 4 system. It shows the complete steps, as well as some special supplementary steps that were needed because we used an AX system previously set up for another server (we reused it, so we needed to reinitialize it). This documentation is an 'update' of the French one dedicated to AX100 discovery and crash tests on a RH ES 3, available here: http://www.int-evry.fr/s2ia/user/procacci/Doc/AX100/AX100-ISO_8859-1.html

1  Hardware used

It is an AX100 SAN (Storage Area Network) composed of 12 disks of 250 GB (about 3 TB), directly attached through QLogic Fibre Channel (qla200) HBAs to a Dell PowerEdge 1850 server with two RAID 1 internal disks for the Red Hat system. An APC UPS (uninterruptible power supply, www.apc.com) is also attached.

2  Official Documentations

2.1  Descriptions

The description of the AX100 is no longer available on the Dell sites; I could only find the new AX150 model:
Dell http://www.dell.com/content/products/productdetails.aspx/pvaul_ax150?c=us&cs=555&l=en&s=biz&~tab=specstab#tabtop
EMC http://france.emc.com/interoperability/matrices/AX_Series_SupportMatrix.pdf
EMC Navisphere releases http://france.emc.com/products/systems/clariion/ax100/support/pdf/R19_whats_new.pdf
EMC support matrix http://france.emc.com/products/systems/clariion/ax100/support/pdf/supported_configs.pdf

2.2  Support

The source French documentation is available at http://france.emc.com/local/fr/FR/products/systems/clariion/ax100/support/support.jsp, then follow 'Connexion Directe' -> 'Serveur Linux': http://france.emc.com/products/systems/clariion/ax100/support/pdf/direct_linux_install.pdf.

3  Server checks and updates

3.1  check the current kernel


[root@pasargades rhn]# uname -a
Linux pasargades.int-evry.fr 2.6.9-34.ELsmp #1 SMP Fri Feb 24 16:54:53 EST 2006 i686 i686 i386 GNU/Linux
[root@pasargades rhn]# uptime
 18:32:06 up 10 days,  2:34,  2 users,  load average: 0.00, 0.01, 0.00

3.2  update kernel


[root@pasargades rhn]# up2date --update

Fetching Obsoletes list for channel: rhel-i386-es-4...

Fetching rpm headers...
########################################

Name                                    Version        Rel
----------------------------------------------------------
kernel                                  2.6.9          34.0.1.EL         i686
kernel-devel                            2.6.9          34.0.1.EL         i686
kernel-hugemem-devel                    2.6.9          34.0.1.EL         i686
kernel-smp                              2.6.9          34.0.1.EL         i686
kernel-smp-devel                        2.6.9          34.0.1.EL         i686

Then reboot the server on this new kernel. After reboot:
$ uname -a
Linux pasargades.int-evry.fr 2.6.9-34.0.1.ELsmp #1 SMP Wed May 17 17:05:24 EDT 2006 i686 i686 i386 GNU/Linux

[root@pasargades rhn]# cat /etc/redhat-release
Red Hat Enterprise Linux ES release 4 (Nahant Update 3)

3.3  check qla drivers

On Red Hat 4, the QLogic HBA drivers are included in the default kernels. Check that the kernel sees the adapter:
[root@pasargades /proc/scsi/qla2xxx]
$ cat 1
QLogic PCI to Fibre Channel Host Adapter for QLA200:
        Firmware version 3.03.18 FLX, Driver version 8.01.02-d4
ISP: ISP6312, Serial# Q44447
Request Queue = 0x376c0000, Response Queue = 0x376b0000
Request Queue count = 2048, Response Queue count = 512
Total number of active commands = 0
Total number of interrupts = 198
    Device queue depth = 0x10
Number of free request entries = 2047
Number of mailbox timeouts = 0
Number of ISP aborts = 0
Number of loop resyncs = 0
Number of retries for empty slots = 0
Number of reqs in pending_q= 0, retry_q= 0, done_q= 0, scsi_retry_q= 0
Host adapter:loop state = <DEAD>, flags = 0x1a03
Dpc flags = 0x4000000
MBX flags = 0x0
Link down Timeout = 030
Port down retry = 030
Login retry count = 030
Commands retried with dropped frame(s) = 0
Product ID = 4953 5020 2020 0003

SCSI Device Information:
scsi-qla0-adapter-node=200000e08b199f17;
scsi-qla0-adapter-port=210000e08b199f17;

FC Port Information:

SCSI LUN Information:
(Id:Lun)  * - indicates lun is not registered with the OS.


The kernel modules should show up in lsmod:
[root@pasargades /proc/scsi/qla2xxx]
$ lsmod | grep qla
qla6312               119233  0
qla2xxx               165733  2 qla6312
scsi_transport_fc      12225  1 qla2xxx
scsi_mod              116941  5 sg,qla2xxx,scsi_transport_fc,megaraid_mbox,sd_mod
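
If there are several HBA instances, a quick loop over /proc/scsi/qla2xxx shows the firmware/driver version and the link state of each adapter (a minimal sketch; the instance numbers are whatever the kernel assigned on this server):
# print firmware/driver version and loop state for every qla2xxx instance
for hba in /proc/scsi/qla2xxx/*; do
    echo "--- $hba ---"
    grep -E 'Firmware version|loop state' "$hba"
done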

4  Powerpath

We got the latest EMCpower.LINUX RPM from the Dell-EMC gold queue FTP.
[root@pasargades ~/PowerPath-EMC]
$ rpm -ivh EMCpower.LINUX-4.5.1-022.rhel.i386.rpm
Preparing...                ########################################### [100%]
   1:EMCpower.LINUX         ########################################### [100%]
All trademarks used herein are the property of their respective owners.
NOTE:License registration is not required to manage the CLARiiON AX series array

The RPM uses a post-install script that does a lot of things; finally it starts /etc/init.d/PowerPath, which loads the EMC kernel modules.
[root@pasargades ~/PowerPath-EMC]
$ lsmod  | grep -i emc
emcphr                 20316  0
emcpmpap              119772  0
emcpmpaa               88328  0
emcpmpc               111452  0
emcpmp                 70580  0
emcp                  887412  5 emcphr,emcpmpap,emcpmpaa,emcpmpc,emcpmp
emcplib                 6144  1 emcp
scsi_mod              116941  6 emcp,sg,qla2xxx,scsi_transport_fc,megaraid_mbox,sd_mod

We can stop and start the PowerPath init script to check that:
$ /etc/init.d/PowerPath stop
Stopping PowerPath:  done
[root@pasargades ~/PowerPath-EMC]
$ lsmod  | grep -i emc
[root@pasargades ~/PowerPath-EMC]
$ /etc/init.d/PowerPath start
Starting PowerPath:  done
[root@pasargades ~/PowerPath-EMC]
$ lsmod  | grep -i emc
emcphr                 20316  0
emcpmpap              119772  0
emcpmpaa               88328  0
emcpmpc               111452  0
emcpmp                 70580  0
emcp                  887412  5 emcphr,emcpmpap,emcpmpaa,emcpmpc,emcpmp
emcplib                 6144  1 emcp
scsi_mod              116941  6 emcp,sg,qla2xxx,scsi_transport_fc,megaraid_mbox,sd_mod
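
To make sure PowerPath comes back after a reboot, the init script can be registered with chkconfig (a hedged sketch; the RPM may already have registered it):
# register the PowerPath init script and list the runlevels it is enabled in
chkconfig --add PowerPath
chkconfig --list PowerPath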

5  AX reinitialization

5.1  naviinittool RPM installation

First we need to install the naviinittool RPM, which contains naviinittoolcli; as its name implies, it is a script that allows us to initialize the AX (set the SP IP addresses and the login/password).
[root@pasargades /media/cdrom/linux]
$ rpm -Uvh naviinitool-i386.rpm
Preparing...                ########################################### [100%]
   1:naviinittool           ########################################### [100%]
[root@pasargades /media/cdrom/linux]
$ rpm -Uvh naviserverutil.rpm-i386.rpm
Preparing...                ########################################### [100%]
   1:naviserverutil         ########################################### [100%]
Adding...

5.2  Hardware reinitialization of the AX

As its name implies, the naviinittoolcli script allows us to initialize the AX (set the SP IP addresses and the login/password). Here, in our case, the AX had been used by an older server, which is why we see an old configuration (name and IP addresses already set). To get rid of the old configuration, we can reinitialize the AX with the power on/off button at startup. First we stop the AX by pressing the power on/off button at the back of the AX (you need a thin pencil to push it!). Once it is stopped, the procedure says to wait 10 seconds. After 10 seconds of inactivity, push the power on/off button again and keep it pushed for about 3 to 5 seconds until the green light appears; the AX is then reinitialized. Wait a while before running naviinittoolcli after this fresh reboot, otherwise it sees nothing! (I had time to go and get a coffee...)
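
A small wait loop against one SP management address saves guessing when the array is back up (a sketch only; 157.159.14.21 is the old SP A address the array still carried at this point, adjust it to whatever address the SP answers on):
# poll SP A every 10 seconds until it answers, then run naviinittoolcli
until ping -c 1 -W 2 157.159.14.21 > /dev/null 2>&1; do
    echo "SP not answering yet, waiting..."
    sleep 10
done
echo "SP is up, naviinittoolcli should now discover the array"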

5.3  Software reinitialization of the AX

Now we can start the naviinittoolcli script:
[root@pasargades /opt/Navisphere/bin]
$ ./naviinittoolcli

                EMC END-USER LICENSE AGREEMENT
                        (EULA)

...
Do you want to accept the agreement?(Yes/No)yes
Navisphere Array Initialization Tool Version 1.0 (6.8.1.2.21)

Item Number:            1
Serial Number:          FCNPR043800095
Name:                   audierne
Current IPAddress(es):  157.159.14.21   157.159.14.22
Initialized:            yes

Please choose a storage system by 'Item Number' or '0' to discover again.
Please type 'e' to exit the application.

1

The current name of the storage system is: audierne
If you wish to keep the current name press 'k'. Please press 'Carriage Return' to modify the name.
Please specify the new name for the storage system: darius

Please specify the new networking parameters for : FCNPR043800095

The current IP address of storage processor A is: 157.159.14.21
If you wish to keep the current IP address press 'k'. Please press 'Carriage Return' to modify the IP address.
Please enter the IP address of storage processor A: 157.159.10.98

The current IP address of storage processor B is: 157.159.14.22
If you wish to keep the current IP address press 'k'. Please press 'Carriage Return' to modify the IP address.
Please enter the IP address of storage processor B: 157.159.10.99

The current subnet mask is : 255.255.255.0
If you wish to keep the current subnet mask press 'k'. Please press 'Carriage Return' to modify the subnet mask.k

The current gateway address is : 157.159.14.1
If you wish to keep the current gateway address press 'k'. Please 'Carriage Return' to modify the gateway address.
Please enter the gateway IP address: 157.159.10.1

Please specify the security settings for : FCNPR043800095

User Name: ourusername

Password: secret
Confirm Password: secret
Please choose 'a' to apply these settings or 'c' to cancel and retype the settings or 'e' to exit the application: a
Operation completed successfully.

5.4  Check that IP addresses are set and reachable


[root@lugdunum ~]
$ ping 157.159.10.99
PING 157.159.10.99 (157.159.10.99) 56(84) bytes of data.
64 bytes from 157.159.10.99: icmp_seq=0 ttl=127 time=0.263 ms
64 bytes from 157.159.10.99: icmp_seq=1 ttl=127 time=0.240 ms

--- 157.159.10.99 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.240/0.251/0.263/0.019 ms, pipe 2
[root@lugdunum ~]
$ ping 157.159.10.98
PING 157.159.10.98 (157.159.10.98) 56(84) bytes of data.
64 bytes from 157.159.10.98: icmp_seq=0 ttl=127 time=0.261 ms
64 bytes from 157.159.10.98: icmp_seq=1 ttl=127 time=0.245 ms

5.5  naviserverutilcli

In order to set up the connection between the directly attached server and the AX, we ran naviserverutilcli. Again, because the AX had been attached to an old server, we could not do anything from naviserverutilcli at first:
$ ./naviserverutilcli
Welcome to Navisphere Server Utility - version : 1.0 (6.8.1.2.21)

Storage systems attached to this server:

HBA    Storage System  SP  Port  SP IP Address
-------------------------------------------------------

No paths are discovered from the server to the storage system. Please make sure
1. The link between the server and the storage system is up.
2. The HBA software on the server has discovered the storage system target.


There are currently no volumes from external storage systems.

Please verify the information above.  If it is correct, you can update
the server with the attached storage systems.  If the information
is incorrect you can scan again and then update.

Please select [u]pdate, [s]can, [c]ancel: s

Storage systems attached to this server:

HBA    Storage System  SP  Port  SP IP Address
-------------------------------------------------------

No paths are discovered from the server to the storage system. Please make sure
1. The link between the server and the storage system is up.
2. The HBA software on the server has discovered the storage system target.


There are currently no volumes from external storage systems.

Please verify the information above.  If it is correct, you can update
the server with the attached storage systems.  If the information
is incorrect you can scan again and then update.

Please select [u]pdate, [s]can, [c]ancel: c

We thought that the server needed a reboot to see the AX.
[root@pasargades ~]
$ reboot

That was apparently not enough. So we finally also connected to the web interface on one of the IP addresses set above (one SP -> https://157.159.10.98), logged in with the login/password set above in naviinittoolcli, and deleted the old server connections. Then naviserverutilcli did see the SPs:
[root@pasargades /opt/Navisphere/bin]
$ ./naviserverutilcli
Welcome to Navisphere Server Utility - version : 1.0 (6.8.1.2.21)

Storage systems attached to this server:

HBA    Storage System  SP  Port  SP IP Address
-------------------------------------------------------
0      FCNPR043800095  A   0     157.159.10.98
1      FCNPR043800095  B   0     157.159.10.99


There are currently no volumes from external storage systems.

Please verify the information above.  If it is correct, you can update
the server with the attached storage systems.  If the information
is incorrect you can scan again and then update.

Please select [u]pdate, [s]can, [c]ancel:

At this stage, here is what the kernel SCSI layer sees:
[root@pasargades /opt/Navisphere/bin]
$ cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 06 Lun: 00
  Vendor: PE/PV    Model: 1x2 SCSI BP      Rev: 1.0
  Type:   Processor                        ANSI SCSI revision: 02
Host: scsi0 Channel: 01 Id: 00 Lun: 00
  Vendor: MegaRAID Model: LD 0 RAID1   69G Rev: 521S
  Type:   Direct-Access                    ANSI SCSI revision: 02
Host: scsi1 Channel: 00 Id: 00 Lun: 00
  Vendor: DGC      Model: LUNZ             Rev: 0208
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi2 Channel: 00 Id: 00 Lun: 00
  Vendor: DGC      Model: LUNZ             Rev: 0208
  Type:   Direct-Access                    ANSI SCSI revision: 04

6  Navisphere

6.1  1st disk pool

It uses the 4 initial disks. These disks are particular: they contain a mirror of the AX system, so it is a good idea to keep them in a dedicated disk pool.

6.2  2nd disk pool


6.3  Hot Spare



6.4  Virtual Disk

Here we create 2 virtual disks that take, respectively, the 2 whole disk pools. We will rely on Linux LVM for later sizing and resizing of partitions instead of Navisphere... which one is better??

6.5  Check components while initializing



7  Rescan dynamically the scsi bus

After the initialization ends, the server does not see the new devices :-( I tried a script from http://www.linux1394.org/scripts/rescan-scsi-bus.sh that should dynamically rescan the bus, but with no success.
$ /root/rescan-scsi-bus.sh
Host adapter 1 (qla2xxx) found.
Host adapter 2 (qla2xxx) found.
Scanning for device 1 0 0 0 ...
OLD: Host: scsi1 Channel: 00 Id: 00 Lun: 00
      Vendor: DGC      Model: LUNZ             Rev: 0208
      Type:   Direct-Access                    ANSI SCSI revision: 04
Scanning for device 2 0 0 0 ...
OLD: Host: scsi2 Channel: 00 Id: 00 Lun: 00
      Vendor: DGC      Model: LUNZ             Rev: 0208
      Type:   Direct-Access                    ANSI SCSI revision: 04
0 new device(s) found.
0 device(s) removed.
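
An alternative that should also work on these 2.6 kernels is to ask the qla hosts to rescan through sysfs instead of reloading the modules (a hedged sketch, not what I did here; the host numbers must match the adapters shown above):
# optionally force a loop initialization, if the fc_host class is present
echo "1" > /sys/class/fc_host/host1/issue_lip
# "- - -" means all channels, all targets, all LUNs
echo "- - -" > /sys/class/scsi_host/host1/scan
echo "- - -" > /sys/class/scsi_host/host2/scan
cat /proc/scsi/scsi   # check whether the new LUNs showed up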

In the end I stopped PowerPath and unloaded the qla modules in order to restart the whole thing.
$ /etc/init.d/PowerPath stop
Stopping PowerPath:  done
$ lsmod | grep qla
qla6312               119233  0
qla2xxx               165733  1 qla6312
scsi_transport_fc      12225  1 qla2xxx
scsi_mod              116941  5 sg,qla2xxx,scsi_transport_fc,megaraid_mbox,sd_mod
[root@pasargades /opt/Navisphere/bin]
$ modprobe -r qla6312 qla2xxx
[root@pasargades /opt/Navisphere/bin]
$ lsmod | grep qla

Then reload the whole thing:
$ modprobe qla2xxx qla6312
[root@pasargades /opt/Navisphere/bin]
$ /etc/init.d/PowerPath start
Starting PowerPath:  done

Then it works; the kernel does see the new devices:
$ cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 06 Lun: 00
  Vendor: PE/PV    Model: 1x2 SCSI BP      Rev: 1.0
  Type:   Processor                        ANSI SCSI revision: 02
Host: scsi0 Channel: 01 Id: 00 Lun: 00
  Vendor: MegaRAID Model: LD 0 RAID1   69G Rev: 521S
  Type:   Direct-Access                    ANSI SCSI revision: 02
Host: scsi3 Channel: 00 Id: 00 Lun: 00
  Vendor: DGC      Model: RAID 5           Rev: 0208
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi3 Channel: 00 Id: 00 Lun: 01
  Vendor: DGC      Model: RAID 5           Rev: 0208
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi4 Channel: 00 Id: 00 Lun: 00
  Vendor: DGC      Model: RAID 5           Rev: 0208
  Type:   Direct-Access                    ANSI SCSI revision: 04
Host: scsi4 Channel: 00 Id: 00 Lun: 01
  Vendor: DGC      Model: RAID 5           Rev: 0208
  Type:   Direct-Access                    ANSI SCSI revision: 04
[root@pasargades /opt/Navisphere/bin]
$ fdisk -l

Disk /dev/sda: 73.2 GB, 73274490880 bytes
255 heads, 63 sectors/track, 8908 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1           4       32098+  de  Dell Utility
/dev/sda2   *           5         583     4650817+  83  Linux
/dev/sda3             584        1220     5116702+  83  Linux
/dev/sda4            1221        8908    61753860    5  Extended
/dev/sda5            1221        3770    20482843+  83  Linux
/dev/sda6            3771        5682    15358108+  83  Linux
/dev/sda7            5683        6192     4096543+  82  Linux swap

Disk /dev/sdb: 676.4 GB, 676457349120 bytes
255 heads, 63 sectors/track, 82241 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 1395.8 GB, 1395864371200 bytes
255 heads, 63 sectors/track, 169704 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 676.4 GB, 676457349120 bytes
255 heads, 63 sectors/track, 82241 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/sde: 1395.8 GB, 1395864371200 bytes
255 heads, 63 sectors/track, 169704 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sde doesn't contain a valid partition table

Disk /dev/emcpowera: 676.4 GB, 676457349120 bytes
255 heads, 63 sectors/track, 82241 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/emcpowera doesn't contain a valid partition table

Disk /dev/emcpowerb: 1395.8 GB, 1395864371200 bytes
255 heads, 63 sectors/track, 169704 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/emcpowerb doesn't contain a valid partition table


Note: we can see that fdisk sees the two 'raw' paths (/dev/sdb and /dev/sdd) to the same device, which is finally presented by PowerPath as /dev/emcpowera. All disk commands (fdisk, etc.) should now use that pseudo device in order to benefit from PowerPath (load balancing and failover on our dual-attached FC).
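
PowerPath's own CLI can confirm which native paths sit behind each pseudo device (a small sketch; powermt is installed by the EMCpower.LINUX RPM):
# show every pseudo device with its underlying /dev/sdX paths and their state
powermt display dev=all
# save the current path configuration so it is restored at the next start
powermt save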

The 'rescan' script now shows:
$ /root/rescan-scsi-bus.sh
Host adapter 3 (qla2xxx) found.
Host adapter 4 (qla2xxx) found.
Scanning for device 3 0 0 0 ...
OLD: Host: scsi3 Channel: 00 Id: 00 Lun: 00
      Vendor: DGC      Model: RAID 5           Rev: 0208
      Type:   Direct-Access                    ANSI SCSI revision: 04
Scanning for device 4 0 0 0 ...
OLD: Host: scsi4 Channel: 00 Id: 00 Lun: 00
      Vendor: DGC      Model: RAID 5           Rev: 0208
      Type:   Direct-Access                    ANSI SCSI revision: 04
0 new device(s) found.
0 device(s) removed.

8  Partitioning, volume groups, logical volumes etc ...

8.1  Volume Group

We will now create a volume group on the Linux system to group the two disk pools we created in Navisphere. Start fdisk on the PowerPath device, create a partition, and toggle its type to Linux LVM.
$ fdisk /dev/emcpowera

Command (m for help): p

Disk /dev/emcpowera: 676.4 GB, 676457349120 bytes
255 heads, 63 sectors/track, 82241 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

         Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-82241, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-82241, default 82241):
Using default value 82241

Command (m for help): p

Disk /dev/emcpowera: 676.4 GB, 676457349120 bytes
255 heads, 63 sectors/track, 82241 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

         Device Boot      Start         End      Blocks   Id  System
/dev/emcpowera1               1       82241   660600801   83  Linux


Command (m for help): t
Selected partition 1
Hex code (type L to list codes): L


Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): p

Disk /dev/emcpowera: 676.4 GB, 676457349120 bytes
255 heads, 63 sectors/track, 82241 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

         Device Boot      Start         End      Blocks   Id  System
/dev/emcpowera1               1       82241   660600801   8e  Linux LVM

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Same thing on the second disk pool
Disk /dev/emcpowerb: 1395.8 GB, 1395864371200 bytes
255 heads, 63 sectors/track, 169704 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

         Device Boot      Start         End      Blocks   Id  System
/dev/emcpowerb1               1      169704  1363147348+  8e  Linux LVM

Command (m for help): w
The partition table has been altered!
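
For scripting, the same whole-disk Linux LVM partition can also be created non-interactively with sfdisk (a hedged sketch, not what was done here; double-check the target device before running it):
# one primary partition spanning the whole disk, partition type 8e (Linux LVM)
echo ',,8e' | sfdisk /dev/emcpowera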


8.2  Physical volume aggregation

We will aggregate /dev/emcpowera1 and /dev/emcpowerb1 to form a single volume group.
[root@pasargades ~]
$ pvcreate /dev/emcpowera1
  Physical volume "/dev/emcpowera1" successfully created
$ pvcreate /dev/emcpowerb1
  Physical volume "/dev/emcpowerb1" successfully created
$ vgcreate VolGroup00 /dev/emcpowera1 /dev/emcpowerb1
  Found duplicate PV M14GpOH2Wv0VLskN6Igj7M1PmYQhS7e4: using /dev/sdb1 not /dev/emcpowera1
  Found duplicate PV fh2ofF0ZnQZf9oMimgxwxaAj9sJHOL5T: using /dev/sdc1 not /dev/emcpowerb1
  Found duplicate PV M14GpOH2Wv0VLskN6Igj7M1PmYQhS7e4: using /dev/sdd1 not /dev/sdb1
  Found duplicate PV fh2ofF0ZnQZf9oMimgxwxaAj9sJHOL5T: using /dev/sde1 not /dev/sdc1
  Found duplicate PV M14GpOH2Wv0VLskN6Igj7M1PmYQhS7e4: using /dev/emcpowera1 not /dev/sdd1
  Found duplicate PV M14GpOH2Wv0VLskN6Igj7M1PmYQhS7e4: using /dev/sdb1 not /dev/emcpowera1
  Found duplicate PV fh2ofF0ZnQZf9oMimgxwxaAj9sJHOL5T: using /dev/emcpowerb1 not /dev/sde1
  Found duplicate PV fh2ofF0ZnQZf9oMimgxwxaAj9sJHOL5T: using /dev/sdc1 not /dev/emcpowerb1
  Found duplicate PV M14GpOH2Wv0VLskN6Igj7M1PmYQhS7e4: using /dev/sdd1 not /dev/sdb1
  Found duplicate PV fh2ofF0ZnQZf9oMimgxwxaAj9sJHOL5T: using /dev/sde1 not /dev/sdc1
  Found duplicate PV M14GpOH2Wv0VLskN6Igj7M1PmYQhS7e4: using /dev/emcpowera1 not /dev/sdd1
  Found duplicate PV fh2ofF0ZnQZf9oMimgxwxaAj9sJHOL5T: using /dev/emcpowerb1 not /dev/sde1

  Volume group "VolGroup00" successfully created

8.3  display VolumeGroup


[root@pasargades ~]
$ vgdisplay

  --- Volume group ---
  VG Name               VolGroup00
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               1.88 TB
  PE Size               4.00 MB
  Total PE              494078
  Alloc PE / Size       0 / 0
  Free  PE / Size       494078 / 1.88 TB
  VG UUID               5fAFBI-Jxgk-urS5-NFZ3-0Rs9-lsdW-D2kxVB

8.4  Logical Volume

Creation of an experimental 100 GB logical volume:
$ lvcreate -L 100G -n lvol00 VolGroup00

  Logical volume "lvol00" created

$ lvdisplay

  --- Logical volume ---
  LV Name                /dev/VolGroup00/lvol00
  VG Name                VolGroup00
  LV UUID                V9GVyt-YilG-6H9g-ZFXW-eWl2-LEqV-hBbYsx
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                100.00 GB
  Current LE             25600
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:0

8.5  Format Logical volume


$  mke2fs -j /dev/VolGroup00/lvol00
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
13107200 inodes, 26214400 blocks
1310720 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=29360128
800 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 37 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
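
As the mke2fs output suggests, the automatic checks can be tuned with tune2fs; on a volume this size a forced fsck every 37 mounts can take a long time (a sketch, to adapt to local policy):
# disable the mount-count and time-based automatic checks
tune2fs -c 0 -i 0 /dev/VolGroup00/lvol00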

8.6  Mount Logical volume


$ mount /dev/VolGroup00/lvol00 /mnt/test/


$ df -H
Filesystem             Size   Used  Avail Use% Mounted on
/dev/sda3              5.2G   277M   4.7G   6% /
none                   2.2G      0   2.2G   0% /dev/shm
/dev/sda6               16G   2.2G    13G  15% /usr
/dev/sda5               21G   331M    20G   2% /var
/dev/mapper/VolGroup00-lvol00
                       106G    97M   101G   1% /mnt/test
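
For a permanent mount, the logical volume would go into /etc/fstab (a minimal sketch; /mnt/test and the ext3 defaults are just what was used for this test):
# /etc/fstab entry for the test logical volume
/dev/VolGroup00/lvol00   /mnt/test   ext3   defaults   1 2
Depending on the boot order, it may be safer to mark such an entry noauto and mount it from an init script that runs after PowerPath, so that the emcpower devices exist before the mount.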

8.7  Test it


[root@pasargades /mnt/test]
$ rsync -a -v /root/ .
building file list ... done
./

$ df -H /dev/mapper/VolGroup00-lvol00
Filesystem             Size   Used  Avail Use% Mounted on
/dev/mapper/VolGroup00-lvol00
                       106G   121M   101G   1% /mnt/test


9  Logical volume extension

We tested a live (online) logical volume extension here, before possibly having to do it later in production! First, extend it by 50 GB.

9.1  Extend Logical Volume


[root@pasargades /mnt]
$ lvextend -L +50G /dev/VolGroup00/lvol00
  Extending logical volume lvol00 to 150.00 GB
  Logical volume lvol00 successfully resized

[root@pasargades /mnt/test]
$ df -H .
Filesystem             Size   Used  Avail Use% Mounted on
/dev/mapper/VolGroup00-lvol00
                       148G   120M   141G   1% /mnt/test

Extend again by 50 GB:
[root@pasargades /mnt]
$ lvextend -L +50G /dev/VolGroup00/lvol00
  Extending logical volume lvol00 to 200.00 GB
  Logical volume lvol00 successfully resized

Now we need to extend the filesystem; this can be done 'online' with ext2online:
$ ext2online -v /dev/VolGroup00/lvol00 200G
ext2online v1.1.18 - 2001/03/18 for EXT2FS 0.5b
new filesystem size 52428800

creating group 1154 with 32768 blocks (rsvd = 1018, newgd = 13)

cache direct hits: 477, indirect hits: 4, misses: 7

10  Reduce a filesystem and its Logical Volume

Again, to check this before being in production, we reduced the filesystem. To do that, we first need to check the disk (unmount it first!).
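
Concretely, using the mount point from above (a short reminder; nothing else should be using the filesystem when it is unmounted):
# unmount the test filesystem before checking and resizing it
umount /mnt/test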

10.1  FSCK


[root@pasargades /mnt]
$ e2fsck -f /dev/VolGroup00/lvol00
e2fsck 1.35 (28-Feb-2004)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/VolGroup00/lvol00: 587/26214400 files (0.3% non-contiguous), 851831/52428800 blocks

10.2  Resize Filesystem

We resize the FS from 200 GB to 160 GB.
[root@pasargades /mnt]
$ resize2fs /dev/VolGroup00/lvol00 160G
resize2fs 1.35 (28-Feb-2004)
Resizing the filesystem on /dev/VolGroup00/lvol00 to 41943040 (4k) blocks.
The filesystem on /dev/VolGroup00/lvol00 is now 41943040 blocks long.

10.3  Test it

Check that everything is fine:
[root@pasargades /mnt]
$ mount /dev/VolGroup00/lvol00 /mnt/test/
[root@pasargades /mnt]
$ cd test/
[root@pasargades /mnt/test]
$ df .
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-lvol00
                     165139820    117064 158311872   1% /mnt/test
[root@pasargades /mnt/test]
$ df -H .
Filesystem             Size   Used  Avail Use% Mounted on
/dev/mapper/VolGroup00-lvol00
                       170G   120M   163G   1% /mnt/test

10.4  Resize the Logical volume

The filesystem is resized now, but the Logical Volume is still at 200 GB.

$ lvdisplay

  --- Logical volume ---
  LV Name                /dev/VolGroup00/lvol00
  VG Name                VolGroup00
  LV UUID                V9GVyt-YilG-6H9g-ZFXW-eWl2-LEqV-hBbYsx
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                200.00 GB
  Current LE             51200
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:0

Resize it (unmount it first):
$ lvreduce -v -L 160G /dev/VolGroup00/lvol00
    Finding volume group VolGroup00
  Found duplicate PV M14GpOH2Wv0VLskN6Igj7M1PmYQhS7e4: using /dev/sdb1 not /dev/emcpowera1
  Found duplicate PV fh2ofF0ZnQZf9oMimgxwxaAj9sJHOL5T: using /dev/sdc1 not /dev/emcpowerb1
  Found duplicate PV M14GpOH2Wv0VLskN6Igj7M1PmYQhS7e4: using /dev/sdd1 not /dev/sdb1
  Found duplicate PV fh2ofF0ZnQZf9oMimgxwxaAj9sJHOL5T: using /dev/sde1 not /dev/sdc1
  WARNING: Reducing active logical volume to 160.00 GB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lvol00? [y/n]: y
    Archiving volume group "VolGroup00" metadata (seqno 4).
  Reducing logical volume lvol00 to 160.00 GB
    Creating volume group backup "/etc/lvm/backup/VolGroup00" (seqno 5).
    Found volume group "VolGroup00"
    Loading VolGroup00-lvol00 table
    Suspending VolGroup00-lvol00 (253:0)
    Found volume group "VolGroup00"
    Resuming VolGroup00-lvol00 (253:0)
  Logical volume lvol00 successfully resized
The warnings are quite alarming! However, it is just a test here...
$ lvdisplay
  Found duplicate PV M14GpOH2Wv0VLskN6Igj7M1PmYQhS7e4: using /dev/sdb1 not /dev/emcpowera1
  Found duplicate PV fh2ofF0ZnQZf9oMimgxwxaAj9sJHOL5T: using /dev/sdc1 not /dev/emcpowerb1
  Found duplicate PV M14GpOH2Wv0VLskN6Igj7M1PmYQhS7e4: using /dev/sdd1 not /dev/sdb1
  Found duplicate PV fh2ofF0ZnQZf9oMimgxwxaAj9sJHOL5T: using /dev/sde1 not /dev/sdc1
  --- Logical volume ---
  LV Name                /dev/VolGroup00/lvol00
  VG Name                VolGroup00
  LV UUID                V9GVyt-YilG-6H9g-ZFXW-eWl2-LEqV-hBbYsx
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                160.00 GB
  Current LE             40960
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:0

OK, it is resized; mount it again to check.
$ mount /dev/VolGroup00/lvol00 /mnt/test/
[root@pasargades /mnt]
$ cd test
[root@pasargades /mnt/test]
$ df -H .
Filesystem             Size   Used  Avail Use% Mounted on
/dev/mapper/VolGroup00-lvol00
                       170G   120M   163G   1% /mnt/test

OK, done.

11  Navisphere update

Update the Navisphere software and the associated packages. Dell sent me this file for the update: AX100_Series-Bundle-02.19.100.5.029.zip. Here is the procedure followed, in pictures:






The certificate seems to change, so you get this error until you restart your browser!

After restarting the web browser (and removing all cookies and cache), a new certificate is presented.

11.1  New flare and packages




12  Server utility updates associated with the AX update


[root@pasargades ~/navisphere-files]
$ ls -l
total 75912
-rw-r--r--  1 root root 77008300 Jun 12 17:55 AX100_Series-Bundle-02.19.100.5.029.zip
-rw-r--r--  1 root root   539827 Jun 12 17:53 axnaviserverutil-6.19.0.4.14-1.i386.zip
-rw-r--r--  1 root root    79627 Jun 12 17:53 LINUX_naviinittool-6.19.1.0.0-1.noarch.zip

[root@pasargades ~/navisphere-files]
$ unzip LINUX_naviinittool-6.19.1.0.0-1.noarch.zip
Archive:  LINUX_naviinittool-6.19.1.0.0-1.noarch.zip
  inflating: naviinittool-6.19.1.0.0-1.noarch.rpm
[root@pasargades ~/navisphere-files]
$ unzip axnaviserverutil-6.19.0.4.14-1.i386.zip
Archive:  axnaviserverutil-6.19.0.4.14-1.i386.zip
  inflating: axnaviserverutil-6.19.0.4.14-1.i386.rpm

$ rpm -Uvh axnaviserverutil-6.19.0.4.14-1.i386.rpm
Preparing...                ########################################### [100%]
   1:axnaviserverutil       ########################################### [100%]
Adding...

$ rpm -Uvh naviinittool-6.19.1.0.0-1.noarch.rpm
Preparing...                ########################################### [100%]
   1:naviinittool           ########################################### [100%]


This document was translated from LaTeX by HEVEA.