Friday, May 11, 2012

RAID LEVELS

The different RAID levels available today

Raid 0 - Striping. Data is striped across all the disks present in the array, which improves both read and write performance since every disk works in parallel. E.g. reading a large file from a single disk takes a long time in comparison to reading the same file from a Raid 0 array. There is no data redundancy in this case; if any one disk fails, the data of the whole array is lost.

Raid 1 - Mirroring. In the case of Raid 0 it was observed that there is no redundancy, i.e. if one disk fails the data is lost. Raid 1 overcomes that problem by mirroring the data on a second disk. So if one disk fails, the data is still accessible through the other disk.

Raid 2 - Bit-level striping with dedicated ECC disks. Data is split at the bit level and spread across the data disks and the redundant disks, and an ECC (error correction code, typically a Hamming code) accompanies each data block. The codes are tallied when the data is read from the disk to maintain data integrity. It is rarely used today, since modern drives perform this error correction internally.

Raid 3 - Byte-level striping with dedicated parity. Data is striped across multiple disks at the byte level, and the parity is maintained on a separate, dedicated disk. If a data disk fails, its contents can be rebuilt from the parity disk; losing the parity disk does not lose data, but it removes the redundancy until that disk is replaced.

Raid 4 - Similar to Raid 3; the only difference is that the data is striped across multiple disks at the block level.

Raid 5 - Block-level striping with distributed parity. The data and parity are striped across all disks, which spreads the redundancy (and the parity-write load) across the array instead of a single parity disk. A minimum of three disks is required, and if any one disk fails the data can be rebuilt from the surviving disks.
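The distributed parity Raid 5 stores is just an XOR across the data blocks, which is what makes the rebuild possible. A minimal sketch of the idea with illustrative values (runnable in ksh or any POSIX-style shell):

    d1=90; d2=60                  # two data blocks on two disks (illustrative values)
    parity=$(( d1 ^ d2 ))         # XOR parity stored on a third disk
    rebuilt=$(( parity ^ d2 ))    # disk 1 lost: XOR the parity with the survivor
    [ "$rebuilt" -eq "$d1" ] && echo "disk 1 rebuilt from parity"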

Raid 6 - Block-level striping with dual distributed parity. It stripes blocks of data and parity across all disks in the array like Raid 5, except that it maintains two independent sets of parity information for each parcel of data, increasing the redundancy further. So even if two disks fail, the data is still intact. A minimum of four disks is required.

Raid 7 - Asynchronous, cached striping with dedicated parity. This level is not an open industry standard; it is a trademarked design of Storage Computer Corporation. It builds on the concepts of Raid 3 and Raid 4, adds a great deal of cache across multiple levels, and uses a specialised real-time processor to manage the array asynchronously.
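The levels are easiest to compare side by side at the command line. None of this applies to the AIX procedure below, but as an illustrative sketch, on Linux the mdadm tool creates the common levels like so (the device names /dev/sdb through /dev/sde are assumptions):

    # Raid 0 - striping, no redundancy (2+ disks)
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
    # Raid 1 - mirroring (2 disks)
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    # Raid 5 - block striping, distributed parity (3+ disks, survives 1 failure)
    mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
    # Raid 6 - block striping, dual parity (4+ disks, survives 2 failures)
    mdadm --create /dev/md3 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    cat /proc/mdstat              # check array state and rebuild progress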

LPAR Movement from One Frame to Another

Steps for migrating an LPAR from one IBM frame to another

1. Have Storage zone the LPAR's disks to the new HBA(s). Also have them add an additional 40GB drive for the new boot disk. By doing this we have a back-out to the old boot disk on the old frame.

2. Collect data from the current LPAR (a one-pass capture sketch follows this list):

a. Network information - write down the IP and ipv4 alias(es) for each interface

b. Run “oslevel -r” - you will need this when setting up NIM for the mksysb recovery

c. Check whether the LPAR is running AIO; if so, it will need to be configured after the mksysb recovery

d. Run “lspv” and save the output; it contains the volume group and PVID information

e. Any other customizations you deem necessary
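A quick way to capture all of the above in one pass (the file names are assumptions; lsattr against the aio0 device applies to AIX 5.x):

    oslevel -r        > /tmp/migrate.oslevel   # for matching the NIM SPOT/LPP source
    lspv              > /tmp/migrate.lspv      # hdisk, PVID and volume group map
    netstat -in       > /tmp/migrate.net       # interfaces, IPs and aliases
    lsattr -El aio0   > /tmp/migrate.aio       # AIO settings, if configured
    # copy /tmp/migrate.* off this LPAR before the rebuild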


3. Create a mksysb backup of this LPAR.
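For example, assuming the backup is written to an NFS-mounted NIM filesystem at /mnt/nim (the path and file name are assumptions):

    mksysb -i /mnt/nim/<lpar_name>.mksysb   # -i regenerates /image.data before the backup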

4. Reconfigure the NIM machine object for this LPAR with the new Ethernet MAC address. The foolproof method is to remove the machine and re-create it.
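A sketch of the remove/re-create on the NIM master; the object, network and MAC values are placeholders, and if1 is what carries the new MAC address for the network boot:

    nim -o remove <lpar_name>
    nim -o define -t standalone -a platform=chrp -a netboot_kernel=mp \
        -a if1="<nim_network> <lpar_hostname> <new_MAC_address>" <lpar_name>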

5. In NIM, configure the LPAR for a mksysb recovery. Select the appropriate SPOT and LPP source, based on the “oslevel -r” data collected in step 2.
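From the command line this is a bos_inst operation; a sketch, with all resource names as placeholders to be matched against the “oslevel -r” output from step 2:

    nim -o bos_inst -a source=mksysb \
        -a mksysb=<mksysb_resource> \
        -a spot=<spot_resource> \
        -a lpp_source=<lpp_resource> \
        -a accept_licenses=yes -a boot_client=no <lpar_name>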

6. Shut down the LPAR on the old frame (Halt the LPAR)

7. Move the network cables, fibre cables, disk and zoning, as needed, to the LPAR on the new frame.

8. On the HMC, bring up the LPAR on the new frame in SMS mode and select a network boot. Verify the SMS profile has only a single HBA (if CLARiiON attached, zoned to a single SP), otherwise the recovery will fail with a 554.
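If you prefer the HMC command line over the SMS menus, the same network boot can be started with lpar_netboot; a sketch, with the names and addresses as placeholders:

    lpar_netboot -t ent -s auto -d auto \
        -S <nim_master_ip> -G <gateway_ip> -C <lpar_ip> \
        "<lpar_name>" "<profile_name>" "<managed_system>"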

9. Follow the prompts for building a new OS. Select the new 40GB drive as the boot disk (use the lspv info collected in step 2 to identify the correct 40GB drive). Leave the remaining questions at their defaults of NO (shrink file systems, recover devices, and import volume groups).

10. After the LPAR has booted, from the console (the network interface may be down):

a. lspv   - note the hdisk# of the boot disk

b. bootlist -m normal -o   - verify the boot list is set; if not, set it:

bootlist -m normal -o hdisk#

c. ifconfig en0 down   - if the interface got configured, bring it down

d. ifconfig en0 detach   - and remove it

e. lsdev -Cc adapter   - note the Ethernet interfaces (e.g. ent0, ent1)

f. rmdev -dl en#   - remove all en devices

g. rmdev -dl ent#   - remove all ent devices

h. cfgmgr   - will rediscover the en/ent devices

i. chdev -l ent# -a media_speed=100_Full_Duplex   - set on each ent device unless running GIG; if GIG, leave the defaults

j. Configure the network interfaces and aliases, using the info recorded in step 2:

mktcpip -h <hostname> -a <ip_address> -m <netmask> -i <interface> -g <gateway> -A no -t N/A -s

chdev -l en# -a alias4=<ipv4_alias>,<netmask>


k. Verify that the network is working.
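One quick way to check, with the gateway address as a placeholder for the value recorded in step 2:

    netstat -in              # interfaces up with the right IPs and aliases
    netstat -rn              # default route present
    ping -c 5 <gateway_ip>   # basic reachability off the LPAR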


11. If the LPAR was running AIO (data collected in step 2), verify it is running (smitty aio)

12. Check for any other customizations which may have been made on this LPAR

13. Vary on the volume groups, using the “lspv” data collected in step 2 to identify by PVID one hdisk in each volume group. Run for each volume group (a worked example follows the list):

a. importvg -y <vg_name> hdisk#   - imports and varies on all hdisks in the volume group

b. varyonvg <vg_name>   - only if the volume group did not vary on automatically

c. mount all   - verify the mounts are good
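A concrete pass might look like this; the PVID, volume group and hdisk names are illustrative:

    lspv | grep 00c4a12be8f09312   # hypothetical PVID taken from the step 2 lspv output
    importvg -y datavg hdisk4      # imports and varies on every disk in datavg
    mount all                      # mounts all filesystems marked mount=true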

14. Verify the paging space is configured appropriately:

a. lsps -a   - look for Active and Auto set to yes

b. chps -ay pagingXX   - run for each paging space; sets Auto

c. swapon /dev/pagingXX   - run for each paging space; sets Active


15. Verify the LPAR is running the 64-bit kernel:

a. bootinfo -K   - if it returns 64, you are good

b. ln -sf /usr/lib/boot/unix_64 /unix   - if it returns 32, change to run 64-bit

c. ln -sf /usr/lib/boot/unix_64 /usr/lib/boot/unix

d. bosboot -ak /usr/lib/boot/unix_64   - then reboot for the new kernel to take effect


16. If the LPAR has PowerPath:

a. Run “powermt config”   - creates the powerpath0 device

b. Run “pprootdev on”   - sets PowerPath control of the boot disk

c. If CLARiiON, make configuration changes to enable SP failover:


chdev -l powerpath0 -Pa QueueDepthAdj=1

chdev -l fcsX -Pa num_cmd_elems=2048   - for each fibre adapter

chdev -l fscsiX -Pa fc_err_recov=fast_fail   - for each fibre adapter

d. Halt the LPAR

e. Activate the Normal profile   - if Sym/DMX, verify two HBAs are in the profile

f. If CLARiiON attached, have Storage add a zone to the 2nd SP

i. Run cfgmgr   - configures the 2nd set of disks


g. Run “pprootdev fix”   - puts the rootdisk PVIDs back on the hdisks

h. lspv | grep rootvg   - get the boot disk hdisk#s

i. bootlist -m normal -o hdisk# hdisk#   - set the boot list with both hdisks


17. From the HMC, remove the LPAR profile from the old frame

18. Pull the cables from the old LPAR (Ethernet and fibre) and deactivate the patch panel ports

19. Update the documentation: Server Master, the AIX Hardware spreadsheet, and the Patch Panel spreadsheet

20. Return the old boot disk to Storage.