The steps below were performed on Oracle 11.2.0.2 running on OUL 5.
Step 1 - Create the new disk on your NAS and present this to the node.
Step 2 - Issue a rescan of the SCSI hosts so the new LUN is detected. This should be done as user root:
[root@node01 ~]# for i in `ls -1 /sys/class/scsi_host`
> do
> echo "- - -" > /sys/class/scsi_host/${i}/scan
> done
Now you should see the new device when you perform a 'multipath -ll':
[root@node01 ~]# multipath -ll
…
420003ad00000000250056500002de8 dm-20 3PARdata,VV
[size=150G][features=0][hwhandler=0][rw]
\_ 0:0:0:205 sdak 66:64  [active][ready]
\_ 0:0:1:205 sdal 66:80  [active][ready]
\_ 1:0:0:205 sdam 66:96  [active][ready]
\_ 1:0:1:205 sdan 66:112 [active][ready]
…
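If you want to capture the new device's wwid in a script instead of reading it off the screen, a small awk filter over the `multipath -ll` output will do. A minimal sketch, run here against a sample map line (the wwid and `dm-20` name are the ones from the listing above); on a live node you would pipe the real command output instead:

```shell
# Sample 'multipath -ll' map line; on a live system you would use:
#   multipath -ll | awk '/dm-20/ {print $1}'
sample='420003ad00000000250056500002de8 dm-20 3PARdata,VV'

# The wwid is the first field of the line that names the dm device.
wwid=$(echo "$sample" | awk '/dm-20/ {print $1}')
echo "$wwid"
```

The captured wwid can then be pasted straight into the multipath.conf block in step 3.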
Step 3 - Give the multipath device a more meaningful name. This can easily be accomplished by editing the /etc/multipath.conf file (should be done as user root).
Insert the following block in the multipaths section of /etc/multipath.conf. Enter the correct wwid (which can be found with 'multipath -ll'). The alias should be something meaningful:
multipath {
    wwid                  420003ad00000000250056500002de8
    alias                 ora_data_prd4_lun_05
    path_grouping_policy  failover
}
To make these changes persistent you have to reload the multipathd service:
[root@node01 ~]# service multipathd reload
Reloading multipathd: [ OK ]
Step 4 - Create a partition with fdisk (still as user root):
[root@node01 mapper]# fdisk /dev/mapper/ora_data_prd4_lun_05

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 19581.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): u
Changing display/entry units to sectors

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First sector (63-314572799, default 63): 2048
Last sector or +size or +sizeM or +sizeK (2048-314572799, default 314572799):
Using default value 314572799

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
fdisk will issue a warning and inform you that the changes will only take effect after a reboot. You can activate the changes with kpartx, thus avoiding a reboot. After doing this you should see the partition under /dev/mapper:
[root@node01 mapper]# kpartx -a /dev/mapper/ora_data_prd4_lun_05
[root@node01 mapper]# ll
total 0
…
brw-rw---- 1 root disk 253, 20 Oct 17 10:02 ora_data_prd4_lun_05
brw-rw---- 1 root disk 253, 21 Oct 17 10:05 ora_data_prd4_lun_05p1
…
Step 5 - Create the Oracle ASM Disk
[root@node01 mapper]# oracleasm createdisk ORA_DATA_DISK05 /dev/mapper/ora_data_prd4_lun_05p1
Writing disk header: done
Instantiating disk: done
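Before adding the disk in step 6, you can optionally confirm that the ASM instance sees the new disk as a candidate. A quick check, connected as sysasm (the `ORCL:%` path pattern assumes ASMLib-managed disks, as used in this walkthrough):

```sql
-- A freshly created oracleasm disk typically shows up with
-- header_status = 'PROVISIONED' and no diskgroup assigned yet.
select name, path, header_status
from   v$asm_disk
where  path like 'ORCL:%';
```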
Step 6 - Add the disk to the diskgroup. This can be done via the command-line interface or via the GUI. I'm going to use the GUI for this. First we need to connect as user grid and set our DISPLAY variable. Then start the ASM Configuration Assistant: asmca
[grid@dbnode4 ~]$ export DISPLAY=123.456.123.456:0.0
[grid@dbnode4 ~]$ asmca
Select the diskgroup and right-click. A drop-down will appear; select "Add Disks". Normally you should see the new disk in the list. Select this disk and click OK. The disk will then be added.
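For reference, the command-line equivalent of this asmca step is a single ALTER DISKGROUP statement, run as sysasm. The diskgroup name DATA below is an assumption for illustration; substitute your own diskgroup name:

```sql
-- Hypothetical diskgroup name 'DATA'; the disk label is the one created
-- with oracleasm in step 5. ASMLib disks are addressed as ORCL:<LABEL>.
alter diskgroup DATA add disk 'ORCL:ORA_DATA_DISK05';
```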
ASM will perform a rebalance-operation. This means that ASM will distribute the extents evenly across all ASM-Disks in the diskgroup. (more info on this : http://docs.oracle.com/cd/E11882_01/server.112/e18951/asmdiskgrps.htm#OSTMG137 )
This will take a while and can be monitored with the following statement:
select operation, state, power, sofar, est_work from v$asm_operation;
[grid@node01 ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.3.0 Production on Fri Oct 17 11:16:40 2014

Copyright (c) 1982, 2011, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Automatic Storage Management option

SQL> select operation,state,power,sofar,est_work from v$asm_operation;

OPERA STAT      POWER      SOFAR   EST_WORK
----- ---- ---------- ---------- ----------
REBAL RUN           1     118021     118049
You can alter the POWER of the rebalance operation. The values go from 0 to 11 (0 = no rebalance will take place; 11 = the most powerful rebalance, which consumes more resources but finishes quicker).
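For example, to raise the power level of a rebalance (the diskgroup name DATA is again an assumption; substitute your own):

```sql
-- Restart the rebalance of this diskgroup at a higher power level.
alter diskgroup DATA rebalance power 8;
```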