Oracle 11gR2: Stretched Cluster


Warning: the following article presents stretched cluster functionality using two VMware ESXi servers.

This is NOT an Oracle-supported configuration. Do NOT implement this solution in a production environment.

 

For OpenFiler or DNS setup, this link might be helpful: here

 

(Article in progress…)

 

Architecture design

2 ESXi servers

ESXi 1

  • One DNS server
  • One iSCSI target server (OpenFiler 2.9)
  • 4 nodes (node1, node2, node3, node4)

ESXi 2

  • One DNS server
  • One iSCSI target server (OpenFiler 2.9)
  • 4 nodes (node5, node6, node7, node8)

1 Windows 7 box with iSCSI target capabilities (for the third voting file)

 

Here is the graphical representation of the cluster

 

 

Hosts configuration

Hardware and OS

DNS servers

  • Oracle Enterprise Linux 5.6 x64
  • 1 VCPU
  • 640 MB RAM
  • 20 GB HD
  • 1 NIC (192.168.1.x) public

Storage servers

  • OpenFiler 2.9 x64
  • 2 VCPU
  • 2 GB RAM
  • 20 GB HD
  • Shared storage (see later on)
  • 2 NICs: (192.168.1.x) public, (192.168.102.x) storage with jumbo frames

Nodes

  • Oracle Enterprise Linux 6.4 x64
  • 4 VCPU
  • 5 GB RAM
  • 50 GB HD
  • Shared storage configuration (see later)
  • 3 NICs: (192.168.1.x) public, (192.168.101.x) private with jumbo frames, (192.168.102.x) storage with jumbo frames

 

Network configuration

DNS1: 192.168.1.5 (public network)

DNS2: 192.168.1.6 (public network)

Storage1

  • 192.168.1.10 (public network)
  • 192.168.102.5 (storage network) enable jumbo frame MTU 9000

Storage2

  • 192.168.1.11 (public network)
  • 192.168.102.6 (storage network) enable jumbo frame MTU 9000

Node1

  • 192.168.1.201 (public network)
  • 192.168.101.201 (private network) enable jumbo frame MTU 9000
  • 192.168.102.201 (storage network) enable jumbo frame MTU 9000

Node2

  • 192.168.1.202 (public network)
  • 192.168.101.202 (private network) enable jumbo frame MTU 9000
  • 192.168.102.202 (storage network) enable jumbo frame MTU 9000

Node3

  • 192.168.1.203 (public network)
  • 192.168.101.203 (private network) enable jumbo frame MTU 9000
  • 192.168.102.203 (storage network) enable jumbo frame MTU 9000

Node4

  • 192.168.1.204 (public network)
  • 192.168.101.204 (private network) enable jumbo frame MTU 9000
  • 192.168.102.204 (storage network) enable jumbo frame MTU 9000

Node5

  • 192.168.1.205 (public network)
  • 192.168.101.205 (private network) enable jumbo frame MTU 9000
  • 192.168.102.205 (storage network) enable jumbo frame MTU 9000

Node6

  • 192.168.1.206 (public network)
  • 192.168.101.206 (private network) enable jumbo frame MTU 9000
  • 192.168.102.206 (storage network) enable jumbo frame MTU 9000

Node7

  • 192.168.1.207 (public network)
  • 192.168.101.207 (private network) enable jumbo frame MTU 9000
  • 192.168.102.207 (storage network) enable jumbo frame MTU 9000

Node8

  • 192.168.1.208 (public network)
  • 192.168.101.208 (private network) enable jumbo frame MTU 9000
  • 192.168.102.208 (storage network) enable jumbo frame MTU 9000
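
Jumbo frames are enabled per interface on OEL; below is a minimal sketch for the private interconnect of node1, assuming the private NIC is eth1 (the device name is an assumption, adjust it and repeat for the storage NIC on every host):

# /etc/sysconfig/network-scripts/ifcfg-eth1 (node1 private interconnect)
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.101.201
NETMASK=255.255.255.0
MTU=9000
ONBOOT=yes

# apply and verify
service network restart
ip link show eth1 | grep mtu
# end-to-end check: an 8972-byte payload plus headers must pass unfragmented
ping -M do -s 8972 192.168.101.202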

Libraries to install on nodes

  • gcc-c++-4.4.7-3.el6.x86_64.rpm (yum install gcc)
  • compat-libcap1-1.10-1.x86_64.rpm
  • compat-libstdc++-33-3.2.3-69.el6.x86_64.rpm
  • ksh-20100621-19.el6.x86_64.rpm
  • libaio-devel-0.3.107-10.el6.x86_64.rpm
  • libstdc++-devel-4.4.7-3.el6.x86_64.rpm
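
All of these packages can be pulled in with yum in one shot; a minimal sketch, assuming a configured OEL 6 repository (public-yum or ULN):

yum install -y gcc gcc-c++ compat-libcap1 compat-libstdc++-33 ksh libaio-devel libstdc++-devel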

 

SSH configuration (will be done during grid infrastructure installation)

DNS Configuration

Named configuration: /var/named/chroot/etc/named.conf

// Enterprise Linux BIND Configuration Tool
// Default initial "Caching Only" name server configuration
//
options {
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
};

// Zone for this RAC configuration is mydomain.fr
zone "mydomain.fr" in {
type master;
file "mydomain.fr.zone";
allow-update { none; };
};

// For reverse lookups
zone "1.168.192.in-addr.fr" in {
type master;
file "1.168.192.in-addr.fr.zone";
allow-update { none; };
};

DNS1: cat /var/named/chroot/var/named/1.168.192.in-addr.fr.zone

$TTL 1d
@ IN SOA dns1.mydomain.fr. root.mydomain.fr. (
100 ; se = serial number
8h ; ref = refresh
5m ; ret = update retry
3w ; ex = expiry
3h ; min = minimum
)
IN NS dns1.mydomain.fr.

201 IN PTR node1.mydomain.fr.
211 IN PTR node1-vip.mydomain.fr.
202 IN PTR node2.mydomain.fr.
212 IN PTR node2-vip.mydomain.fr.
203 IN PTR node3.mydomain.fr.
213 IN PTR node3-vip.mydomain.fr.
204 IN PTR node4.mydomain.fr.
214 IN PTR node4-vip.mydomain.fr.
205 IN PTR node5.mydomain.fr.
215 IN PTR node5-vip.mydomain.fr.
206 IN PTR node6.mydomain.fr.
216 IN PTR node6-vip.mydomain.fr.
207 IN PTR node7.mydomain.fr.
217 IN PTR node7-vip.mydomain.fr.
208 IN PTR node8.mydomain.fr.
218 IN PTR node8-vip.mydomain.fr.

201 IN PTR node1.
211 IN PTR node1-vip.
202 IN PTR node2.
212 IN PTR node2-vip.
203 IN PTR node3.
213 IN PTR node3-vip.
204 IN PTR node4.
214 IN PTR node4-vip.
205 IN PTR node5.
215 IN PTR node5-vip.
206 IN PTR node6.
216 IN PTR node6-vip.
207 IN PTR node7.
217 IN PTR node7-vip.
208 IN PTR node8.
218 IN PTR node8-vip.

; RAC Nodes SCAN VIPs in Reverse
73 IN PTR clujko-scan.mydomain.fr.
74 IN PTR clujko-scan.mydomain.fr.
75 IN PTR clujko-scan.mydomain.fr.

73 IN PTR clujko-scan.
74 IN PTR clujko-scan.
75 IN PTR clujko-scan.

DNS1: cat /var/named/chroot/var/named/mydomain.fr.zone

$TTL 1d
mydomain.fr. IN SOA dns1.mydomain.fr. root.mydomain.fr. (
100 ; se = serial number
8h ; ref = refresh
5m ; ret = update retry
3w ; ex = expiry
3h ; min = minimum
)

IN NS dns1.mydomain.fr.

node1 IN A 192.168.1.201
node1-vip IN A 192.168.1.211
node2 IN A 192.168.1.202
node2-vip IN A 192.168.1.212
node3 IN A 192.168.1.203
node3-vip IN A 192.168.1.213
node4 IN A 192.168.1.204
node4-vip IN A 192.168.1.214
node5 IN A 192.168.1.205
node5-vip IN A 192.168.1.215
node6 IN A 192.168.1.206
node6-vip IN A 192.168.1.216
node7 IN A 192.168.1.207
node7-vip IN A 192.168.1.217
node8 IN A 192.168.1.208
node8-vip IN A 192.168.1.218

; 3 SCAN VIPs
clujko-scan IN A 192.168.1.73
clujko-scan IN A 192.168.1.74
clujko-scan IN A 192.168.1.75

DNS2: cat /var/named/chroot/var/named/1.168.192.in-addr.fr.zone

$TTL 1d
@ IN SOA dns2.mydomain.fr. root.mydomain.fr. (
100 ; se = serial number
8h ; ref = refresh
5m ; ret = update retry
3w ; ex = expiry
3h ; min = minimum
)
IN NS dns2.mydomain.fr.

201 IN PTR node1.mydomain.fr.
211 IN PTR node1-vip.mydomain.fr.
202 IN PTR node2.mydomain.fr.
212 IN PTR node2-vip.mydomain.fr.
203 IN PTR node3.mydomain.fr.
213 IN PTR node3-vip.mydomain.fr.
204 IN PTR node4.mydomain.fr.
214 IN PTR node4-vip.mydomain.fr.
205 IN PTR node5.mydomain.fr.
215 IN PTR node5-vip.mydomain.fr.
206 IN PTR node6.mydomain.fr.
216 IN PTR node6-vip.mydomain.fr.
207 IN PTR node7.mydomain.fr.
217 IN PTR node7-vip.mydomain.fr.
208 IN PTR node8.mydomain.fr.
218 IN PTR node8-vip.mydomain.fr.

201 IN PTR node1.
211 IN PTR node1-vip.
202 IN PTR node2.
212 IN PTR node2-vip.
203 IN PTR node3.
213 IN PTR node3-vip.
204 IN PTR node4.
214 IN PTR node4-vip.
205 IN PTR node5.
215 IN PTR node5-vip.
206 IN PTR node6.
216 IN PTR node6-vip.
207 IN PTR node7.
217 IN PTR node7-vip.
208 IN PTR node8.
218 IN PTR node8-vip.
; 3 SCAN VIPs
73 IN PTR clujko-scan.mydomain.fr.
74 IN PTR clujko-scan.mydomain.fr.
75 IN PTR clujko-scan.mydomain.fr.

73 IN PTR clujko-scan.
74 IN PTR clujko-scan.
75 IN PTR clujko-scan.

 

DNS2: cat /var/named/chroot/var/named/mydomain.fr.zone

$TTL 1d
mydomain.fr. IN SOA dns2.mydomain.fr. root.mydomain.fr. (
100 ; se = serial number
8h ; ref = refresh
5m ; ret = update retry
3w ; ex = expiry
3h ; min = minimum
)

IN NS dns2.mydomain.fr.

; RAC Nodes Public name
node1 IN A 192.168.1.201
node1-vip IN A 192.168.1.211
node2 IN A 192.168.1.202
node2-vip IN A 192.168.1.212
node3 IN A 192.168.1.203
node3-vip IN A 192.168.1.213
node4 IN A 192.168.1.204
node4-vip IN A 192.168.1.214
node5 IN A 192.168.1.205
node5-vip IN A 192.168.1.215
node6 IN A 192.168.1.206
node6-vip IN A 192.168.1.216
node7 IN A 192.168.1.207
node7-vip IN A 192.168.1.217
node8 IN A 192.168.1.208
node8-vip IN A 192.168.1.218

; 3 SCAN VIPs
clujko-scan IN A 192.168.1.73
clujko-scan IN A 192.168.1.74
clujko-scan IN A 192.168.1.75

 

DNS1: service named start

DNS2: service named start
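
Every node should then resolve names through both DNS servers. A minimal /etc/resolv.conf sketch, followed by a quick check that the SCAN name returns its three addresses:

cat > /etc/resolv.conf <<EOF
search mydomain.fr
nameserver 192.168.1.5
nameserver 192.168.1.6
EOF

# should return 192.168.1.73, 192.168.1.74 and 192.168.1.75
nslookup clujko-scan.mydomain.fr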

 

Users creation on Node hosts

groupadd -g 1001 oinstall
groupadd -g 1011 asmadmin
groupadd -g 1021 asmdba
groupadd -g 1031 dba

useradd -u 1100 -g oinstall -G asmadmin,asmdba,dba grid
useradd -u 1101 -g oinstall -G dba,asmdba oracle
umask 022

Folders creation on Node hosts

mkdir -p /oracle/gridbase/
mkdir -p /oracle/grid/
mkdir -p /oracle/grid/11.2.0.3
mkdir -p /oracle/db/
mkdir -p /oracle/db/11.2.0.3/
mkdir -p /oracle/oraInventory

chown -R grid:oinstall /oracle/gridbase/
chown -R grid:oinstall /oracle/grid
chown -R grid:oinstall /oracle/oraInventory
chown -R oracle:oinstall /oracle/db/

Shared storage configuration

On the two OpenFiler servers, define 4 targets:

  • GRID (2GB)
  • DATA (360 GB)
  • RECO (60 GB)
  • FRA (640 GB)

On the Windows 7 box, define 1 target:

  • GRID (2GB)

Enable iSCSI service on nodes

chkconfig --level 35 iscsi on
service iscsi restart

Update or create /etc/scsi_id.config on each node

# cat > /etc/scsi_id.config
vendor="ATA",options=-p 0x80
options=-g

Connect the storage to each node

[root@]# iscsiadm -m discovery -t sendtargets -p 192.168.102.5
192.168.102.5:3260,1 iqn.2013-04.mydomain.fr:GRID
192.168.102.5:3260,1 iqn.2013-04.mydomain.fr:FRA
192.168.102.5:3260,1 iqn.2013-04.mydomain.fr:DATA
192.168.102.5:3260,1 iqn.2013-04.mydomain.fr:RECO
[root@]# iscsiadm -m discovery -t sendtargets -p 192.168.102.6
192.168.102.6:3260,1 iqn2.2013-04.mydomain.fr:RECO
192.168.102.6:3260,1 iqn2.2013-04.mydomain.fr:GRID
192.168.102.6:3260,1 iqn2.2013-04.mydomain.fr:FRA
192.168.102.6:3260,1 iqn2.2013-04.mydomain.fr:DATA
[root@]# iscsiadm -m discovery -t sendtargets -p 192.168.102.7
192.168.102.7:3260,1 iqn3.2013-04.mydomain.fr:GRID

Connect storage 1
[root@]#iscsiadm -m node -T iqn.2013-04.mydomain.fr:GRID -p 192.168.102.5 -l
[root@]#iscsiadm -m node -T iqn.2013-04.mydomain.fr:DATA -p 192.168.102.5 -l
[root@]#iscsiadm -m node -T iqn.2013-04.mydomain.fr:FRA -p 192.168.102.5 -l
[root@]#iscsiadm -m node -T iqn.2013-04.mydomain.fr:RECO -p 192.168.102.5 -l
[root@]#iscsiadm -m node -T iqn.2013-04.mydomain.fr:GRID -p 192.168.102.5 --op update -n node.startup -v automatic
[root@]#iscsiadm -m node -T iqn.2013-04.mydomain.fr:DATA -p 192.168.102.5 --op update -n node.startup -v automatic
[root@]#iscsiadm -m node -T iqn.2013-04.mydomain.fr:FRA -p 192.168.102.5 --op update -n node.startup -v automatic
[root@]#iscsiadm -m node -T iqn.2013-04.mydomain.fr:RECO -p 192.168.102.5 --op update -n node.startup -v automatic

Connect storage 2
[root@]#iscsiadm -m node -T iqn2.2013-04.mydomain.fr:GRID -p 192.168.102.6 -l
[root@]#iscsiadm -m node -T iqn2.2013-04.mydomain.fr:DATA -p 192.168.102.6 -l
[root@]#iscsiadm -m node -T iqn2.2013-04.mydomain.fr:FRA -p 192.168.102.6 -l
[root@]#iscsiadm -m node -T iqn2.2013-04.mydomain.fr:RECO -p 192.168.102.6 -l
[root@]#iscsiadm -m node -T iqn2.2013-04.mydomain.fr:GRID -p 192.168.102.6 --op update -n node.startup -v automatic
[root@]#iscsiadm -m node -T iqn2.2013-04.mydomain.fr:DATA -p 192.168.102.6 --op update -n node.startup -v automatic
[root@]#iscsiadm -m node -T iqn2.2013-04.mydomain.fr:FRA -p 192.168.102.6 --op update -n node.startup -v automatic
[root@]#iscsiadm -m node -T iqn2.2013-04.mydomain.fr:RECO -p 192.168.102.6 --op update -n node.startup -v automatic

Connect storage 3 (for the third voting file)
[root@]#iscsiadm -m node -T iqn3.2013-04.mydomain.fr:GRID -p 192.168.102.7 -l
[root@]#iscsiadm -m node -T iqn3.2013-04.mydomain.fr:GRID -p 192.168.102.7 --op update -n node.startup -v automatic

From the first node, create a partition with fdisk on each imported disk.

For each disk, query the SCSI ID: scsi_id -g -u -d /dev/sdx

14f504e46494c455238746c686c682d30466b512d70393770
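
Both steps can be scripted from the first node; a rough sketch, assuming the imported LUNs appeared as /dev/sdb through /dev/sdj (the device letters will differ on your system):

for d in /dev/sd[b-j]; do
  # create one primary partition spanning the whole disk (non-interactive fdisk)
  printf "n\np\n1\n\n\nw\n" | fdisk ${d}
  # print the SCSI ID to be used in the udev rules below
  echo "${d}: $(scsi_id -g -u -d ${d})"
done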

 

Create /etc/udev/rules.d/99-oracle-asmdevices.rules file

KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="14f504e46494c455238746c686c682d30466b512d70393770", NAME+="asmGRID1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="14f504e46494c45524f4e64386a472d497035772d655a557a", NAME+="asmGRID2", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="2d116c4434a7bf37a", NAME+="asmGRID3", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="14f504e46494c4552774e617456732d6a6470652d32766138", NAME+="asmRECO1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="14f504e46494c45526661454e44792d59646c372d666f7261", NAME+="asmRECO2", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="14f504e46494c455259573870414d2d4d4747642d58685254", NAME+="asmDATA1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="14f504e46494c45526478385a77702d346472792d50536154", NAME+="asmDATA2", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="14f504e46494c455246424c7365712d5a346a632d61396653", NAME+="asmFRA1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="14f504e46494c4552496a46536f392d366c75332d58365364", NAME+="asmFRA2", OWNER="grid", GROUP="asmadmin", MODE="0660"

Enable changes

/sbin/start_udev

Check changes

[root@]# ll /dev/asm*
brw-rw---- 1 grid asmadmin 65, 0 Jun 19 13:37 /dev/asmDATA1
brw-rw---- 1 grid asmadmin 8, 96 Jun 19 13:37 /dev/asmDATA2
brw-rw---- 1 grid asmadmin 8, 160 Jun 19 13:37 /dev/asmFRA1
brw-rw---- 1 grid asmadmin 65, 16 Jun 19 13:37 /dev/asmFRA2
brw-rw---- 1 grid asmadmin 8, 144 Jun 19 13:37 /dev/asmGRID1
brw-rw---- 1 grid asmadmin 8, 48 Jun 19 13:37 /dev/asmGRID2
brw-rw---- 1 grid asmadmin 8, 224 Jun 19 12:38 /dev/asmGRID3
brw-rw---- 1 grid asmadmin 8, 80 Jun 19 13:37 /dev/asmRECO1
brw-rw---- 1 grid asmadmin 8, 208 Jun 19 13:37 /dev/asmRECO2

Copy /etc/udev/rules.d/99-oracle-asmdevices.rules file to other nodes.

Run /sbin/start_udev on each node and check the changes.
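
A small loop can push the rules file and reload udev everywhere, assuming root ssh access to the other nodes (otherwise copy the file and run the commands by hand):

for n in node2 node3 node4 node5 node6 node7 node8; do
  scp /etc/udev/rules.d/99-oracle-asmdevices.rules ${n}:/etc/udev/rules.d/
  ssh ${n} /sbin/start_udev
  ssh ${n} "ls -l /dev/asm*"
done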

At this stage you can start the installation of the Grid infrastructure.

This process will not be described here, as the topic is already covered by many articles. Just remember to set up ssh connectivity for the grid and oracle users (the installer can do it for you).
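
If you prefer to prepare the ssh equivalence manually beforehand, here is a rough sketch (run it once as grid and once as oracle on node1; ssh-copy-id prompts for each node's password):

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
for n in node1 node2 node3 node4 node5 node6 node7 node8; do
  ssh-copy-id ${n}
done
# quick check: every node should answer without a password prompt
for n in node1 node2 node3 node4 node5 node6 node7 node8; do ssh ${n} date; done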

During the installation steps, when defining the ASM disk group for the grid infrastructure, select the 3 GRID disks and specify normal redundancy.

Target nodes are:

  • node1
  • node2
  • node3
  • node4
  • node5
  • node6
  • node7
  • node8

At the end, create the DATA, RECO and FRA disk groups (a sqlplus sketch follows the cluster state below). Here is the cluster state at the end of the installation phase:

[root@node1]# crsctl stat res -t
——————————————————————————–
NAME TARGET STATE SERVER STATE_DETAILS
——————————————————————————–
Local Resources
——————————————————————————–
ora.DATA.dg
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node3
ONLINE ONLINE node4
ONLINE ONLINE node5
ONLINE ONLINE node6
ONLINE ONLINE node7
ONLINE ONLINE node8
ora.FRA.dg
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node3
ONLINE ONLINE node4
ONLINE ONLINE node5
ONLINE ONLINE node6
ONLINE ONLINE node7
ONLINE ONLINE node8
ora.GRID.dg
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node3
ONLINE ONLINE node4
ONLINE ONLINE node5
ONLINE ONLINE node6
ONLINE ONLINE node7
ONLINE ONLINE node8
ora.LISTENER.lsnr
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node3
ONLINE ONLINE node4
ONLINE ONLINE node5
ONLINE ONLINE node6
ONLINE ONLINE node7
ONLINE ONLINE node8
ora.RECO.dg
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node3
ONLINE ONLINE node4
ONLINE ONLINE node5
ONLINE ONLINE node6
ONLINE ONLINE node7
ONLINE ONLINE node8
ora.asm
ONLINE ONLINE node1 Started
ONLINE ONLINE node2 Started
ONLINE ONLINE node3 Started
ONLINE ONLINE node4 Started
ONLINE ONLINE node5 Started
ONLINE ONLINE node6 Started
ONLINE ONLINE node7 Started
ONLINE ONLINE node8 Started
ora.gsd
OFFLINE OFFLINE node1
OFFLINE OFFLINE node2
OFFLINE OFFLINE node3
OFFLINE OFFLINE node4
OFFLINE OFFLINE node5
OFFLINE OFFLINE node6
OFFLINE OFFLINE node7
OFFLINE OFFLINE node8
ora.net1.network
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node3
ONLINE ONLINE node4
ONLINE ONLINE node5
ONLINE ONLINE node6
ONLINE ONLINE node7
ONLINE ONLINE node8
ora.ons
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node3
ONLINE ONLINE node4
ONLINE ONLINE node5
ONLINE ONLINE node6
ONLINE ONLINE node7
ONLINE ONLINE node8
——————————————————————————–
Cluster Resources
——————————————————————————–
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE node7
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE node6
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE node8
ora.cvu
1 ONLINE ONLINE node8
ora.node1.vip
1 ONLINE ONLINE node1
ora.node2.vip
1 ONLINE ONLINE node2
ora.node3.vip
1 ONLINE ONLINE node3
ora.node4.vip
1 ONLINE ONLINE node4
ora.node5.vip
1 ONLINE ONLINE node5
ora.node6.vip
1 ONLINE ONLINE node6
ora.node7.vip
1 ONLINE ONLINE node7
ora.node8.vip
1 ONLINE ONLINE node8
ora.oc4j
1 ONLINE ONLINE node5
ora.scan1.vip
1 ONLINE ONLINE node7
ora.scan2.vip
1 ONLINE ONLINE node6
ora.scan3.vip
1 ONLINE ONLINE node8
[root@node1]#
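
The DATA, RECO and FRA disk groups can be created with asmca; purely as an illustration, the equivalent from sqlplus would look like the sketch below, assuming the udev device names defined earlier, one failure group per storage server and a grid environment pointing at the local +ASM instance:

su - grid -c "sqlplus / as sysasm" <<'EOF'
-- one failure group per storage server, so each ASM mirror copy sits on a different site
CREATE DISKGROUP DATA NORMAL REDUNDANCY
  FAILGROUP storage1 DISK '/dev/asmDATA1'
  FAILGROUP storage2 DISK '/dev/asmDATA2';
CREATE DISKGROUP RECO NORMAL REDUNDANCY
  FAILGROUP storage1 DISK '/dev/asmRECO1'
  FAILGROUP storage2 DISK '/dev/asmRECO2';
CREATE DISKGROUP FRA NORMAL REDUNDANCY
  FAILGROUP storage1 DISK '/dev/asmFRA1'
  FAILGROUP storage2 DISK '/dev/asmFRA2';
-- the remaining ASM instances may still need an explicit ALTER DISKGROUP ... MOUNT
EOF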

 

Features

Purpose

This demo infrastructure will host an Apex 4.2.2 demo application: http://jko-licorne.com:7001/apex/f?p=106

During normal operation, 6 nodes will be used to satisfy application needs 🙂

Two server pools will be used (PMU and TST):

-bash-4.1$ srvctl status srvpool -g PMU
Server pool name: PMU
Active servers count: 6
-bash-4.1$ srvctl status srvpool -g TST
Server pool name: TST
Active servers count: 2
-bash-4.1$

After the RDBMS installation and the creation of the PMU and TST databases, here is the database status:

-bash-4.1$ srvctl status srvpool -g PMU
Server pool name: PMU
Active servers count: 6
-bash-4.1$ srvctl status database -d PMU
Instance PMU_2 is running on node node1
Instance PMU_5 is running on node node2
Instance PMU_4 is running on node node3
Instance PMU_1 is running on node node4
Instance PMU_6 is running on node node5
Instance PMU_3 is running on node node6

-bash-4.1$ srvctl status database -d TST
Instance TST_1 is running on node node7
Instance TST_2 is running on node node8

 

What happens if one storage is lost?

Suppose we now lose the storage that serves the GRID3 disk! This LUN hosts the quorum disk on the third storage, which holds the third voting file!

When the disk comes back, a query against v$asm_disk shows that the name attribute is null and the mount status is CLOSED.

In this case, as the whole LUN was down, the disk_repair_time timer is not active and the disk is removed from the disk group immediately.

The situation was recovered by recreating the failure group:

alter diskgroup grid add failgroup grid_0002 disk '/dev/asmGRID3' force;

After the operation, crsctl query css votedisk shows the 3 voting files again.

This behavior is normal: when a disk group is created with asmca, the default compatible.rdbms is 10.1.0.0.0.

In that case, the disk_repair_time attribute is not honored and the whole LUN is therefore dropped immediately.

To avoid this, compatible.rdbms must be set to at least 11.2.0.0.0 (a sketch follows).
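
A minimal sqlplus sketch for raising the attributes on the GRID disk group (the 3.6h repair window is only an illustrative value; compatible.asm must also be at 11.1 or higher for the attribute to exist, which is normally already the case on an 11.2 installation):

su - grid -c "sqlplus / as sysasm" <<'EOF'
-- raise the disk group compatibility so that DISK_REPAIR_TIME is honored
ALTER DISKGROUP GRID SET ATTRIBUTE 'compatible.asm'   = '11.2';
ALTER DISKGROUP GRID SET ATTRIBUTE 'compatible.rdbms' = '11.2';
-- keep an offlined failure group for 3.6 hours before dropping it
ALTER DISKGROUP GRID SET ATTRIBUTE 'disk_repair_time' = '3.6h';
EOF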

Here is the chronology of what happens with the default settings:

SQL> select name,MOUNT_STATUS,STATE from v$asm_disk;

NAME                           MOUNT_S STATE
------------------------------ ------- --------
GRID_0002                      CACHED  NORMAL
FRA_0000                       CACHED  NORMAL
DATA_0000                      CACHED  NORMAL
FRA_0001                       CACHED  NORMAL
RECO_0001                      CACHED  NORMAL
GRID_0001                      CACHED  NORMAL
GRID_0000                      CACHED  NORMAL
DATA_0001                      CACHED  NORMAL
RECO_0000                      CACHED  NORMAL

9 rows selected.

 

[root@node1 ~]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   9c14b3478c0b4fc9bf4eb1473a6381ad (/dev/asmGRID1) [GRID]
 2. ONLINE   023b136537d14fbbbf23af712bc2d71f (/dev/asmGRID2) [GRID]
 3. ONLINE   470f81bd12044fffbf957fc21cc2ad56 (/dev/asmGRID3) [GRID]
Located 3 voting disk(s).


Kill asmGRID3!

In alertnode1.log:

2013-06-03 18:55:13.630

[cssd(3048)]CRS-1615:No I/O has completed after 50% of the maximum interval. Voting file /dev/asmGRID3 will be considered not functional in 99340 milliseconds

2013-06-03 18:55:33.534

[cssd(3048)]CRS-1649:An I/O error occured for voting file: /dev/asmGRID3; details at (:CSSNM00059:) in /oracle/grid/11.2.0.3/log/node1/cssd/ocssd.log.

2013-06-03 18:55:33.534

[cssd(3048)]CRS-1649:An I/O error occured for voting file: /dev/asmGRID3; details at (:CSSNM00060:) in /oracle/grid/11.2.0.3/log/node1/cssd/ocssd.log.

2013-06-03 18:55:33.842

[cssd(3048)]CRS-1626:A Configuration change request completed successfully

2013-06-03 18:55:33.867

[cssd(3048)]CRS-1601:CSSD Reconfiguration complete. Active nodes are node1 node2 node3 node4 node5 node6 node7 node8 .


SQL> select name,MOUNT_STATUS,STATE from v$asm_disk;

NAME                           MOUNT_S STATE
------------------------------ ------- --------
                               CLOSED  NORMAL
GRID_0002                      MISSING NORMAL
FRA_0000                       CACHED  NORMAL
DATA_0000                      CACHED  NORMAL
FRA_0001                       CACHED  NORMAL
RECO_0001                      CACHED  NORMAL
GRID_0001                      CACHED  NORMAL
GRID_0000                      CACHED  NORMAL
DATA_0001                      CACHED  NORMAL
RECO_0000                      CACHED  NORMAL

10 rows selected.

 

[root@node1 ~]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   9c14b3478c0b4fc9bf4eb1473a6381ad (/dev/asmGRID1) [GRID]
 2. ONLINE   023b136537d14fbbbf23af712bc2d71f (/dev/asmGRID2) [GRID]
Located 2 voting disk(s).


Bring asmGRID3 back:

SQL> select name,MOUNT_STATUS,STATE from v$asm_disk;

NAME                           MOUNT_S STATE
------------------------------ ------- --------
                               CLOSED  NORMAL
GRID_0002                      MISSING NORMAL
FRA_0000                       CACHED  NORMAL
DATA_0000                      CACHED  NORMAL
FRA_0001                       CACHED  NORMAL
RECO_0001                      CACHED  NORMAL
GRID_0001                      CACHED  NORMAL
GRID_0000                      CACHED  NORMAL
DATA_0001                      CACHED  NORMAL
RECO_0000                      CACHED  NORMAL

10 rows selected.


SQL> alter diskgroup grid online all;

SQL> select name,MOUNT_STATUS,STATE from v$asm_disk;

NAME                           MOUNT_S STATE
------------------------------ ------- --------
GRID_0002                      CACHED  NORMAL
FRA_0000                       CACHED  NORMAL
DATA_0000                      CACHED  NORMAL
FRA_0001                       CACHED  NORMAL
RECO_0001                      CACHED  NORMAL
GRID_0001                      CACHED  NORMAL
GRID_0000                      CACHED  NORMAL
DATA_0001                      CACHED  NORMAL
RECO_0000                      CACHED  NORMAL

9 rows selected.

 

In alertnode1.log:

2013-06-03 19:03:09.731

[cssd(3048)]CRS-1605:CSSD voting file is online: /dev/asmGRID3; details in /oracle/grid/11.2.0.3/log/node1/cssd/ocssd.log.

2013-06-03 19:03:09.732

[cssd(3048)]CRS-1626:A Configuration change request completed successfully

2013-06-03 19:03:09.758

[cssd(3048)]CRS-1601:CSSD Reconfiguration complete. Active nodes are node1 node2 node3 node4 node5 node6 node7 node8 .

 

[root@node1 ~]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   9c14b3478c0b4fc9bf4eb1473a6381ad (/dev/asmGRID1) [GRID]
 2. ONLINE   023b136537d14fbbbf23af712bc2d71f (/dev/asmGRID2) [GRID]
 3. ONLINE   7c18879ab5c04fa5bf9b818147ecd23d (/dev/asmGRID3) [GRID]
Located 3 voting disk(s).
[root@node1 ~]#

 

Now the LUN is not dropped…

 

 

How the server pool concept works

Two server pools:

PMU importance=3 min=5 max=6 node list=node1,node2,node3,node4,node5,node6,node7,node8

TST importance=2 min=1 max=2 node list=node1,node2,node3,node4,node5,node6,node7,node8

 

The following two commands set the required server pool properties:

-bash-4.1$ srvctl modify srvpool -g PMU -i 3 -l 5 -u 6
-bash-4.1$ srvctl modify srvpool -g TST -i 2 -l 1 -u 2
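
The resulting pool definitions (importance, minimum and maximum sizes) can be checked at any time; output omitted here:

-bash-4.1$ srvctl config srvpool -g PMU
-bash-4.1$ srvctl config srvpool -g TST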

Let’s query db resources to check our two databases:

-bash-4.1$ crsctl status res -t -w "NAME co 'db'"
——————————————————————————–
NAME TARGET STATE SERVER STATE_DETAILS
——————————————————————————–
Cluster Resources
——————————————————————————–
ora.pmu.db
1 ONLINE ONLINE node4 Open
2 ONLINE ONLINE node1 Open
3 ONLINE ONLINE node6 Open
4 ONLINE ONLINE node3 Open
5 ONLINE ONLINE node2 Open
6 ONLINE ONLINE node5 Open
ora.tst.db
1 ONLINE ONLINE node7 Open
2 ONLINE ONLINE node8 Open

 

PMU database details:

-bash-4.1$ crsctl status res ora.pmu.db -v
NAME=ora.pmu.db
TYPE=ora.database.type
LAST_SERVER=node4
STATE=ONLINE on node4
TARGET=ONLINE
CARDINALITY_ID=1
CREATION_SEED=523
RESTART_COUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.pmu.db 1 1
INCARNATION=1
LAST_RESTART=06/22/2013 21:52:28
LAST_STATE_CHANGE=06/22/2013 21:52:28
STATE_DETAILS=Open
INTERNAL_STATE=STABLE

LAST_SERVER=node1
STATE=ONLINE on node1
TARGET=ONLINE
CARDINALITY_ID=2
CREATION_SEED=523
RESTART_COUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.pmu.db 2 1
INCARNATION=1
LAST_RESTART=06/22/2013 21:53:05
LAST_STATE_CHANGE=06/22/2013 21:53:05
STATE_DETAILS=Open
INTERNAL_STATE=STABLE

LAST_SERVER=node6
STATE=ONLINE on node6
TARGET=ONLINE
CARDINALITY_ID=3
CREATION_SEED=523
RESTART_COUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.pmu.db 3 1
INCARNATION=1
LAST_RESTART=06/22/2013 21:53:04
LAST_STATE_CHANGE=06/22/2013 21:53:04
STATE_DETAILS=Open
INTERNAL_STATE=STABLE

LAST_SERVER=node3
STATE=ONLINE on node3
TARGET=ONLINE
CARDINALITY_ID=4
CREATION_SEED=523
RESTART_COUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.pmu.db 4 1
INCARNATION=1
LAST_RESTART=06/22/2013 21:53:05
LAST_STATE_CHANGE=06/22/2013 21:53:05
STATE_DETAILS=Open
INTERNAL_STATE=STABLE

LAST_SERVER=node2
STATE=ONLINE on node2
TARGET=ONLINE
CARDINALITY_ID=5
CREATION_SEED=523
RESTART_COUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.pmu.db 5 1
INCARNATION=1
LAST_RESTART=06/22/2013 21:52:27
LAST_STATE_CHANGE=06/22/2013 21:52:27
STATE_DETAILS=Open
INTERNAL_STATE=STABLE

LAST_SERVER=node5
STATE=ONLINE on node5
TARGET=ONLINE
CARDINALITY_ID=6
CREATION_SEED=523
RESTART_COUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.pmu.db 6 1
INCARNATION=1
LAST_RESTART=06/22/2013 21:53:26
LAST_STATE_CHANGE=06/22/2013 21:53:26
STATE_DETAILS=Open
INTERNAL_STATE=STABLE

TST database details:

-bash-4.1$ crsctl status res ora.tst.db -v
NAME=ora.tst.db
TYPE=ora.database.type
LAST_SERVER=node7
STATE=ONLINE on node7
TARGET=ONLINE
CARDINALITY_ID=1
CREATION_SEED=554
RESTART_COUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.tst.db 1 1
INCARNATION=1
LAST_RESTART=06/24/2013 19:03:19
LAST_STATE_CHANGE=06/24/2013 19:03:19
STATE_DETAILS=Open
INTERNAL_STATE=STABLE

LAST_SERVER=node8
STATE=ONLINE on node8
TARGET=ONLINE
CARDINALITY_ID=2
CREATION_SEED=554
RESTART_COUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.tst.db 2 1
INCARNATION=1
LAST_RESTART=06/24/2013 19:03:19
LAST_STATE_CHANGE=06/24/2013 19:03:19
STATE_DETAILS=Open
INTERNAL_STATE=STABLE

What happens if we lose, for example, node5, which belongs to the PMU server pool?

Here is the result of crsctl status res -t -w "NAME co 'db'":

-bash-4.1$ crsctl status res -t -w "NAME co 'db'"
——————————————————————————–
NAME TARGET STATE SERVER STATE_DETAILS
——————————————————————————–
Cluster Resources
——————————————————————————–
ora.pmu.db
1 ONLINE ONLINE node4 Open
2 ONLINE ONLINE node1 Open
3 ONLINE ONLINE node6 Open
4 ONLINE ONLINE node3 Open
5 ONLINE ONLINE node2 Open
6 ONLINE OFFLINE
ora.tst.db
1 ONLINE ONLINE node7 Open
2 ONLINE ONLINE node8 Open

-bash-4.1$ crsctl status res ora.pmu.db -v
NAME=ora.pmu.db
TYPE=ora.database.type
LAST_SERVER=node4
STATE=ONLINE on node4
TARGET=ONLINE
CARDINALITY_ID=1
CREATION_SEED=523
RESTART_COUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.pmu.db 1 1
INCARNATION=1
LAST_RESTART=06/22/2013 21:52:28
LAST_STATE_CHANGE=06/22/2013 21:52:28
STATE_DETAILS=Open
INTERNAL_STATE=STABLE

LAST_SERVER=node1
STATE=ONLINE on node1
TARGET=ONLINE
CARDINALITY_ID=2
CREATION_SEED=523
RESTART_COUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.pmu.db 2 1
INCARNATION=1
LAST_RESTART=06/22/2013 21:53:05
LAST_STATE_CHANGE=06/22/2013 21:53:05
STATE_DETAILS=Open
INTERNAL_STATE=STABLE

LAST_SERVER=node6
STATE=ONLINE on node6
TARGET=ONLINE
CARDINALITY_ID=3
CREATION_SEED=523
RESTART_COUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.pmu.db 3 1
INCARNATION=1
LAST_RESTART=06/22/2013 21:53:04
LAST_STATE_CHANGE=06/22/2013 21:53:04
STATE_DETAILS=Open
INTERNAL_STATE=STABLE

LAST_SERVER=node3
STATE=ONLINE on node3
TARGET=ONLINE
CARDINALITY_ID=4
CREATION_SEED=523
RESTART_COUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.pmu.db 4 1
INCARNATION=1
LAST_RESTART=06/22/2013 21:53:05
LAST_STATE_CHANGE=06/22/2013 21:53:05
STATE_DETAILS=Open
INTERNAL_STATE=STABLE

LAST_SERVER=node2
STATE=ONLINE on node2
TARGET=ONLINE
CARDINALITY_ID=5
CREATION_SEED=523
RESTART_COUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.pmu.db 5 1
INCARNATION=1
LAST_RESTART=06/22/2013 21:52:27
LAST_STATE_CHANGE=06/22/2013 21:52:27
STATE_DETAILS=Open
INTERNAL_STATE=STABLE

LAST_SERVER=node5
STATE=OFFLINE
TARGET=ONLINE
CARDINALITY_ID=6
CREATION_SEED=523
RESTART_COUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.pmu.db 6 1
INCARNATION=1
LAST_RESTART=06/22/2013 21:53:26
LAST_STATE_CHANGE=06/24/2013 20:44:35
STATE_DETAILS=
INTERNAL_STATE=STABLE

You can see that the state is now OFFLINE for the instance that was on node5 and that STATE_DETAILS is empty.

The interesting part is that nothing else has changed apart from the server we killed. Why? Because the minimum number of servers required by the PMU server pool is 5 and 5 servers are still allocated, which is fine; the TST server pool is also in line with its specification.

Now suppose we also kill node2.

Here is now the result of crsctl status res -t -w "NAME co 'db'":

As the PMU server pool now falls below its minimum value, and its importance is greater than that of the TST server pool, a server from the TST server pool will be given to the PMU server pool.

To do this, Oracle will shut down one TST instance:

-bash-4.1$ crsctl status res -t -w "NAME co 'db'"
——————————————————————————–
NAME TARGET STATE SERVER STATE_DETAILS
——————————————————————————–
Cluster Resources
——————————————————————————–
ora.pmu.db
1 ONLINE ONLINE node4 Open
2 ONLINE ONLINE node1 Open
3 ONLINE ONLINE node6 Open
4 ONLINE ONLINE node3 Open
5 ONLINE OFFLINE
6 ONLINE OFFLINE
ora.tst.db
1 ONLINE ONLINE node7 Open,STOPPING
2 ONLINE ONLINE node8 Open

Then Oracle will start a PMU instance on that server:

-bash-4.1$ crsctl status res -t -w "NAME co 'db'"
——————————————————————————–
NAME TARGET STATE SERVER STATE_DETAILS
——————————————————————————–
Cluster Resources
——————————————————————————–
ora.pmu.db
1 ONLINE ONLINE node4 Open
2 ONLINE ONLINE node1 Open
3 ONLINE ONLINE node6 Open
4 ONLINE ONLINE node3 Open
5 ONLINE OFFLINE STARTING
6 ONLINE OFFLINE
ora.tst.db
1 ONLINE OFFLINE Instance Shutdown
2 ONLINE ONLINE node8 Open

At the end of the process, here is the global database picture:
-bash-4.1$ crsctl status res -t -w "NAME co 'db'"
——————————————————————————–
NAME TARGET STATE SERVER STATE_DETAILS
——————————————————————————–
Cluster Resources
——————————————————————————–
ora.pmu.db
1 ONLINE ONLINE node4 Open
2 ONLINE ONLINE node1 Open
3 ONLINE ONLINE node6 Open
4 ONLINE ONLINE node3 Open
5 ONLINE ONLINE node7 Open
6 ONLINE OFFLINE
ora.tst.db
1 ONLINE OFFLINE Instance Shutdown
2 ONLINE ONLINE node8 Open

Here is the detail for the TST database; note the STATE_DETAILS value for node7:

-bash-4.1$ crsctl status res ora.tst.db -v
NAME=ora.tst.db
TYPE=ora.database.type
LAST_SERVER=node7
STATE=OFFLINE
TARGET=ONLINE
CARDINALITY_ID=1
CREATION_SEED=554
RESTART_COUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.tst.db 1 1
INCARNATION=1
LAST_RESTART=06/24/2013 19:03:19
LAST_STATE_CHANGE=06/24/2013 20:52:11
STATE_DETAILS=Instance Shutdown
INTERNAL_STATE=STABLE

LAST_SERVER=node8
STATE=ONLINE on node8
TARGET=ONLINE
CARDINALITY_ID=2
CREATION_SEED=554
RESTART_COUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.tst.db 2 1
INCARNATION=1
LAST_RESTART=06/24/2013 19:03:19
LAST_STATE_CHANGE=06/24/2013 19:03:19
STATE_DETAILS=Open
INTERNAL_STATE=STABLE

What happens now if we bring back node5 and node2?

-bash-4.1$ crsctl status res -t -w "NAME co 'db'"
——————————————————————————–
NAME TARGET STATE SERVER STATE_DETAILS
——————————————————————————–
Cluster Resources
——————————————————————————–
ora.pmu.db
1 ONLINE ONLINE node4 Open
2 ONLINE ONLINE node1 Open
3 ONLINE ONLINE node6 Open
4 ONLINE ONLINE node3 Open
5 ONLINE ONLINE node7 Open
6 ONLINE ONLINE node5 Open
ora.tst.db
1 ONLINE ONLINE node2 Open
2 ONLINE ONLINE node8 Open

The two databases are back to their initial states, except that the instances are not located on the same servers. This doesn't matter: what counts here is the quality of service in terms of resource capacity to satisfy business needs!

In a policy managed database:

  • No fixed assignment; cardinality-based approach (#nodes)
  • Only database and service cluster resources, no instance resources
  • You cannot directly specify where an instance will run

 

Possible misinterpretation with a policy-managed database

Looking at the previous command result, we have our two databases up and running with the required number of servers. Now let's modify the PMU server pool as follows:

-bash-4.1$ srvctl modify srvpool -g PMU -i 3 -l 5 -u 5
PRCS-1011 : Failed to modify server pool PMU
CRS-2736: The operation requires stopping resource 'ora.pmu.db' on server 'node1'
CRS-2738: Unable to modify server pool 'ora.PMU' as this will affect running resources, but the force option was not specified

We cannot change the server pool because, as a consequence, one instance must be shut down. To do it, we must use the -f (force) option:

-bash-4.1$ srvctl modify srvpool -g PMU -i 3 -l 5 -u 5 -f

Result is:

 

-bash-4.1$ crsctl status res -t -w "NAME co 'db'"
——————————————————————————–
NAME TARGET STATE SERVER STATE_DETAILS
——————————————————————————–
Cluster Resources
——————————————————————————–
ora.pmu.db
1 ONLINE ONLINE node4 Open
2 ONLINE OFFLINE Instance Shutdown
3 ONLINE ONLINE node6 Open
4 ONLINE ONLINE node3 Open
5 ONLINE ONLINE node7 Open
6 ONLINE ONLINE node5 Open
ora.tst.db
1 ONLINE ONLINE node2 Open
2 ONLINE ONLINE node8 Open

And here is the PMU database detail; note the STATE_DETAILS value for node1!

This result could be misinterpreted by a monitoring tool, but it is in fact normal behavior when a server is removed from a server pool.
-bash-4.1$ crsctl status res ora.pmu.db -v
NAME=ora.pmu.db
TYPE=ora.database.type
LAST_SERVER=node4
STATE=ONLINE on node4
TARGET=ONLINE
CARDINALITY_ID=1
CREATION_SEED=555
RESTART_COUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.pmu.db 1 1
INCARNATION=1
LAST_RESTART=06/22/2013 21:52:28
LAST_STATE_CHANGE=06/22/2013 21:52:28
STATE_DETAILS=Open
INTERNAL_STATE=STABLE

LAST_SERVER=node1
STATE=OFFLINE
TARGET=ONLINE
CARDINALITY_ID=2
CREATION_SEED=555
RESTART_COUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.pmu.db 2 1
INCARNATION=1
LAST_RESTART=06/22/2013 21:53:05
LAST_STATE_CHANGE=06/24/2013 21:34:55
STATE_DETAILS=Instance Shutdown
INTERNAL_STATE=STABLE

LAST_SERVER=node6
STATE=ONLINE on node6
TARGET=ONLINE
CARDINALITY_ID=3
CREATION_SEED=555
RESTART_COUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.pmu.db 3 1
INCARNATION=1
LAST_RESTART=06/22/2013 21:53:04
LAST_STATE_CHANGE=06/22/2013 21:53:04
STATE_DETAILS=Open
INTERNAL_STATE=STABLE

LAST_SERVER=node3
STATE=ONLINE on node3
TARGET=ONLINE
CARDINALITY_ID=4
CREATION_SEED=555
RESTART_COUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.pmu.db 4 1
INCARNATION=1
LAST_RESTART=06/22/2013 21:53:05
LAST_STATE_CHANGE=06/22/2013 21:53:05
STATE_DETAILS=Open
INTERNAL_STATE=STABLE

LAST_SERVER=node7
STATE=ONLINE on node7
TARGET=ONLINE
CARDINALITY_ID=5
CREATION_SEED=555
RESTART_COUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.pmu.db 5 1
INCARNATION=2
LAST_RESTART=06/24/2013 20:52:41
LAST_STATE_CHANGE=06/24/2013 20:52:41
STATE_DETAILS=Open
INTERNAL_STATE=STABLE

LAST_SERVER=node5
STATE=ONLINE on node5
TARGET=ONLINE
CARDINALITY_ID=6
CREATION_SEED=555
RESTART_COUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.pmu.db 6 1
INCARNATION=2
LAST_RESTART=06/24/2013 21:16:09
LAST_STATE_CHANGE=06/24/2013 21:16:09
STATE_DETAILS=Open
INTERNAL_STATE=STABLE

 

Services with Server pools

There is no instance assignment when creating a service for a database in a server pool.

It can be created either SINGLETON (one node) or UNIFORM (all nodes).

  • SINGLETON: the service will run on only one node in the server pool (even if the server pool has more than one node)
  • UNIFORM: the service will run on all nodes that belong to the server pool

Note: if you use Quality of Service Management, you must use the UNIFORM service definition.

Test of SINGLETON and UNIFORM services

Creation of a SINGLETON OLTP service for our PMU database in the PMU server pool:

-bash-4.1$ srvctl add service -d PMU -s OLTP -g PMU
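
For comparison, a sketch of a UNIFORM service and of the usual start/status checks (BATCH is only an illustrative service name; the -c flag explicitly selects the service cardinality for a policy-managed database):

-bash-4.1$ srvctl add service -d PMU -s BATCH -g PMU -c UNIFORM
-bash-4.1$ srvctl start service -d PMU -s OLTP
-bash-4.1$ srvctl start service -d PMU -s BATCH
-bash-4.1$ srvctl status service -d PMU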
