Oracle Database Appliance patching and CPU core reduction

Here is a full log of the upgrade from Patch Bundle 2.3 to 2.4, followed at the end by the CPU core reconfiguration from 12 cores to 2 per node.

The patch number is:

Patch 14752755 – Patch Bundle 2.4.0.0.0

 

This patch is required if you want to reduce the CPU count, because of a bug in the previous version: the appliance serial number is reported empty, and the serial is needed to generate the core configuration key.

On an affected 2.3 system, the following command returns an empty value, but it should not:

/usr/sbin/dmidecode -t1 |grep Serial

Serial Number:
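You can test for this before ordering the key; a minimal sketch (run as root on each node; the expected serial value is site-specific):

# Extract the chassis serial; an empty value means the 2.3 bug is present
SERIAL=$(/usr/sbin/dmidecode -t1 | awk -F': *' '/Serial Number/ {print $2}')
if [ -n "$SERIAL" ]; then
    echo "Serial number found: $SERIAL"
else
    echo "Serial number empty - apply Patch Bundle 2.4 before reducing cores"
fi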

 

Plan on one full day for the operation if you have two ODAs.

Have fun!
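Before launching the patch, the bundle must be unpacked on both nodes; a sketch of that step (the zip file name is an assumption based on the patch number, adjust it to your actual download):

# On BOTH nodes, as root: stage and unpack the 2.4 bundle
cd /opt/oracle/oak/bin
./oakcli unpack -package /tmp/p14752755_24000_Linux-x86-64.zip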


Patching Infrastructure: From Node1 only

[root@stb01 ~]# cd /opt/oracle/oak/bin

 

Check that no EM agent is running; the oraagent/orarootagent/cssdagent processes below belong to Grid Infrastructure and are expected:

[root@stb01 bin]# ps -ef | grep agent

grid 13607 1 0 2012 ? 04:24:28 /u01/app/11.2.0.3/grid/bin/oraagent.bin

root 13670 1 0 2012 ? 10:45:31 /u01/app/11.2.0.3/grid/bin/orarootagent.bin

root 13766 1 0 2012 ? 01:09:37 /u01/app/11.2.0.3/grid/bin/cssdagent

root 20138 1 0 2012 ? 09:51:56 /u01/app/11.2.0.3/grid/bin/orarootagent.bin

grid 20146 1 0 2012 ? 03:12:51 /u01/app/11.2.0.3/grid/bin/oraagent.bin

grid 20262 1 0 2012 ? 00:14:10 /u01/app/11.2.0.3/grid/bin/scriptagent.bin

oracle 20452 1 0 2012 ? 14:20:11 /u01/app/11.2.0.3/grid/bin/oraagent.bin

root 21047 20832 0 13:01 pts/0 00:00:00 grep agent
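Only Grid Infrastructure agents appear here, so it is safe to proceed. A more targeted check, filtering for the EM agent itself, might look like this (a sketch; the emagent/emdctl binary names depend on the EM version installed):

# Look only for Enterprise Manager agent processes; the clusterware's own
# oraagent/orarootagent/cssdagent entries are expected and can be ignored.
ps -ef | grep -E '[e]magent|[e]mdctl' || echo "No EM agent running"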

 

[root@stb01 bin]# ./oakcli update -patch 2.4.0.0.0 --infra

INFO: DB, ASM, Clusterware may be stopped during the patch if required

INFO: Both nodes may get rebooted automatically during the patch if required

Do you want to continue: [Y/N]?: y

INFO: User has confirmed the reboot

INFO: Patch bundle must be unpacked on the second node also before applying this patch

Did you unpack the patch bundle on the second node?: [Y/N]?: y

 

Please enter the 'root' user password:

Please re-enter the 'root' user password:

INFO: Setting up the SSH

……….done

INFO: Running pre-install scripts

……….done

INFO: 2013-03-04 13:02:25: Running pre patch script for 2.4.0.0.0

INFO: 2013-03-04 13:02:25: Completed pre patch script for 2.4.0.0.0


INFO: 2013-03-04 13:02:28: ——————Patching HMP————————-

SUCCESS: 2013-03-04 13:02:59: Successfully upgraded the HMP

 

INFO: 2013-03-04 13:02:59: ———————-Patching OAK———————

SUCCESS: 2013-03-04 13:03:23: Succesfully upgraded OAK

 

INFO: 2013-03-04 13:03:25: —————–Installing / Patching TFA—————–

SUCCESS: 2013-03-04 13:05:26: Successfully updated / installed the TFA

 

INFO: 2013-03-04 13:05:27: ——————Patching OS————————-

INFO: 2013-03-04 13:05:37: Clusterware is running on one or more nodes of the cluster

INFO: 2013-03-04 13:05:37: Attempting to stop clusterware and its resources across the cluster

SUCCESS: 2013-03-04 13:07:40: Successfully stopped the clusterware

 

INFO: 2013-03-04 13:07:40: OS upgrade may take few minutes. Please wait…

SUCCESS: 2013-03-04 13:10:22: Successfully upgraded the OS

 

INFO: 2013-03-04 13:10:28: ———————-Patching IPMI———————

SUCCESS: 2013-03-04 13:10:29: Succesfully upgraded IPMI

 

INFO: 2013-03-04 13:10:36: —————-Patching the Storage——————-

INFO: 2013-03-04 13:10:36: ………………..Patching SSDs……………

INFO: 2013-03-04 13:10:36: Disk : d20 is already running with : ZeusIOPs G3 E125

INFO: 2013-03-04 13:10:37: Disk : d21 is already running with : ZeusIOPs G3 E125

INFO: 2013-03-04 13:10:37: Disk : d22 is already running with : ZeusIOPs G3 E125

INFO: 2013-03-04 13:10:37: Disk : d23 is already running with : ZeusIOPs G3 E125

INFO: 2013-03-04 13:10:37: ………………..Patching shared HDDs……………

INFO: 2013-03-04 13:10:37: Disk : d0 is already running with : HUS1560SCSUN600G A6C0

INFO: 2013-03-04 13:10:37: Disk : d1 is already running with : HUS1560SCSUN600G A6C0

INFO: 2013-03-04 13:10:37: Disk : d2 is already running with : HUS1560SCSUN600G A6C0

INFO: 2013-03-04 13:10:38: Disk : d3 is already running with : HUS1560SCSUN600G A6C0

INFO: 2013-03-04 13:10:38: Disk : d4 is already running with : HUS1560SCSUN600G A6C0

INFO: 2013-03-04 13:10:38: Disk : d5 is already running with : HUS1560SCSUN600G A6C0

INFO: 2013-03-04 13:10:38: Disk : d6 is already running with : HUS1560SCSUN600G A6C0

INFO: 2013-03-04 13:10:39: Disk : d7 is already running with : HUS1560SCSUN600G A6C0

INFO: 2013-03-04 13:10:39: Disk : d8 is already running with : HUS1560SCSUN600G A6C0

INFO: 2013-03-04 13:10:39: Disk : d9 is already running with : HUS1560SCSUN600G A6C0

INFO: 2013-03-04 13:10:39: Disk : d10 is already running with : HUS1560SCSUN600G A6C0

INFO: 2013-03-04 13:10:39: Disk : d11 is already running with : HUS1560SCSUN600G A6C0

INFO: 2013-03-04 13:10:40: Disk : d12 is already running with : HUS1560SCSUN600G A6C0

INFO: 2013-03-04 13:10:40: Disk : d13 is already running with : HUS1560SCSUN600G A6C0

INFO: 2013-03-04 13:10:40: Disk : d14 is already running with : HUS1560SCSUN600G A6C0

INFO: 2013-03-04 13:10:40: Disk : d15 is already running with : HUS1560SCSUN600G A6C0

INFO: 2013-03-04 13:10:41: Disk : d16 is already running with : HUS1560SCSUN600G A6C0

INFO: 2013-03-04 13:10:41: Disk : d17 is already running with : HUS1560SCSUN600G A6C0

INFO: 2013-03-04 13:10:41: Disk : d18 is already running with : HUS1560SCSUN600G A6C0

INFO: 2013-03-04 13:10:41: Disk : d19 is already running with : HUS1560SCSUN600G A6C0

INFO: 2013-03-04 13:10:42: ………………..Patching local HDDs……………

INFO: 2013-03-04 13:10:42: Disk : c0d0 is already running with : WD500BLHXSUN 5G08

INFO: 2013-03-04 13:10:42: Disk : c0d1 is already running with : WD500BLHXSUN 5G08

INFO: 2013-03-04 13:10:42: ………………..Patching Expanders……………

INFO: 2013-03-04 13:10:42: Expander : x0 is already running with : T4 Storage 0342

INFO: 2013-03-04 13:10:42: Expander : x1 is already running with : T4 Storage 0342

INFO: 2013-03-04 13:10:42: ………………..Patching Controllers……………

INFO: 2013-03-04 13:10:42: No-update for the Controller: c0

INFO: 2013-03-04 13:10:42: Controller : c1 is already running with : 0x0072 11.05.02.00

INFO: 2013-03-04 13:10:42: Controller : c2 is already running with : 0x0072 11.05.02.00

INFO: 2013-03-04 13:10:43: ————Finished the storage Patching————


INFO: 2013-03-04 13:10:45: —————–Patching Ilom & Bios—————–

INFO: 2013-03-04 13:10:45: Getting the ILOM Ip address

INFO: 2013-03-04 13:10:45: Updating the Ilom using LAN+ protocol

INFO: 2013-03-04 13:10:46: Updating the ILOM. It takes a while

INFO: 2013-03-04 13:15:13: Verifying the updated Ilom Version, it may take a while if ServiceProcessor is booting

INFO: 2013-03-04 13:15:15: Waiting for the service processor to be up

SUCCESS: 2013-03-04 13:19:01: Successfully updated the ILOM with the firmware 3.0.16.22.a r75629


INFO: Patching the infrastructure on node: stb02 , it may take upto 30 minutes. Please wait

…………done

 

INFO: Infrastructure patching summary on node: 192.168.16.24

SUCCESS: 2013-03-04 13:31:21: Successfully upgraded the HMP

SUCCESS: 2013-03-04 13:31:21: Succesfully updated the OAK

SUCCESS: 2013-03-04 13:31:21: Successfully updated the TFA

SUCCESS: 2013-03-04 13:31:21: Successfully upgraded the OS

SUCCESS: 2013-03-04 13:31:21: Succesfully updated the IPMI

INFO: 2013-03-04 13:31:21: Storage patching summary

SUCCESS: 2013-03-04 13:31:21: No failures during storage upgrade

SUCCESS: 2013-03-04 13:31:21: Successfully updated the ILOM & Bios

 

INFO: Infrastructure patching summary on node: 192.168.16.25

SUCCESS: 2013-03-04 13:31:21: Successfully upgraded the HMP

SUCCESS: 2013-03-04 13:31:21: Succesfully updated the OAK

SUCCESS: 2013-03-04 13:31:21: Successfully upgraded the OS

SUCCESS: 2013-03-04 13:31:21: Succesfully updated the IPMI

INFO: 2013-03-04 13:31:21: Storage patching summary

SUCCESS: 2013-03-04 13:31:21: No failures during storage upgrade

SUCCESS: 2013-03-04 13:31:21: Successfully updated the ILOM & Bios

 

INFO: Running post-install scripts

…………done

INFO: Some of the patched components require node reboot. Rebooting the nodes

INFO: Setting up the SSH

…………done

 

Broadcast message from root (Mon Mar 4 13:36:07 2013):

 

The system is going down for system halt NOW!

The infrastructure patching takes roughly 1h30 to 2h15 in total.
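Once the nodes are back up, it is worth a quick sanity check before moving on to the GI patch; a minimal sketch (paths as on this appliance):

# Confirm the installed bundle version (run on each node)
/opt/oracle/oak/bin/oakcli show version

# Confirm the clusterware is back on both nodes
/u01/app/11.2.0.3/grid/bin/crsctl check cluster -all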


Patching Grid Infrastructure: From Node1 only

[root@stb01 ~]# cd /opt/oracle/oak/bin

[root@stb01 bin]# ./oakcli update -patch 2.4.0.0.0 --gi

 

Please enter the 'root' user password:

Please re-enter the 'root' user password:

 

Please enter the 'grid' user password:

Please re-enter the 'grid' user password:

INFO: Setting up the SSH

……….done

 

……….done

SUCCESS: All nodes in /opt/oracle/oak/temp_clunodes.txt are pingable and alive.

INFO: 2013-03-04 14:34:49: Setting up the ssh for grid user

……….done

SUCCESS: All nodes in /opt/oracle/oak/temp_clunodes.txt are pingable and alive.

INFO: 2013-03-04 14:35:10: Patching the GI home on node stb01

INFO: 2013-03-04 14:35:10: Updating the opatch

INFO: 2013-03-04 14:35:35: Performing the conflict checks

SUCCESS: 2013-03-04 14:35:45: Conflict checks passed for all the homes

INFO: 2013-03-04 14:35:45: Checking if the patch is already applied on any of the homes

INFO: 2013-03-04 14:35:48: No home is already up-to-date

SUCCESS: 2013-03-04 14:35:59: Successfully stopped the dbconsoles

INFO: 2013-03-04 14:36:04: Applying patch on the homes: /u01/app/11.2.0.3/grid

INFO: 2013-03-04 14:36:04: It may take upto 15 mins

SUCCESS: 2013-03-04 14:45:59: Successfully applied the patch on home: /u01/app/11.2.0.3/grid

SUCCESS: 2013-03-04 14:46:04: Successfully started the dbconsoles

INFO: 2013-03-04 14:46:04: Patching the GI home on node stb02

 

……….done

 

INFO: GI patching summary on node: stb01

SUCCESS: 2013-03-04 14:59:57: Successfully applied the patch on home /u01/app/11.2.0.3/grid

 

INFO: GI patching summary on node: stb02

SUCCESS: 2013-03-04 14:59:57: Successfully applied the patch on home /u01/app/11.2.0.3/grid

 

INFO: Running post-install scripts

……….done

INFO: Setting up the SSH

……….done

[root@stb01 bin]#

The GI patching takes about 30 minutes.
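To double-check that the PSU really landed in the grid home, opatch can be queried on each node; a sketch (the patch IDs listed depend on the PSU shipped with 2.4):

# As root, list the patches now present in the GI home via the grid user
su - grid -c '/u01/app/11.2.0.3/grid/OPatch/opatch lsinventory'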


Patching Database homes: From Node1 only

 

[root@stb01 bin]# ./oakcli update -patch 2.4.0.0.0 --database

 

Please enter the 'root' user password:

Please re-enter the 'root' user password:

 

Please enter the 'oracle' user password:

Please re-enter the 'oracle' user password:

INFO: Setting up the SSH

……….done

 

……….done

SUCCESS: All nodes in /opt/oracle/oak/temp_clunodes.txt are pingable and alive.

INFO: 2013-03-04 15:06:10: Getting the possible database homes for patching

INFO: 2013-03-04 15:06:17: Patching 11.2.0.3 Database homes on node stb01

 

Found the following 11.2.0.3 homes possible for patching:

 

HOME_NAME          HOME_LOCATION
---------          -------------
OraDb11203_home1   /u01/app/oracle/product/11.2.0.3/dbhome_1
OraDb11203_home2   /u01/app/oracle/product/11.2.0.3/dbhome_2

 

[Please note that few of the above database homes may be already up-to-date. They will be automatically ignored]

 

Would you like to patch all the above homes: Y | N ? :Y

INFO: 2013-03-04 15:06:27: Setting up ssh for the user oracle

……….done

SUCCESS: All nodes in /opt/oracle/oak/temp_clunodes.txt are pingable and alive.

INFO: 2013-03-04 15:06:47: Updating the opatch

INFO: 2013-03-04 15:07:13: Performing the conflict checks

SUCCESS: 2013-03-04 15:07:34: Conflict checks passed for all the homes

INFO: 2013-03-04 15:07:34: Checking if the patch is already applied on any of the homes

INFO: 2013-03-04 15:07:41: No home is already up-to-date

SUCCESS: 2013-03-04 15:07:46: Successfully stopped the dbconsoles

INFO: 2013-03-04 15:07:51: Applying patch on the homes: /u01/app/oracle/product/11.2.0.3/dbhome_1,/u01/app/oracle/product/11.2.0.3/dbhome_2

INFO: 2013-03-04 15:07:51: It may take upto 30 mins

SUCCESS: 2013-03-04 15:12:46: Successfully applied the patch on home: /u01/app/oracle/product/11.2.0.3/dbhome_1,/u01/app/oracle/product/11.2.0.3/dbhome_2

SUCCESS: 2013-03-04 15:12:46: Successfully started the dbconsoles

INFO: 2013-03-04 15:12:47: Patching 11.2.0.3 Database homes on node stb02

INFO: 2013-03-04 15:18:32: Running the catbundle.sql

INFO: 2013-03-04 15:18:34: Running catbundle.sql on the database STB

INFO: 2013-03-04 15:18:38: Running catbundle.sql on the database TEST

 

……….done

 

INFO: DB patching summary on node: stb01

SUCCESS: 2013-03-04 15:19:00: Successfully applied the patch on home /u01/app/oracle/product/11.2.0.3/dbhome_1,/u01/app/oracle/product/11.2.0.3/dbhome_2

 

INFO: DB patching summary on node: stb02

SUCCESS: 2013-03-04 15:19:00: Successfully applied the patch on home /u01/app/oracle/product/11.2.0.3/dbhome_1,/u01/app/oracle/product/11.2.0.3/dbhome_2

 

INFO: Setting up the SSH

……….done

[root@stb01 bin]#

 

The database patching takes about 30 minutes for the two homes.
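Since oakcli already ran catbundle.sql against each database, you can confirm the registration afterwards; a sketch (run as the oracle user, repeating with ORACLE_SID=TEST; assumes the environment points at the patched home):

# Check that the bundle patch is recorded in the database registry
export ORACLE_SID=STB
sqlplus -s / as sysdba <<'EOF'
set lines 150
select action_time, action, version, comments
from dba_registry_history
order by action_time;
EOF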


Reducing core count: From Node1 only

[root@stb01 ~]# oakcli apply core_config_key /root/Key

INFO: Both nodes get rebooted automatically after applying the license

Do you want to continue: [Y/N]?: y

INFO: User has confirmed the reboot


Please enter the root password:

…………done

 

INFO: Applying core_config_key on '192.168.16.25'

INFO : Running as root: /usr/bin/ssh -l root 192.168.16.25 /tmp/tmp_lic_exec.pl

INFO : Running as root: /usr/bin/ssh -l root 192.168.16.25 /opt/oracle/oak/bin/oakcli enforce core_config_key /tmp/.lic_file

Waiting for the Node '192.168.16.25' to reboot………………………

Node '192.168.16.25' is rebooted

Waiting for the Node '192.168.16.25' to be up before applying the license on the node '192.168.16.24'……………………………………….

INFO: Applying core_config_key on '192.168.16.24'

INFO : Running as root: /usr/bin/ssh -l root 192.168.16.24 /tmp/tmp_lic_exec.pl

INFO : Running as root: /usr/bin/ssh -l root 192.168.16.24 /opt/oracle/oak/bin/oakcli enforce core_config_key /tmp/.lic_file

 

Broadcast message from root (Mon Mar 4 15:55:12 2013):

 

The system is going down for reboot NOW!

 

The core reconfiguration takes about 20 minutes.
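Once both nodes are back, verify that the reduced core count is active; a sketch (assumes the show core_config_key subcommand of this OAK release; note that with hyper-threading each core appears as two logical processors):

# Show the enforced core configuration
/opt/oracle/oak/bin/oakcli show core_config_key

# Cross-check from the OS: 2 cores with hyper-threading show 4 processors
grep -c '^processor' /proc/cpuinfo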
