Wednesday 17 September 2014

Disable NUMA parameter

Reasons to disable NUMA

1) To prevent NUMA from consuming a high percentage of CPU
2) To improve the performance of Linux file system utilities

How to disable the NUMA parameter

1. Issue the following commands in sqlplus to disable NUMA:

alter system set "_enable_NUMA_optimization"=FALSE scope=spfile;
alter system set "_db_block_numa"=1 scope=spfile;

2. Run grep -i numa $ORACLE_HOME/dbs/spfilerelp.ora

You should see the following:

*._db_block_numa=1
*._enable_NUMA_optimization=FALSE

3. Issue shutdown immediate in sqlplus

4. Verify that all Oracle processes are gone after the database is shut down.

5. Issue startup in sqlplus

6. Issue show parameter spfile and verify the relp spfile is present

7. Issue show parameter numa and verify the settings are set in memory

8. Check alert_relp.log for any errors while collections is running
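
The checks in steps 2, 6 and 7 can also be scripted. Below is a minimal sketch that assumes the database is relp and that ORACLE_HOME and ORACLE_SID are already set in the environment; adjust names and paths as needed.

#!/bin/bash
# Minimal sketch: verify the NUMA underscore parameters after the restart.
# Assumes ORACLE_HOME and ORACLE_SID are set for the relp database.

SPFILE="$ORACLE_HOME/dbs/spfilerelp.ora"

echo "== NUMA entries in the spfile =="
grep -ia numa "$SPFILE"

echo "== spfile and NUMA settings currently in memory =="
sqlplus -S / as sysdba <<'EOF'
set linesize 200 pagesize 100
show parameter spfile
show parameter numa
EOF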

Here are some 'My Oracle Support' documents you can reference for more detail:
 

Oracle NUMA usage recommendation [ID 759565.1]
Disable NUMA on database servers to improve performance of Linux file system utilities [ID 1053332.1]
High CPU Usage when NUMA enabled [ID 953733.1]

Deleting a Node from 11gR2 RAC


In this article, we will see the steps for deleting a node from an 11gR2 RAC cluster.

To delete a node, we first remove the database instance, then the Oracle home, and finally the Grid Infrastructure home.

The cluster here is a four-node Oracle 11gR2 RAC with nodes rac1, rac2, rac3 and rac4 (database racdb). We will remove the fourth node, rac4, from the existing cluster.

===================================================================================== 

Perform the below steps to remove a node from Oracle 11gR2 RAC.

1) Back up the OCR manually.

- Log in as root on rac1.
# /u01/crs/products/11.2.0/crs/bin/ocrconfig -manualbackup

# /u01/crs/products/11.2.0/crs/bin/ocrconfig -showbackup

===================================================================================== 

2) Remove the Database Instance.

- To remove the database instance, run the DBCA utility from node rac1 as the oracle user.
$ dbca

- WELCOME Screen

The option “Oracle RAC database” is by default selected. Click on Next to continue.

 

- OPERATIONS Screen

Select “Instance Management” and Click on Next to continue.

 

- INSTANCE MANAGEMENT Screen

Select “Delete an Instance” and Click on Next to continue.

 

- LIST OF CLUSTER DATABASES Screen

Enter credentials for sys user or any other user having sysdba privilege. Click on Next to continue.

 

- LIST OF CLUSTER DATABASE INSTANCES Screen

Select the instance that you want to delete. In this case, we have selected the "rac4: racdb4" instance. Click on Finish to start the deletion process. You can observe the deletion process as shown below.

[Screenshot: instance deletion progress]

 

- Check whether the redo log thread of the deleted instance has been removed by querying the v$log view.
SQL> select GROUP#, THREAD#, MEMBERS, STATUS from v$log;

If the thread is still enabled, disable it with the statement below.
SQL> alter database disable thread 4;
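
- To double-check, the thread and redo log status can be queried together. A minimal sketch, run from a surviving instance (for example rac1) with the environment already set:

# Minimal sketch: confirm redo thread 4 is no longer enabled after the
# instance has been deleted. Run from a surviving instance such as rac1.
sqlplus -S / as sysdba <<'EOF'
set linesize 200
-- Redo threads still known to the database
select thread#, status, enabled from v$thread order by thread#;
-- Redo log groups and the thread they belong to
select group#, thread#, members, status from v$log order by thread#, group#;
EOF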

 

- Check that the instance has been removed from the cluster configuration.

Execute the command below from node rac1.
$ srvctl config database -d racdb

 

Database unique name: racdb
Database name: racdb
Oracle home: /u01/app/oracle/products/11.2.0/db
Oracle user: oracle
Spfile: +DATAGRP1/racdb/spfileracdb.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb1,racdb2,racdb3
Disk Groups: DATAGRP1,CRSGRP,DATAGRP2
Mount point paths:
Services:
Type: RAC
Database is administrator managed

 

- As seen above, only three instances remain. This concludes the first phase of the node removal; a quick cross-check from the command line is sketched below.
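
- A minimal sketch of that cross-check, run as the oracle user from any remaining node:

# Minimal sketch: confirm only racdb1, racdb2 and racdb3 are configured
# and running. Run as the oracle user from any remaining node.
srvctl config database -d racdb | grep -i "Database instances"
srvctl status database -d racdb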

======================================================================================================

 

3) Remove the Listener.

- In 11.2 the default listener runs from the grid home. However, if any listener was explicitly created from the Oracle home, it must be disabled and stopped.

- Check from which home the listener is running.

 

$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home:
  /u01/crs/products/11.2.0/crs on node(s) rac4,rac3,rac2,rac1
End points: TCP:1521 
[oracle@rac4 ~]$ ps -ef | grep lsn
grid       6518           1  0 12:42 ?        00:00:00 /u01/crs/products/11.2.0/crs/bin/tnslsnr LISTENER -inherit
oracle   18504 18237  0 16:48 pts/1    00:00:00 grep lsn

 

- Since the listener is running from the grid home, this step can be skipped. If it were running from the Oracle home, use the following commands to disable and stop it.
$ srvctl disable listener -l listener_name -n name_of_node_to_delete
$ srvctl stop listener -l listener_name -n name_of_node_to_delete

======================================================================================================

 

4) Update the inventory.

- Run the following on the node to be deleted to update the inventory.

 

$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac4}" -local -silent
Starting Oracle Universal Installer...
 
Checking swap space: must be greater than 500 MB.   Actual 4678 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/crs/oraInventory
'UpdateNodeList' was successful.

 

- After executing the above command on the node to be deleted, “inventory.xml” will show only the node to be deleted in the Oracle Home Section.
$ cd /u01/crs/oraInventory/ContentsXML
$ cat inventory.xml

- Other nodes will show all the nodes in the “inventory.xml” file.
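
- A quick way to compare the node lists is to grep the inventory on each node. The sketch below assumes the attribute names HOME NAME and NODE NAME appear in inventory.xml; adjust the patterns if your file differs.

# Minimal sketch: show which nodes are registered for each home in the
# central inventory. Attribute names (HOME NAME, NODE NAME) are assumed;
# check your own inventory.xml if the patterns do not match.
cd /u01/crs/oraInventory/ContentsXML
grep -E 'HOME NAME=|NODE NAME=' inventory.xml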

======================================================================================================

 

5) Detach the Oracle Home.

- For a shared home, detach the Oracle home from the inventory using:
$ ./runInstaller -detachHome ORACLE_HOME=Oracle_home_location

 

- For a non-shared home, run deinstall from the Oracle home:
$ cd $ORACLE_HOME/deinstall

$ ./deinstall -local
[If -local is not specified, this will apply to the entire cluster.]

 

Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/crs/oraInventory/logs/
 
############ ORACLE DEINSTALL & DECONFIG TOOL START ############
 
######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
 
Checking for existence of the Oracle home location /u01/app/oracle/products/11.2.0/db
Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
Oracle Base selected for deinstall is: /u01/app/oracle
Checking for existence of central inventory location /u01/crs/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/crs/products/11.2.0/crs
The following nodes are part of this cluster: rac4
Checking for sufficient temp space availability on node(s) : 'rac4' 
## [END] Install check configuration ## 
Network Configuration check config START 
Network de-configuration trace file location: /u01/crs/oraInventory/logs/netdc_check2012-07-22_05-19-23-PM.log 
Network Configuration check config END
 Database Check Configuration START
 Database de-configuration trace file location: /u01/crs/oraInventory/logs/databasedc_check2012-07-22_05-19-33-PM.log
 Database Check Configuration END
 Enterprise Manager Configuration Assistant START
 EMCA de-configuration trace file location: /u01/crs/oraInventory/logs/emcadc_check2012-07-22_05-19-36-PM.log
 Enterprise Manager Configuration Assistant END
Oracle Configuration Manager check START
OCM check log file location : /u01/crs/oraInventory/logs//ocm_check778.log
Oracle Configuration Manager check END
 ######################### CHECK OPERATION END #########################
 ####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/crs/products/11.2.0/crs
The cluster node(s) on which the Oracle home deinstallation will be performed are:rac4
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rac4', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/oracle/products/11.2.0/db
Inventory Location where the Oracle home registered is: /u01/crs/oraInventory
The option -local will not modify any database configuration for this Oracle home.
 No Enterprise Manager configuration to be updated for any database(s)
No Enterprise Manager ASM targets to update
No Enterprise Manager listener targets to migrate
Checking the config status for CCR
Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/crs/oraInventory/logs/deinstall_deconfig2012-07-22_05-19-11-PM.out'
Any error messages from this session will be written to: '/u01/crs/oraInventory/logs/deinstall_deconfig2012-07-22_05-19-11-PM.err'
 ######################## CLEAN OPERATION START ########################
 Enterprise Manager Configuration Assistant START
EMCA de-configuration trace file location: /u01/crs/oraInventory/logs/emcadc_clean2012-07-22_05-19-36-PM.log
Updating Enterprise Manager ASM targets (if any)
Updating Enterprise Manager listener targets (if any)
Enterprise Manager Configuration Assistant END
Database de-configuration trace file location: /u01/crs/oraInventory/logs/databasedc_clean2012-07-22_05-19-49-PM.log
Network Configuration clean config START
Network de-configuration trace file location: /u01/crs/oraInventory/logs/netdc_clean2012-07-22_05-19-49-PM.log
De-configuring Local Net Service Names configuration file...
Local Net Service Names configuration file de-configured successfully.
De-configuring backup files...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
Oracle Configuration Manager clean START
OCM clean log file location : /u01/crs/oraInventory/logs//ocm_clean778.log
Oracle Configuration Manager clean END
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Detach Oracle home '/u01/app/oracle/products/11.2.0/db' from the central inventory on the local node : Done
Delete directory '/u01/app/oracle/products/11.2.0/db' on the local node : Done
 
The Oracle Base directory '/u01/app/oracle' will not be removed on local node. The directory is not empty.
 
Oracle Universal Installer cleanup was successful.
 
Oracle Universal Installer clean END
 
## [START] Oracle install clean ##
 
Clean install operation removing temporary directory '/tmp/deinstall2012-07-22_05-15-42PM' on node 'rac4'
 
## [END] Oracle install clean ##
 
######################### CLEAN OPERATION END #########################
 
####################### CLEAN OPERATION SUMMARY #######################
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
Successfully detached Oracle home '/u01/app/oracle/products/11.2.0/db' from the central inventory on the local node.
Successfully deleted directory '/u01/app/oracle/products/11.2.0/db' on the local node.
Oracle Universal Installer cleanup was successful.
 
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
 
############# ORACLE DEINSTALL & DECONFIG TOOL END #############

 

- On the remaining nodes, run the following to update the inventory so that it lists only the surviving nodes of the cluster.
$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac1,rac2,rac3}"

- This concludes the second phase of removing the node.

=====================================================================================

6) Remove the GRID Home.

- As the grid user, check whether the node to be removed is pinned or unpinned.

$ olsnodes -t -s
rac1    Active  Unpinned
rac2    Active  Unpinned
rac3    Active  Unpinned
rac4    Active  Unpinned

 

- If the node is pinned, unpin it using the following command.
$ crsctl unpin css -n nodename

If the nodes are already unpinned, there is no need to run the unpin command.
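
- The check and the unpin can also be combined in a small conditional; a minimal sketch (note that crsctl unpin css may need to be run as root, depending on your setup):

# Minimal sketch: unpin rac4 only if olsnodes reports it as Pinned.
# crsctl unpin css may require root privileges on some installations.
if olsnodes -t -s | grep -Eq '^rac4[[:space:]].*Pinned'; then
    crsctl unpin css -n rac4
else
    echo "rac4 is already unpinned - nothing to do"
fi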

 

- Run the deconfig script as the root user.
# cd /u01/crs/products/11.2.0/crs/crs/install
# ./rootcrs.pl -deconfig -force
[If the node being deleted is the last node in the cluster, include the -lastnode option.]

 

Using configuration parameter file: ./crsconfig_params
Network exists: 1/192.168.0.0/255.255.255.0/eth0, type static
VIP exists: /rac1-vip/192.168.0.15/192.168.0.0/255.255.255.0/eth0, hosting node rac1
VIP exists: /rac2-vip/192.168.0.16/192.168.0.0/255.255.255.0/eth0, hosting node rac2
VIP exists: /rac3-vip/192.168.0.17/192.168.0.0/255.255.255.0/eth0, hosting node rac3
VIP exists: /rac4-vip/192.168.0.18/192.168.0.0/255.255.255.0/eth0, hosting node rac4
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rac4'
CRS-2677: Stop of 'ora.registry.acfs' on 'rac4' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac4'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac4'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac4'
CRS-2673: Attempting to stop 'ora.ARCHGRP.dg' on 'rac4'
CRS-2673: Attempting to stop 'ora.CRSGRP.dg' on 'rac4'
CRS-2673: Attempting to stop 'ora.DATAGRP1.dg' on 'rac4'
CRS-2673: Attempting to stop 'ora.DATAGRP2.dg' on 'rac4'
CRS-2673: Attempting to stop 'ora.FBCKGRP.dg' on 'rac4'
CRS-2677: Stop of 'ora.DATAGRP1.dg' on 'rac4' succeeded
CRS-2677: Stop of 'ora.DATAGRP2.dg' on 'rac4' succeeded
CRS-2677: Stop of 'ora.FBCKGRP.dg' on 'rac4' succeeded
CRS-2677: Stop of 'ora.ARCHGRP.dg' on 'rac4' succeeded
CRS-2677: Stop of 'ora.CRSGRP.dg' on 'rac4' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac4'
CRS-2677: Stop of 'ora.asm' on 'rac4' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac4' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac4' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac4'
CRS-2673: Attempting to stop 'ora.crf' on 'rac4'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac4'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac4'
CRS-2673: Attempting to stop 'ora.asm' on 'rac4'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac4'
CRS-2677: Stop of 'ora.evmd' on 'rac4' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac4' succeeded
CRS-2677: Stop of 'ora.crf' on 'rac4' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac4' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac4' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac4' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac4'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac4' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac4'
CRS-2677: Stop of 'ora.cssd' on 'rac4' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac4'
CRS-2677: Stop of 'ora.gipcd' on 'rac4' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac4'
CRS-2677: Stop of 'ora.gpnpd' on 'rac4' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac4' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node

 

- From a node that is not being deleted, run the following command as root, specifying the node to be deleted.
# /u01/crs/products/11.2.0/crs/bin/crsctl delete node -n rac4
CRS-4661: Node rac4 successfully deleted.
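
- At this point rac4 should no longer appear in the cluster node list. A minimal verification sketch, run from a remaining node as the grid user:

# Minimal sketch: confirm rac4 is gone from the cluster node list.
olsnodes -t -s
if olsnodes -t -s | grep -q '^rac4'; then
    echo "rac4 is still listed in the cluster"
else
    echo "rac4 has been removed from the cluster node list"
fi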

 

- On the node that is being deleted, update the node list.

 

$ cd /u01/crs/products/11.2.0/crs/oui/bin
$ ./runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME "CLUSTER_NODES={rac4}" -silent -local CRS=TRUE
Starting Oracle Universal Installer...
 
Checking swap space: must be greater than 500 MB.   Actual 5087 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/crs/oraInventory
'UpdateNodeList' was successful.

 

After executing the above command on the node to be deleted, “inventory.xml” will show only the node to be deleted in the GRID Home Section.
$ cd /u01/crs/oraInventory/ContentsXML
$ cat inventory.xml

Other nodes will show all the nodes in the “inventory.xml” file.

 

- Now detach the Grid home as follows.
$ cd $GRID_HOME/deinstall
$ ./deinstall -local

 

Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2012-07-22_06-54-44PM/logs/
 ############ ORACLE DEINSTALL & DECONFIG TOOL START ############
 
######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
 
Checking for existence of the Oracle home location /u01/crs/products/11.2.0/crs
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/crs/grid
Checking for existence of central inventory location /u01/crs/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
The following nodes are part of this cluster: rac4
Checking for sufficient temp space availability on node(s) : 'rac4'
 
## [END] Install check configuration ##
 
Traces log file: /tmp/deinstall2012-07-22_06-54-44PM/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "rac4"[rac4-vip]
> [Press Enter]
 
The following information can be collected by running "/sbin/ifconfig -a" on node "rac4"
Enter the IP netmask of Virtual IP "192.168.0.18" on node "rac4"[255.255.255.0]
> [Press Enter]
 
Enter the network interface name on which the virtual IP address "192.168.0.18" is active
> [Press Enter]
 
Enter an address or the name of the virtual IP[]
> [Press Enter]
 
Network Configuration check config START
 
Network de-configuration trace file location: /tmp/deinstall2012-07-22_06-54-44PM/logs/netdc_check2012-07-22_06-59-26-PM.log
 
Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1]:
 
Network Configuration check config END
 
Asm Check Configuration START
 
ASM de-configuration trace file location: /tmp/deinstall2012-07-22_06-54-44PM/logs/asmcadc_check2012-07-22_07-00-08-PM.log
 
######################### CHECK OPERATION END #########################
 
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The cluster node(s) on which the Oracle home deinstallation will be performed are:rac4
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rac4', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/crs/products/11.2.0/crs
Inventory Location where the Oracle home registered is: /u01/crs/oraInventory
Following RAC listener(s) will be de-configured: LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2012-07-22_06-54-44PM/logs/deinstall_deconfig2012-07-22_06-56-55-PM.out'
Any error messages from this session will be written to: '/tmp/deinstall2012-07-22_06-54-44PM/logs/deinstall_deconfig2012-07-22_06-56-55-PM.err'
 
######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2012-07-22_06-54-44PM/logs/asmcadc_clean2012-07-22_07-00-39-PM.log
ASM Clean Configuration END
 
Network Configuration clean config START
 
Network de-configuration trace file location: /tmp/deinstall2012-07-22_06-54-44PM/logs/netdc_clean2012-07-22_07-00-39-PM.log
 
De-configuring RAC listener(s): LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
 
De-configuring listener: LISTENER
    Stopping listener on node "rac4": LISTENER
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.
 
De-configuring listener: LISTENER_SCAN3
    Stopping listener on node "rac4": LISTENER_SCAN3
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.
 
De-configuring listener: LISTENER_SCAN2
    Stopping listener on node "rac4": LISTENER_SCAN2
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.
 
De-configuring listener: LISTENER_SCAN1
    Stopping listener on node "rac4": LISTENER_SCAN1
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.
 
De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.
 
De-configuring backup files...
Backup files de-configured successfully.
 
The network configuration has been cleaned up successfully.
 
Network Configuration clean config END
 
---------------------------------------->
 
The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.
 
Run the following command as the root user or the administrator on node "rac4".
 
/tmp/deinstall2012-07-22_06-54-44PM/perl/bin/perl -I/tmp/deinstall2012-07-22_06-54-44PM/perl/lib -I/tmp/deinstall2012-07-22_06-54-44PM/crs/install /tmp/deinstall2012-07-22_06-54-44PM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2012-07-22_06-54-44PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
 
Press Enter after you finish running the above commands

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

- Run the above command in a new window as the root user. After you execute it, the output will be similar to the following.

 

Using configuration parameter file: /tmp/deinstall2012-07-22_06-54-44PM/response/deinstall_Ora11g_gridinfrahome1.rsp
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
################################################################
# You must kill processes or reboot the system to properly #
# cleanup the processes started by Oracle clusterware          #
################################################################
ACFS-9313: No ADVM/ACFS installation detected.
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node

Once completed, press Enter in the first shell session.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

 

Remove the directory: /tmp/deinstall2012-07-22_06-54-44PM on node:
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
 
Detach Oracle home '/u01/crs/products/11.2.0/crs' from the central inventory on the local node : Done
 
Delete directory '/u01/crs/products/11.2.0/crs' on the local node : Done
 
Delete directory '/u01/crs/oraInventory' on the local node : Done
 
Delete directory '/u01/crs/grid' on the local node : Done
 
Oracle Universal Installer cleanup was successful.
 
Oracle Universal Installer clean END
 
## [START] Oracle install clean ##
 
Clean install operation removing temporary directory '/tmp/deinstall2012-07-22_06-54-44PM' on node 'rac4'
 
## [END] Oracle install clean ##
 
######################### CLEAN OPERATION END #########################
 
####################### CLEAN OPERATION SUMMARY #######################
Following RAC listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
Oracle Clusterware is stopped and successfully de-configured on node "rac4"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/crs/products/11.2.0/crs' from the central inventory on the local node.
Successfully deleted directory '/u01/crs/products/11.2.0/crs' on the local node.
Successfully deleted directory '/u01/crs/oraInventory' on the local node.
Successfully deleted directory '/u01/crs/grid' on the local node.
Oracle Universal Installer cleanup was successful.
 
Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'rac4' at the end of the session.
 
Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac4' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
 
############# ORACLE DEINSTALL & DECONFIG TOOL END #############

 

- On the remaining nodes, run the following to update the inventory so that it lists only the surviving nodes of the cluster.
$ cd $GRID_HOME/oui/bin
$ ./runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME "CLUSTER_NODES={rac1,rac2,rac3}" CRS=TRUE

 

- Use cluvfy to check whether the node was removed successfully.
$ cluvfy stage -post nodedel -n rac4

 

Performing post-checks for node removal
 
Checking CRS integrity...
 
Clusterware version consistency passed
 
CRS integrity check passed
 
Node removal check passed
 
Post-check for node removal was successful.

 

This concludes the final step of removing the node from the RAC.
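
- As a final sanity check, the commands used earlier in this article can be re-run together from a surviving node (for example rac1); a minimal sketch:

# Minimal sketch: final post-removal checks from a remaining node (e.g. rac1).
# Run olsnodes and cluvfy as the grid user, and srvctl as the oracle user.
olsnodes -t -s                        # rac4 should no longer be listed
srvctl config database -d racdb       # should list racdb1,racdb2,racdb3 only
cluvfy stage -post nodedel -n rac4    # should report that node removal passed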

======================================================================================================