Friday 30 October 2015

Administering Oracle Clusterware Components


About Oracle Clusterware
Oracle Real Application Clusters (Oracle RAC) uses Oracle Clusterware as the infrastructure that binds multiple nodes that then operate as a single server. In an Oracle RAC environment, Oracle Clusterware monitors all Oracle components (such as instances and listeners). If a failure occurs, then Oracle Clusterware automatically attempts to restart the failed component and also redirects operations to a surviving component.

About Oracle Cluster Registry

Oracle Cluster Registry (OCR) is a file that contains the cluster node list and instance-to-node mapping information. Each node in a cluster also has a local copy of the OCR, called the Oracle Local Registry (OLR), that is created when Oracle Clusterware is installed. Multiple processes on each node have simultaneous read and write access to the OLR particular to the node on which they reside, whether or not Oracle Clusterware is fully functional. By default, the OLR is located at Grid_home/cdata/$HOSTNAME.olr.
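A quick way to confirm the local registry on a node is healthy is ocrcheck with the -local flag, which reports the OLR location, space usage, and the result of its integrity check (run it as root; the same utility without -local checks the OCR):
                  # ocrcheck -local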

Starting Oracle Clusterware

To start Oracle Clusterware on all nodes in the cluster, execute the following command on any node:
                  crsctl start cluster -all
To start the Oracle Clusterware stack on specific nodes, use the -n option followed by a space-delimited list of node names. To use this command, the Oracle High Availability Services daemon (OHASD) must be running on the specified nodes.
                  crsctl start cluster -n racnode1 racnode4
To start the entire Oracle Clusterware stack on a node, including the OHASD process, run the following command on that node:
                  crsctl start crs
Stopping Oracle Clusterware
To stop Oracle Clusterware on all nodes in the cluster, execute the following command on any node. This command stops the resources managed by Oracle Clusterware, the Oracle ASM instance, and all the Oracle Clusterware processes (except for OHASD and its dependent processes).
                  crsctl stop cluster -all
To stop Oracle Clusterware and Oracle ASM on select nodes, include the -n option followed by a space-delimited list of node names:
                  crsctl stop cluster -n racnode1 racnode3
If you do not include either the -all or the -n option in the stop cluster command, then Oracle Clusterware and its managed resources are stopped only on the node where you execute the command.
                  crsctl stop cluster
To completely shut down the entire Oracle Clusterware stack on a node, including the OHASD process, run the following command on that node:
                  crsctl stop crs
If any resources that Oracle Clusterware manages are still running after executing the crsctl stop crs command, then the command fails. You must then use the -f option to unconditionally stop all resources and stop the Oracle Clusterware stack:
                  crsctl stop crs -f
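To confirm that the stack is completely down on a node, re-run crsctl check crs (with OHASD stopped it reports that it cannot contact the services) or simply look for the daemons at the operating system level; the process names below are the usual 11.2 ones:
                  $ crsctl check crs
                  $ ps -ef | egrep 'ohasd|crsd|ocssd|evmd' | grep -v grep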

Check that all cluster resources are up and running on all nodes:
 [grid]$ crsctl stat res -t
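The output is a table of local and cluster resources with their TARGET and STATE on each server; the heavily truncated excerpt below is illustrative only (resource and host names are examples, not taken from a real system):
  NAME                 TARGET  STATE        SERVER       STATE_DETAILS
  --------------------------------------------------------------------
  ora.DATA.dg
                       ONLINE  ONLINE       racnode1
  ora.orcl.db
                       ONLINE  ONLINE       racnode1     Open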
Enable or disable Oracle Clusterware on a specific node:
# crsctl enable crs
  # crsctl disable crs
Verifying the Status of Oracle Clusterware on a specific node:
  $ crsctl check crs
Check the viability of Cluster Synchronization Services (CSS) across nodes:
                  $ crsctl check cluster
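On a healthy node these checks print one status line per stack component; the output below is representative of an 11.2 system (treat the exact CRS-46xx message numbers as illustrative):
                  $ crsctl check crs
                  CRS-4638: Oracle High Availability Services is online
                  CRS-4537: Cluster Ready Services is online
                  CRS-4529: Cluster Synchronization Services is online
                  CRS-4533: Event Manager is online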

Starting and Stopping ASM Instances by Using srvctl

One node at a time:

$ srvctl start asm -n host01

$ srvctl status asm -n host01


All nodes simultaneously:

$ srvctl stop asm
$ srvctl status asm -n host01
$ srvctl status asm
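Note that when the OCR and voting files are stored in an Oracle ASM disk group, Oracle Clusterware itself depends on the ASM instance, so srvctl may refuse to stop ASM on its own; in that case stop the Clusterware stack on that node instead. A sketch of both approaches, using host01 as an example node:

$ srvctl stop asm -n host01 -o immediate
# crsctl stop cluster -n host01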

Starting and Stopping ASM Instances by Using SQL*Plus

$ export ORACLE_SID=+ASM1
$ export ORACLE_HOME=/u01/app/11.2.0/grid
$ $ORACLE_HOME/bin/sqlplus / AS SYSASM

SQL> startup
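The matching shutdown, still connected AS SYSASM, is sketched below. If database instances are still using this ASM instance, a normal or immediate shutdown will fail until they are stopped first (or you fall back to the crsctl commands above):

SQL> shutdown immediate
SQL> exit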

Starting and Stopping RAC Database Instances with srvctl

Start/stop syntax:
srvctl start instance -d <db_name> -i <inst_name_list> [-o open|mount|nomount]
srvctl stop instance -d <db_name> -i <inst_name_list> [-o normal|transactional|immediate|abort]
srvctl start database -d <db_name> [-o open|mount|nomount]
srvctl stop database -d <db_name> [-o normal|transactional|immediate|abort]

Examples:
$ srvctl start instance -d orcl -i orcl1,orcl2
$ srvctl stop instance -d orcl -i orcl1,orcl2
$ srvctl start database -d orcl -o open
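To verify the result, srvctl can also report the current instance status and the stored configuration for the same database (orcl, as in the examples above):
$ srvctl status database -d orcl
$ srvctl config database -d orcl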


To determine the location of the voting disk
  crsctl query css votedisk
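With the voting files stored in ASM, the query returns one line per voting file showing its state, File Universal Id, path, and disk group; the values below are purely illustrative:
  ##  STATE    File Universal Id                File Name Disk group
  --  -----    -----------------                --------- ---------
   1. ONLINE   8f431fe5a3cd4f9abf5c61298fa1d388 (/dev/asm-disk1) [DATA]
  Located 1 voting disk(s).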
To determine the location of the OCR
  cat /etc/oracle/ocr.loc
Check the ocssd.log for voting disk issues
  grep voting <grid_home>/log/<hostname>/cssd/ocssd.log
Check the integrity of the OCR
  cluvfy comp ocr -n all -verbose
  ocrcheck
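ocrcheck reports the OCR version, space usage, each configured OCR location, and the result of the integrity check; the excerpt below is illustrative rather than literal output:
  Status of Oracle Cluster Registry is as follows :
           Version                  :          3
           Total space (kbytes)     :     262120
           Used space (kbytes)      :       3284
           Available space (kbytes) :     258836
           Device/File Name         :      +DATA
                                      Device/File integrity check succeeded
           Cluster registry integrity check succeeded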


Administering Voting Disks for Oracle Clusterware

If you choose to store Oracle Clusterware files on Oracle ASM and use redundancy for the disk group, then Oracle ASM automatically maintains the ideal number of voting files based on the redundancy of the disk group.

If you use a different form of shared storage for the voting disks, then you can dynamically add and remove voting disks after installing Oracle RAC, using the crsctl add css votedisk and crsctl delete css votedisk commands shown below, where path_to_voting_disk is the fully qualified path of the voting disk.

To move voting disks from shared storage to an Oracle ASM disk group:

1. Use the Oracle ASM Configuration Assistant (ASMCA) to create an Oracle ASM disk group.
2. Verify that the ASM Compatibility attribute (compatible.asm) for the disk group is set to 11.2.0.0 or higher (a sample query follows this list).
3. Use CRSCTL to create a voting disk in the Oracle ASM disk group by specifying the disk group name in the following command:

crsctl replace votedisk +ASM_disk_group
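For step 2, one way to check the attribute is to query the standard V$ASM views from the ASM instance; the sketch below assumes a disk group named DATA:

SQL> SELECT g.name AS diskgroup, a.value AS compatible_asm
       FROM v$asm_diskgroup g
       JOIN v$asm_attribute a ON a.group_number = g.group_number
      WHERE g.name = 'DATA' AND a.name = 'compatible.asm';

If the value is too low, it can be raised (it cannot be lowered) with ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.asm' = '11.2';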


To add or remove one or more voting disks on non-Oracle ASM storage, use the following commands:
  # crsctl add css votedisk path_to_voting_disk
  # crsctl delete css votedisk path_to_voting_disk
To place the voting disks in an Oracle ASM disk group (this replaces all existing voting disks):
  # crsctl replace votedisk +asm_disk_group



To migrate voting disks from non-ASM storage devices to ASM or vice versa, specify the ASM disk group name or path to the non-ASM storage device:
  # crsctl replace votedisk {+asm_disk_group |path_to_voting_disk}
To determine the node and location of the OCR Automatic Backups
  $ ocrconfig -showbackup auto
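Oracle Clusterware takes these backups automatically every four hours; you can also force an on-demand backup and list it, which is worth doing before changing the OCR configuration:
  # ocrconfig -manualbackup
  # ocrconfig -showbackup manual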
Changing the Automatic OCR Backup Location
# ocrconfig -backuploc <path to shared CFS or NFS>
Adding, Replacing, and Repairing OCR Locations
  # ocrconfig -add +DATA2
  # ocrconfig -add /dev/sde1
# ocrconfig -replace /dev/sde1 -replacement +DATA2
To repair OCR configuration, run this command on the node on which you have stopped Oracle Clusterware:
  # ocrconfig -repair -add +DATA1
Removing an Oracle Cluster Registry Location
# ocrconfig -delete +DATA2
# ocrconfig -delete /dev/sde1
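After adding, replacing, repairing, or removing an OCR location, re-check the registry configuration on every node, since /etc/oracle/ocr.loc must stay consistent across the cluster:
  # ocrcheck
  # cat /etc/oracle/ocr.loc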


https://docs.oracle.com/cd/E11882_01/rac.112/e17264/toc.htm
https://docs.oracle.com/cd/E11882_01/rac.112/e17264/adminoc.htm#TDPRC221
