To stop and start Oracle Clusterware (run as root)
[root@racnode1 ~]# crsctl stop crs

[root@racnode1 ~]# crsctl start crs
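
Later 11.2 and 12c releases also accept a -wait flag, which prints startup progress instead of returning immediately:
[root@racnode1 ~]# crsctl start crs -wait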

To stop and start the Oracle Clusterware stack on all nodes (unlike crsctl stop crs, this leaves OHASD running on each node)
[root@racnode1 ~]# crsctl stop cluster -all

[root@racnode1 ~]# crsctl start cluster -all
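
The same command can target individual nodes instead of the whole cluster, for example:
[root@racnode1 ~]# crsctl stop cluster -n racnode2
[root@racnode1 ~]# crsctl start cluster -n racnode2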

To check the current status of the cluster (on the local node):
[grid@racnode1~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

Check the current status of CRS
[grid@racnode1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

Display the status of all cluster resources
[grid@racnode1 ~]$ crsctl stat res -t
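
The listing can be narrowed with a -w filter; for example, to show only database resources (the filter uses the standard crsctl resource attribute syntax):
[grid@racnode1 ~]$ crsctl stat res -t -w "TYPE = ora.database.type"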

Check the version of Oracle Clusterware:
[grid@racnode1 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [racnode1] is [12.1.0.2.0]
[grid@racnode1 ~]$
[grid@racnode1 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]
[grid@racnode1 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.2.0]

Check status of OHASD (Oracle High Availability Services) daemon
[grid@racnode1 ~]$ crsctl check has
CRS-4638: Oracle High Availability Services is online

Forcefully delete a resource (supply the resource name; Oracle-managed ora.* resources should normally be removed with srvctl instead):
[grid@racnode1 ~]$ crsctl delete resource resource_name -f
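
A sketch of a typical sequence (ora.testapp.vip is a hypothetical application resource name): confirm the resource exists, then force-delete it.
[grid@racnode1 ~]$ crsctl stat res ora.testapp.vip
[grid@racnode1 ~]$ crsctl delete resource ora.testapp.vip -f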

Enable and disable automatic startup of the CRS stack on reboot (run as root)
[root@racnode1 ~]# crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.

[root@racnode1 ~]# crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.

List the nodes in the cluster
[grid@racnode1 ~]$ olsnodes
racnode1
racnode2

Print node names with node numbers
[grid@racnode1 ~]$ olsnodes -n
racnode1        1
racnode2        2

Show private interconnect address for the local node
[grid@racnode1 ~]$ olsnodes -l -p
racnode1        10.10.1.11

Show virtual IP address with node name
[grid@racnode1 ~]$ olsnodes -i
racnode1        racnode1-vip
racnode2        racnode2-vip
[grid@racnode1 ~]$ olsnodes -i racnode1
racnode1        racnode1-vip

Display information for the local node
[grid@racnode1 ~]$ olsnodes -l
racnode1

Show node status (active or inactive)
[grid@racnode1 ~]$ olsnodes -s
racnode1        Active
racnode2        Active
[grid@racnode1 ~]$ olsnodes -l -s
racnode1        Active

Show the cluster name
[grid@racnode1 ~]$ olsnodes -c
funoracleapps-scan

Show the global public and cluster_interconnect network interfaces
[grid@racnode1 ~]$ oifcfg getif
eth0  192.168.56.0  global  public
eth1  10.10.1.0  global  cluster_interconnect
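
Interfaces are registered and removed with setif and delif; a sketch that registers a second interconnect and removes it again (eth2 and the 10.10.2.0 subnet are hypothetical):
[grid@racnode1 ~]$ oifcfg setif -global eth2/10.10.2.0:cluster_interconnect
[grid@racnode1 ~]$ oifcfg delif -global eth2/10.10.2.0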

Show the databases registered in the OCR
[grid@racnode1 ~]$ srvctl config database
RACDB

Show the configuration details of the database
[grid@racnode1 ~]$ srvctl config database -d RACDB
Database unique name: RACDB
Database name: RACDB
Oracle home: /home/oracle/product/11.2.0/db_home1
Oracle user: oracle
Spfile: +DATA/RACDB/spfileRACDB.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: RACDB
Database instances: RACDB1,RACDB2
Disk Groups: DATA
Mount point paths:
Services: RACDB
Type: RAC
Database is administrator managed

Change the database management policy from AUTOMATIC to MANUAL
[grid@racnode1 ~]$ srvctl modify database -d RACDB -y MANUAL

Change the startup option of the database from open to mount
[grid@racnode1 ~]$ srvctl modify database -d RACDB -s mount

Start RAC listener
[grid@racnode1 ~]$ srvctl start listener
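
The listener can also be started on a single node, or checked across the cluster:
[grid@racnode1 ~]$ srvctl start listener -n racnode1
[grid@racnode1 ~]$ srvctl status listener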

Show the status of the database
[grid@racnode1 ~]$ srvctl status database -d RACDB
Instance RACDB1 is running on node racnode1
Instance RACDB2 is running on node racnode2

Show the status of services running on the database
[grid@racnode1 ~]$ srvctl status service -d RACDB
Service RACDB is running on instance(s) RACDB1,RACDB2
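
Services can also be relocated between instances of an administrator-managed database; for example, moving the RACDB service from RACDB1 to RACDB2:
[grid@racnode1 ~]$ srvctl relocate service -d RACDB -s RACDB -i RACDB1 -t RACDB2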

Check nodeapps (on all nodes, or a single node with -n)
[grid@racnode1 ~]$ srvctl status nodeapps
VIP racnode1-vip is enabled
VIP racnode1-vip is running on node: racnode1
VIP racnode2-vip is enabled
VIP racnode2-vip is running on node: racnode2
Network is enabled
Network is running on node: racnode1
Network is running on node: racnode2
GSD is enabled
GSD is not running on node: racnode1
GSD is not running on node: racnode2
ONS is enabled
ONS daemon is running on node: racnode1
ONS daemon is running on node: racnode2
[grid@racnode1 ~]$ srvctl status nodeapps -n racnode1
VIP racnode1-vip is enabled
VIP racnode1-vip is running on node: racnode1
Network is enabled
Network is running on node: racnode1
GSD is enabled
GSD is not running on node: racnode1
ONS is enabled
ONS daemon is running on node: racnode1

Start all instances associated with a database
[grid@racnode1 ~]$ srvctl start database -d RACDB

Shut down instances and services
[grid@racnode1 ~]$ srvctl stop database -d RACDB

Other options for starting and stopping the database
Use the -o option to specify startup/shutdown options.
Shut down the database with the immediate option – srvctl stop database -d RACDB -o immediate
Start all instances with the force option – srvctl start database -d RACDB -o force
Shut down a single instance – srvctl stop instance -d RACDB -i RACDB1
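
The -o values map to the corresponding SQL*Plus startup/shutdown modes and can also be combined with srvctl stop instance; for example, an immediate shutdown of one instance:
[grid@racnode1 ~]$ srvctl stop instance -d RACDB -i RACDB1 -o immediate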

Start or stop the ASM instance on the racnode1 cluster node
[grid@racnode1 ~]$ srvctl start asm -n racnode1
[grid@racnode1 ~]$ srvctl stop asm -n racnode1
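
Before stopping ASM it is worth checking its status, either cluster-wide or per node:
[grid@racnode1 ~]$ srvctl status asm
[grid@racnode1 ~]$ srvctl status asm -n racnode1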

Display the current configuration of the SCAN VIPs
[grid@racnode1 ~]$ srvctl config scan
SCAN name: funoracleapps-scan, Network: 1/192.168.56.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /funoracleapps-scan/192.168.56.121
SCAN VIP name: scan2, IP: /funoracleapps-scan/192.168.56.122
SCAN VIP name: scan3, IP: /funoracleapps-scan/192.168.56.123

Refresh the SCAN VIPs with new IP addresses from DNS
[grid@racnode1 ~]$ srvctl modify scan -n funoracleapps-scan

Stop or start the SCAN listeners and SCAN VIP resources
[grid@racnode1 ~]$ srvctl stop scan_listener
[grid@racnode1 ~]$ srvctl start scan_listener
[grid@racnode1 ~]$ srvctl stop scan
[grid@racnode1 ~]$ srvctl start scan
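
A SCAN listener can also be relocated to another node by its ordinal number; for example, moving scan3 (shown below running on racnode2) over to racnode1:
[grid@racnode1 ~]$ srvctl relocate scan_listener -i 3 -n racnode1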

Show the status of SCAN VIPs and SCAN listeners
[grid@racnode1 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node racnode1
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node racnode2
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node racnode2

[grid@racnode1 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node racnode1
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node racnode2
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node racnode2

Perform a quick health check of the OCR
[grid@racnode1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
             Version                  :          3
             Total space (kbytes)     :     262120
             Used space (kbytes)      :       3304
             Available space (kbytes) :     258816
             ID                       : 1555543155
             Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
             Device/File Name         :       +OCR
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

             Cluster registry integrity check succeeded

             Logical corruption check bypassed due to non-privileged user

Dump the contents of the OCR into an XML file
[grid@racnode1 ~]$ ocrdump testdump.xml -xml
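
To inspect the OCR without writing a file, ocrdump can also write to standard output (piping to head simply limits the display):
[grid@racnode1 ~]$ ocrdump -stdout -xml | head -40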

Add or relocate the OCR mirror to a specified disk group (run as root; pre-11.2 syntax first, then the 11.2 onward form)
[root@racnode1 ~]# ocrconfig -replace ocrmirror '+TESTDG'
[root@racnode1 ~]# ocrconfig -replace +CURRENTOCRDG -replacement +NEWOCRDG

Relocate an existing OCR file (run as root; pre-11.2 syntax)
[root@racnode1 ~]# ocrconfig -replace ocr '+TESTDG'

Add a mirror disk group for the OCR (11.2 onward; run as root)
[root@racnode1 ~]# ocrconfig -add +TESTDG

Remove an OCR mirror (11.2 onward; run as root)
[root@racnode1 ~]# ocrconfig -delete +TESTDG

Remove the OCR or the OCR mirror (pre-11.2 syntax: running -replace without a new location drops that copy; run as root)
[root@racnode1 ~]# ocrconfig -replace ocr

[root@racnode1 ~]# ocrconfig -replace ocrmirror

List OCR backups
[grid@racnode1 ~]$ ocrconfig -showbackup
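
A backup from this list can be restored with ocrconfig -restore, run as root with Clusterware stopped on all nodes; the path below is illustrative and should be taken from the -showbackup output:
[root@racnode1 ~]# ocrconfig -restore /backups/ocr/backup00.ocr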

Perform a manual OCR backup
[root@racnode1 ~]# ocrconfig -manualbackup

Change OCR autobackup directory
[root@racnode1 ~]# ocrconfig -backuploc /backups/ocr
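
The backup location and existing backups can then be confirmed, filtered by automatic or manual backups:
[root@racnode1 ~]# ocrconfig -showbackup auto
[root@racnode1 ~]# ocrconfig -showbackup manual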

Verify the integrity of the OCR across all cluster nodes
[grid@racnode1 ~]$ cluvfy comp ocr -n all -verbose
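
Beyond the OCR component check, cluvfy can run a broader post-installation verification across all nodes:
[grid@racnode1 ~]$ cluvfy stage -post crsinst -n all -verbose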