Common Oracle 11g RAC Administration Commands

  1) Check cluster status:

  [grid@rac02 ~]$ crsctl check cluster

  CRS-4537: Cluster Ready Services is online

  CRS-4529: Cluster Synchronization Services is online

  CRS-4533: Event Manager is online
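  A related check, not part of the original list but standard crsctl syntax, verifies the full stack on the local node, including Oracle High Availability Services:

  [grid@rac02 ~]$ crsctl check crs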

  2) All Oracle instances (database status):

  [grid@rac02 ~]$ srvctl status database -d racdb

  Instance racdb1 is running on node rac01

  Instance racdb2 is running on node rac02

  3) Check the status of a single instance:

  [grid@rac02 ~]$ srvctl status instance -d racdb -i racdb1

  Instance racdb1 is running on node rac01

  4) Node application status:

  [grid@rac02 ~]$ srvctl status nodeapps

  VIP rac01-vip is enabled

  VIP rac01-vip is running on node: rac01

  VIP rac02-vip is enabled

  VIP rac02-vip is running on node: rac02

  Network is enabled

  Network is running on node: rac01

  Network is running on node: rac02

  GSD is disabled

  GSD is not running on node: rac01

  GSD is not running on node: rac02

  ONS is enabled

  ONS daemon is running on node: rac01

  ONS daemon is running on node: rac02

  eONS is enabled

  eONS daemon is running on node: rac01

  eONS daemon is running on node: rac02

  5) List all configured databases:

  [grid@rac02 ~]$ srvctl config database

  racdb

  6) Database configuration:

  [grid@rac02 ~]$ srvctl config database -d racdb -a

  Database unique name: racdb

  Database name: racdb

  Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1

  Oracle user: oracle

  Spfile: +RACDB_DATA/racdb/spfileracdb.ora

  Domain: xzxj.edu.cn

  Start options: open

  Stop options: immediate

  Database role: PRIMARY

  Management policy: AUTOMATIC

  Server pools: racdb

  Database instances: racdb1,racdb2

  Disk Groups: RACDB_DATA,FRA

  Services:

  Database is enabled

  Database is administrator managed

  7) ASM status and ASM configuration:

  [grid@rac02 ~]$ srvctl status asm

  ASM is running on rac01,rac02

  [grid@rac02 ~]$ srvctl config asm -a

  ASM home: /u01/app/11.2.0/grid

  ASM listener: LISTENER

  ASM is enabled.

  8) TNS listener status and configuration:

  [grid@rac02 ~]$ srvctl status listener

  Listener LISTENER is enabled

  Listener LISTENER is running on node(s): rac01,rac02

  [grid@rac02 ~]$ srvctl config listener -a

  Name: LISTENER

  Network: 1, Owner: grid

  Home:

  /u01/app/11.2.0/grid on node(s) rac02,rac01

  End points: TCP:1521

  9) SCAN status and configuration:

  [grid@rac02 ~]$ srvctl status scan

  SCAN VIP scan1 is enabled

  SCAN VIP scan1 is running on node rac02

  [grid@rac02 ~]$ srvctl config scan

  SCAN name: rac-scan.xzxj.edu.cn, Network: 1/192.168.1.0/255.255.255.0/eth0

  SCAN VIP name: scan1, IP: /rac-scan.xzxj.edu.cn/192.168.1.55

  10) VIP status and configuration on each node:

  [grid@rac02 ~]$ srvctl status vip -n rac01

  VIP rac01-vip is enabled

  VIP rac01-vip is running on node: rac01

  [grid@rac02 ~]$ srvctl status vip -n rac02

  VIP rac02-vip is enabled

  VIP rac02-vip is running on node: rac02

  [grid@rac02 ~]$ srvctl config vip -n rac01

  VIP exists.:rac01

  VIP exists.: /rac01-vip/192.168.1.53/255.255.255.0/eth0

  [grid@rac02 ~]$ srvctl config vip -n rac02

  VIP exists.:rac02

  VIP exists.: /rac02-vip/192.168.1.54/255.255.255.0/eth0

  11) Node application configuration (VIP, GSD, ONS, listener):

  [grid@rac02 ~]$ srvctl config nodeapps -a -g -s -l

  -l option has been deprecated and will be ignored.

  VIP exists.:rac01

  VIP exists.: /rac01-vip/192.168.1.53/255.255.255.0/eth0

  VIP exists.:rac02

  VIP exists.: /rac02-vip/192.168.1.54/255.255.255.0/eth0

  GSD exists.

  ONS daemon exists. Local port 6100, remote port 6200

  Name: LISTENER

  Network: 1, Owner: grid

  Home:

  /u01/app/11.2.0/grid on node(s) rac02,rac01

  End points: TCP:1521

  12) Verify clock synchronization across all cluster nodes:

  [grid@rac02 ~]$ cluvfy comp clocksync -verbose

  Verifying Clock Synchronization across the cluster nodes

  Checking if Clusterware is installed on all nodes...

  Check of Clusterware install passed

  Checking if CTSS Resource is running on all nodes...

  Check: CTSS Resource running on all nodes

  Node Name                             Status
  ------------------------------------  ------------------------
  rac02                                 passed

  Result: CTSS resource check passed

  Querying CTSS for time offset on all nodes...

  Result: Query of CTSS for time offset passed

  Check CTSS state started...

  Check: CTSS state

  Node Name                             State
  ------------------------------------  ------------------------
  rac02                                 Active

  CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...

  Reference Time Offset Limit: 1000.0 msecs

  Check: Reference Time Offset

  Node Name     Time Offset               Status
  ------------  ------------------------  ------------------------
  rac02         0.0                       passed

  Time offset is within the specified limits on the following set of nodes:

  "[rac02]"

  Result: Check of clock time offsets passed

  Oracle Cluster Time Synchronization Services check passed

  Verification of Clock Synchronization across the cluster nodes was successful.

  13) All running instances in the cluster (SQL):

  SELECT inst_id,
         instance_number inst_no,
         instance_name inst_name,
         parallel,
         status,
         database_status db_status,
         active_state state,
         host_name host
    FROM gv$instance
   ORDER BY inst_id;

  14) All database files and the ASM disk groups they reside in (SQL):
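  The source omits the query for this item; the following is a minimal sketch (not the author's original SQL) that unions the standard file views and derives the disk group from the leading '+DISKGROUP' portion of each ASM file name:

  -- Disk group name is the text between '+' and the first '/'.
  SELECT substr(name, 2, instr(name, '/') - 2) AS disk_group,
         name AS file_name
    FROM ( SELECT name FROM v$datafile
           UNION ALL
           SELECT name FROM v$tempfile
           UNION ALL
           SELECT member AS name FROM v$logfile
           UNION ALL
           SELECT name FROM v$controlfile )
   ORDER BY disk_group, file_name;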

  15) ASM disk volumes:
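  This item is likewise empty in the source; a minimal sketch against the standard v$asm_disk view (run from the grid environment or any instance with ASM access):

  -- One row per ASM disk, with its disk group number, size, and state.
  SELECT group_number, name, path, total_mb, free_mb, state
    FROM v$asm_disk
   ORDER BY group_number, name;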

  16) Start and stop the cluster:

  The following operations must be performed as the root user.

  (1) Stop the Oracle Clusterware stack on the local server:

  [root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster

  Note: after running "crsctl stop cluster", if any resource managed by Oracle Clusterware is still running, the entire command fails. Use the -f option to unconditionally stop all resources and shut down the Oracle Clusterware stack.
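  For example, a sketch reusing this cluster's Grid home (the -f flag is the documented force option; it terminates resources that refuse to stop cleanly, so use it with care):

  [root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster -f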

  Also note that the Oracle Clusterware stack can be stopped on all servers in the cluster by specifying the -all option. The following stops Oracle Clusterware on both rac01 and rac02:

  [root@rac02 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster -all

  (2) Start the Oracle Clusterware stack on the local server:

  [root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster

  Note: the Oracle Clusterware stack can be started on all servers in the cluster by specifying the -all option:

  [root@rac02 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -all

  The Oracle Clusterware stack can also be started on one or more named servers in the cluster by listing the servers separated by spaces:

  [root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -n rac01 rac02

  Start/stop all instances with SRVCTL:

  [oracle@rac01 ~]$ srvctl stop database -d racdb

  [oracle@rac01 ~]$ srvctl start database -d racdb
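  To stop or start a single instance rather than the whole database, srvctl also accepts the -i flag (a sketch reusing this cluster's racdb1 instance, not output captured from the system):

  [oracle@rac01 ~]$ srvctl stop instance -d racdb -i racdb1

  [oracle@rac01 ~]$ srvctl start instance -d racdb -i racdb1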
