In Oracle RAC, the health of the cluster can be checked at several levels and through several different mechanisms: heartbeats, combined with a voting algorithm, are used to detect and isolate failures. When a node is detected as failed, the faulty node is evicted from the cluster so that it cannot corrupt shared data. This article describes the heartbeat mechanisms used by Oracle RAC and how to adjust the related heartbeat parameters.
一、OCSSD与CSS
OCSSD是一个管理及提供Cluster Synchronization Services (CSS)服务的Linux或者Unix进程。使用Oracle用户来运行该进程并提供节点成员管理功能,一旦该进程失败,将导致节点重启。CSS服务提供2种心跳机制,一种为网络心跳,一种为磁盘心跳。两种心跳都有最大延时,网络心跳的延时叫MC(Misscount), 磁盘心跳延时叫作IOT (I/O Timeout)。 这2个参数都以秒为单位,缺省时情况下Misscount < Disktimeout。下面分别描述这2种心跳机制。
2. Network Heartbeat
As the name implies, the network heartbeat checks node status over the private (interconnect) network. If hardware or software problems prevent the cluster nodes from communicating normally over the private network within a given time, a split-brain situation arises. Because the storage in a RAC cluster is shared, the failed node must then be isolated from the cluster to avoid a data disaster. The network heartbeat works as follows:
Every second, a sending thread in the cssd sends a network TCP heartbeat to itself and to all other nodes, and the receiving thread of ocssd.bin receives the heartbeats.
If a network packet is dropped or corrupted, the error-correction mechanism of TCP retransmits the packet; Oracle itself does not retransmit.
In ocssd.log you will see a WARNING message about missing heartbeats if a node has not received a heartbeat from another node for 15 seconds (50% of misscount). Another warning is reported if the heartbeat from the same node is still missing after 22 seconds (75% of misscount), and again after 27 seconds (90% of misscount). When the heartbeat has been missing for 100% of misscount (30 seconds), the node is evicted.
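These missed-heartbeat warnings can be looked for directly in ocssd.log. The command below is only a sketch: it assumes a typical 11gR2 Grid Infrastructure log location of $GRID_HOME/log/<hostname>/cssd/ocssd.log, which may differ in your environment:
[grid@Linux-01 ~]$ grep -i "heartbeat" $GRID_HOME/log/`hostname -s`/cssd/ocssd.log | tail -20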
The maximum allowed network heartbeat latency is the misscount just mentioned; it can be queried and modified with the crsctl utility.
[grid@Linux-01 ~]$ crsctl get css misscount
CRS-4678: Successful get misscount 30 for Cluster Synchronization Services.
The query result above shows that if the interconnect latency between the cluster nodes exceeds 30 seconds, Oracle considers a split-brain to have occurred and the failed node needs to be evicted from the cluster.
How is the failed node chosen? Oracle decides this with a voting algorithm. The following is an illustrative description of the algorithm, adapted from the book 大话Oracle RAC.
The nodes in a cluster rely on the heartbeat mechanism to report their health to one another; assume that each report received counts as one vote. In a three-node cluster, during normal operation every node holds 3 votes. If node A's heartbeat fails while node A itself is still running, the cluster splits into two partitions: node A on one side and the remaining two nodes on the other. One partition must be removed to keep the cluster healthy. In this three-node cluster, after A's heartbeat fails, B and C form one partition with 2 votes, while A has only 1 vote. By the voting algorithm, the partition formed by B and C wins control of the cluster and A is evicted. With only two nodes, however, the voting algorithm breaks down, because each node has exactly 1 vote. A third device is therefore introduced: the quorum device. The quorum device is usually a shared disk, also called the quorum disk, and it too represents one vote. When the heartbeat between the two nodes fails, both nodes race to grab the quorum disk's vote, and the request that arrives first is satisfied first. The node that reaches the quorum disk first therefore ends up with 2 votes, and the other node is evicted.
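As a quick way to see which nodes currently hold active membership in the cluster (for example, after an eviction), olsnodes can be used. The output below is only an illustration and the node names are hypothetical:
[grid@Linux-01 ~]$ olsnodes -n -s
linux-01        1       Active
linux-02        2       Active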
Once a node has been isolated, releases prior to 11gR2 usually simply rebooted the failed node. In 11gR2, Clusterware first tries to shut down all resources on that node and to clean up the failed components in the cluster, i.e. to restart the failed components. Only if this cleanup fails is the node rebooted, in order to force the cleanup.
3. Disk Heartbeat
A thread in ocssd.bin updates the voting disk every second.
If a node does not update the voting disks for 200 seconds, it's evicted.
However, ocssd.bin on the local node also contains logic that brings the node down itself if it gets I/O errors on more than a majority of the voting disks. In addition, while a CRS reconfiguration is in progress the effective disk timeout is reduced to 27 seconds (misscount minus reboottime), and the local node is rebooted if it cannot write to the voting disks within that window. As a result, on 10.2.0.4 you rarely see an eviction caused by voting disk failure (this is more common on 10.2.0.1), because ocssd.bin will abort the node itself before it can be evicted by another node when writing to the voting disk is the problem.
As described above, each node updates the voting disks once per second, and the shared voting disks are used for the disk heartbeat check. If the ocssd process has not updated a voting disk for more than 200 seconds, i.e. the value of disktimeout, Oracle considers that voting disk offline and writes a corresponding record to the Clusterware alert log. If the number of offline voting disks on the current node is smaller than the number of online voting disks, the node can survive; if the number of offline voting disks is greater than or equal to the number of online voting disks, Clusterware considers the disk heartbeat to have failed, and the faulty node is evicted from the cluster and goes through the automatic recovery process. For example, with 3 voting disks, if node A loses one of them, then offline disks (1) < online disks (2): Clusterware records the offline event in the alert log but takes no action. If node A loses 2 or more voting disks, then offline disks (2) > online disks (1), and node A is evicted from the cluster.
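The voting disks configured for the cluster, together with their current state, can be listed with crsctl. The output below is only an illustration; the file universal id, disk path and disk group name are hypothetical:
[grid@Linux-01 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   3e1836343f534f51bf2a19dff275da59 (/dev/asm-disk1) [OCR_VOTE]
Located 1 voting disk(s).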
4. The RebootTime Parameter
Note that the reboottime parameter is also important; by default it is 3 seconds.
Default 3 seconds - the amount of time allowed for a node to complete a reboot after the CSS daemon has been evicted.
crsctl get css reboottime
#Author : Leshami
#Blog : http://blog.csdn.net/leshami
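On a default installation the output of the command above should resemble the misscount query shown earlier; the following is an illustrative example:
CRS-4678: Successful get reboottime 3 for Cluster Synchronization Services.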
5. Adjusting the Heartbeat Parameters
1) Procedure for versions 10.2.0.2 through 11.1.0.7
a) Shut down CRS on all but one node. For exact steps use note 309542.1
b) Execute crsctl as root to modify the misscount:
$CRS_HOME/bin/crsctl set css misscount <n> #### where <n> is the maximum private network latency in seconds
$CRS_HOME/bin/crsctl set css reboottime <r> [-force] #### (<r> is seconds)
$CRS_HOME/bin/crsctl set css disktimeout <d> [-force] #### (<d> is seconds)
c) Reboot the node where adjustment was made
d) Start all other nodes which were shut down in step a)
e) Execute crsctl as root to confirm the change:
$CRS_HOME/bin/crsctl get css misscount
$CRS_HOME/bin/crsctl get css reboottime
$CRS_HOME/bin/crsctl get css disktimeout
2) Procedure for 11gR2
With 11gR2, these settings can be changed online without taking any node down:
a) Execute crsctl as root to modify the misscount:
$CRS_HOME/bin/crsctl set css misscount <n> #### where <n> is the maximum private network latency in seconds
$CRS_HOME/bin/crsctl set css reboottime <r> [-force] #### (<r> is seconds)
$CRS_HOME/bin/crsctl set css disktimeout <d> [-force] #### (<d> is seconds)
b) Execute crsctl as root to confirm the change:
$CRS_HOME/bin/crsctl get css misscount
$CRS_HOME/bin/crsctl get css reboottime
$CRS_HOME/bin/crsctl get css disktimeout
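For illustration only, a concrete change and its verification might look like the following (the value 300 is just an example; review the relevant Oracle support notes before changing these thresholds on a production cluster):
# $CRS_HOME/bin/crsctl set css disktimeout 300
# $CRS_HOME/bin/crsctl get css disktimeout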