Validating the Oracle RAC Installation Environment with runcluvfy

--*****************************************

-- Validating the Oracle RAC installation environment with runcluvfy

--*****************************************

 

    As the saying goes, to do a good job one must first sharpen one's tools. Installing Oracle RAC is a sizable undertaking, and without proper up-front planning

and configuration the installation becomes far more complicated than expected. Fortunately, the runcluvfy tool greatly simplifies this work. The demonstration below was done while installing Oracle 10g RAC on Linux.

 

1. Run the pre-installation checks with runcluvfy from the installation media directory

    [oracle@node1 cluvfy]$ pwd

    /u01/Clusterware/clusterware/cluvfy

    [oracle@node1 cluvfy]$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose

   

    Performing pre-checks for cluster services setup

   

    Checking node reachability...

   

    Check: Node reachability from node "node1"

      Destination Node                      Reachable?             

      ------------------------------------  ------------------------

      node1                                 yes                    

      node2                                 yes                    

    Result: Node reachability check passed from node "node1".

   

    Checking user equivalence...

   

    Check: User equivalence for user "oracle"

      Node Name                             Comment                

      ------------------------------------  ------------------------

      node2                                 passed                 

      node1                                 passed                 

    Result: User equivalence check passed for user "oracle".

   

    Checking administrative privileges...

   

    Check: Existence of user "oracle"

      Node Name     User Exists               Comment                

      ------------  ------------------------  ------------------------

      node2         yes                       passed                 

      node1         yes                       passed                 

    Result: User existence check passed for "oracle".

    

    Check: Existence of group "oinstall"

      Node Name     Status                    Group ID               

      ------------  ------------------------  ------------------------

      node2         exists                    500                    

      node1         exists                    500                    

    Result: Group existence check passed for "oinstall".

   

    Check: Membership of user "oracle" in group "oinstall" [as Primary]

      Node Name         User Exists   Group Exists  User in Group  Primary       Comment    

      ----------------  ------------  ------------  ------------  ------------  ------------

      node2             yes           yes           yes           yes           passed     

      node1             yes           yes           yes           yes           passed     

    Result: Membership check for user "oracle" in group "oinstall" [as Primary] passed.

   

    Administrative privileges check passed.

   

    Checking node connectivity...

   

    Interface information for node "node2"

      Interface Name                  IP Address                      Subnet         

      ------------------------------  ------------------------------  ----------------

      eth0                            192.168.0.12                    192.168.0.0    

      eth1                            10.101.0.12                     10.101.0.0     

    Interface information for node "node1"

      Interface Name                  IP Address                      Subnet         

      ------------------------------  ------------------------------  ----------------

      eth0                            192.168.0.11                    192.168.0.0    

      eth1                            10.101.0.11                     10.101.0.0      

   

    Check: Node connectivity of subnet "192.168.0.0"

      Source                          Destination                     Connected?     

      ------------------------------  ------------------------------  ----------------

      node2:eth0                      node1:eth0                      yes            

    Result: Node connectivity check passed for subnet "192.168.0.0" with node(s) node2,node1.

   

    Check: Node connectivity of subnet "10.101.0.0"

      Source                          Destination                     Connected?     

      ------------------------------  ------------------------------  ----------------

      node2:eth1                      node1:eth1                      yes            

    Result: Node connectivity check passed for subnet "10.101.0.0" with node(s) node2,node1.

   

    Suitable interfaces for the private interconnect on subnet "192.168.0.0":

    node2 eth0:192.168.0.12

    node1 eth0:192.168.0.11

   

    Suitable interfaces for the private interconnect on subnet "10.101.0.0":

    node2 eth1:10.101.0.12

    node1 eth1:10.101.0.11

   

    ERROR:

    Could not find a suitable set of interfaces for VIPs.

   

    Result: Node connectivity check failed.

   

    Checking system requirements for 'crs'...

   

    Check: Total memory

      Node Name     Available                 Required                  Comment  

      ------------  ------------------------  ------------------------  ----------

      node2         689.38MB (705924KB)       512MB (524288KB)          passed   

      node1         689.38MB (705924KB)       512MB (524288KB)          passed   

    Result: Total memory check passed.

   

    Check: Free disk space in "/tmp" dir

      Node Name     Available                 Required                  Comment  

      ------------  ------------------------  ------------------------  ----------

      node2         4.22GB (4428784KB)        400MB (409600KB)          passed   

      node1         4.22GB (4426320KB)        400MB (409600KB)          passed   

    Result: Free disk space check passed.

   

    Check: Swap space

      Node Name     Available                 Required                  Comment  

      ------------  ------------------------  ------------------------  ----------

      node2         2GB (2096472KB)           1GB (1048576KB)           passed   

      node1         2GB (2096472KB)           1GB (1048576KB)           passed   

    Result: Swap space check passed.

   

    Check: System architecture

      Node Name     Available                 Required                  Comment  

      ------------  ------------------------  ------------------------  ----------

      node2         i686                      i686                      passed   

      node1         i686                      i686                      passed   

    Result: System architecture check passed.

   

    Check: Kernel version

      Node Name     Available                 Required                  Comment  

      ------------  ------------------------  ------------------------  ----------

      node2         2.6.18-194.el5            2.4.21-15EL               passed   

      node1         2.6.18-194.el5            2.4.21-15EL               passed   

    Result: Kernel version check passed.

   

    Check: Package existence for "make-3.79"

      Node Name                       Status                          Comment        

      ------------------------------  ------------------------------  ----------------

      node2                           make-3.81-3.el5                 passed         

      node1                           make-3.81-3.el5                 passed         

    Result: Package existence check passed for "make-3.79".

   

    Check: Package existence for "binutils-2.14"

      Node Name                       Status                          Comment        

      ------------------------------  ------------------------------  ----------------

      node2                           binutils-2.17.50.0.6-14.el5     passed         

      node1                           binutils-2.17.50.0.6-14.el5     passed         

    Result: Package existence check passed for "binutils-2.14".

   

    Check: Package existence for "gcc-3.2"

      Node Name                       Status                          Comment        

      ------------------------------  ------------------------------  ----------------

      node2                           gcc-4.1.2-48.el5                passed         

      node1                           gcc-4.1.2-48.el5                passed         

    Result: Package existence check passed for "gcc-3.2".

    Check: Package existence for "glibc-2.3.2-95.27"

      Node Name                       Status                          Comment        

      ------------------------------  ------------------------------  ----------------

      node2                           glibc-2.5-49                    passed         

      node1                           glibc-2.5-49                    passed         

    Result: Package existence check passed for "glibc-2.3.2-95.27".

   

    Check: Package existence for "compat-db-4.0.14-5"

      Node Name                       Status                          Comment        

      ------------------------------  ------------------------------  ----------------

      node2                           compat-db-4.2.52-5.1            passed         

      node1                           compat-db-4.2.52-5.1            passed         

    Result: Package existence check passed for "compat-db-4.0.14-5".

   

    Check: Package existence for "compat-gcc-7.3-2.96.128"

      Node Name                       Status                          Comment        

      ------------------------------  ------------------------------  ----------------

      node2                           missing                         failed         

      node1                           missing                         failed         

    Result: Package existence check failed for "compat-gcc-7.3-2.96.128".

   

    Check: Package existence for "compat-gcc-c++-7.3-2.96.128"

      Node Name                       Status                          Comment        

      ------------------------------  ------------------------------  ----------------

      node2                           missing                         failed         

      node1                           missing                         failed         

    Result: Package existence check failed for "compat-gcc-c++-7.3-2.96.128".

   

    Check: Package existence for "compat-libstdc++-7.3-2.96.128"

      Node Name                       Status                          Comment        

      ------------------------------  ------------------------------  ----------------

      node2                           missing                         failed         

      node1                           missing                         failed         

    Result: Package existence check failed for "compat-libstdc++-7.3-2.96.128".

   

    Check: Package existence for "compat-libstdc++-devel-7.3-2.96.128"

      Node Name                       Status                          Comment        

      ------------------------------  ------------------------------  ----------------

      node2                           missing                         failed         

      node1                           missing                         failed         

    Result: Package existence check failed for "compat-libstdc++-devel-7.3-2.96.128".

   

    Check: Package existence for "openmotif-2.2.3"

      Node Name                       Status                          Comment        

      ------------------------------  ------------------------------  ----------------

      node2                           openmotif-2.3.1-2.el5_4.1       passed         

      node1                           openmotif-2.3.1-2.el5_4.1       passed         

    Result: Package existence check passed for "openmotif-2.2.3".

   

    Check: Package existence for "setarch-1.3-1"

      Node Name                       Status                          Comment        

      ------------------------------  ------------------------------  ----------------

      node2                           setarch-2.0-1.1                 passed         

      node1                           setarch-2.0-1.1                 passed         

    Result: Package existence check passed for "setarch-1.3-1".

   

    Check: Group existence for "dba"

      Node Name     Status                    Comment                

      ------------  ------------------------  ------------------------

      node2         exists                    passed                 

      node1         exists                    passed                 

    Result: Group existence check passed for "dba".

   

    Check: Group existence for "oinstall"

      Node Name     Status                    Comment                

      ------------  ------------------------  ------------------------

      node2         exists                    passed                 

      node1         exists                    passed                 

    Result: Group existence check passed for "oinstall".

   

    Check: User existence for "nobody"

      Node Name     Status                    Comment                

      ------------  ------------------------  ------------------------

      node2         exists                    passed                 

      node1         exists                    passed                 

    Result: User existence check passed for "nobody".

   

    System requirement failed for 'crs'

   

    Pre-check for cluster services setup was unsuccessful on all the nodes.

 

    The error "Could not find a suitable set of interfaces for VIPs." can be ignored: it is a known bug, documented in detail

    on Metalink as Doc ID 338924.1, which is reproduced at the end of this article.

    

    For the packages reported as failed above, install them on the system wherever possible; a scripted sketch follows below.
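
    Where the OS vendor ships them, the missing packages can be pulled in with rpm or yum. Below is a minimal, hedged sketch,

    assuming a configured yum repository; the package base names are taken from the cluvfy output above.

    # Hedged sketch: install each compat package that cluvfy reported as missing,
    # skipping any that are already present. Assumes a working yum repository.
    for pkg in compat-gcc compat-gcc-c++ compat-libstdc++ compat-libstdc++-devel; do
        rpm -q "$pkg" >/dev/null 2>&1 || yum install -y "$pkg"
    done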

 

2. Checks after installing Clusterware; note that cluvfy is now run from its installed location

    [oracle@node1 ~]$ pwd

    /u01/app/oracle/product/10.2.0/crs_1/bin

    [oracle@node1 ~]$ ./cluvfy stage -post crsinst -n node1,node2

   

    Performing post-checks for cluster services setup

   

    Checking node reachability...

    Node reachability check passed from node "node1".

   

   

    Checking user equivalence...

    User equivalence check passed for user "oracle".

   

    Checking Cluster manager integrity...

   

   

    Checking CSS daemon...

    Daemon status check passed for "CSS daemon".

   

    Cluster manager integrity check passed.

   

    Checking cluster integrity...

   

    Cluster integrity check passed

       

    Checking OCR integrity...  

   

    Checking the absence of a non-clustered configuration...

    All nodes free of non-clustered, local-only configurations.

   

    Uniqueness check for OCR device passed.

   

    Checking the version of OCR...

    OCR of correct Version "2" exists.

   

    Checking data integrity of OCR...

    Data integrity check for OCR passed.

   

    OCR integrity check passed.

   

    Checking CRS integrity...

   

    Checking daemon liveness...

    Liveness check passed for "CRS daemon".

   

    Checking daemon liveness...

    Liveness check passed for "CSS daemon".

   

    Checking daemon liveness...

    Liveness check passed for "EVM daemon".

   

    Checking CRS health...

    CRS health check passed.

   

    CRS integrity check passed.

   

    Checking node application existence...

   

   

    Checking existence of VIP node application (required)

    Check passed.

   

    Checking existence of ONS node application (optional)

    Check passed.

   

    Checking existence of GSD node application (optional)

    Check passed.

   

   

    Post-check for cluster services setup was successful.

   

    The checks above show that the Clusterware background daemons, the nodeapps resources, and the OCR are all in the passed state, i.e. Clusterware was installed successfully.

 

3. cluvfy usage

    [oracle@node1 ~]$ cluvfy -help  # the -help option alone prints cluvfy's usage information

   

    USAGE:

    cluvfy [ -help ]

    cluvfy stage { -list | -help }

    cluvfy stage {-pre|-post} <stage-name> <stage-specific options>  [-verbose]

    cluvfy comp  { -list | -help }

    cluvfy comp  <component-name> <component-specific options>  [-verbose]

   

    [oracle@node1 ~]$ cluvfy comp -list

       

    USAGE:

    cluvfy comp  <component-name> <component-specific options>  [-verbose]

   

    Valid components are:

            nodereach : checks reachability between nodes

            nodecon   : checks node connectivity

            cfs       : checks CFS integrity

            ssa       : checks shared storage accessibility

            space     : checks space availability

            sys       : checks minimum system requirements

            clu       : checks cluster integrity

            clumgr    : checks cluster manager integrity

            ocr       : checks OCR integrity

            crs       : checks CRS integrity

            nodeapp   : checks node applications existence

            admprv    : checks administrative privileges

            peer      : compares properties with peers
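
    A single component can be checked on its own as well. The following is a usage sketch based on the syntax above, reusing

    the node names from this environment:

    # Verify network connectivity between the two nodes, with detailed output.
    ./cluvfy comp nodecon -n node1,node2 -verbose

    # Verify that the shared storage is accessible from both nodes.
    ./cluvfy comp ssa -n node1,node2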

 

4. Metalink Doc ID 338924.1

    CLUVFY Fails With Error: Could not find a suitable set of interfaces for VIPs [ID 338924.1]

    ________________________________________

         Modified 29-JUL-2010     Type PROBLEM     Status PUBLISHED    

    In this Document

      Symptoms

      Cause

      Solution

      References

    ________________________________________

    Applies to:

    Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 11.1.0.7 - Release: 10.2 to 11.1

    Information in this document applies to any platform.

    Symptoms

    When running cluvfy to check network connectivity at various stages of the RAC/CRS installation process, cluvfy fails

    with errors similar to the following:

    =========================

    Suitable interfaces for the private interconnect on subnet "10.0.0.0":

    node1 eth0:10.0.0.1

    node2 eth0:10.0.0.2

   

    Suitable interfaces for the private interconnect on subnet "192.168.1.0":

    node1_internal eth1:192.168.1.2

    node2_internal eth1:192.168.1.1

   

    ERROR:

    Could not find a suitable set of interfaces for VIPs.

   

    Result: Node connectivity check failed.

   

    ========================

    On Oracle 11g, you may still see a warning in some cases, such as:

    ========================

    WARNING:

    Could not find a suitable set of interfaces for VIPs.

    ========================

    Output seen will be comparable to that noted above, but IP addresses and node_names may be different - i.e. the node names

     of 'node1','node2','node1_internal','node2_internal'  will be substituted with your actual Public and Private node names.

    

    A second problem that will be encountered in this situation is that at the end of the CRS installation for 10gR2, VIPCA

     will be run automatically in silent mode, as one of the 'optional' configuration assistants.  In this scenario, the VIPCA

      will fail at the end of the CRS installation.   The InstallActions log will show output such as:

    Oracle CRS stack installed and running under init(1M)

    Running vipca(silent) for configuring nodeapps

    The given interface(s), "eth0" is not public. Public interfaces should

    be used to configure virtual IPs.

    Cause

    This issue occurs due to incorrect assumptions made in cluvfy and vipca based on an Internet Best Practice document -

     "RFC1918 - Address Allocation for Private Internets".  This Internet Best Practice RFC can be viewed here:

    http://www.faqs.org/rfcs/rfc1918.html

    From an Oracle perspective, this issue is tracked in BUG:4437727

    Per BUG:4437727, cluvfy makes an incorrect assumption based on RFC 1918 that any IP address/subnet that begins with any

    of the following octets is private and hence may not be fit for use as a VIP:

    172.16.x.x  through 172.31.x.x

    192.168.x.x

    10.x.x.x

    However, this assumption does not take into account that it is possible to use these IPs as Public IP's on an internal

    network  (or intranet).   Therefore, it is very common to use IP addresses in these ranges as Public IP's and as Virtual

     IP(s), and this is a supported configuration.

    Solution

    The solution to the error above that is given when running 'cluvfy' is to simply ignore it if you intend to use an IP in

     one of the above ranges for your VIP. The installation and configuration can continue with no corrective action necessary.

    One result of this, as noted in the problem section, is that the silent VIPCA will fail at the end of the 10gR2 CRS

    installation.   This is because VIPCA is running in silent mode and is trying to notify that the IPs that were provided

    may not be fit to be used as VIP(s). To correct this, you can manually execute the VIPCA GUI after the CRS installation

    is complete.  VIPCA needs to be executed from the CRS_HOME/bin directory as the 'root'  user  (on Unix/Linux)  or as a

    Local Administrator (on Windows):

    $ cd $ORA_CRS_HOME/bin

    $ ./vipca

    Follow the prompts for VIPCA to select the appropriate interface for the public network, and assign the VIPs for each node

    when prompted.  Manually running VIPCA in the GUI mode, using the same IP addresses, should complete successfully.

    Note that if you patch to 10.2.0.3 or above, VIPCA will run correctly in silent mode.  The command to re-run vipca

    silently can be found in CRS_HOME/cfgtoollogs in the file 'configToolAllCommands' or

     'configToolFailedCommands'.  Thus, in the case of a new install, the silent mode VIPCA command will fail after the

     10.2.0.1 base release install, but once the CRS Home is patched to 10.2.0.3 or above, vipca can be re-run silently,

     without the need to invoke the GUI tool.

    

    References

    NOTE:316583.1 - VIPCA FAILS COMPLAINING THAT INTERFACE IS NOT PUBLIC


 

    The note above is lengthy; the hands-on fix is given below.

   

    On the node where the error occurs, edit the vipca file:

    [root@node2 ~]# vi $ORA_CRS_HOME/bin/vipca

    Locate the following content:

        #Remove this workaround when the bug 3937317 is fixed

        arch=`uname -m`

        if [ "$arch" = "i686" -o "$arch" = "ia64" ]

        then

        LD_ASSUME_KERNEL=2.4.19

        export LD_ASSUME_KERNEL

        fi

        #End workaround

    Add a new line after the fi:

        unset LD_ASSUME_KERNEL

       

    Do the same for the srvctl file:

    [root@node2 ~]# vi $ORA_CRS_HOME/bin/srvctl

   

    Locate the following content:

        LD_ASSUME_KERNEL=2.4.19

        export LD_ASSUME_KERNEL

    Similarly, add a new line right after it:

        unset LD_ASSUME_KERNEL

    Save and exit, then re-run root.sh on the failed node.
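
    The two edits can also be applied non-interactively; a minimal sketch, assuming GNU sed and that ORA_CRS_HOME points at

    the Clusterware home:

    # Hedged sketch: append "unset LD_ASSUME_KERNEL" immediately after the export
    # in both scripts (equivalent to adding it after the fi in vipca), keeping a
    # .bak copy of each original file.
    for f in $ORA_CRS_HOME/bin/vipca $ORA_CRS_HOME/bin/srvctl; do
        cp "$f" "$f.bak"
        sed -i '/export LD_ASSUME_KERNEL/a unset LD_ASSUME_KERNEL' "$f"
    done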

   

