Rebuilding a Corrupted Voting Disk in an Oracle 10.2.0.5 RAC

Scenario: On January 6, 2016, a customer's storage RAID5 ran into trouble. The AIX server's management module was replaced, and a device microcode mismatch left the voting-disk LUN in the RAID5 group in an abnormal state, which in turn crashed both nodes of the Oracle 10.2.0.5 RAC.

Problems encountered during the recovery:

1. First, running the rootdelete.sh script as root took a very long time on both nodes, roughly half an hour each.

2. When carving new LUNs for the RAC database, the names and the LUN-to-rhdiskn mapping must match what existed before the failure. On an IBM AIX server, after the LUNs are presented you also need to set reserve_policy=no_reserve on both nodes (depending on the storage type); otherwise, running root.sh on the second RAC node fails with the following error:

cp:  /dev/rhdiskn: The requested resource is busy.
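On AIX the reservation policy can be changed with chdev before CRS touches the disks. A minimal sketch; the hdisk numbers are placeholder examples, not taken from the original environment:

```shell
# Run as root on BOTH nodes, for every shared LUN used by the RAC
# (hdisk2/hdisk3 are example device names; match them to your environment):
chdev -l hdisk2 -a reserve_policy=no_reserve
chdev -l hdisk3 -a reserve_policy=no_reserve

# Verify the attribute took effect:
lsattr -El hdisk2 -a reserve_policy
```

If a disk is busy, the attribute may need `-P` (apply at next reboot) or the device taken offline first.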

3. After root.sh finishes on the second node, the vip, asm (if the RAC uses ASM storage management), database, and instance resources still have to be registered in the OCR. Registering the VIP requires a graphical desktop on the server, and you must log in as the oracle user; if you instead log in as root and then `su oracle`, vipca cannot start the graphical desktop.
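When no graphical desktop is available for vipca, the node applications can in principle also be registered from the command line with srvctl. A hedged sketch only; the VIP addresses, netmask, and interface below are hypothetical examples, not values from the original system:

```shell
# Run as root on one node. The -A argument is VIP/netmask/interface;
# all three values here are examples and must be replaced.
srvctl add nodeapps -n rac10gnode1 -o $ORACLE_HOME -A 192.168.56.21/255.255.255.0/eth1
srvctl add nodeapps -n rac10gnode2 -o $ORACLE_HOME -A 192.168.56.22/255.255.255.0/eth1
```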

 

Below is a simulation of the voting-disk corruption and rebuild for Oracle 10g RAC on Red Hat Enterprise Linux 5.

Some checks while the RAC is healthy:

[oracle@rac10gnode1 admin]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....0gdb.db application    ONLINE    ONLINE    rac10gnode1
ora....b1.inst application    ONLINE    ONLINE    rac10gnode1
ora....b2.inst application    ONLINE    ONLINE    rac10gnode2
ora....SM1.asm application    ONLINE    ONLINE    rac10gnode1
ora....E1.lsnr application    ONLINE    ONLINE    rac10gnode1
ora....de1.gsd application    ONLINE    ONLINE    rac10gnode1
ora....de1.ons application    ONLINE    ONLINE    rac10gnode1
ora....de1.vip application    ONLINE    ONLINE    rac10gnode1
ora....SM2.asm application    ONLINE    ONLINE    rac10gnode2
ora....E2.lsnr application    ONLINE    ONLINE    rac10gnode2
ora....de2.gsd application    ONLINE    ONLINE    rac10gnode2
ora....de2.ons application    ONLINE    ONLINE    rac10gnode2
ora....de2.vip application    ONLINE    ONLINE    rac10gnode2

[oracle@rac10gnode1 admin]$ onsctl ping
Number of onsconfiguration retrieved, numcfg = 2
onscfg[0]
   {node = rac10gnode1, port = 6200}
Adding remote host rac10gnode1:6200
onscfg[1]
   {node = rac10gnode2, port = 6200}
Adding remote host rac10gnode2:6200
ons is running ...

[oracle@rac10gnode1 admin]$ crsctl query css votedisk
 0.     0    /dev/raw/raw2

located 1 votedisk(s).

[oracle@rac10gnode1 admin]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
        Version                  :          2
        Total space (kbytes)     :     200692
        Used space (kbytes)      :       3792
        Available space (kbytes) :     196900
        ID                       : 1874739684
        Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded

                                    Device/File not configured

        Cluster registry integrity check succeeded

Necessary backups before the experiment:

[oracle@rac10gnode1 admin]$ dd if=/dev/raw/raw2 of=/oracle/app/oracle/votediskbak
996030+0 records in
996030+0 records out
509967360 bytes (510 MB) copied, 173.582 seconds, 2.9 MB/s

[oracle@rac10gnode1 admin]$ dd if=/dev/raw/raw1 of=/oracle/app/oracle/raocrbak
401562+0 records in
401562+0 records out
205599744 bytes (206 MB) copied, 70.1261 seconds, 2.9 MB/s

Remember to take a full database backup with RMAN as well, so that if the voting-disk recovery fails and data is lost, the RAC can be rebuilt and restored from it.
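A minimal full-backup sketch; the /backup destination and the single disk channel are assumptions for illustration, not taken from the original environment:

```shell
# Run as the oracle user on one node; the database must be in archivelog
# mode for "plus archivelog". /backup is an assumed, locally mounted path.
rman target / <<'EOF'
run {
  allocate channel c1 device type disk format '/backup/rac10gdb_%U';
  backup database plus archivelog;
  backup current controlfile format '/backup/ctl_%U';
  release channel c1;
}
EOF
```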

Simulating the failure:

[oracle@rac10gnode1 ~]$ dd if=/dev/zero of=/dev/raw/raw2 bs=8k count=4k

Connection closed by foreign host.

As soon as the voting disk is corrupted, both RAC cluster servers reboot automatically at the same time.

After the reboot, checking the cluster status shows that CRS and the voting disk are no longer reachable:

[oracle@rac10gnode1 bdump]$ crs_stat -t
CRS-0184: Cannot communicate with the CRS daemon.

Next comes the voting-disk rebuild.

First, remove the existing CRS and voting-disk configuration:

[root@rac10gnode1 ~]# cd /oracle/app/oracle/product/10.2.0.1/crs/install
[root@rac10gnode1 install]# ./rootdelete.sh
Shutting down Oracle Cluster Ready Services (CRS):
Stopping resources.
Error while stopping resources. Possible cause: CRSD is down.
Stopping CSSD.
Unable to communicate with the CSS daemon.
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down...
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in '/etc/oracle/scls_scr'  # note: in a production environment this step can be very slow

[root@rac10gnode2 ~]# cd /oracle/app/oracle/product/10.2.0.1/crs/install
[root@rac10gnode2 install]# ./rootdelete.sh
Shutting down Oracle Cluster Ready Services (CRS):
Stopping resources.
Error while stopping resources. Possible cause: CRSD is down.
Stopping CSSD.
Unable to communicate with the CSS daemon.
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down...
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in '/etc/oracle/scls_scr'  # note: in a production environment this step can be very slow

[root@rac10gnode1 install]# ./rootdeinstall.sh

Removing contents from OCR device
2560+0 records in
2560+0 records out
10485760 bytes (10 MB) copied, 0.393353 seconds, 26.7 MB/s

Check that no ocssd, crsd, or evmd processes are still running:

[root@rac10gnode1 install]# ps -e | grep -i 'ocs[s]d'
[root@rac10gnode1 install]# ps -e | grep -i 'cr[s]d.bin'
[root@rac10gnode1 install]# ps -e | grep -i 'ev[m]d.bin'

Next comes the rebuild of the voting disk and OCR. Here the original OCR and voting-disk raw devices are reused; in production you may have to carve out new LUNs for them, in which case mind the reserve_policy setting described above. Then, as root, run root.sh from the crs home on both nodes:

[root@rac10gnode1 crs]# ./root.sh
WARNING: directory '/oracle/app/oracle/product/10.2.0.1' is not owned by root
WARNING: directory '/oracle/app/oracle/product' is not owned by root
WARNING: directory '/oracle/app/oracle' is not owned by root
WARNING: directory '/oracle/app' is not owned by root
WARNING: directory '/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/oracle/app/oracle/product/10.2.0.1' is not owned by root
WARNING: directory '/oracle/app/oracle/product' is not owned by root
WARNING: directory '/oracle/app/oracle' is not owned by root
WARNING: directory '/oracle/app' is not owned by root
WARNING: directory '/oracle' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac10gnode1 rac10g1priv rac10gnode1
node 2: rac10gnode2 rac10g2priv rac10gnode2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw2
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
       rac10gnode1
CSS is inactive on these nodes.
       rac10gnode2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.

 

[root@rac10gnode2 crs]# ./root.sh
WARNING: directory '/oracle/app/oracle/product/10.2.0.1' is not owned by root
WARNING: directory '/oracle/app/oracle/product' is not owned by root
WARNING: directory '/oracle/app/oracle' is not owned by root
WARNING: directory '/oracle/app' is not owned by root
WARNING: directory '/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/oracle/app/oracle/product/10.2.0.1' is not owned by root
WARNING: directory '/oracle/app/oracle/product' is not owned by root
WARNING: directory '/oracle/app/oracle' is not owned by root
WARNING: directory '/oracle/app' is not owned by root
WARNING: directory '/oracle' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac10gnode1 rac10g1priv rac10gnode1
node 2: rac10gnode2 rac10g2priv rac10gnode2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
       rac10gnode1
       rac10gnode2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/oracle/app/oracle/product/10.2.0.1/crs/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory

An error is reported here because the VIP network interfaces have not been configured yet, so configure them next:

[root@rac10gnode1 crs]# su - oracle
[oracle@rac10gnode1 ~]$ oifcfg setif -global eth1/192.168.56.0:public
[oracle@rac10gnode1 ~]$ oifcfg setif -global eth0/10.10.10.0:cluster_interconnect
[oracle@rac10gnode1 ~]$ oifcfg iflist
eth0  10.10.10.0
eth1  192.168.56.0
[oracle@rac10gnode1 ~]$ oifcfg getif
eth1  192.168.56.0  global  public
eth0  10.10.10.0  global  cluster_interconnect

Then, as root, go to /oracle/app/oracle/product/10.2.0.1/crs/bin and run vipca.

Configure the VIPs in the vipca GUI (screenshots omitted).
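Since vipca needs an X display, a typical invocation looks like the sketch below; the display name `:0.0` is an assumption for this lab setup:

```shell
# In the oracle user's graphical session, allow local root to use the X display:
xhost +local:

# Then as root (display name is an assumed example):
export DISPLAY=:0.0
cd /oracle/app/oracle/product/10.2.0.1/crs/bin
./vipca
```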

 

Once the VIPs are configured, the cluster status can be checked again:

[root@rac10gnode1 bin]# su - oracle
[oracle@rac10gnode1 ~]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....de1.gsd application    ONLINE    ONLINE    rac10gnode1
ora....de1.ons application    ONLINE    ONLINE    rac10gnode1
ora....de1.vip application    ONLINE    ONLINE    rac10gnode1
ora....de2.gsd application    ONLINE    ONLINE    rac10gnode2
ora....de2.ons application    ONLINE    ONLINE    rac10gnode2
ora....de2.vip application    ONLINE    ONLINE    rac10gnode2

Next, configure the database listeners; back up the existing configuration first.

[oracle@rac10gnode1 db]$ cd network/
[oracle@rac10gnode1 network]$ ls
admin  doc  install  jlib  lib  lib32  log  mesg  tools  trace
[oracle@rac10gnode1 network]$ cp -R admin adminbak
[oracle@rac10gnode1 network]$ ls
admin  adminbak  doc  install  jlib  lib  lib32  log  mesg  tools  trace


[oracle@rac10gnode1 network]$ netca

Oracle Net Services Configuration:
Configuring Listener:LISTENER
rac10gnode1...
rac10gnode2...
Listener configuration complete.
Oracle Net Services configuration successful. The exit code is 0

Checking the cluster status again shows that the listener resources on both nodes are now online:

[oracle@rac10gnode1 network]$ crs_stat -t

Name           Type           Target    State     Host       

------------------------------------------------------------

ora....E1.lsnr application    ONLINE    ONLINE    rac10gnode1

ora....de1.gsd application    ONLINE    ONLINE    rac10gnode1

ora....de1.ons application    ONLINE    ONLINE    rac10gnode1

ora....de1.vip application    ONLINE    ONLINE    rac10gnode1

ora....E2.lsnr application    ONLINE    ONLINE    rac10gnode2

ora....de2.gsd application    ONLINE    ONLINE    rac10gnode2

ora....de2.ons application    ONLINE    ONLINE    rac10gnode2

ora....de2.vip application    ONLINE    ONLINE    rac10gnode2

The listener on the current node has already been started automatically:

[oracle@rac10gnode1 network]$ lsnrctl status

LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 18-JUN-2015 19:56:25

Copyright (c) 1991, 2005, Oracle.  All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_RAC10GNODE1
Version                   TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date                18-JUN-2015 19:55:30
Uptime                    0 days 0 hr. 0 min. 55 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /oracle/app/oracle/product/10.2.0.1/db/network/admin/listener.ora
Listener Log File         /oracle/app/oracle/product/10.2.0.1/db/network/log/listener_rac10gnode1.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.22)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.11)(PORT=1521)))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

Add the ONS configuration:

[oracle@rac10gnode1 network]$ racgons add_config rac10gnode1:6251 rac10gnode2:6251
[oracle@rac10gnode1 network]$ onsctl ping
Number of onsconfiguration retrieved, numcfg = 2
onscfg[0]
   {node = rac10gnode1, port = 6251}
Adding remote host rac10gnode1:6251
onscfg[1]
   {node = rac10gnode2, port = 6251}
Adding remote host rac10gnode2:6251
ons is running ...

Add the ASM configuration:

[oracle@rac10gnode1 network]$ srvctl add asm -n rac10gnode1
[oracle@rac10gnode1 network]$ srvctl add asm -n rac10gnode2

Add the database configuration:

[oracle@rac10gnode1 network]$ srvctl add database -d rac10gdb -o $ORACLE_HOME

Add the instance configuration:

[oracle@rac10gnode1 network]$ srvctl add instance -d rac10gdb -i rac10gdb1 -n rac10gnode1
[oracle@rac10gnode1 network]$ srvctl add instance -d rac10gdb -i rac10gdb2 -n rac10gnode2
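After registering the resources, the OCR contents can be cross-checked with srvctl. A sketch; the exact output depends on the environment:

```shell
# As the oracle user: list what is now registered in the OCR
srvctl config database -d rac10gdb        # nodes/instances registered for rac10gdb
srvctl config nodeapps -n rac10gnode1 -a  # -a also shows the VIP configuration
srvctl status asm -n rac10gnode1          # ASM instance state on node 1
```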

Checking the cluster status shows that the asm, instance, and db resources are registered but still offline:

[oracle@rac10gnode1 network]$ crs_stat -t

Name           Type           Target    State     Host       

------------------------------------------------------------

ora....0gdb.db application    OFFLINE   OFFLINE              

ora....b1.inst application    OFFLINE   OFFLINE              

ora....b2.inst application    OFFLINE   OFFLINE              

ora....SM1.asm application    OFFLINE   OFFLINE              

ora....E1.lsnr application    ONLINE    ONLINE    rac10gnode1

ora....de1.gsd application    ONLINE    ONLINE    rac10gnode1

ora....de1.ons application    ONLINE    ONLINE    rac10gnode1

ora....de1.vip application    ONLINE    ONLINE    rac10gnode1

ora....SM2.asm application    OFFLINE   OFFLINE              

ora....E2.lsnr application    ONLINE    ONLINE    rac10gnode2

ora....de2.gsd application    ONLINE    ONLINE    rac10gnode2

ora....de2.ons application    ONLINE    ONLINE    rac10gnode2

ora....de2.vip application    ONLINE    ONLINE    rac10gnode2

Next, start the ASM and database resources:

[oracle@rac10gnode1 network]$ srvctl start asm -n rac10gnode1
[oracle@rac10gnode1 network]$ srvctl start asm -n rac10gnode2

[oracle@rac10gnode1 network]$ crs_stat -t

Name           Type           Target    State     Host        

------------------------------------------------------------

ora....0gdb.db application    OFFLINE   OFFLINE              

ora....b1.inst application    OFFLINE   OFFLINE              

ora....b2.inst application    OFFLINE   OFFLINE              

ora....SM1.asm application    ONLINE    ONLINE    rac10gnode1

ora....E1.lsnr application    ONLINE    ONLINE    rac10gnode1

ora....de1.gsd application    ONLINE    ONLINE    rac10gnode1

ora....de1.ons application    ONLINE    ONLINE    rac10gnode1

ora....de1.vip application    ONLINE    ONLINE    rac10gnode1

ora....SM2.asm application    ONLINE    ONLINE    rac10gnode2

ora....E2.lsnr application    ONLINE    ONLINE    rac10gnode2

ora....de2.gsd application    ONLINE    ONLINE    rac10gnode2

ora....de2.ons application    ONLINE    ONLINE    rac10gnode2

ora....de2.vip application    ONLINE    ONLINE    rac10gnode2

[oracle@rac10gnode1 network]$ srvctl start database -d rac10gdb

[oracle@rac10gnode1 network]$ crs_stat -t

Name           Type           Target    State     Host       

------------------------------------------------------------

ora....0gdb.db application    ONLINE    ONLINE    rac10gnode2

ora....b1.inst application    ONLINE    ONLINE    rac10gnode1

ora....b2.inst application    ONLINE    ONLINE    rac10gnode2

ora....SM1.asm application    ONLINE    ONLINE    rac10gnode1

ora....E1.lsnr application    ONLINE    ONLINE    rac10gnode1

ora....de1.gsd application    ONLINE    ONLINE    rac10gnode1

ora....de1.ons application    ONLINE    ONLINE    rac10gnode1

ora....de1.vip application    ONLINE    ONLINE    rac10gnode1

ora....SM2.asm application    ONLINE    ONLINE    rac10gnode2

ora....E2.lsnr application    ONLINE    ONLINE    rac10gnode2

ora....de2.gsd application    ONLINE    ONLINE    rac10gnode2

ora....de2.ons application    ONLINE    ONLINE    rac10gnode2

ora....de2.vip application    ONLINE    ONLINE    rac10gnode2

Finally, verify the repaired RAC:

[oracle@rac10gnode1 network]$ lsnrctl status

LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 18-JUN-2015 20:09:35

Copyright (c) 1991, 2005, Oracle.  All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_RAC10GNODE1
Version                   TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date                18-JUN-2015 19:55:30
Uptime                    0 days 0 hr. 14 min. 5 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /oracle/app/oracle/product/10.2.0.1/db/network/admin/listener.ora
Listener Log File         /oracle/app/oracle/product/10.2.0.1/db/network/log/listener_rac10gnode1.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.22)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.11)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status BLOCKED, has 1 handler(s) for this service...
Service "+ASM_XPT" has 1 instance(s).
  Instance "+ASM1", status BLOCKED, has 1 handler(s) for this service...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "rac10gdb" has 2 instance(s).
  Instance "rac10gdb1", status READY, has 2 handler(s) for this service...
  Instance "rac10gdb2", status READY, has 1 handler(s) for this service...
Service "rac10gdbXDB" has 2 instance(s).
  Instance "rac10gdb1", status READY, has 1 handler(s) for this service...
  Instance "rac10gdb2", status READY, has 1 handler(s) for this service...
Service "rac10gdb_XPT" has 2 instance(s).
  Instance "rac10gdb1", status READY, has 2 handler(s) for this service...
  Instance "rac10gdb2", status READY, has 1 handler(s) for this service...
The command completed successfully

[oracle@rac10gnode1 network]$ ps -ef|grep ora_

oracle   21906     1  0 20:08 ?        00:00:00 ora_pmon_rac10gdb1

oracle   21908     1  0 20:08 ?        00:00:00 ora_diag_rac10gdb1

oracle   21910     1  0 20:08 ?        00:00:00 ora_psp0_rac10gdb1

oracle   21912     1  1 20:08 ?        00:00:01 ora_lmon_rac10gdb1

oracle   21914     1  0 20:08 ?        00:00:00 ora_lmd0_rac10gdb1

oracle   21916     1  0 20:08 ?        00:00:00 ora_lms0_rac10gdb1

oracle   21920     1  0 20:08 ?        00:00:00 ora_mman_rac10gdb1

oracle   21922     1  0 20:08 ?        00:00:00 ora_dbw0_rac10gdb1

oracle   21924     1  0 20:08 ?        00:00:00 ora_lgwr_rac10gdb1

oracle   21926     1  0 20:08 ?        00:00:00 ora_ckpt_rac10gdb1

oracle   21928     1  0 20:08 ?        00:00:00 ora_smon_rac10gdb1

oracle   21930     1  0 20:08 ?        00:00:00 ora_reco_rac10gdb1

oracle   21932     1  0 20:08 ?        00:00:00 ora_cjq0_rac10gdb1

oracle   21934     1  2 20:08 ?        00:00:02 ora_mmon_rac10gdb1

oracle   21936     1  0 20:08 ?        00:00:00 ora_mmnl_rac10gdb1

oracle   21938     1  0 20:08 ?        00:00:00 ora_d000_rac10gdb1

oracle   21940     1  0 20:08 ?        00:00:00 ora_s000_rac10gdb1

oracle   21992     1  0 20:08 ?        00:00:00 ora_lck0_rac10gdb1

oracle   21997     1  0 20:08 ?        00:00:00 ora_asmb_rac10gdb1

oracle   22001     1  0 20:08 ?        00:00:00 ora_rbal_rac10gdb1

oracle   22079     1  0 20:08 ?        00:00:00 ora_o000_rac10gdb1

oracle   22083     1  0 20:08 ?        00:00:00 ora_o001_rac10gdb1

oracle   22085     1  0 20:08 ?        00:00:00 ora_o002_rac10gdb1

oracle   22265     1  0 20:08 ?        00:00:00 ora_arc0_rac10gdb1

oracle   22267     1  0 20:08 ?        00:00:00 ora_arc1_rac10gdb1

oracle   22285     1  0 20:08 ?        00:00:00 ora_arc2_rac10gdb1

oracle   22387     1  0 20:08 ?        00:00:00 ora_qmnc_rac10gdb1

oracle   22640     1  0 20:08 ?        00:00:00 ora_q000_rac10gdb1

oracle   22642     1  0 20:08 ?        00:00:00 ora_pz99_rac10gdb1

oracle   22646     1  0 20:08 ?        00:00:00 ora_q002_rac10gdb1

oracle   22657     1  0 20:08 ?        00:00:00 ora_j000_rac10gdb1

oracle   24560 28572  0 20:10 pts/1    00:00:00 grep ora_

[oracle@rac10gnode1 network]$ ps -ef|grep asm_

oracle   18485     1  0 20:06 ?        00:00:00 asm_pmon_+ASM1

oracle   18487     1  0 20:06 ?        00:00:00 asm_diag_+ASM1

oracle   18489     1  0 20:06 ?        00:00:00 asm_psp0_+ASM1

oracle   18491     1  0 20:06 ?        00:00:01 asm_lmon_+ASM1

oracle   18493     1  0 20:06 ?        00:00:00 asm_lmd0_+ASM1

oracle   18495     1  0 20:06 ?        00:00:00 asm_lms0_+ASM1

oracle   18499     1  0 20:06 ?        00:00:00 asm_mman_+ASM1

oracle   18501     1  0 20:06 ?        00:00:00 asm_dbw0_+ASM1

oracle   18503     1  0 20:06 ?        00:00:00 asm_lgwr_+ASM1

oracle   18505     1  0 20:06 ?        00:00:00 asm_ckpt_+ASM1

oracle   18507     1  0 20:06 ?        00:00:00 asm_smon_+ASM1

oracle   18509     1  0 20:06 ?        00:00:00 asm_rbal_+ASM1

oracle   18511     1  0 20:06 ?        00:00:00 asm_gmon_+ASM1

oracle   18538     1  0 20:06 ?        00:00:00 asm_lck0_+ASM1

oracle   21723     1  0 20:08 ?        00:00:00 asm_asmb_+ASM1

oracle   21727     1  0 20:08 ?        00:00:00 asm_o000_+ASM1

oracle   24712 28572  0 20:10 pts/1    00:00:00 grep asm_

[oracle@rac10gnode1 network]$ sqlplus / as sysdba

 

SQL*Plus: Release 10.2.0.1.0 - Production on Thu Jun 18 20:10:29 2015

 

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

 

 

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options

 

SQL> col file_name for a50

SQL> set lineisze 1000

SP2-0158: unknown SET option "lineisze"

SQL> set linesize 1000

SQL> select tablespace_name from dba_data_files;

 

TABLESPACE_NAME

------------------------------

USERS

SYSAUX

UNDOTBS1

SYSTEM

UNDOTBS2

TEST

 

6 rows selected.

 

SQL> select tablespace_name,file_name from dba_data_files;

 

TABLESPACE_NAME          FILE_NAME
------------------------------ --------------------------------------------------

USERS                +ORADATA/rac10gdb/datafile/users.259.856721895

SYSAUX               +ORADATA/rac10gdb/datafile/sysaux.257.856721893

UNDOTBS1             +ORADATA/rac10gdb/datafile/undotbs1.258.856721895

SYSTEM               +ORADATA/rac10gdb/datafile/system.256.856721893

UNDOTBS2             +ORADATA/rac10gdb/datafile/undotbs2.267.856722041

TEST                 +ORADATA/rac10gdb/datafile/test.dbf

 

6 rows selected.

 

 

With this, the Oracle 10g voting-disk rebuild has completed successfully.

 

Posted: 2024-09-14 13:30:09
