Step-by-step Oracle 11gR2 RAC + DG: database installation (Part 5)

  1. Step by step on RHEL 6.5 + VMware Workstation 10: building Oracle 11gR2 RAC + DG, database installation (Part 5)

Outline of this chapter:

The main pitfall in this step is that the installer may not be able to find the ASM disk groups. Don't panic; worked through step by step, it can always be resolved.

Database installation and configuration

 

  1. Install the database software

Installation log: tail -f /u01/app/oraInventory/logs/installActions2014-06-05_01-30-25AM.log

 

Unzip the installation files:

[oracle@localhost ~]$ ll linux*

-rw-r--r-- 1 oracle oinstall 1239269270 Apr 18 20:44 linux.x64_11gR2_database_1of2.zip

-rw-r--r-- 1 oracle oinstall 1111416131 Apr 18 20:47 linux.x64_11gR2_database_2of2.zip

[oracle@localhost ~]$ unzip linux.x64_11gR2_database_1of2.zip && unzip linux.x64_11gR2_database_2of2.zip
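Both archives unpack into a single database/ directory. A quick sanity check before launching the installer (the listing below is typical of 11gR2 media and is shown for orientation; it is not from the original post):

[oracle@localhost ~]$ ls database/
doc  install  response  rpm  runInstaller  sshsetup  stage  welcome.html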

 

Install as the oracle user on rac1:

[oracle@rac1 database]$ export DISPLAY=192.168.1.100:0.0

[oracle@rac1 database]$ xhost +

access control disabled, clients can connect from any host

[oracle@rac1 database]$ ./runInstaller

Starting Oracle Universal Installer...

 

Checking Temp space: must be greater than 120 MB. Actual 10840 MB Passed

Checking swap space: must be greater than 150 MB. Actual 1599 MB Passed

Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-06-05_01-30-25AM. Please wait ...
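As a side note, xhost + disables X access control entirely; SSH X11 forwarding is a tighter alternative (a sketch, assuming you connect from the desktop 192.168.1.100 used above):

ssh -X oracle@rac1
cd database && ./runInstaller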


Install the Oracle Database software.

Proceed through the installer's GUI screens (the original post's screenshots are omitted here).

You may hit an error here: INS-35354, "The system on which you are attempting to install Oracle RAC is not part of a valid cluster."

Solution (this is the commonly documented fix for INS-35354; the original post showed the exact edit as screenshots, so the snippet below is reconstructed and the home name is an assumption):

Edit the file: vi /u01/app/oraInventory/ContentsXML/inventory.xml

Change the Grid Infrastructure home entry from:

<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1"/>

to:

<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true"/>

Adding CRS="true" marks the Grid home as a cluster home, after which the installer recognizes the node as part of a valid cluster; then retry the installation.

Installation is also slow around 56%, while the RMAN tool is being copied; be patient.

  2. The installer is slow at 94%

 

At this point the software is being copied to rac2; you can check the size of the remote Oracle home to confirm that the installer has not hung:
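For example, poll the size of the Oracle home on rac2; as long as it keeps growing, the copy is still in progress (a sketch, path taken from this guide):

[oracle@rac2 ~]$ du -sh /u01/app/oracle/product/11.2.0/dbhome_1

Repeat the command after a minute or two; an increasing size means the installer is still working.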


  3. Run the root script:

Run the script above as root on each of the two nodes, then click OK.

[root@rac1 app]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh

Running Oracle 11g root.sh script...

 

The following environment variables are set as:

ORACLE_OWNER= oracle

ORACLE_HOME= /u01/app/oracle/product/11.2.0/dbhome_1

 

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)

[n]: y

Copying dbhome to /usr/local/bin ...

The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)

[n]: y

Copying oraenv to /usr/local/bin ...

The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)

[n]: y

Copying coraenv to /usr/local/bin ...

 

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

Finished product-specific root actions.

[root@rac1 app]#


Run on node 2:

[root@rac2 ~]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh

Running Oracle 11g root.sh script...

 

The following environment variables are set as:

ORACLE_OWNER= oracle

ORACLE_HOME= /u01/app/oracle/product/11.2.0/dbhome_1

 

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)

[n]: y

Copying dbhome to /usr/local/bin ...

The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)

[n]: y

Copying oraenv to /usr/local/bin ...

The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)

[n]: y

Copying coraenv to /usr/local/bin ...

 

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

Finished product-specific root actions.

 

[root@rac2 11.2.0]# /u01/app/oraInventory/orainstRoot.sh

Creating the Oracle inventory pointer file (/etc/oraInst.loc)

Changing permissions of /u01/app/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

 

Changing groupname of /u01/app/oraInventory to oinstall.

The execution of the script is complete.

[root@rac2 11.2.0]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh

Running Oracle 11g root.sh script...

 

The following environment variables are set as:

ORACLE_OWNER= oracle

ORACLE_HOME= /u01/app/oracle/product/11.2.0/dbhome_1

 

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)

[n]: y

Copying dbhome to /usr/local/bin ...

The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)

[n]: y

Copying oraenv to /usr/local/bin ...

The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)

[n]: y

Copying coraenv to /usr/local/bin ...

 

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

Finished product-specific root actions.

[root@rac2 11.2.0]#


The database software installation is complete; click Close to exit.

 

With the software installed, you can test it on both nodes:

[oracle@rac1 ~]$ sqlplus / as sysdba

 

SQL*Plus: Release 11.2.0.3.0 Production on Wed Oct 1 22:42:56 2014

 

Copyright (c) 1982, 2011, Oracle. All rights reserved.

 

Connected to an idle instance.

 

SQL>
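"Connected to an idle instance" is expected at this point: the software is installed, but no database has been created yet. For sqlplus to resolve like this, the oracle user's profile should already contain something like the following (a sketch; adapt the names to your environment):

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_SID=racdb1   # node-1 instance name once DBCA creates the racdb database below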

 

  4. Create the database with DBCA

 

Run DBCA as the oracle user on node 1 and step through the wizard (screenshots omitted).
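DBCA is launched from the database home with DISPLAY exported, under the same assumptions as the installer launch:

[oracle@rac1 ~]$ export DISPLAY=192.168.1.100:0.0
[oracle@rac1 ~]$ dbca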


Note: on the node-selection screen, make sure both nodes are selected.

 

Enterprise Manager does not have to be configured in this step; it consumes too many resources otherwise.
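If you skip Enterprise Manager here, it can still be configured after database creation with emca; a hedged example (run as oracle; the repository and cluster options depend on your environment):

emca -config dbcontrol db -repos create -cluster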


For the Fast Recovery Area, select the FRADG disk group, then continue through the remaining screens to finish creating the database (screenshots omitted).
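Once the database is open, the Fast Recovery Area choice made here can be confirmed from SQL*Plus (an extra check, not in the original post):

SQL> show parameter db_recovery_file_dest

db_recovery_file_dest should report +FRADG, and db_recovery_file_dest_size the size chosen in the wizard.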


  5. Log location

You can follow the DBCA database-creation logs.

Path: /u01/app/oracle/cfgtoollogs/dbca/racdb

 

tail -f /u01/app/oracle/cfgtoollogs/dbca/racdb/trace.log

 

  6. Verification

Verify that the clustered database is up:

 

[grid@node1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.DATADG.dg
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.FRADG.dg
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.LISTENER.lsnr
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.asm
               ONLINE  ONLINE       node1                    Started
               ONLINE  ONLINE       node2                    Started
ora.gsd
               OFFLINE OFFLINE      node1
               OFFLINE OFFLINE      node2
ora.net1.network
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.ons
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       node2
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       node1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       node1
ora.cvu
      1        ONLINE  ONLINE       node1
ora.node1.vip
      1        ONLINE  ONLINE       node1
ora.node2.vip
      1        ONLINE  ONLINE       node2
ora.oc4j
      1        ONLINE  ONLINE       node1
ora.scan1.vip
      1        ONLINE  ONLINE       node2
ora.scan2.vip
      1        ONLINE  ONLINE       node1
ora.scan3.vip
      1        ONLINE  ONLINE       node1
ora.zhongwc.db
      1        ONLINE  ONLINE       node1                    Open
      2        ONLINE  ONLINE       node2                    Open

Check the health of the cluster:

[grid@node1 ~]$ crsctl check cluster

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

[grid@node1 ~]$

All Oracle instances:

 

[grid@node1 ~]$ srvctl status database -d zhongwc

Instance zhongwc1 is running on node node1

Instance zhongwc2 is running on node node2
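For reference, the same utility stops and starts the clustered database as a whole (not part of the original post; the database name is taken from this guide):

srvctl stop database -d zhongwc
srvctl start database -d zhongwc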

 

A single Oracle instance:

[grid@node1 ~]$ srvctl status instance -d zhongwc -i zhongwc1

Instance zhongwc1 is running on node node1

Node application status:

[grid@node1 ~]$ srvctl status nodeapps

VIP node1-vip is enabled

VIP node1-vip is running on node: node1

VIP node2-vip is enabled

VIP node2-vip is running on node: node2

Network is enabled

Network is running on node: node1

Network is running on node: node2

GSD is disabled

GSD is not running on node: node1

GSD is not running on node: node2

ONS is enabled

ONS daemon is running on node: node1

ONS daemon is running on node: node2

Node application configuration:

[grid@node1 ~]$ srvctl config nodeapps

Network exists: 1/192.168.0.0/255.255.0.0/eth0, type static

VIP exists: /node1-vip/192.168.1.151/192.168.0.0/255.255.0.0/eth0, hosting node node1

VIP exists: /node2-vip/192.168.1.152/192.168.0.0/255.255.0.0/eth0, hosting node node2

GSD exists

ONS exists: Local port 6100, remote port 6200, EM port 2016

Database configuration:

[grid@node1 ~]$ srvctl config database -d zhongwc -a

Database unique name: zhongwc

Database name: zhongwc

Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1

Oracle user: oracle

Spfile: +DATADG/zhongwc/spfilezhongwc.ora

Domain:

Start options: open

Stop options: immediate

Database role: PRIMARY

Management policy: AUTOMATIC

Server pools: zhongwc

Database instances: zhongwc1,zhongwc2

Disk Groups: DATADG,FRADG

Mount point paths:

Services:

Type: RAC

Database is enabled

Database is administrator managed

ASM status:

[grid@node1 ~]$ srvctl status asm

ASM is running on node2,node1

ASM configuration:

 

[grid@node1 ~]$ srvctl config asm -a

ASM home: /u01/app/11.2.0/grid

ASM listener: LISTENER

ASM is enabled.

TNS listener status:

 

[grid@node1 ~]$ srvctl status listener

Listener LISTENER is enabled

Listener LISTENER is running on node(s): node2,node1

TNS listener configuration:

 

[grid@node1 ~]$ srvctl config listener -a

Name: LISTENER

Network: 1, Owner: grid

Home:

/u01/app/11.2.0/grid on node(s) node2,node1

End points: TCP:1521

Node application configuration (VIP, GSD, ONS, listener):

 

[grid@node1 ~]$ srvctl config nodeapps -a -g -s -l

Warning:-l option has been deprecated and will be ignored.

Network exists: 1/192.168.0.0/255.255.0.0/eth0, type static

VIP exists: /node1-vip/192.168.1.151/192.168.0.0/255.255.0.0/eth0, hosting node node1

VIP exists: /node2-vip/192.168.1.152/192.168.0.0/255.255.0.0/eth0, hosting node node2

GSD exists

ONS exists: Local port 6100, remote port 6200, EM port 2016

Name: LISTENER

Network: 1, Owner: grid

Home:

/u01/app/11.2.0/grid on node(s) node2,node1

End points: TCP:1521

SCAN status:

 

[grid@node1 ~]$ srvctl status scan

SCAN VIP scan1 is enabled

SCAN VIP scan1 is running on node node2

SCAN VIP scan2 is enabled

SCAN VIP scan2 is running on node node1

SCAN VIP scan3 is enabled

SCAN VIP scan3 is running on node node1

[grid@node1 ~]$

SCAN configuration:

 

[grid@node1 ~]$ srvctl config scan

SCAN name: cluster-scan.localdomain, Network: 1/192.168.0.0/255.255.0.0/eth0

SCAN VIP name: scan1, IP: /cluster-scan.localdomain/192.168.1.57

SCAN VIP name: scan2, IP: /cluster-scan.localdomain/192.168.1.58

SCAN VIP name: scan3, IP: /cluster-scan.localdomain/192.168.1.59

[grid@node1 ~]$
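The three SCAN VIPs should also round-robin through DNS; a quick check (assuming the SCAN name is registered in your DNS server, as in this guide):

[grid@node1 ~]$ nslookup cluster-scan.localdomain

Each lookup should return the addresses 192.168.1.57, .58 and .59 in rotating order.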

  7. Verify clock synchronization across all cluster nodes

 

[grid@node1 ~]$ cluvfy comp clocksync -verbose

 

Verifying Clock Synchronization across the cluster nodes

 

Checking if Clusterware is installed on all nodes...

Check of Clusterware install passed

 

Checking if CTSS Resource is running on all nodes...

Check: CTSS Resource running on all nodes

Node Name                             Status
------------------------------------  ------------------------
node1                                 passed

Result: CTSS resource check passed

 

 

Querying CTSS for time offset on all nodes...

Result: Query of CTSS for time offset passed

 

Check CTSS state started...

Check: CTSS state

Node Name                             State
------------------------------------  ------------------------
node1                                 Active

CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...

Reference Time Offset Limit: 1000.0 msecs

Check: Reference Time Offset

Node Name     Time Offset               Status
------------  ------------------------  ------------------------
node1         0.0                       passed

 

Time offset is within the specified limits on the following set of nodes:

"[node1]"

Result: Check of clock time offsets passed

 

 

Oracle Cluster Time Synchronization Services check passed

 

Verification of Clock Synchronization across the cluster nodes was successful.

 

  8. Log in

 

[oracle@node1 ~]$ sqlplus / as sysdba

 

SQL*Plus: Release 11.2.0.3.0 Production on Sat Dec 29 14:30:08 2012

 

Copyright (c) 1982, 2011, Oracle. All rights reserved.

 

 

Connected to:

Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,

Data Mining and Real Application Testing options

 

SQL> col host_name format a20

SQL> set linesize 200

SQL> select INSTANCE_NAME,HOST_NAME,VERSION,STARTUP_TIME,STATUS,ACTIVE_STATE,INSTANCE_ROLE,DATABASE_STATUS from gv$INSTANCE;

 

INSTANCE_NAME HOST_NAME VERSION STARTUP_TIME STATUS ACTIVE_ST INSTANCE_ROLE DATABASE_STATUS

---------------- -------------------- ----------------- ----------------------- ------------ --------- ------------------ -----------------

zhongwc1 node1.localdomain 11.2.0.3.0 29-DEC-2012 13:55:55 OPEN NORMAL PRIMARY_INSTANCE ACTIVE

zhongwc2 node2.localdomain 11.2.0.3.0 29-DEC-2012 13:56:07 OPEN NORMAL PRIMARY_INSTANCE ACTIVE

 

[grid@node1 ~]$ sqlplus / as sysasm

 

SQL*Plus: Release 11.2.0.3.0 Production on Sat Dec 29 14:31:04 2012

 

Copyright (c) 1982, 2011, Oracle. All rights reserved.

 

 

Connected to:

Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

With the Real Application Clusters and Automatic Storage Management options

 

SQL> select name from v$asm_diskgroup;

 

NAME

------------------------------------------------------------

CRS

DATADG

FRADG
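While connected as sysasm, disk group capacity is also worth a look (an extra query, not in the original post):

SQL> select name, state, total_mb, free_mb from v$asm_diskgroup;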

