Step by Step Install Oracle 11gR2 RAC for AS5U3 64bit v1.1


Part 1: Preparing the Physical Environment

1. Servers
Two Xeon servers running Red Hat AS5U3 64-bit, each with two NICs: eth0 for the public network (10.10.20.*) and eth1 for storage traffic (10.10.30.*) and the cluster interconnect (heartbeat).

2. Storage
This document uses Openfiler 2.3 64-bit installed on a separate server. A single 146 GB SCSI disk is exported as a 146 GB iSCSI target and partitioned into three LUNs: one 3 GB LUN for CRS (OCR and voting disk) and two 30 GB LUNs for database storage.

3. Recognizing the iSCSI storage on the servers

3.1 Installing the iSCSI initiator package
On RHEL5 x86_64, install the iSCSI initiator package from the installation media:

Red Hat Enterprise Linux Server release 5.3 (Tikanga)
Kernel 2.6.18-128.el5 on an x86_64
login: root
Password:
Last login: Thu Oct 15 01:35:27 from 10.10.10.121
[root@rac1 ~]# mount -t iso9660 /dev/dvd /mnt
mount: block device /dev/dvd is write-protected, mounting read-only
[root@rac1 ~]# cd /mnt/Server/
[root@rac1 Server]# ls iscsi-initiator-utils*
iscsi-initiator-utils-6.2.0.868-0.18.el5.x86_64.rpm
[root@rac1 Server]# rpm -ivh iscsi-initiator-utils-6.2.0.868-0.18.el5.x86_64.rpm
warning: iscsi-initiator-utils-6.2.0.868-0.18.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 37017186
Preparing...                ########################################### [100%]
   1:iscsi-initiator-utils  ########################################### [100%]
[root@rac1 Server]#

3.2 Configuring the iSCSI initiator on the servers
Edit the iSCSI configuration files and register the iSCSI initiator service with the system.

Generate and inspect the initiator name:

# echo "InitiatorName=`iscsi-iname`" > /etc/iscsi/initiatorname.iscsi
[root@rac1 Server]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:ffccd3d5f87

# vi /etc/iscsi/iscsid.conf   (configuration file for the iSCSI initiator service)
===================
#*****************
# Startup settings
#*****************
# To request that the iscsi initd scripts startup a session set to "automatic".
# node.startup = automatic
#
# To manually startup the session set to "manual". The default is automatic.
node.startup = automatic
#
# *************
# CHAP Settings
# *************
# To enable CHAP authentication set node.session.auth.authmethod
# to CHAP. The default is None.
node.session.auth.authmethod = CHAP
# To set a CHAP username and password for initiator
# authentication by the target(s), uncomment the following lines:
node.session.auth.username = iqn.1994-05.com.redhat:ffccd3d5f87
node.session.auth.password = ffccd3d5f87

Register the iSCSI initiator service with the system:
# chkconfig iscsi --level 35 on

3.3 Discovering the disks
Register the client with the storage target; run this on both nodes:
# iscsiadm -m discovery -t st -p 10.10.30.100

(With node.startup = automatic, restarting the iscsi service with "service iscsi restart" logs in to the discovered targets, after which the LUNs appear as local SCSI devices.)

On RAC1:
[root@rac1 ~]# sfdisk -s
/dev/sda:  78150744
/dev/sdb: 143374744
/dev/sdc:   3145728
/dev/sdd:  31424512
/dev/sde:  31424512
total: 287520240 blocks
[root@rac1 ~]#

On RAC2:
[root@rac2 ~]# sfdisk -s
/dev/sda:  78150744
/dev/sdb: 143374744
/dev/sdc:   3145728
/dev/sdd:  31424512
/dev/sde:  31424512
total: 287520240 blocks
[root@rac2 ~]#

Note: make sure both servers see the same storage devices.

Part 2: Preparing the Software Environment

2.1 Checking and installing the required packages

mount -o loop RHEL_5.3x86_64DVD.iso /mnt
cd /mnt/Server

Query for the required packages:
rpm -qa |grep binutils-2.17.50.0.6
rpm -qa |grep compat-libstdc++-33-3.2.3
rpm -qa |grep elfutils-libelf-0.125
rpm -qa |grep elfutils-libelf-devel-0.125
rpm -qa |grep gcc-4.1.2
rpm -qa |grep gcc-c++-4.1.2
rpm -qa |grep glibc-2.5-24
rpm -qa |grep glibc-common-2.5
rpm -qa |grep glibc-devel-2.5
rpm -qa |grep glibc-headers-2.5
rpm -qa |grep ksh-20060214
rpm -qa |grep libaio-0.3.106
rpm -qa |grep libaio-devel-0.3.106
rpm -qa |grep libgcc-4.1.2
rpm -qa |grep libstdc++-4.1.2
rpm -qa |grep libstdc++-devel
rpm -qa |grep make-3.81
rpm -qa |grep sysstat-7.0.2
rpm -qa |grep unixODBC-2.2.11
rpm -qa |grep unixODBC-devel-2.2.11
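Twenty individual greps are easy to misread. As a convenience, a minimal sketch (not part of the original procedure) that runs the same check in one pass and prints only what is missing:

# sketch: report any missing required package in one loop
for p in binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
         gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers ksh \
         libaio libaio-devel libgcc libstdc++ libstdc++-devel make \
         sysstat unixODBC unixODBC-devel; do
    rpm -q "$p" >/dev/null 2>&1 || echo "MISSING: $p"
done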
Install the packages; the wildcards make this painless:

rpm -ivh binutils-*
rpm -ivh compat-libstdc++-*
rpm -ivh elfutils-libelf-*
rpm -ivh elfutils-libelf-devel-*
rpm -ivh gcc-*
rpm -ivh gcc-c++-*
rpm -ivh glibc-*
rpm -ivh glibc-common-*
rpm -ivh glibc-devel-*
rpm -ivh glibc-headers-*
rpm -ivh ksh-*
rpm -ivh libaio-*
rpm -ivh libaio-devel-*
rpm -ivh libgcc-*
rpm -ivh libstdc++-*
rpm -ivh libstdc++-devel-*
rpm -ivh make-*
rpm -ivh sysstat-*
rpm -ivh unixODBC-*
rpm -ivh unixODBC-devel-*

2.2 Creating users, groups, directories, and permissions

Create the groups and users:
/usr/sbin/groupadd -g 1000 oinstall
/usr/sbin/groupadd -g 1100 asmadmin
/usr/sbin/groupadd -g 1200 dba
/usr/sbin/groupadd -g 1300 asmdba
/usr/sbin/groupadd -g 1301 asmoper
/usr/sbin/groupadd -g 1400 oper
/usr/sbin/useradd -u 1100 -g oinstall -G dba,asmdba,asmadmin,asmoper grid
/usr/sbin/useradd -u 1101 -g oinstall -G dba,asmdba,oper oracle

Note: the grid user is not used in this document; it can be omitted, along with every grid-related configuration step and parameter.

Create the installation directories:
mkdir -p /u01/app/grid
mkdir -p /u01/app/11.2.0/grid
chown -R grid:oinstall /u01
mkdir -p /u01/app/oracle
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01
chown -R oracle:oinstall /u01
(The final chown hands the whole /u01 tree to oracle, consistent with not using the grid user.)

passwd oracle
passwd grid

2.3 Kernel parameters and configuration files

Note: do this on both nodes.

vi /etc/security/limits.conf
#ORACLE SETTING
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536

vi /etc/sysctl.conf
#ORACLE SETTING
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586

(Run /sbin/sysctl -p afterwards to apply the settings without a reboot.)

Name resolution and the oracle user environment

[oracle@rac1 ~]$ cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
#::1 localhost6.localdomain6 localhost6
#public
10.10.20.11 rac1.localdomain rac1
10.10.20.12 rac2.localdomain rac2
#private
10.10.30.11 rac1-priv.localdomain rac1-priv
10.10.30.12 rac2-priv.localdomain rac2-priv
#virtual
10.10.20.111 rac1-vip.localdomain rac1-vip
10.10.20.112 rac2-vip.localdomain rac2-vip
#scan
10.10.20.201 rac-cluster.localdomain rac-cluster

/etc/hosts on rac2 is identical.
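Before going further it is worth confirming that both nodes resolve and reach the fixed addresses; a quick sanity check (a sketch, not in the original text). Note that the VIP and SCAN names must resolve but must not answer yet, because Grid Infrastructure brings those addresses up itself:

# sketch: run on each node; the fixed names should answer, the VIPs/SCAN should not be up yet
for h in rac1 rac2 rac1-priv rac2-priv; do
    ping -c 1 -W 1 $h >/dev/null && echo "$h OK" || echo "$h UNREACHABLE"
done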
The oracle user's environment on rac1:

[oracle@rac1 ~]$ cat .bashrc
# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi

# User specific aliases and functions
ORACLE_HOSTNAME=rac1.localdomain; export ORACLE_HOSTNAME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1; export ORACLE_HOME
ORACLE_SID=oradb1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=/usr/sbin:$PATH; export PATH
PATH=$ORACLE_HOME/bin:$PATH; export PATH
export ORACLE_UNQNAME=oradb
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi
[oracle@rac1 ~]$

The .bashrc on rac2 is identical except for the node-specific values: ORACLE_HOSTNAME=rac2.localdomain and ORACLE_SID=oradb2.

2.4 Time synchronization settings required by Grid

Network Time Protocol setting:
/sbin/service ntpd stop
chkconfig ntpd off
mv /etc/ntp.conf /etc/ntp.conf.org

(With NTP disabled and its configuration file moved aside, Oracle Clusterware's Cluster Time Synchronization Service, CTSS, runs in active mode and keeps the nodes' clocks synchronized itself.)

2.5 Installing and configuring Automatic Storage Management (ASMLib 2.0)

Note: download the packages matching your kernel from the Oracle website. For details on Oracle ASMLib 2.0 see:
http://www.oracle.com/technology/tech/linux/asmlib/

Do this on both nodes.

[root@rac1 backup]# cd asm2.6.18-128.el5/
[root@rac1 asm2.6.18-128.el5]# ls
oracleasm-2.6.18-128.el5-2.0.5-1.el5.x86_64.rpm
oracleasmlib-2.0.4-1.el5.x86_64.rpm
oracleasm-support-2.1.3-1.el5.x86_64.rpm

Install:
rpm -Uvh oracleasm*.rpm

Configure ASMLib using the following command:
# /etc/init.d/oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [y]:
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]

Labeling the ASM disks

On RAC1:
[root@rac1 ~]# sfdisk -s
/dev/sda:  78150744
/dev/sdb: 143374744
/dev/sdc:   3145728
/dev/sdd:  31424512
/dev/sde:  31424512
total: 287520240 blocks

On RAC2:
[root@rac2 ~]# sfdisk -s
/dev/sda:  78150744
/dev/sdb: 143374744
/dev/sdc:   3145728
/dev/sdd:  31424512
/dev/sde:  31424512
total: 287520240 blocks

Create the ASM disks. Note: run this on rac1 only; rac2 does not need to (it only rescans afterwards). The createdisk step assumes a partition already exists on each LUN; see the sketch below.
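The createdisk commands target /dev/sdc1, /dev/sdd1 and /dev/sde1, so each LUN is assumed to already carry a single primary partition. The original does not show the partitioning step; if the LUNs are still blank, a sketch of one way to do it:

# sketch (assumes blank LUNs /dev/sdc-/dev/sde): one whole-disk primary partition each, on rac1 only
for d in /dev/sdc /dev/sdd /dev/sde; do
    echo -e "n\np\n1\n\n\nw" | fdisk $d    # new primary partition 1, default start/end, write
done
partprobe    # re-read the partition tables; run partprobe on rac2 too so it sees sdc1/sdd1/sde1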
On the RAC1 node:
/usr/sbin/oracleasm createdisk VOL1CRS /dev/sdc1
/usr/sbin/oracleasm createdisk VOL2 /dev/sdd1
/usr/sbin/oracleasm createdisk VOL3 /dev/sde1

Other related commands. A disk that is no longer used by ASM can be unmarked:
/etc/init.d/oracleasm deletedisk VOL1CRS
/etc/init.d/oracleasm deletedisk VOL2
/etc/init.d/oracleasm deletedisk VOL3
Removing ASM disk "VOL1": [ OK ]

Any operating-system disk can be queried to see whether it is used by ASM (the output below was captured on a different machine, dbsvr, and is illustrative only):
[root@dbsvr tmp]# /etc/init.d/oracleasm querydisk /dev/sdb2
Device "/dev/sdb2" is marked an ASM disk with the label "VOL1"
[root@dbsvr tmp]# /etc/init.d/oracleasm querydisk /dev/sdb3
Device "/dev/sdb3" is marked an ASM disk with the label "VOL2"
[root@dbsvr tmp]# /etc/init.d/oracleasm querydisk /dev/sdb4
Device "/dev/sdb4" is marked an ASM disk with the label "VOL3"

On both nodes, run:
/etc/init.d/oracleasm scandisks
/usr/sbin/oracleasm listdisks

[root@rac1 ~]# /usr/sbin/oracleasm listdisks
VOL1CRS
VOL2
VOL3
[root@rac2 ~]# /usr/sbin/oracleasm listdisks
VOL1CRS
VOL2
VOL3

The preparation work is essentially complete.

Part 3: Installing the Grid Infrastructure (clusterware) software

[oracle@rac1]$ ./runInstaller
Starting Oracle Universal Installer...

(OUI screenshots: add node 2 to the cluster node list, then use the installer's SSH connectivity setup between node 1 and node 2 and test it; a manual alternative is sketched below.)
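The 11gR2 installer can create the SSH user equivalence itself (the setup/test buttons in the screenshots above). If you prefer to prepare it by hand beforehand, a sketch using standard OpenSSH commands (not from the original document):

# sketch: passwordless SSH for the oracle user; run on rac1, then repeat from rac2 toward rac1
ssh-keygen -t rsa              # accept the defaults, empty passphrase
ssh-copy-id oracle@rac2        # appends the public key to rac2's authorized_keys
ssh rac2 date                  # must return the date with no password prompt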
Run the two root scripts in order. Because DNS is not configured, an error is reported, but it does not affect the rest of the installation.

Script execution details:

[root@rac1 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

[root@rac1 ~]# /u01/app/11.2.0/grid/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2009-11-09 17:33:57: Parsing the host name
2009-11-09 17:33:57: Checking for super user privileges
2009-11-09 17:33:57: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded

ASM created and started successfully.

DiskGroup CRS_DG0 created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 07c5183bdf594f32bf6f3481e08cda15.
Successfully replaced voting disk group with +CRS_DG0.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                  File Name        Disk group
--  -----    -----------------                  ---------        ----------
 1. ONLINE   07c5183bdf594f32bf6f3481e08cda15   (ORCL:VOL1CRS)   [CRS_DG0]
Located 1 voting disk(s).
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'rac1'
CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.CRS_DG0.dg' on 'rac1'
CRS-2676: Start of 'ora.CRS_DG0.dg' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.registry.acfs' on 'rac1'
CRS-2676: Start of 'ora.registry.acfs' on 'rac1' succeeded

rac1 2009/11/09 17:38:52 /u01/app/11.2.0/grid/cdata/rac1/backup_20091109_173852.olr
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 4094 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
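Before running the scripts on the second node, it can be reassuring to confirm that the stack is up on rac1; a quick check (a sketch, not part of the original procedure):

# sketch: verify the clusterware stack on rac1 before moving on to rac2
/u01/app/11.2.0/grid/bin/crsctl check crs      # OHAS, CRS, CSS and EVM should all report online
/u01/app/11.2.0/grid/bin/crsctl stat res -t    # tabular view of the registered resources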
[root@rac2 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

[root@rac2 ~]# /u01/app/11.2.0/grid/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2009-11-09 17:39:10: Parsing the host name
2009-11-09 17:39:10: Checking for super user privileges
2009-11-09 17:39:10: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac2'
CRS-2676: Start of 'ora.ctssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'rac2'
CRS-2676: Start of 'ora.drivers.acfs' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac2'
CRS-2676: Start of 'ora.asm' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac2'
CRS-2676: Start of 'ora.crsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'rac2'
CRS-2676: Start of 'ora.evmd' on 'rac2' succeeded

rac2 2009/11/09 17:42:35 /u01/app/11.2.0/grid/cdata/rac2/backup_20091109_174235.olr
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
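With root.sh finished on both nodes, the installation can be validated end to end with the Cluster Verification Utility; this step is optional and is not shown in the original transcript:

# optional sketch: post-install verification of the clusterware stack on both nodes
/u01/app/11.2.0/grid/bin/cluvfy stage -post crsinst -n rac1,rac2 -verbose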
Checking swap space: must be greater than 500 MB. Actual 4094 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[root@rac2 ~]#

Part 4: Installing the Database Software and Creating the Instance

4.1 Creating the ASM disk groups with ASMCA

cd /u01/app/11.2.0/grid/bin
./asmca

(ASMCA screenshots: the disk groups are created here from the ASM disks labeled earlier.)

4.2 Installing the database software and creating the instance

From the database software directory, run ./runInstaller and simply step through the wizard.

(Screenshots of the installer, and of the GUI after installation.)

4.3 Post-installation tasks

Run the recompile script:
[oracle@rac1 ~]$ sqlplus / as sysdba
SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql
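utlrp.sql recompiles invalid PL/SQL and Java objects; a quick way to confirm it left nothing behind (a sketch, not in the original):

# sketch: count objects still invalid after utlrp.sql
sqlplus -s / as sysdba <<EOF
SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';
EOF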
4.4 Checking RAC status by hand, and related commands

Check the RAC status:

1. [oracle@rac2 admin]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Wed Nov 11 15:14:43 2009
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> SELECT inst_name FROM v$active_instances;

INST_NAME
----------------------------------------------------------------------------
rac1:oradb1
rac2:oradb2

SQL> show parameter cluster

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cluster_database                     boolean     TRUE
cluster_database_instances           integer     2
cluster_interconnects                string

2. [oracle@rac1 dbs]$ srvctl status asm -n rac1 -a
ASM is running on rac1
ASM is enabled on node rac1.
[oracle@rac1 dbs]$ srvctl status asm -n rac2 -a
ASM is running on rac2
ASM is enabled on node rac2.

3. [oracle@rac1 dbs]$ srvctl status database -d oradb -v
Instance oradb1 is running on node rac1
Instance oradb2 is running on node rac2
[oracle@rac2 dbs]$ srvctl status instance -d oradb -i "oradb1,oradb2" -v
Instance oradb1 is running on node rac1
Instance oradb2 is running on node rac2

4. [oracle@rac1 dbs]$ srvctl status diskgroup -g CRSDG -n rac1,rac2 -a
Disk Group CRSDG is running on rac1,rac2
Disk Group CRSDG is enabled on rac1,rac2
(Note: the CRS disk group appears as CRS_DG0 in the root.sh transcript above but as CRSDG here; the outputs were presumably captured from different installation runs.)

5. [oracle@rac2 dbs]$ srvctl status nodeapps
VIP rac1-vip is enabled
VIP rac1-vip is running on node: rac1
VIP rac2-vip is enabled
VIP rac2-vip is running on node: rac2
Network is enabled
Network is running on node: rac1
Network is running on node: rac2
GSD is disabled
GSD is not running on node: rac1
GSD is not running on node: rac2
ONS is enabled
ONS daemon is running on node: rac1
ONS daemon is running on node: rac2
eONS is enabled
eONS daemon is running on node: rac1
eONS daemon is running on node: rac2

6. Enable the GSD service. (GSD is disabled by default in 11gR2 and is only needed for Oracle 9i databases managed by the cluster, so this step is optional.)
[oracle@rac2 dbs]$ srvctl enable nodeapps -g -v
GSD is enabled successfully on node(s): rac1,rac2
[oracle@rac2 dbs]$

7. Enabling GSD does not start it; check the status, then start nodeapps:
[oracle@rac2 dbs]$ srvctl status nodeapps
VIP rac1-vip is enabled
VIP rac1-vip is running on node: rac1
VIP rac2-vip is enabled
VIP rac2-vip is running on node: rac2
Network is enabled
Network is running on node: rac1
Network is running on node: rac2
GSD is disabled
GSD is not running on node: rac1
GSD is not running on node: rac2
ONS is enabled
ONS daemon is running on node: rac1
ONS daemon is running on node: rac2
eONS is enabled
eONS daemon is running on node: rac1
eONS daemon is running on node: rac2
[oracle@rac2 dbs]$ srvctl enable nodeapps -g -v
GSD is enabled successfully on node(s): rac1,rac2

Start the GSD service:
[oracle@rac2 dbs]$ srvctl start nodeapps
PRKO-2421 : Network resource is already started on node(s): rac1,rac2
PRKO-2420 : VIP is already started on node(s): rac1,rac2
PRKO-2420 : VIP is already started on node(s): rac1,rac2
PRKO-2422 : ONS is already started on node(s): rac1,rac2
PRKO-2423 : eONS is already started on node(s): rac1,rac2
[oracle@rac2 dbs]$ srvctl status nodeapps
VIP rac1-vip is enabled
VIP rac1-vip is running on node: rac1
VIP rac2-vip is enabled
VIP rac2-vip is running on node: rac2
Network is enabled
Network is running on node: rac1
Network is running on node: rac2
GSD is enabled
GSD is running on node: rac1
GSD is running on node: rac2
ONS is enabled
ONS daemon is running on node: rac1
ONS daemon is running on node: rac2
eONS is enabled
eONS daemon is running on node: rac1
eONS daemon is running on node: rac2
[oracle@rac2 dbs]$

Problem: ORA-01102: cannot mount database in EXCLUSIVE mode

[oracle@rac1 ~]$ srvctl status database -d oradb -v
Instance oradb1 is running on node rac1
Instance oradb2 is not running on node rac2
[oracle@rac1 ~]$
[oracle@rac2 dbs]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Thu Nov 12 09:48:57 2009
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup
ORA-01102: cannot mount database in EXCLUSIVE mode
SQL> show parameter cluster

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cluster_database                     boolean     FALSE
cluster_database_instances           integer     2
cluster_interconnects                string

SQL> alter system set cluster_database=TRUE scope=spfile;
alter system set cluster_database=TRUE scope=spfile
*
ERROR at line 1:
ORA-32000: write to SPFILE requested but SPFILE is not modifiable

SQL> create pfile from spfile;
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
[oracle@rac1 ~]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Thu Nov 12 10:20:32 2009
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup pfile=/u01/app/oracle/product/11.2.0/dbhome_1/dbs/initoradb1.ora
ORACLE instance started.
Total System Global Area  839282688 bytes
Fixed Size                  2217992 bytes
Variable Size             654313464 bytes
Database Buffers          176160768 bytes
Redo Buffers                6590464 bytes
Database mounted.
Database opened.
SQL> alter system set cluster_database=TRUE scope=spfile;
System altered.
SQL> create spfile from pfile;
File created.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area  839282688 bytes
Fixed Size                  2217992 bytes
Variable Size             654313464 bytes
Database Buffers          176160768 bytes
Redo Buffers                6590464 bytes
Database mounted.
Database opened.
SQL> show parameter cluster

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cluster_database                     boolean     TRUE
cluster_database_instances           integer     2
cluster_interconnects                string

[oracle@rac1 ~]$ srvctl status database -d oradb -v
Instance oradb1 is running on node rac1
Instance oradb2 is running on node rac2
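One caution on the fix above (my note, not the author's): "create spfile from pfile" with no pathname writes a local spfile under $ORACLE_HOME/dbs on that node, and a local spfile<SID>.ora takes precedence over an init<SID>.ora that merely points at a shared spfile in ASM. If the database was created by DBCA with its spfile in a disk group, name the ASM path explicitly so both instances keep reading the same file; the disk group path below is hypothetical:

# sketch: recreate the spfile inside ASM instead of locally (the '+DATA/...' path is hypothetical)
sqlplus -s / as sysdba <<EOF
create spfile='+DATA/oradb/spfileoradb.ora' from pfile;
EOF
cat $ORACLE_HOME/dbs/initoradb1.ora    # should then hold only the SPFILE='...' pointer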
[oracle@rac1 ~]$ cd /u01/app/11.2.0/grid/bin
[oracle@rac1 bin]$ ./crsctl query css votedisk
##  STATE    File Universal Id                  File Name        Disk group
--  -----    -----------------                  ---------        ----------
 1. ONLINE   01aebbc18ca64f26bf76eb01b0ccf473   (ORCL:VOL1CRS)   [CRSDG]
Located 1 voting disk(s).
[oracle@rac1 bin]$ pwd
/u01/app/11.2.0/grid/bin
[oracle@rac1 bin]$

Backing Up Oracle Cluster Registry

This section describes how to back up OCR content and use it for recovery. The first method uses automatically generated OCR copies; the second method enables you to issue a backup command manually:

■ Automatic backups: Oracle Clusterware automatically creates OCR backups every four hours. At any one time, Oracle Database always retains the last three backup copies of OCR. The CRSD process that creates the backups also creates and retains an OCR backup for each full day and at the end of each week. You cannot customize the backup frequencies or the number of files that Oracle Database retains.

■ Manual backups: Use the ocrconfig -manualbackup command to force Oracle Clusterware to perform a backup of OCR at any time, rather than wait for the automatic backup. The -manualbackup option is especially useful when you want to obtain a binary backup on demand, such as before you make changes to the OCR. The OLR only supports manual backups.

[root@rac1 ~]# cd /u01/app/11.2.0/grid/bin/
[root@rac1 bin]# ./ocrconfig -manualbackup
rac1 2009/11/19 16:42:55 /u01/app/11.2.0/grid/cdata/rac-cluster/backup_20091119_164255.ocr
[root@rac1 bin]# ./ocrconfig -showbackup
rac1 2009/11/19 13:32:44 /u01/app/11.2.0/grid/cdata/rac-cluster/backup00.ocr
rac1 2009/11/17 17:09:44 /u01/app/11.2.0/grid/cdata/rac-cluster/backup01.ocr
rac1 2009/11/17 13:09:43 /u01/app/11.2.0/grid/cdata/rac-cluster/backup02.ocr
rac1 2009/11/19 13:32:44 /u01/app/11.2.0/grid/cdata/rac-cluster/day.ocr
rac1 2009/11/17 13:09:43 /u01/app/11.2.0/grid/cdata/rac-cluster/week.ocr
rac1 2009/11/19 16:42:55 /u01/app/11.2.0/grid/cdata/rac-cluster/backup_20091119_164255.ocr
[root@rac1 bin]#

To wipe the CRS/voting disk (clear CRS and vote data, for example before a reinstall):
dd if=/dev/zero of=/dev/sdc1 bs=1024k count=25000

by old_bear 2009/12/31