Adding and Removing Nodes in a Virtual Machine RAC


I. Environment Preparation

1. Clone a third virtual machine

PS C:\Users\user> VBoxManage clonehd C:\VM\rac2\rac2.vdi C:\VM\rac3\rac3.vdi
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Clone hard disk created in format 'VDI'. UUID: 5a46d865-10ce-43a9-97af-ea0ea5a517c8

Switch to the VirtualBox Manager window and create a new virtual machine: click Next, enter the virtual machine name and the corresponding operating system, and click Next. Set the memory to 1024 MB and click Next. Choose to use an existing virtual disk (the cloned rac3.vdi), then click Next and click Create to finish. Adjust the remaining settings, such as the network adapters, to match rac1 and rac2.

2. Attach the shared disks

PS C:\Users\user> cd C:\VM\share_disk
PS C:\VM\share_disk> dir

    Directory: C:\VM\share_disk

Mode    LastWriteTime        Length      Name
----    -------------        ------      ----
-a---   2012/10/18 10:51     4294987776  datafile.vdi
-a---   2012/10/18 10:51     314580992   ocr.vdi
-a---   2012/10/18 10:51     314580992   ocr2.vdi
-a---   2012/10/18 10:51     1073750016  otherfile.vdi
-a---   2012/10/18 10:51     314580992   votingdisk.vdi
-a---   2012/10/18 10:51     314580992   votingdisk2.vdi

PS C:\VM\share_disk> VBoxManage storageattach rac3 --storagectl "SATA 控制器" --port 1 --device 0 --type hdd --medium votingdisk.vdi --mtype shareable
PS C:\VM\share_disk> VBoxManage storageattach rac3 --storagectl "SATA 控制器" --port 2 --device 0 --type hdd --medium ocr.vdi --mtype shareable
PS C:\VM\share_disk> VBoxManage storageattach rac3 --storagectl "SATA 控制器" --port 3 --device 0 --type hdd --medium datafile.vdi --mtype shareable
PS C:\VM\share_disk> VBoxManage storageattach rac3 --storagectl "SATA 控制器" --port 4 --device 0 --type hdd --medium otherfile.vdi --mtype shareable
PS C:\VM\share_disk> VBoxManage storageattach rac3 --storagectl "SATA 控制器" --port 5 --device 0 --type hdd --medium ocr2.vdi --mtype shareable
PS C:\VM\share_disk> VBoxManage storageattach rac3 --storagectl "SATA 控制器" --port 6 --device 0 --type hdd --medium votingdisk2.vdi --mtype shareable

After the disks are attached, verify the storage configuration in the VirtualBox Manager.

3. Start virtual machine rac3, change its hostname to rac3, and update the NIC IP addresses.

4. Update the hosts file on rac1 and rac2 with the following content:

# Public
192.168.56.101 rac1 rac1
192.168.56.102 rac2 rac2
192.168.56.103 rac3 rac3
# Private
10.0.0.101 rac1-priv rac1-priv
10.0.0.102 rac2-priv rac2-priv
10.0.0.103 rac3-priv rac3-priv
# Virtual
192.168.56.111 rac1-vip rac1-vip
192.168.56.112 rac2-vip rac2-vip
192.168.56.113 rac3-vip rac3-vip

5. Establish SSH connectivity among rac1, rac2, and rac3. Both the oracle and root users must be configured.

[root@rac3 .ssh]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
/root/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
b8:93:41:28:04:48:99:c4:2f:d1:f8:5c:a1:b6:9c:14 root@rac3
[root@rac3 .ssh]# ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/root/.ssh/id_dsa):
/root/.ssh/id_dsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
5e:6f:81:e5:c7:fc:45:ef:d1:e8:75:85:38:c2:cb:46 root@rac3
[root@rac3 .ssh]# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[root@rac3 .ssh]# cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[root@rac3 .ssh]# scp ~/.ssh/authorized_keys rac1:~/.ssh/authorized_keys
root@rac1's password:
authorized_keys                          100% 3361     3.3KB/s
[root@rac3 .ssh]# scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys
root@rac2's password:
authorized_keys                          100% 3361     3.3KB/s   00:00
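Before moving on, it is worth confirming that password-less SSH now works in every direction, because both OUI and cluvfy will fail their user-equivalence checks otherwise. A minimal verification loop (a sketch, not from the original; run it as both root and oracle from each of the three nodes, using the names defined in the hosts file above):

for node in rac1 rac2 rac3 rac1-priv rac2-priv rac3-priv; do
  # Each command should print the remote hostname without prompting
  # for a password; the first pass also seeds known_hosts.
  ssh -o StrictHostKeyChecking=no $node hostname
done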
6. Modify the time synchronization script so its content is as follows:

#!/bin/sh
# This script must be run with superuser privileges.
date=`date +%Y-%m-%d`
time=`date +%H:%M:%S`
echo $date
echo $time
# Push the local date and time to all three nodes and write the
# result back to each node's hardware clock.
for i in rac1 rac2 rac3
do
  ssh $i date -s $date > /dev/null 2>&1
  ssh $i date -s $time > /dev/null 2>&1
  ssh $i /sbin/clock -w > /dev/null 2>&1
done
exit

Verify that the clocks are in sync:

[root@rac1 ~]# ssh rac1 date;ssh rac2 date;ssh rac3 date
Fri Oct 19 20:50:42 CST 2012
Fri Oct 19 20:50:41 CST 2012
Fri Oct 19 20:50:40 CST 2012

7. Add raw devices for the third node's undo and redo

PS C:\VM\share_disk> VBoxManage createhd --filename datafile2.vdi --size 2048 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: a584a81f-ecab-4244-b0b6-d211a62d9a5d
PS C:\VM\share_disk> VBoxManage storageattach rac1 --storagectl "SATA 控制器" --port 7 --device 0 --type hdd --medium datafile2.vdi --mtype shareable
PS C:\VM\share_disk> VBoxManage storageattach rac2 --storagectl "SATA 控制器" --port 7 --device 0 --type hdd --medium datafile2.vdi --mtype shareable
PS C:\VM\share_disk> VBoxManage storageattach rac3 --storagectl "SATA 控制器" --port 7 --device 0 --type hdd --medium datafile2.vdi --mtype shareable
PS C:\VM\share_disk> VBoxManage modifyhd datafile2.vdi --type shareable

The new disk is partitioned and mapped as follows:

Name     Partition    Size   Raw device       Symbolic link
redo31   /dev/sdh1    100M   /dev/raw/raw17   /home/db/oracle/oradata/mbs/redo31
redo32   /dev/sdh2    100M   /dev/raw/raw18   /home/db/oracle/oradata/mbs/redo32
undo03   /dev/sdh3    300M   /dev/raw/raw19   /home/db/oracle/oradata/mbs/undo03_01.dbf

Start the three virtual machines, partition the shared disk, create the raw-device bindings and symbolic links, and so on.
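The original does not show those commands, so here is a sketch of what that step typically looks like, assuming the /dev/sdh device name and raw-device numbers from the table above (adjust to your environment). Partition on one node only; bind and link on all three:

# On one node only: create the three partitions listed above (interactive).
fdisk /dev/sdh

# On every node: bind the partitions to raw devices. To survive a
# reboot, also record these mappings in /etc/sysconfig/rawdevices
# (RHEL/OEL 5) before restarting the rawdevices service.
raw /dev/raw/raw17 /dev/sdh1
raw /dev/raw/raw18 /dev/sdh2
raw /dev/raw/raw19 /dev/sdh3
chown oracle:oinstall /dev/raw/raw17 /dev/raw/raw18 /dev/raw/raw19

# On every node: create the symbolic links the database will use.
ln -s /dev/raw/raw17 /home/db/oracle/oradata/mbs/redo31
ln -s /dev/raw/raw18 /home/db/oracle/oradata/mbs/redo32
ln -s /dev/raw/raw19 /home/db/oracle/oradata/mbs/undo03_01.dbf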
II.

1. Pre-check

oracle@rac1 [/home/db/oracle/product/10.2.0/crs/bin]
./cluvfy stage -pre crsinst -n rac1,rac2,rac3 -r 10gR2

Performing pre-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "rac1".

Checking user equivalence...
User equivalence check passed for user "oracle".

Checking administrative privileges...
User existence check passed for "oracle".
Group existence check passed for "oinstall".
Membership check for user "oracle" in group "oinstall" [as Primary] passed.
Administrative privileges check passed.

Checking node connectivity...
Node connectivity check passed for subnet "192.168.56.0" with node(s) rac2,rac1,rac3.
Node connectivity check passed for subnet "10.0.0.0" with node(s) rac2,rac1,rac3.
Suitable interfaces for the private interconnect on subnet "192.168.56.0":
rac2 eth0:192.168.56.102 eth0:192.168.56.112
rac1 eth0:192.168.56.101 eth0:192.168.56.111
rac3 eth0:192.168.56.103
Suitable interfaces for the private interconnect on subnet "10.0.0.0":
rac2 eth1:10.0.0.102
rac1 eth1:10.0.0.101
rac3 eth1:10.0.0.103

ERROR:
Could not find a suitable set of interfaces for VIPs.

Node connectivity check failed.

Checking system requirements for 'crs'...
Total memory check failed.
Check failed on nodes: rac2,rac1,rac3
Free disk space check passed.
Swap space check passed.
System architecture check passed.
Kernel version check passed.
Package existence check passed for "binutils-2.17.50.0.6-2.el5".
Package existence check passed for "control-center-2.16.0-14.el5".
Package existence check passed for "gcc-4.1.1-52".
Package existence check passed for "glibc-2.5-12".
Package existence check passed for "glibc-common-2.5-12".
Package existence check passed for "libstdc++-4.1.1-52.el5".
Package existence check passed for "libstdc++-devel-4.1.1-52.el5".
Package existence check passed for "make-3.81-1.1".
Package existence check failed for "sysstat-7.0.0-3.el5".
Check failed on nodes: rac2,rac1,rac3
Package existence check passed for "setarch-2.0-1.1".
Group existence check passed for "dba".
Group existence check passed for "oinstall".
User existence check passed for "nobody".
Hard resource limit check passed for "open file descriptors".
Soft resource limit check passed for "open file descriptors".
Hard resource limit check passed for "maximum user processes".
Soft resource limit check passed for "maximum user processes".
System requirement failed for 'crs'

Pre-check for cluster services setup was unsuccessful on all the nodes.

Despite these failures (total memory, the sysstat package, and the VIP interface error), the procedure continues below.

2. Run the addNode.sh script

oracle@rac1 [/home/db/oracle/product/10.2.0/crs/oui/bin]
./addNode.sh
Starting Oracle Universal Installer...

No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
Oracle Universal Installer, Version 10.2.0.5.0 Production
Copyright (C) 1999, 2010, Oracle. All rights reserved.

Click Next, enter rac3 as the public node name, click Next again, then click Install. When prompted, run the rootaddnode.sh script as root on rac1:

[root@rac1 ~]# /home/db/oracle/product/10.2.0/crs/install/rootaddnode.sh
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Attempting to add 1 new nodes to the configuration
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 3: rac3 rac3-priv rac3
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
/home/db/oracle/product/10.2.0/crs/bin/srvctl add nodeapps -n rac3 -A rac3-vip/255.255.255.0/eth0 -o /home/db/oracle/product/10.2.0/crs

Then run root.sh on rac3:

[root@rac3 10.2.0]# /home/db/oracle/product/10.2.0/crs/root.sh
WARNING: directory '/home/db/oracle/product/10.2.0' is not owned by root
WARNING: directory '/home/db/oracle/product' is not owned by root
WARNING: directory '/home/db/oracle' is not owned by root
No value set for the CRS parameter CRS_OCR_LOCATIONS. Using Values in paramfile.crs
Checking to see if Oracle CRS stack is already configured
OCR LOCATIONS = /dev/raw/raw2
OCR backup directory '/home/db/oracle/product/10.2.0/crs/cdata/crs' does not exist. Creating now
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/home/db/oracle/product/10.2.0' is not owned by root
WARNING: directory '/home/db/oracle/product' is not owned by root
WARNING: directory '/home/db/oracle' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac1
rac2
rac3
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
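With the CRS stack up on all three nodes, a quick sanity check is possible before touching the database home. Both commands below are standard 10gR2 CRS tools (not shown in the original): olsnodes -n lists the cluster members with their node numbers, and the post-crsinst stage re-runs cluster verification against the new membership.

oracle@rac1 [/home/db/oracle/product/10.2.0/crs/bin]
./olsnodes -n

oracle@rac1 [/home/db/oracle/product/10.2.0/crs/bin]
./cluvfy stage -post crsinst -n rac1,rac2,rac3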
Check the result with crs_stat -t:

[root@rac1 ~]# /home/db/oracle/product/10.2.0/crs/bin/crs_stat -t
Name           Type        Target    State     Host
------------------------------------------------------------
appvip1        application OFFLINE   OFFLINE
ora.mbs.db     application ONLINE    ONLINE    rac1
ora....s1.inst application ONLINE    ONLINE    rac1
ora....s2.inst application ONLINE    ONLINE    rac2
ora....C1.lsnr application ONLINE    ONLINE    rac1
ora.rac1.gsd   application ONLINE    ONLINE    rac1
ora.rac1.ons   application ONLINE    ONLINE    rac1
ora.rac1.vip   application ONLINE    ONLINE    rac1
ora....C2.lsnr application ONLINE    ONLINE    rac2
ora.rac2.gsd   application ONLINE    ONLINE    rac2
ora.rac2.ons   application ONLINE    ONLINE    rac2
ora.rac2.vip   application ONLINE    ONLINE    rac2
ora.rac3.gsd   application ONLINE    ONLINE    rac3
ora.rac3.ons   application ONLINE    ONLINE    rac3
ora.rac3.vip   application ONLINE    ONLINE    rac3

3. Configure ONS

[root@rac1 bin]# cat /home/db/oracle/product/10.2.0/crs/opmn/conf/ons.config
localport=6113
remoteport=6200
loglevel=3
useocr=on
[root@rac1 bin]# ./racgons add_config rac3:6200
[root@rac1 bin]#

4. Add the RAC database home to the new node

oracle@rac1 [/home/db/oracle/product/10.2.0/db/oui/bin]
./addNode.sh
Starting Oracle Universal Installer...

No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
Oracle Universal Installer, Version 10.2.0.5.0 Production
Copyright (C) 1999, 2010, Oracle. All rights reserved.

Click Next twice, then click Install to start copying files to the remote node. When prompted, run the root.sh script as root on rac3, then click OK:

[root@rac3 crs]# /home/db/oracle/product/10.2.0/db/root.sh
Running Oracle 10g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /home/db/oracle/product/10.2.0/db

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
    Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
    Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
    Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.

Click Exit to close the installer.

5. Create a listener for the new node

oracle@rac1 [/home/db/oracle/product/10.2.0/db/oui/bin] cd $ORACLE_HOME/bin
oracle@rac1 [/home/db/oracle/product/10.2.0/db/bin] netca

Oracle Net Services Configuration:

After netca completes, crs_stat -t shows the new listener resource online on rac3:

[root@rac3 crs]# /home/db/oracle/product/10.2.0/crs/bin/crs_stat -t
Name           Type        Target    State     Host
------------------------------------------------------------
appvip1        application OFFLINE   OFFLINE
ora.mbs.db     application ONLINE    ONLINE    rac1
ora....s1.inst application ONLINE    ONLINE    rac1
ora....s2.inst application ONLINE    ONLINE    rac2
ora....C1.lsnr application ONLINE    ONLINE    rac1
ora.rac1.gsd   application ONLINE    ONLINE    rac1
ora.rac1.ons   application ONLINE    ONLINE    rac1
ora.rac1.vip   application ONLINE    ONLINE    rac1
ora....C2.lsnr application ONLINE    ONLINE    rac2
ora.rac2.gsd   application ONLINE    ONLINE    rac2
ora.rac2.ons   application ONLINE    ONLINE    rac2
ora.rac2.vip   application ONLINE    ONLINE    rac2
ora....C3.lsnr application ONLINE    ONLINE    rac3
ora.rac3.gsd   application ONLINE    ONLINE    rac3
ora.rac3.ons   application ONLINE    ONLINE    rac3
ora.rac3.vip   application ONLINE    ONLINE    rac3
[root@rac3 crs]#
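The node applications on rac3 can also be checked with srvctl, which reports the VIP, GSD, ONS daemon, and listener status for a given node:

oracle@rac1 [/home/db/oracle/product/10.2.0/db/bin]
srvctl status nodeapps -n rac3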
6. Add a database instance to the new node

Run dbca on rac1: click Next, select Instance Management and click Next, select Add an instance and click Next, enter the sys user name and password and click Next, then click Next again.
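For reference, the same step can also be run without the GUI through dbca's silent mode. This is a sketch, not taken from the original: the instance name mbs3 follows the mbs1/mbs2 pattern visible in crs_stat, and the password is a placeholder.

oracle@rac1 [/home/db/oracle/product/10.2.0/db/bin]
dbca -silent -addInstance -nodeList rac3 -gdbName mbs -instanceName mbs3 \
     -sysDBAUserName sys -sysDBAPassword <password>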