OS environment: Oracle Linux 6.8 (64-bit)
DB environment: Oracle Database 11.2.0.4, 2-node RAC
Host names: rac1, rac2
Deleted node: rac3
Node to add back: rac3
Method:
In 11g R2 RAC, deleting a node only updates the OCR and the inventories on the remaining nodes.
The Grid software on the deleted node is not removed, and the deleted node's own inventory
(/oracle/app/oraInventory/ContentsXML/inventory.xml) is not updated.
Therefore, to add the deleted node back, there is no need to copy the Grid software to it again;
only the steps below are required. (A quick way to confirm this on rac3 is sketched right after this paragraph.)
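For example (a hedged sketch, not part of the original procedure), you can confirm on rac3 that the Grid home and its local inventory are still in place before re-adding the node:

# Sketch: on the previously deleted node, the Grid software and its (now stale) inventory should still exist
[oracle@rac3 ~]$ ls -d /oracle/app/11.2.0/grid
[oracle@rac3 ~]$ cat /oracle/app/oraInventory/ContentsXML/inventory.xml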
# Run addNode.sh without copying the software (the -noCopy option) from one of the existing nodes (e.g. rac1).
# Log in as the grid or oracle account.
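Optionally (a sketch, not shown in the original output), the node-add prerequisites can be verified up front with cluvfy from an existing node before running addNode.sh:

# Optional pre-check (sketch): run from an existing node as the Grid software owner
[oracle@rac1 ~]$ /oracle/app/11.2.0/grid/bin/cluvfy stage -pre nodeadd -n rac3 -fixup -verbose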
[oracle@rac1 ~]$ cd /oracle/app/11.2.0/grid/oui/bin
[oracle@rac1 bin]$ ./addNode.sh -silent -noCopy ORACLE_HOME=/oracle/app/11.2.0/grid "CLUSTER_NEW_NODES={rac3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3-vip}"

Performing pre-checks for node addition

Checking node reachability...
Node reachability check passed from node "rac1"

Checking user equivalence...
User equivalence check passed for user "oracle"

Checking CRS integrity...
Clusterware version consistency passed
CRS integrity check passed

Checking shared resources...
Checking CRS home location...
"/oracle/app/11.2.0/grid" is not shared
Shared resources check for node addition passed

Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
TCP connectivity check passed for subnet "192.168.137.0"

Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
TCP connectivity check passed for subnet "10.10.10.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.137.0".
Subnet mask consistency check passed for subnet "10.10.10.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...
Checking subnet "192.168.137.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.137.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "10.10.10.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.10.10.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "rac1:/oracle/app/11.2.0/grid"
Free disk space check passed for "rac3:/oracle/app/11.2.0/grid"
Free disk space check passed for "rac1:/tmp"
Free disk space check passed for "rac3:/tmp"
Check for multiple users with UID value 54321 passed
User existence check passed for "oracle"
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "binutils"
Package existence check passed for "compat-libcap1"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "gcc"
Package existence check passed for "gcc-c++"
Package existence check passed for "ksh"
Package existence check passed for "make"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Check for multiple users with UID value 0 passed
Current group ID check passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed

Checking OCR integrity...
OCR integrity check passed

Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
NTP Configuration file check passed

Checking daemon liveness...
Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes
NTP daemon slewing option check passed
NTP daemon's boot time configuration check for slewing option passed
NTP common Time Server Check started...
Check of common NTP Time Server passed
Clock time offset check from NTP Time Server started...
Clock time offset check passed
Clock synchronization check using Network Time Protocol(NTP) passed

User "oracle" is not part of "root" group. Check passed

Checking consistency of file "/etc/resolv.conf" across nodes
File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
The DNS response time for an unreachable node is within acceptable limit on all nodes
File "/etc/resolv.conf" is consistent across nodes

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed

Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.

Pre-check for node addition was successful.

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 7999 MB    Passed
Oracle Universal Installer, Version 11.2.0.4.0 Production
Copyright (C) 1999, 2013, Oracle. All rights reserved.

Performing tests to see whether nodes rac2,rac3 are available
............................................................... 100% Done.

..
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /oracle/app/11.2.0/grid
   New Nodes
Space Requirements
   New Nodes
      rac3
         /oracle: Required 4.59GB : Available 9.03GB
Installed Products
   Product Names
      Oracle Grid Infrastructure 11g 11.2.0.4.0
      Java Development Kit 1.5.0.51.10
      Installer SDK Component 11.2.0.4.0
      Oracle One-Off Patch Installer 11.2.0.3.4
      Oracle Universal Installer 11.2.0.4.0
      Oracle RAC Required Support Files-HAS 11.2.0.4.0
      Oracle USM Deconfiguration 11.2.0.4.0
      Oracle Configuration Manager Deconfiguration 10.3.1.0.0
      Enterprise Manager Common Core Files 10.2.0.4.5
      Oracle DBCA Deconfiguration 11.2.0.4.0
      Oracle RAC Deconfiguration 11.2.0.4.0
      Oracle Quality of Service Management (Server) 11.2.0.4.0
      Installation Plugin Files 11.2.0.4.0
      Universal Storage Manager Files 11.2.0.4.0
      Oracle Text Required Support Files 11.2.0.4.0
      Automatic Storage Management Assistant 11.2.0.4.0
      Oracle Database 11g Multimedia Files 11.2.0.4.0
      Oracle Multimedia Java Advanced Imaging 11.2.0.4.0
      Oracle Globalization Support 11.2.0.4.0
      Oracle Multimedia Locator RDBMS Files 11.2.0.4.0
      Oracle Core Required Support Files 11.2.0.4.0
      Bali Share 1.1.18.0.0
      Oracle Database Deconfiguration 11.2.0.4.0
      Oracle Quality of Service Management (Client) 11.2.0.4.0
      Expat libraries 2.0.1.0.1
      Oracle Containers for Java 11.2.0.4.0
      Perl Modules 5.10.0.0.1
      Secure Socket Layer 11.2.0.4.0
      Oracle JDBC/OCI Instant Client 11.2.0.4.0
      Oracle Multimedia Client Option 11.2.0.4.0
      LDAP Required Support Files 11.2.0.4.0
      Character Set Migration Utility 11.2.0.4.0
      Perl Interpreter 5.10.0.0.2
      PL/SQL Embedded Gateway 11.2.0.4.0
      OLAP SQL Scripts 11.2.0.4.0
      Database SQL Scripts 11.2.0.4.0
      Oracle Extended Windowing Toolkit 3.4.47.0.0
      SSL Required Support Files for InstantClient 11.2.0.4.0
      SQL*Plus Files for Instant Client 11.2.0.4.0
      Oracle Net Required Support Files 11.2.0.4.0
      Oracle Database User Interface 2.2.13.0.0
      RDBMS Required Support Files for Instant Client 11.2.0.4.0
      RDBMS Required Support Files Runtime 11.2.0.4.0
      XML Parser for Java 11.2.0.4.0
      Oracle Security Developer Tools 11.2.0.4.0
      Oracle Wallet Manager 11.2.0.4.0
      Enterprise Manager plugin Common Files 11.2.0.4.0
      Platform Required Support Files 11.2.0.4.0
      Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
      RDBMS Required Support Files 11.2.0.4.0
      Oracle Ice Browser 5.2.3.6.0
      Oracle Help For Java 4.2.9.0.0
      Enterprise Manager Common Files 10.2.0.4.5
      Deinstallation Tool 11.2.0.4.0
      Oracle Java Client 11.2.0.4.0
      Cluster Verification Utility Files 11.2.0.4.0
      Oracle Notification Service (eONS) 11.2.0.4.0
      Oracle LDAP administration 11.2.0.4.0
      Cluster Verification Utility Common Files 11.2.0.4.0
      Oracle Clusterware RDBMS Files 11.2.0.4.0
      Oracle Locale Builder 11.2.0.4.0
      Oracle Globalization Support 11.2.0.4.0
      Buildtools Common Files 11.2.0.4.0
      HAS Common Files 11.2.0.4.0
      SQL*Plus Required Support Files 11.2.0.4.0
      XDK Required Support Files 11.2.0.4.0
      Agent Required Support Files 10.2.0.4.5
      Parser Generator Required Support Files 11.2.0.4.0
      Precompiler Required Support Files 11.2.0.4.0
      Installation Common Files 11.2.0.4.0
      Required Support Files 11.2.0.4.0
      Oracle JDBC/THIN Interfaces 11.2.0.4.0
      Oracle Multimedia Locator 11.2.0.4.0
      Oracle Multimedia 11.2.0.4.0
      Assistant Common Files 11.2.0.4.0
      Oracle Net 11.2.0.4.0
      PL/SQL 11.2.0.4.0
      HAS Files for DB 11.2.0.4.0
      Oracle Recovery Manager 11.2.0.4.0
      Oracle Database Utilities 11.2.0.4.0
      Oracle Notification Service 11.2.0.3.0
      SQL*Plus 11.2.0.4.0
      Oracle Netca Client 11.2.0.4.0
      Oracle Advanced Security 11.2.0.4.0
      Oracle JVM 11.2.0.4.0
      Oracle Internet Directory Client 11.2.0.4.0
      Oracle Net Listener 11.2.0.4.0
      Cluster Ready Services Files 11.2.0.4.0
      Oracle Database 11g 11.2.0.4.0
-----------------------------------------------------------------------------
Instantiating scripts for add node (Friday, November 30, 2018 1:50:08 PM KST)
.                                                                 1% Done.
Instantiation of add node scripts complete

Saving inventory on nodes (Friday, November 30, 2018 1:50:14 PM KST)
.                                                               100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/oracle/app/11.2.0/grid/root.sh #On nodes rac3
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node

The Cluster Node Addition of /oracle/app/11.2.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.
# This step updates the inventories on the cluster nodes and instantiates scripts on the local node.
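To see the effect (a hedged sketch; the exact XML layout may differ), the node list recorded for the Grid home in the existing node's central inventory should now include rac3 again:

# Sketch: check the Grid home's node list in the central inventory on an existing node
[oracle@rac1 ~]$ grep -A5 'CRS="true"' /oracle/app/oraInventory/ContentsXML/inventory.xml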
# Run root.sh as the root user on the newly added node (rac3).
# If root.sh does not exist on the new node, copy it from one of the existing nodes (see the sketch below).
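A minimal sketch of that copy, using the Grid home path from the environment above:

# Sketch: copy root.sh from an existing node to the new node if it is missing there
[oracle@rac1 ~]$ scp /oracle/app/11.2.0/grid/root.sh rac3:/oracle/app/11.2.0/grid/root.sh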
[root@rac3 ~]# /oracle/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /oracle/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/app/11.2.0/grid/crs/install/crsconfig_params
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
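After root.sh completes on rac3, the node addition can also be verified with cluvfy (an optional sketch, not shown in the original post):

# Optional post-check (sketch): run from an existing node as the Grid software owner
[oracle@rac1 ~]$ /oracle/app/11.2.0/grid/bin/cluvfy stage -post nodeadd -n rac3 -verbose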
# Verify the added node.
[root@rac1 ~]# olsnodes
rac1
rac2
rac3

[oracle@rac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
               OFFLINE OFFLINE      rac3
ora.OCR_VOTE.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
               ONLINE  ONLINE       rac3
ora.ORADATA.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
               ONLINE  ONLINE       rac3
ora.ORAFRA.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
               ONLINE  ONLINE       rac3
ora.asm
               ONLINE  ONLINE       rac1                     Started
               ONLINE  ONLINE       rac2                     Started
               ONLINE  ONLINE       rac3                     Started
ora.gsd
               OFFLINE OFFLINE      rac1
               OFFLINE OFFLINE      rac2
               OFFLINE OFFLINE      rac3
ora.net1.network
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
               ONLINE  ONLINE       rac3
ora.ons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
               ONLINE  ONLINE       rac3
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1
ora.cvu
      1        ONLINE  ONLINE       rac2
ora.oc4j
      1        ONLINE  ONLINE       rac2
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.rac2.vip
      1        ONLINE  ONLINE       rac2
ora.rac3.vip
      1        ONLINE  ONLINE       rac3
ora.racdb.db
      1        ONLINE  ONLINE       rac1                     Open
      2        ONLINE  ONLINE       rac2                     Open
ora.scan1.vip
      1        ONLINE  ONLINE       rac1
# The ora.racdb.db resource has no instance 3.
# Add the instance as the grid or oracle account.
[oracle@rac1 ~]$ srvctl add instance -n rac3 -d racdb -i racdb3

[oracle@rac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
               OFFLINE OFFLINE      rac3
ora.OCR_VOTE.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
               ONLINE  ONLINE       rac3
ora.ORADATA.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
               ONLINE  ONLINE       rac3
ora.ORAFRA.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
               ONLINE  ONLINE       rac3
ora.asm
               ONLINE  ONLINE       rac1                     Started
               ONLINE  ONLINE       rac2                     Started
               ONLINE  ONLINE       rac3                     Started
ora.gsd
               OFFLINE OFFLINE      rac1
               OFFLINE OFFLINE      rac2
               OFFLINE OFFLINE      rac3
ora.net1.network
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
               ONLINE  ONLINE       rac3
ora.ons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
               ONLINE  ONLINE       rac3
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1
ora.cvu
      1        ONLINE  ONLINE       rac2
ora.oc4j
      1        ONLINE  ONLINE       rac2
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.rac2.vip
      1        ONLINE  ONLINE       rac2
ora.rac3.vip
      1        ONLINE  ONLINE       rac3
ora.racdb.db
      1        ONLINE  ONLINE       rac1                     Open
      2        ONLINE  ONLINE       rac2                     Open
      3        OFFLINE OFFLINE
ora.scan1.vip
      1        ONLINE  ONLINE       rac1
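To double-check the registration (a sketch; the exact output will vary by configuration), srvctl can list the instances now defined for the database:

# Sketch: confirm that racdb3 is registered for the racdb database and check its state
[oracle@rac1 ~]$ srvctl config database -d racdb -a
[oracle@rac1 ~]$ srvctl status database -d racdb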
# Instance 3 of ora.racdb.db now appears and is shown as OFFLINE.
# Enable and then start the database instance on node 3.
[oracle@rac1 ~]$ srvctl enable instance -d racdb -i racdb3
[oracle@rac1 ~]$ srvctl start instance -d racdb -i racdb3

[oracle@rac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
               ONLINE  ONLINE       rac3
ora.OCR_VOTE.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
               ONLINE  ONLINE       rac3
ora.ORADATA.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
               ONLINE  ONLINE       rac3
ora.ORAFRA.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
               ONLINE  ONLINE       rac3
ora.asm
               ONLINE  ONLINE       rac1                     Started
               ONLINE  ONLINE       rac2                     Started
               ONLINE  ONLINE       rac3                     Started
ora.gsd
               OFFLINE OFFLINE      rac1
               OFFLINE OFFLINE      rac2
               OFFLINE OFFLINE      rac3
ora.net1.network
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
               ONLINE  ONLINE       rac3
ora.ons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
               ONLINE  ONLINE       rac3
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1
ora.cvu
      1        ONLINE  ONLINE       rac2
ora.oc4j
      1        ONLINE  ONLINE       rac2
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.rac2.vip
      1        ONLINE  ONLINE       rac2
ora.rac3.vip
      1        ONLINE  ONLINE       rac3
ora.racdb.db
      1        ONLINE  ONLINE       rac1                     Open
      2        ONLINE  ONLINE       rac2                     Open
      3        ONLINE  ONLINE       rac3                     Open
ora.scan1.vip
      1        ONLINE  ONLINE       rac1
# The instance now shows as ONLINE and Open, as expected.
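As a final check (a sketch, not part of the original post), all three instances should also report as OPEN in gv$instance:

# Sketch: verify from SQL*Plus that instances 1-3 are all open
[oracle@rac1 ~]$ sqlplus -S / as sysdba <<'EOF'
set lines 120
col host_name for a15
select inst_id, instance_name, host_name, status from gv$instance order by inst_id;
EOF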
# Node 1 alert log
Fri Nov 30 14:03:58 2018
Reconfiguration started (old inc 7, new inc 9)
List of instances:
 1 2 3 (myinst: 1)
 Global Resource Directory frozen
 Communication channels reestablished
 Master broadcasted resource hash value bitmaps
 Non-local Process blocks cleaned out
Fri Nov 30 14:03:58 2018
 LMS 0: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
 Set master node info
 Submitted all remote-enqueue requests
 Dwn-cvts replayed, VALBLKs dubious
 All grantable enqueues granted
 Submitted all GCS remote-cache requests
 Fix write in gcs resources
Reconfiguration complete
# Node 2 alert log
Fri Nov 30 14:03:58 2018
Reconfiguration started (old inc 7, new inc 9)
List of instances:
 1 2 3 (myinst: 2)
 Global Resource Directory frozen
 Communication channels reestablished
Fri Nov 30 14:03:58 2018
 * domain 0 valid = 1 according to instance 1
 Master broadcasted resource hash value bitmaps
 Non-local Process blocks cleaned out
Fri Nov 30 14:03:58 2018
 LMS 0: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
 Set master node info
 Submitted all remote-enqueue requests
 Dwn-cvts replayed, VALBLKs dubious
 All grantable enqueues granted
 Submitted all GCS remote-cache requests
 Fix write in gcs resources
Reconfiguration complete
Fri Nov 30 14:04:00 2018
minact-scn: Master returning as live inst:3 has inc# mismatch instinc:0 cur:9 errcnt:0
# Node 3 alert log
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Initial number of CPU is 1
Number of processor cores in the system is 1
Number of processor sockets in the system is 1
Private Interface 'eth1:1' configured from GPnP for use as a private interconnect.
  [name='eth1:1', type=1, ip=169.254.231.66, mac=00-50-56-26-c4-ff, net=169.254.0.0/16, mask=255.255.0.0, use=haip:cluster_interconnect/62]
Public Interface 'eth0' configured from GPnP for use as a public interface.
  [name='eth0', type=1, ip=192.168.137.52, mac=00-50-56-25-92-28, net=192.168.137.0/24, mask=255.255.255.0, use=public/1]
Public Interface 'eth0:1' configured from GPnP for use as a public interface.
  [name='eth0:1', type=1, ip=192.168.137.55, mac=00-50-56-25-92-28, net=192.168.137.0/24, mask=255.255.255.0, use=public/1]
CELL communication is configured to use 0 interface(s):
CELL IP affinity details:
    NUMA status: non-NUMA system
    cellaffinity.ora status: N/A
CELL communication will use 1 IP group(s):
    Grp 0:
Picked latch-free SCN scheme 3
Using LOG_ARCHIVE_DEST_1 parameter default value as USE_DB_RECOVERY_FILE_DEST
Autotune of undo retention is turned on.
LICENSE_MAX_USERS = 0
SYS auditing is disabled
Starting up:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options.
ORACLE_HOME = /oracle/app/oracle/product/11.2.0/db_1
System name:    Linux
Node name:      rac3
Release:        4.1.12-124.16.4.el6uek.x86_64
Version:        #2 SMP Thu Jun 14 18:55:52 PDT 2018
Machine:        x86_64
VM name:        VMWare Version: 6
Using parameter settings in server-side pfile /oracle/app/oracle/product/11.2.0/db_1/dbs/initracdb3.ora
System parameters with non-default values:
  processes                 = 150
  spfile                    = "+ORADATA/racdb/spfileracdb.ora"
  memory_target             = 1584M
  control_files             = "+ORADATA/racdb/controlfile/current.256.993489059"
  control_files             = "+ORAFRA/racdb/controlfile/current.256.993489061"
  db_block_size             = 8192
  compatible                = "11.2.0.4.0"
  cluster_database          = TRUE
  db_create_file_dest       = "+ORADATA"
  db_recovery_file_dest     = "+ORAFRA"
  db_recovery_file_dest_size= 6627M
  thread                    = 3
  undo_tablespace           = "UNDOTBS3"
  instance_number           = 3
  remote_login_passwordfile = "EXCLUSIVE"
  db_domain                 = ""
  remote_listener           = "rac-scan:1521"
  audit_file_dest           = "/oracle/app/oracle/admin/racdb/adump"
  audit_trail               = "DB"
  db_name                   = "racdb"
  open_cursors              = 300
  diagnostic_dest           = "/oracle/app/oracle"
Cluster communication is configured to use the following interface(s) for this instance
  169.254.231.66
cluster interconnect IPC version:Oracle UDP/IP (generic)
IPC Vendor 1 proto 2
Fri Nov 30 14:03:54 2018
PMON started with pid=2, OS id=35664
Fri Nov 30 14:03:54 2018
PSP0 started with pid=3, OS id=35666
Fri Nov 30 14:03:55 2018
VKTM started with pid=4, OS id=35677 at elevated priority
VKTM running at (1)millisec precision with DBRM quantum (100)ms
Fri Nov 30 14:03:55 2018
GEN0 started with pid=5, OS id=35681
Fri Nov 30 14:03:55 2018
DIAG started with pid=6, OS id=35683
Fri Nov 30 14:03:55 2018
DBRM started with pid=7, OS id=35685
Fri Nov 30 14:03:56 2018
PING started with pid=8, OS id=35688
Fri Nov 30 14:03:56 2018
ACMS started with pid=9, OS id=35690
Fri Nov 30 14:03:56 2018
DIA0 started with pid=10, OS id=35693
Fri Nov 30 14:03:56 2018
LMON started with pid=11, OS id=35696
Fri Nov 30 14:03:56 2018
LMD0 started with pid=12, OS id=35698
* Load Monitor used for high load check
* New Low - High Load Threshold Range = [960 - 1280]
Fri Nov 30 14:03:56 2018
LMS0 started with pid=13, OS id=35700 at elevated priority
Fri Nov 30 14:03:56 2018
RMS0 started with pid=14, OS id=35704
Fri Nov 30 14:03:56 2018
LMHB started with pid=15, OS id=35706
Fri Nov 30 14:03:56 2018
MMAN started with pid=16, OS id=35708
Fri Nov 30 14:03:56 2018
DBW0 started with pid=17, OS id=35710
Fri Nov 30 14:03:56 2018
LGWR started with pid=18, OS id=35712
Fri Nov 30 14:03:56 2018
CKPT started with pid=19, OS id=35714
Fri Nov 30 14:03:56 2018
SMON started with pid=20, OS id=35716
Fri Nov 30 14:03:56 2018
RECO started with pid=21, OS id=35718
Fri Nov 30 14:03:56 2018
RBAL started with pid=22, OS id=35720
Fri Nov 30 14:03:56 2018
ASMB started with pid=23, OS id=35722
Fri Nov 30 14:03:56 2018
MMON started with pid=24, OS id=35724
Fri Nov 30 14:03:56 2018
MMNL started with pid=25, OS id=35728
NOTE: initiating MARK startup
Starting background process MARK
Fri Nov 30 14:03:57 2018
MARK started with pid=26, OS id=35730
lmon registered with NM - instance number 3 (internal mem no 2)
NOTE: MARK has subscribed
Reconfiguration started (old inc 0, new inc 9)
List of instances:
 1 2 3 (myinst: 3)
 Global Resource Directory frozen
* allocate domain 0, invalid = TRUE
 Communication channels reestablished
* domain 0 valid = 1 according to instance 1
 Master broadcasted resource hash value bitmaps
 Non-local Process blocks cleaned out
 LMS 0: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
 Set master node info
 Submitted all remote-enqueue requests
 Dwn-cvts replayed, VALBLKs dubious
 All grantable enqueues granted
 Submitted all GCS remote-cache requests
 Fix write in gcs resources
Reconfiguration complete
Fri Nov 30 14:03:58 2018
LCK0 started with pid=28, OS id=35736
Starting background process RSMN
Fri Nov 30 14:03:58 2018
RSMN started with pid=29, OS id=35738
ORACLE_BASE not set in environment. It is recommended
that ORACLE_BASE be set in the environment
Fri Nov 30 14:03:59 2018
ALTER SYSTEM SET local_listener=' (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.137.55)(PORT=1521))' SCOPE=MEMORY SID='racdb3';
ALTER DATABASE MOUNT /* db agent *//* {1:46942:425} */
NOTE: Loaded library: /opt/oracle/extapi/64/asm/orcl/1/libasm.so
NOTE: Loaded library: System
SUCCESS: diskgroup ORADATA was mounted
SUCCESS: diskgroup ORAFRA was mounted
NOTE: dependency between database racdb and diskgroup resource ora.ORADATA.dg is established
NOTE: dependency between database racdb and diskgroup resource ora.ORAFRA.dg is established
Successful mount of redo thread 3, with mount id 991206553
Database mounted in Shared Mode (CLUSTER_DATABASE=TRUE)
Lost write protection disabled
Completed: ALTER DATABASE MOUNT /* db agent *//* {1:46942:425} */
ALTER DATABASE OPEN /* db agent *//* {1:46942:425} */
Picked broadcast on commit scheme to generate SCNs
Thread 3 opened at log sequence 3
  Current log# 5 seq# 3 mem# 0: +ORADATA/racdb/onlinelog/group_5.266.993490003
  Current log# 5 seq# 3 mem# 1: +ORAFRA/racdb/onlinelog/group_5.259.993490005
Successful open of redo thread 3
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
SMON: enabling cache recovery
[35740] Successfully onlined Undo Tablespace 5.
Undo initialization finished serial:0 start:11722354 end:11722694 diff:340 (3 seconds)
Verifying file header compatibility for 11g tablespace encryption..
Verifying 11g file header compatibility for tablespace encryption completed
Fri Nov 30 14:04:07 2018
SMON: enabling tx recovery
Database Characterset is KO16MSWIN949
No Resource Manager plan active
Starting background process GTX0
Fri Nov 30 14:04:08 2018
GTX0 started with pid=32, OS id=35760
Starting background process RCBG
Fri Nov 30 14:04:08 2018
RCBG started with pid=33, OS id=35762
replication_dependency_tracking turned off (no async multimaster replication found)
Fri Nov 30 14:04:09 2018
minact-scn: Inst 3 is a slave inc#:9 mmon proc-id:35724 status:0x2
minact-scn status: grec-scn:0x0000.00000000 gmin-scn:0x0000.00000000 gcalc-scn:0x0000.00000000
Starting background process QMNC
Fri Nov 30 14:04:09 2018
QMNC started with pid=34, OS id=35764
Fri Nov 30 14:04:10 2018
Completed: ALTER DATABASE OPEN /* db agent *//* {1:46942:425} */
Fri Nov 30 14:09:11 2018
Starting background process SMCO
Fri Nov 30 14:09:11 2018
SMCO started with pid=27, OS id=36222
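For reference, the alert logs shown above can be followed live during the node addition; the path below is a sketch that assumes the default ADR layout under the diagnostic_dest shown in the pfile output (not stated explicitly in the original post):

# Sketch: tail the new instance's alert log while it joins the cluster
[oracle@rac3 ~]$ tail -f /oracle/app/oracle/diag/rdbms/racdb/racdb3/trace/alert_racdb3.log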
Reference: http://oracleinaction.com/add-back-deleted-node/