
OS environment: Oracle Linux 6.8 (64bit)


DB environment: Oracle Database 11.2.0.4, 3-node RAC

Host names: rac1, rac2, rac3

Node to delete: rac3


Procedure:

Compared with 10g RAC and 11g R1 RAC, 11g R2 RAC greatly simplifies the process of deleting a node from the cluster.

The procedure for deleting a node from a 10g RAC or 11g R1 RAC cluster is covered in a separate post.

With the introduction of SCAN and GPnP, a node can now be deleted in a few simple steps.
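
# (Suggested pre-check, not part of the original steps) In 11g R2 a node can only be deleted while it is unpinned; the commands below are illustrative and are run from a surviving node. Unpin only if olsnodes reports rac3 as Pinned.

[oracle@rac1 ~]$ olsnodes -s -t               # show node status and pinned/unpinned state
[root@rac1 ~]# crsctl unpin css -n rac3       # only needed if rac3 is reported as Pinned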



# Log in as root on the node to be deleted (rac3) and change to $GRID_HOME/crs/install.

[root@rac3 ~]# cd /oracle/app/11.2.0/grid/crs/install/

 


# Deconfigure the Oracle Clusterware applications and daemons running on the node to be deleted (rac3).

[root@rac3 install]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network exists: 1/192.168.137.0/255.255.255.0/eth0, type static
VIP exists: /rac1-vip/192.168.137.53/192.168.137.0/255.255.255.0/eth0, hosting node rac1
VIP exists: /rac2-vip/192.168.137.54/192.168.137.0/255.255.255.0/eth0, hosting node rac2
VIP exists: /rac3-vip/192.168.137.55/192.168.137.0/255.255.255.0/eth0, hosting node rac3
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac3'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac3'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac3'
CRS-2673: Attempting to stop 'ora.OCR_VOTE.dg' on 'rac3'
CRS-2673: Attempting to stop 'ora.racdb.db' on 'rac3'
CRS-2677: Stop of 'ora.racdb.db' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.ORADATA.dg' on 'rac3'
CRS-2673: Attempting to stop 'ora.ORAFRA.dg' on 'rac3'
CRS-2677: Stop of 'ora.ORADATA.dg' on 'rac3' succeeded
CRS-2677: Stop of 'ora.ORAFRA.dg' on 'rac3' succeeded
CRS-2677: Stop of 'ora.OCR_VOTE.dg' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac3'
CRS-2677: Stop of 'ora.asm' on 'rac3' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac3' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac3'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac3'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac3'
CRS-2673: Attempting to stop 'ora.asm' on 'rac3'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac3'
CRS-2677: Stop of 'ora.crf' on 'rac3' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac3' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac3' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac3' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac3'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac3'
CRS-2677: Stop of 'ora.cssd' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac3'
CRS-2677: Stop of 'ora.gipcd' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac3'
CRS-2677: Stop of 'ora.gpnpd' on 'rac3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
Successfully deconfigured Oracle clusterware stack on this node

# The Oracle Clusterware stack has been successfully deconfigured on this node.
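
# (Optional check, not in the original output) After the deconfig, the stack on rac3 can be confirmed to be down; crsctl should report that it cannot communicate with the clusterware daemons.

[root@rac3 ~]# crsctl check crs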


# Node 1 alert log

Fri Nov 30 10:57:31 2018
Reconfiguration started (old inc 5, new inc 7)
List of instances:
 1 2 (myinst: 1)
 Global Resource Directory frozen
 * dead instance detected - domain 0 invalid = TRUE 
 Communication channels reestablished
Fri Nov 30 10:57:31 2018
 * domain 0 not valid according to instance 2 
 Master broadcasted resource hash value bitmaps
 Non-local Process blocks cleaned out
Fri Nov 30 10:57:31 2018
 LMS 00 GCS shadows cancelled, 0 closed, 0 Xw survived
 Set master node info 
 Submitted all remote-enqueue requests
 Dwn-cvts replayed, VALBLKs dubious
 All grantable enqueues granted
 Submitted all GCS remote-cache requests
 Fix write in gcs resources
Reconfiguration complete
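
# (For reference; the path below is an assumption about this environment) The surviving instances record the reconfiguration in their alert logs. With the default 11g ADR layout the log can be followed as shown; the exact path depends on ORACLE_BASE and the DB/instance names.

[oracle@rac1 ~]$ tail -f $ORACLE_BASE/diag/rdbms/racdb/racdb1/trace/alert_racdb1.log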


# Node 2 alert log

Fri Nov 30 10:57:31 2018
Reconfiguration started (old inc 5, new inc 7)
List of instances:
 1 2 (myinst: 2)
 Global Resource Directory frozen
 * dead instance detected - domain 0 invalid = TRUE 
 Communication channels reestablished
Fri Nov 30 10:57:31 2018
 * domain 0 valid = 0 according to instance 1 
 Master broadcasted resource hash value bitmaps
 Non-local Process blocks cleaned out
Fri Nov 30 10:57:31 2018
 LMS 00 GCS shadows cancelled, 0 closed, 0 Xw survived
 Set master node info 
 Submitted all remote-enqueue requests
 Dwn-cvts replayed, VALBLKs dubious
 All grantable enqueues granted
 Post SMON to start 1st pass IR
 Submitted all GCS remote-cache requests
 Post SMON to start 1st pass IR
 Fix write in gcs resources
Fri Nov 30 10:57:31 2018
Instance recovery: looking for dead threads
Reconfiguration complete
Beginning instance recovery of 1 threads
Started redo scan
Completed redo scan
 read 0 KB redo, 0 data blocks need recovery
Started redo application at
 Thread 3: logseq 2, block 313, scn 289493
Recovery of Online Redo Log: Thread 3 Group 6 Seq 2 Reading mem 0
  Mem# 0: +ORADATA/racdb/onlinelog/group_6.267.993490005
  Mem# 1: +ORAFRA/racdb/onlinelog/group_6.260.993490007
Completed redo application of 0.00MB
Completed instance recovery at
 Thread 3: logseq 2, block 313, scn 309494
 0 data blocks read, 0 data blocks written, 0 redo k-bytes read
Thread 3 advanced to log sequence 3 (thread recovery)
Fri Nov 30 10:57:32 2018
minact-scn: Master returning as live inst:1 has inc# mismatch instinc:5 cur:7 errcnt:0
minact-scn: Master considers inst:3 dead


# Node 3 alert log

Fri Nov 30 10:57:30 2018
Shutting down instance (abort)
License high water mark = 6
USER (ospid: 7602): terminating the instance
Instance terminated by USER, pid = 7602
Fri Nov 30 10:57:31 2018
Instance shutdown complete



# On a remaining node (rac1), as root, delete the deconfigured node (rac3) from the cluster.

[root@rac1 ~]# crsctl delete node -n rac3
CRS-4661: Node rac3 successfully deleted.
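
# (Suggested verification, not in the original) Once the delete completes, olsnodes run from a surviving node should no longer list rac3.

[oracle@rac1 ~]$ olsnodes -s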



# On a remaining node, log in as the oracle or grid user, change to $GRID_HOME/oui/bin, and update the node list.

[root@rac1 bin]# su - oracle
[oracle@rac1 ~]$ cd /oracle/app/11.2.0/grid/oui/bin/
[oracle@rac1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/oracle/app/11.2.0/grid "CLUSTER_NODES={rac1,rac2}"
Starting Oracle Universal Installer...
 
Checking swap space: must be greater than 500 MB.   Actual 7999 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/app/oraInventory
'UpdateNodeList' was successful.


# A warning message is displayed if this is run as the root user.
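
# (Additional step; a sketch assuming a local RAC database home exists on each node) The command above only refreshes the node list for the Grid home. The database home inventory on the remaining nodes can be updated the same way, using that home's oui/bin.

[oracle@rac1 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@rac1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac1,rac2}"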


# Node 1 alert log

Fri Nov 30 11:02:44 2018
Stopping background process CJQ0


# Node 2 alert log

Fri Nov 30 11:02:44 2018
Stopping background process CJQ0



# Check that only node 1 and node 2 remain.

[oracle@rac1 ~]$ crsctl stat res -t
[oracle@rac2 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.OCR_VOTE.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.ORADATA.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.ORAFRA.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.asm
               ONLINE  ONLINE       rac1                     Started             
               ONLINE  ONLINE       rac2                     Started             
ora.gsd
               OFFLINE OFFLINE      rac1                                         
               OFFLINE OFFLINE      rac2                                         
ora.net1.network
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.ons
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1                                         
ora.cvu
      1        ONLINE  ONLINE       rac2                                         
ora.oc4j
      1        ONLINE  ONLINE       rac2                                         
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                                         
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                                         
ora.racdb.db
      1        ONLINE  ONLINE       rac1                     Open                
      2        ONLINE  ONLINE       rac2                     Open                
      3        ONLINE  OFFLINE                               Instance Shutdown   
ora.scan1.vip
      1        ONLINE  ONLINE       rac1

# The output is identical on node 1 and node 2.



# Instance 3 still appears under ora.racdb.db; it can be removed with the srvctl command.

[root@rac1 ~]# srvctl remove instance -d racdb -i racdb3
Remove instance from the database racdb? (y/[n]) y
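
# (Suggested verification, not in the original) srvctl can confirm that racdb3 is no longer part of the database configuration.

[oracle@rac1 ~]$ srvctl config database -d racdb     # racdb3 should no longer be listed
[oracle@rac1 ~]$ srvctl status database -d racdb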



# Check again that only node 1 and node 2 remain.

[root@rac1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.OCR_VOTE.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.ORADATA.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.ORAFRA.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.asm
               ONLINE  ONLINE       rac1                     Started             
               ONLINE  ONLINE       rac2                     Started             
ora.gsd
               OFFLINE OFFLINE      rac1                                         
               OFFLINE OFFLINE      rac2                                         
ora.net1.network
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.ons
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1                                         
ora.cvu
      1        ONLINE  ONLINE       rac2                                         
ora.oc4j
      1        ONLINE  ONLINE       rac2                                         
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                                         
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                                         
ora.racdb.db
      1        ONLINE  ONLINE       rac1                     Open                
      2        ONLINE  ONLINE       rac2                     Open                
ora.scan1.vip
      1        ONLINE  ONLINE       rac1
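
# (Suggested final check, not in the original) cluvfy can verify that the node removal completed cleanly.

[oracle@rac1 ~]$ cluvfy stage -post nodedel -n rac3 -verbose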



References:

http://www.idevelopment.info/data/Oracle/DBA_tips/Oracle11gRAC/CLUSTER_24.shtml

http://oracleinaction.com/delete-node/