OS environment : Oracle Linux 8.4 (64bit)
DB environment : Oracle Database 19.27.0.0
Issue : [INS-43042] The cluster nodes [ora19rac3] specified for addnode is already part of a cluster.
In a 2-node RAC, node 3 was removed (delnode) and then added back (addnode).
Reference : How to remove an Oracle 19c RAC node (silent) ( https://positivemh.tistory.com/1244 )
Reference : How to add an Oracle 19c RAC node (silent) ( https://positivemh.tistory.com/1243 )
The errors that came up during this process are written up as posts.
Since there is a lot of material, it is split across several posts.
Main post : Errors encountered when re-adding a removed node to Oracle 19c RAC ( https://positivemh.tistory.com/1312 )
Error 1. ORA-01613: instance ORA19DB3 (thread 3) only has 0 logs - at least 2 logs required to enable. ( https://positivemh.tistory.com/1308 )
Error 2. ORA-30012: undo tablespace 'UNDOTBS3' does not exist or of wrong type ( https://positivemh.tistory.com/1309 )
Error 3. [INS-43042] The cluster nodes [ora19rac3] specified for addnode is already part of a cluster. ( https://positivemh.tistory.com/1310 )
Error 4. PRCR-1079 : Failed to start resource ora.cvu ( https://positivemh.tistory.com/1311 )
Error 3. Message raised when re-adding the node after removing db and grid from it
$ $GRID_HOME/addnode/addnode.sh -silent -ignoreSysPrereqs -ignorePrereqFailure -waitForCompletion CLUSTER_NEW_NODES=ora19rac3 CLUSTER_NEW_VIRTUAL_HOSTNAMES=ora19rac3-vip
[FATAL] [INS-43042] The cluster nodes [ora19rac3] specified for addnode is already part of a cluster.
CAUSE: Cluster nodes specified already has clusterware configured.
ACTION: Ensure that the nodes that do not have clusterware configured are provided for addnode operation.
The node is still reported as part of the cluster.
Check olsnodes from one of the remaining nodes
$ olsnodes -s -t
ora19rac1 Active Unpinned
ora19rac2 Active Unpinned
ora19rac3 Inactive Unpinned
The node 3 db instance was removed and grid was deconfigured, yet the node still remains in the cluster registry.
Run the node deletion command from one of the remaining nodes (as root)
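The stale-node check above can be scripted; a minimal sketch, where the here-document stands in for the `olsnodes -s -t` output shown above (on a live cluster you would capture the real command's output instead):

```shell
#!/bin/sh
# Sketch: list nodes that olsnodes reports as Inactive.
# Sample output standing in for: OUT=$($GRID_HOME/bin/olsnodes -s -t)
OUT=$(cat <<'EOF'
ora19rac1	Active	Unpinned
ora19rac2	Active	Unpinned
ora19rac3	Inactive	Unpinned
EOF
)
# Print only node names whose status column is Inactive
printf '%s\n' "$OUT" | awk '$2 == "Inactive" { print $1 }'
```

With the sample above, this prints `ora19rac3`, flagging the node that still needs to be deleted from the cluster registry.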
# crsctl delete node -n ora19rac3
CRS-4661: Node ora19rac3 successfully deleted.
Deleted.
Re-check olsnodes from one of the remaining nodes
$ olsnodes -s -t
ora19rac1 Active Unpinned
ora19rac2 Active Unpinned
Node 3 has now been removed cleanly.
Run the cluvfy stage command
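Before retrying addnode, it is worth confirming the node really is gone; a minimal sketch, again with a here-document standing in for live `olsnodes -s -t` output:

```shell
#!/bin/sh
# Sketch: fail if the target node still appears in olsnodes output.
NODE=ora19rac3
# Sample post-delete output standing in for: OUT=$($GRID_HOME/bin/olsnodes -s -t)
OUT=$(cat <<'EOF'
ora19rac1	Active	Unpinned
ora19rac2	Active	Unpinned
EOF
)
# Compare against the node-name column only (exact match)
if printf '%s\n' "$OUT" | awk '{ print $1 }' | grep -qx "$NODE"; then
    echo "$NODE is still registered - run: crsctl delete node -n $NODE"
    exit 1
fi
echo "$NODE is gone - safe to retry addnode"
```

With the two-node sample above the script exits 0 and reports the node as gone.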
$ cluvfy stage -post nodedel -n ora19rac3
This software is "233" days old. It is a best practice to update the CRS home by downloading and applying the latest release update. Refer to MOS note 756671.1 for more details.
Performing following verification checks ...
Node Removal ...
CRS Integrity ...PASSED
Clusterware Version Consistency ...PASSED
Node Removal ...PASSED
Post-check for node removal was successful.
CVU operation performed: stage -post nodedel
Date: Nov 29, 2025 5:13:19 PM
CVU version: 19.27.0.0.0 (041025x8664)
Clusterware version: 19.0.0.0.0
CVU home: /oracle/app/grid/19c
Grid home: /oracle/app/grid/19c
User: oracle
Operating system: Linux5.4.17-2136.342.5.3.el8uek.x86_64
After this, addnode ran without the error.
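cluvfy also has the reverse check, `stage -pre nodeadd`, which can catch leftovers before the retry. A minimal sketch of the retry sequence that only echoes the commands (the flags mirror the addnode.sh call shown above; nothing here touches a live cluster):

```shell
#!/bin/sh
# Sketch: print the pre-check and addnode commands for the re-add attempt.
NODE=ora19rac3
GRID_HOME=${GRID_HOME:-/oracle/app/grid/19c}
# 1. verify the node is clean from the cluster's point of view
echo "$GRID_HOME/bin/cluvfy stage -pre nodeadd -n $NODE"
# 2. re-run the silent addnode with the same flags as before
echo "$GRID_HOME/addnode/addnode.sh -silent -ignoreSysPrereqs -ignorePrereqFailure -waitForCompletion CLUSTER_NEW_NODES=$NODE CLUSTER_NEW_VIRTUAL_HOSTNAMES=${NODE}-vip"
```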
References :
https://positivemh.tistory.com/284
https://positivemh.tistory.com/1244
Database fails to start due to ORA-01618: redo thread 2 is not enabled - cannot mount (Doc ID 1677362.1)
Top Issues When Adding Node for Grid Infrastructure via GridSetup.sh (Doc ID 2955583.1)
https://pat98.tistory.com/731
https://docs.oracle.com/en/database/oracle/oracle-database/19/cwadd/cluster-verification-utility-reference.html#GUID-B445A858-9F00-4423-990E-109545AC11C3
https://dbmentors.blogspot.com/2013/11/clusterware-resource-oracvu.html
Clusterware resource ora.cvu FAQ (Doc ID 1524235.1)
Cluster Verification Utility (CLUVFY) FAQ (Doc ID 316817.1)
"Roottfa.sh: Not Found" after executing root.sh script. (Doc ID 2960836.1)
root.sh Hanging During Relink on AIX (Doc ID 3091869.1)
Root.sh Failed During Rac Installation Due To Antivirus (Doc ID 3058530.1)
https://dbacentrals.blogspot.com/2017/08/srvm1337crs-10051-cvu-found-following.html
