OS environment: Oracle Linux 7.4 (64-bit)
DB environment: Oracle Database 11.2.0.4
The error occurs at the Oracle Grid Infrastructure installation step while building a two-node Oracle 11gR2 database RAC on Oracle Linux 7.4.
Error: 11gR2 Grid install – Error: ohasd failed to start the Clusterware.
When root.sh is run during the Grid install, ohasd fails at the very end.
Error situation
[root@localhost /]# /app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/roothas.pl
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
LOCAL ADD MODE
Creating OCR keys for user 'oracle', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node localhost successfully pinned.
Adding Clusterware entries to inittab
ohasd failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow:
2016-01-01 02:14:46.806:
[client(11401)]CRS-2101:The OLR was formatted using version 3.
2016-01-01 02:14:49.572:
[client(11424)]CRS-1001:The OCR was formatted using version 3.
...
[ohasd(13302)]CRS-0715:Oracle High Availability Service has timed out waiting for init.ohasd to be started.
...
ohasd failed to start at /app/11.2.0/grid/crs/install/roothas.pl line 377, line 4.
Solution: apply patch 18370031
Because the root.sh script presented at the very end of the Grid ./runInstaller has already been executed, the Grid home must be deinstalled first.
1. Remove GRID (deinstall)
Session 1
[oracle@rac1 grid]$ cd /app/grid/product/11.2.0/grid/deinstall
[oracle@rac1 deinstall]$ ./deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2018-03-08_00-49-11AM/logs/
############ ORACLE DEINSTALL & DECONFIG TOOL START ############
######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /app/grid/product/11.2.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /app/oracle
Checking for existence of central inventory location /app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /app/grid/product/11.2.0/grid
The following nodes are part of this cluster: rac1,rac2
Checking for sufficient temp space availability on node(s) : 'rac1,rac2'
## [END] Install check configuration ##
Traces log file: /tmp/deinstall2018-03-08_00-49-11AM/logs//crsdc.log
Network Configuration check config START
Network de-configuration trace file location: /tmp/deinstall2018-03-08_00-49-11AM/logs/netdc_check2018-03-08_12-49-32-AM.log
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /tmp/deinstall2018-03-08_00-49-11AM/logs/asmcadc_check2018-03-08_12-49-32-AM.log
Automatic Storage Management (ASM) instance is detected in this Oracle home /app/grid/product/11.2.0/grid.
ASM Diagnostic Destination : /app/oracle
ASM Diskgroups : +OCR_VOTE
ASM diskstring : <Default>
Diskgroups will be dropped
De-configuring ASM will drop all the diskgroups and it's contents at cleanup time. This will affect all of the databases and ACFS that use this ASM instance(s).
If you want to retain the existing diskgroups or if any of the information detected is incorrect,
you can modify by entering 'y'. Do you want to modify above information (y|n) [n]: y <-- [enter y]
Specify the ASM Diagnostic Destination [/app/oracle]:
Specify the diskstring []:
Specify the diskgroups that are managed by this ASM instance [+OCR_VOTE]:
De-configuring ASM will drop the diskgroups at cleanup time. Do you want deconfig tool to drop the diskgroups y|n [y]: y
######################### CHECK OPERATION END #########################
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /app/grid/product/11.2.0/grid
The cluster node(s) on which the Oracle home deinstallation will be performed are:rac1,rac2
Oracle Home selected for deinstall is: /app/grid/product/11.2.0/grid
Inventory Location where the Oracle home registered is: /app/oraInventory
ASM instance will be de-configured from this Oracle home
Do you want to continue (y - yes, n - no)? [n]: y <-- [enter y]
A log of this session will be written to: '/tmp/deinstall2018-03-08_00-49-11AM/logs/deinstall_deconfig2018-03-08_12-49-22-AM.out'
Any error messages from this session will be written to: '/tmp/deinstall2018-03-08_00-49-11AM/logs/deinstall_deconfig2018-03-08_12-49-22-AM.err'
######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2018-03-08_00-49-11AM/logs/asmcadc_clean2018-03-08_12-50-46-AM.log
ASM Clean Configuration START
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /tmp/deinstall2018-03-08_00-49-11AM/logs/netdc_clean2018-03-08_12-50-57-AM.log
De-configuring Naming Methods configuration file on all nodes...
Naming Methods configuration file de-configured successfully.
De-configuring Local Net Service Names configuration file on all nodes...
Local Net Service Names configuration file de-configured successfully.
De-configuring Directory Usage configuration file on all nodes...
Directory Usage configuration file de-configured successfully.
De-configuring backup files on all nodes...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
---------------------------------------->
The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.
Run the following command as the root user or the administrator on node "rac1".
/tmp/deinstall2018-03-08_00-49-11AM/perl/bin/perl -I/tmp/deinstall2018-03-08_00-49-11AM/perl/lib -I/tmp/deinstall2018-03-08_00-49-11AM/crs/install /tmp/deinstall2018-03-08_00-49-11AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2018-03-08_00-49-11AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Press Enter after you finish running the above commands
<----------------------------------------
The output above will appear; be sure to enter y at every prompt marked <-- [enter y].
Then run the rootcrs.pl deconfig command printed at the end in a second session, as the root user, on the node it specifies (rac1).
Session 2
[root@rac1 ~]# /tmp/deinstall2018-03-08_00-49-11AM/perl/bin/perl -I/tmp/deinstall2018-03-08_00-49-11AM/perl/lib -I/tmp/deinstall2018-03-08_00-49-11AM/crs/install /tmp/deinstall2018-03-08_00-49-11AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2018-03-08_00-49-11AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2018-03-08_00-49-11AM/response/deinstall_Ora11g_gridinfrahome1.rsp
Network exists: 1/192.168.137.0/255.255.255.0/ens33, type static
VIP exists: /rac1-vip/192.168.137.52/192.168.137.0/255.255.255.0/ens33, hosting node rac1
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'rac1'
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'rac1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac1' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac1'
CRS-2677: Stop of 'ora.crf' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
Afterwards, press Enter in session 1, where ./deinstall was run.
Session 1
The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.
Press Enter after you finish running the above commands <-- [press Enter after running the script]
<----------------------------------------
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Detach Oracle home '/app/grid/product/11.2.0/grid' from the central inventory on the local node : Done
Delete directory '/app/grid/product/11.2.0/grid' on the local node : Done
Delete directory '/app/oraInventory' on the local node : Done
Delete directory '/app/oracle' on the local node : Done
Detach Oracle home '/app/grid/product/11.2.0/grid' from the central inventory on the remote nodes 'rac2' : Done
Delete directory '/app/grid/product/11.2.0/grid' on the remote nodes 'rac2' : Done
Delete directory '/app/oraInventory' on the remote nodes 'rac2' : Done
Delete directory '/app/oracle' on the remote nodes 'rac2' : Done
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END
## [START] Oracle install clean ##
Clean install operation removing temporary directory '/tmp/deinstall2018-03-08_00-49-11AM' on node 'rac1'
Clean install operation removing temporary directory '/tmp/deinstall2018-03-08_00-49-11AM' on node 'rac2'
## [END] Oracle install clean ##
######################### CLEAN OPERATION END #########################
####################### CLEAN OPERATION SUMMARY #######################
ASM instance was de-configured successfully from the Oracle home
Oracle Clusterware is stopped and successfully de-configured on node "rac1"
Oracle Clusterware is stopped and successfully de-configured on node "rac2"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/app/grid/product/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/app/grid/product/11.2.0/grid' on the local node.
Successfully deleted directory '/app/oraInventory' on the local node.
Successfully deleted directory '/app/oracle' on the local node.
Successfully detached Oracle home '/app/grid/product/11.2.0/grid' from the central inventory on the remote nodes 'rac2'.
Successfully deleted directory '/app/grid/product/11.2.0/grid' on the remote nodes 'rac2'.
Successfully deleted directory '/app/oraInventory' on the remote nodes 'rac2'.
Successfully deleted directory '/app/oracle' on the remote nodes 'rac2'.
Oracle Universal Installer cleanup was successful.
Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'rac1,rac2' at the end of the session.
Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac1,rac2' at the end of the session.
Run 'rm -rf /etc/oratab' as root on node(s) 'rac1' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############
With that, the deinstall completes cleanly.
2. Reinstall GRID: run ./runInstaller again and go through the installer as before. When it reaches the final step that asks you to run root.sh, do not run root.sh yet; apply the patch first (next step).
[oracle@rac1 ~]$ cd /app/media/grid/
[oracle@rac1 grid]$ ./runInstaller
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 120 MB. Actual 13718 MB Passed
Checking swap space: must be greater than 150 MB. Actual 7994 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /app/tmp/OraInstall2018-03-08_01-00-57AM. Please wait ...
3. Update OPatch in the Grid home with p6880880_112000_Linux-x86-64.zip, then unzip patch 18370031:
[oracle@rac1 ~]$ cd $GRID_HOME
[oracle@rac1 grid]$ mv OPatch/ OPatch_bak
[oracle@rac1 grid]$ unzip p6880880_112000_Linux-x86-64.zip
[oracle@rac1 ~]$ cd /app/media/
[oracle@rac1 media]$ unzip p18370031_112040_Linux-x86-64.zip
Apply the patch to the Grid home with opatch napply:
[oracle@rac1 ~]$ cd $GRID_HOME/OPatch
[oracle@rac1 OPatch]$ ./opatch napply -local /app/media/18370031/
Oracle Interim Patch Installer version 11.2.0.3.18
Copyright (c) 2018, Oracle Corporation. All rights reserved.
Oracle Home : /app/grid/product/11.2.0/grid
Central Inventory : /app/oraInventory
from : /app/grid/product/11.2.0/grid/oraInst.loc
OPatch version : 11.2.0.3.18
OUI version : 11.2.0.4.0
Log file location : /app/grid/product/11.2.0/grid/cfgtoollogs/opatch/opatch2018-03-08_00-04-32AM_1.log
Verifying environment and performing prerequisite checks...
OPatch continues with these patches: 18370031
Do you want to proceed? [y|n]
y <-- [enter y]
User Responded with: Y
All checks passed.
Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name:
You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]: Y <-- [enter Y]
Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/app/grid/product/11.2.0/grid')
Is the local system ready for patching? [y|n]
y <-- [enter y]
User Responded with: Y
Backing up files...
Applying interim patch '18370031' to OH '/app/grid/product/11.2.0/grid'
Patching component oracle.crs, 11.2.0.4.0...
Patch 18370031 successfully applied.
Log file location: /app/grid/product/11.2.0/grid/cfgtoollogs/opatch/opatch2018-03-08_00-04-32AM_1.log
OPatch succeeded.
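Before running root.sh, you can optionally confirm that the patch is registered in the Grid home inventory. This is just a quick illustrative check (run from $GRID_HOME/OPatch; the exact output will differ):
[oracle@rac1 OPatch]$ ./opatch lsinventory | grep 18370031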
4. With patch 18370031 applied, run root.sh as root, first on node 1 (rac1):
[root@rac1 ~]# /app/grid/product/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /app/grid/product/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]: <-- [press Enter]
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /app/grid/product/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to oracle-ohasd.service
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
ASM created and started successfully.
Disk Group OCR_VOTE created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 04f85b747aa04f9ebf0575ccabd0f9ac.
Successful addition of voting disk 26fa8f93cebb4f53bfdc0d22f0ddb646.
Successful addition of voting disk 651208c80e484f0cbf99cda03fe3837b.
Successfully replaced voting disk group with +OCR_VOTE.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 04f85b747aa04f9ebf0575ccabd0f9ac (ORCL:OCR_VOTE01) [OCR_VOTE]
2. ONLINE 26fa8f93cebb4f53bfdc0d22f0ddb646 (ORCL:OCR_VOTE02) [OCR_VOTE]
3. ONLINE 651208c80e484f0cbf99cda03fe3837b (ORCL:OCR_VOTE03) [OCR_VOTE]
Located 3 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.OCR_VOTE.dg' on 'rac1'
CRS-2676: Start of 'ora.OCR_VOTE.dg' on 'rac1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Then run root.sh on node 2 (rac2):
[root@rac2 ~]# /app/grid/product/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /app/grid/product/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]: <-- [press Enter]
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /app/grid/product/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to oracle-ohasd.service
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
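At this point the clusterware stack should be running on both nodes. As an optional sanity check (not part of the original run; shown for illustration), the stack status on all nodes can be queried with:
[root@rac1 ~]# crsctl check cluster -all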
5. Check the cluster resources:
[root@rac1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.OCR_VOTE.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.asm
ONLINE ONLINE rac1 Started
ONLINE ONLINE rac2 Started
ora.gsd
OFFLINE OFFLINE rac1
OFFLINE OFFLINE rac2
ora.net1.network
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.ons
ONLINE ONLINE rac1
ONLINE ONLINE rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac1
ora.cvu
1 ONLINE ONLINE rac1
ora.oc4j
1 ONLINE ONLINE rac1
ora.rac1.vip
1 ONLINE ONLINE rac1
ora.rac2.vip
1 ONLINE ONLINE rac2
ora.scan1.vip
1 ONLINE ONLINE rac1
Cause: a mismatch in how the ohasd process is started.
Oracle Linux 7 starts services with systemd, but Grid 11.2.0.4 tries to start the ohasd process through init (the SysV init mechanism used up to Oracle Linux 6) rather than systemd, which is why ohasd times out.
The recommended patch must be applied.
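In other words, the unpatched root.sh adds an init.ohasd entry to /etc/inittab (the "Adding Clusterware entries to inittab" line in the failing run), which systemd never processes, while the patched scripts register a systemd unit instead ("Adding Clusterware entries to oracle-ohasd.service" in the successful run). A rough way to see the difference on a node, assuming the unit keeps the name shown in the root.sh output:
# Unpatched: an inittab entry that Oracle Linux 7 (systemd) ignores
grep ohasd /etc/inittab
# Patched: ohasd is registered and respawned by systemd
systemctl status oracle-ohasd.service
systemctl is-enabled oracle-ohasd.service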
References:
https://docs.oracle.com/cd/E11882_01/relnotes.112/e23558/toc.htm#CJAJEBGG
(Doc ID 1951613.1)
Related patch files: p6880880_112000_Linux-x86-64.zip, p18370031_112040_Linux-x86-64.zip