OS environment : Oracle Linux 6.8 (64bit)
DB environment : Oracle Database 11.2.0.4 RAC
Topic : Installing TFA (AHF) on Oracle 11g R2 and collecting logs for support (SRDC)
What is TFA?
Oracle Support recommends using the TFA (Trace File Analyzer) collector to gather diagnostic data.
Because the TFA collector gathers only the information relevant to the event time, the collected data is much smaller.
TFA Collector gathers all CRS log files, ASM trace files, database trace files, OSWatcher output, and CHM (Cluster Health Monitor) output.
What is AHF?
AHF (Autonomous Health Framework) is simply an installer that bundles several tools, such as TFA, ORAchk/EXAchk, OSWatcher, and oratop.
TFA, ORAchk, and EXAchk work the same as before.
Installing AHF
First, download the file for your OS from Doc ID 2291661.1.
Upload it to the server and extract it as the root user.
# pwd
/root/ahf
# ls
AHF-LINUX_v20.2.0.zip
# unzip AHF-LINUX_v20.2.0.zip
Archive: AHF-LINUX_v20.2.0.zip
inflating: README.txt
inflating: ahf_setup
# ls
AHF-LINUX_v20.2.0.zip ahf_setup README.txt
AHF setup
# ./ahf_setup
AHF Installer for Platform Linux Architecture x86_64
AHF Installation Log : /tmp/ahf_install_202000_3550_2020_06_27-01_06_07.log
Starting Autonomous Health Framework (AHF) Installation
AHF Version: 20.2.0 Build Date: 202006260723
Default AHF Location : /opt/oracle.ahf
Do you want to install AHF at [/opt/oracle.ahf] ? [Y]|N : y [enter Y]
AHF Location : /opt/oracle.ahf
AHF Data Directory stores diagnostic collections and metadata.
AHF Data Directory requires at least 5GB (Recommended 10GB) of free space.
Choose Data Directory from below options :
1. /app/oracle [Free Space : 4999 MB]
2. Enter a different Location
Choose Option [1 - 2] : 1 [enter 1]
AHF Data Directory : /app/oracle/oracle.ahf/data
Do you want to add AHF Notification Email IDs ? [Y]|N : n [enter N]
AHF will also be installed/upgraded on these Cluster Nodes :
1. rac2
The AHF Location and AHF Data Directory must exist on the above nodes
AHF Location : /opt/oracle.ahf
AHF Data Directory : /app/oracle/oracle.ahf/data
Do you want to install/upgrade AHF on Cluster Nodes ? [Y]|N : y [enter Y]
Extracting AHF to /opt/oracle.ahf
Configuring TFA Services
Discovering Nodes and Oracle Resources
Not generating certificates as GI discovered
Starting TFA Services
Created symlink from /etc/systemd/system/multi-user.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.
Created symlink from /etc/systemd/system/graphical.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.
.------------------------------------------------------------------------.
| Host | Status of TFA | PID | Port | Version | Build ID |
+------+---------------+------+------+------------+----------------------+
| rac1 | RUNNING | 4431 | 5000 | 20.2.0.0.0 | 20200020200626072308 |
'------+---------------+------+------+------------+----------------------'
Running TFA Inventory...
Adding default users to TFA Access list...
.----------------------------------------------------------.
| Summary of AHF Configuration |
+-----------------+----------------------------------------+
| Parameter | Value |
+-----------------+----------------------------------------+
| AHF Location | /opt/oracle.ahf |
| TFA Location | /opt/oracle.ahf/tfa |
| Orachk Location | /opt/oracle.ahf/orachk |
| Data Directory | /app/oracle/oracle.ahf/data |
| Repository | /app/oracle/oracle.ahf/data/repository |
| Diag Directory | /app/oracle/oracle.ahf/data/rac1/diag |
'-----------------+----------------------------------------'
Starting orachk scheduler from AHF ...
AHF install completed on rac1 [node 1 AHF installation complete]
Installing AHF on Remote Nodes :
AHF will be installed on rac2, Please wait.
AHF will prompt twice to install/upgrade per Remote Node. So total 2 prompts
Do you want to continue Y|[N] : y [enter Y]
AHF will continue with Installing on remote nodes
Installing AHF on rac2 :
[rac2] Copying AHF Installer
root@rac2's password: [enter node 2 root password]
[rac2] Running AHF Installer
root@rac2's password: [enter node 2 root password]
AHF binaries are available in /opt/oracle.ahf/bin
AHF is successfully installed [node 2 AHF installation complete]
Moving /tmp/ahf_install_202000_3550_2020_06_27-01_06_07.log to /app/oracle/oracle.ahf/data/rac1/diag/ahf/
Installation complete.
To install one node at a time in a RAC environment, use the -local option.
# ./ahf_setup -local
AHF Installer for Platform Linux Architecture x86_64
AHF Installation Log : /tmp/ahf_install_202000_6750_2020_06_28-20_20_18.log
Starting Autonomous Health Framework (AHF) Installation
AHF Version: 20.2.0 Build Date: 202006260723
Default AHF Location : /opt/oracle.ahf
Do you want to install AHF at [/opt/oracle.ahf] ? [Y]|N : y [enter Y]
AHF Location : /opt/oracle.ahf
AHF Data Directory stores diagnostic collections and metadata.
AHF Data Directory requires at least 5GB (Recommended 10GB) of free space.
Choose Data Directory from below options :
1. /app/oracle [Free Space : 4798 MB]
2. Enter a different Location
Choose Option [1 - 2] : 1 [enter 1]
AHF Data Directory : /app/oracle/oracle.ahf/data
Do you want to add AHF Notification Email IDs ? [Y]|N : n [enter N]
Extracting AHF to /opt/oracle.ahf
Configuring TFA Services
Discovering Nodes and Oracle Resources
Not generating certificates as GI discovered
Starting TFA Services
Created symlink from /etc/systemd/system/multi-user.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.
Created symlink from /etc/systemd/system/graphical.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.
.------------------------------------------------------------------------.
| Host | Status of TFA | PID | Port | Version | Build ID |
+------+---------------+------+------+------------+----------------------+
| rac1 | RUNNING | 7565 | 5000 | 20.2.0.0.0 | 20200020200626072308 |
'------+---------------+------+------+------------+----------------------'
Running TFA Inventory...
Adding default users to TFA Access list...
.----------------------------------------------------------.
| Summary of AHF Configuration |
+-----------------+----------------------------------------+
| Parameter | Value |
+-----------------+----------------------------------------+
| AHF Location | /opt/oracle.ahf |
| TFA Location | /opt/oracle.ahf/tfa |
| Orachk Location | /opt/oracle.ahf/orachk |
| Data Directory | /app/oracle/oracle.ahf/data |
| Repository | /app/oracle/oracle.ahf/data/repository |
| Diag Directory | /app/oracle/oracle.ahf/data/rac1/diag |
'-----------------+----------------------------------------'
Starting orachk scheduler from AHF ...
AHF binaries are available in /opt/oracle.ahf/bin
AHF is successfully installed
Moving /tmp/ahf_install_202000_6750_2020_06_28-20_20_18.log to /app/oracle/oracle.ahf/data/rac1/diag/ahf/
Installation complete.
Go to the path shown at the end of the AHF setup log and check the logs.
# cd /app/oracle/oracle.ahf/data/rac1/diag/ahf/
# ls
ahf_install_202000_3550_2020_06_27-01_06_07.log rac2_remote_install_3550.log
The AHF install log and the log from the remote install on node 2 have been created.
Check rac2_remote_install_3550.log:
# cat rac2_remote_install_3550.log
AHF Installer for Platform Linux Architecture x86_64
AHF Installation Log : /tmp/ahf_install_202000_3500_2020_06_27-01_07_58.log
Starting Autonomous Health Framework (AHF) Installation
AHF Version: 20.2.0 Build Date: 202006260723
AHF Location : /opt/oracle.ahf
AHF Data Directory : /app/oracle/oracle.ahf/data
Extracting AHF to /opt/oracle.ahf
Configuring TFA Services
Discovering Nodes and Oracle Resources
Not generating certificates as GI discovered
Starting TFA Services
Created symlink from /etc/systemd/system/multi-user.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.
Created symlink from /etc/systemd/system/graphical.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.
.------------------------------------------------------------------------.
| Host | Status of TFA | PID | Port | Version | Build ID |
+------+---------------+------+------+------------+----------------------+
| rac2 | RUNNING | 4137 | 5000 | 20.2.0.0.0 | 20200020200626072308 |
'------+---------------+------+------+------------+----------------------'
Running TFA Inventory...
Adding default users to TFA Access list...
.----------------------------------------------------------.
| Summary of AHF Configuration |
+-----------------+----------------------------------------+
| Parameter | Value |
+-----------------+----------------------------------------+
| AHF Location | /opt/oracle.ahf |
| TFA Location | /opt/oracle.ahf/tfa |
| Orachk Location | /opt/oracle.ahf/orachk |
| Data Directory | /app/oracle/oracle.ahf/data |
| Repository | /app/oracle/oracle.ahf/data/repository |
| Diag Directory | /app/oracle/oracle.ahf/data/rac2/diag |
'-----------------+----------------------------------------'
Starting orachk scheduler from AHF ...
AHF binaries are available in /opt/oracle.ahf/bin
AHF is successfully installed
Moving /tmp/ahf_install_202000_3500_2020_06_27-01_07_58.log to /app/oracle/oracle.ahf/data/rac2/diag/ahf/
Node 2 shows the same output as the install log from node 1.
Collecting logs with TFA
Connect as the oracle user and run tfactl:
$ tfactl diagcollect -srdc dbrac
Enter the time of the incident [YYYY-MM-DD HH24:MI:SS,<RETURN>=ALL] : [enter the incident time, or press Enter for all]
Enter the Database Name, if the incident was NOT specific to a database (e.g. Node Reboot/Eviction) choose ALL [Required for this SRDC] :
[press Enter to collect from all nodes]
Scripts to be run by this srdc:
Components included in this srdc: OS CRS DATABASE CHMOS ASM
Collecting data for the last 1 hours for all components...
Collecting data for all nodes
Collection Id : 20200627012049rac1
Detailed Logging at : /app/oracle/oracle.ahf/data/repository/srdc_dbrac_collection_Sat_Jun_27_01_20_50_EDT_2020_node_all/diagcollect_20200627012049_rac1.log
2020/06/27 01:20:56 EDT : NOTE : Any file or directory name containing the string .com will be renamed to replace .com with dotcom
2020/06/27 01:20:56 EDT : Collection Name : tfa_srdc_dbrac_Sat_Jun_27_01_20_50_EDT_2020.zip
2020/06/27 01:20:57 EDT : Collecting diagnostics from hosts : [rac1, rac2]
2020/06/27 01:20:57 EDT : Scanning of files for Collection in progress...
2020/06/27 01:20:57 EDT : Collecting additional diagnostic information...
2020/06/27 01:21:02 EDT : Getting list of files satisfying time range [06/27/2020 00:20:56 EDT, 06/27/2020 01:21:02 EDT]
2020/06/27 01:21:15 EDT : Collecting ADR incident files...
2020/06/27 01:21:24 EDT : Completed collection of additional diagnostic information...
2020/06/27 01:21:25 EDT : Completed Local Collection
2020/06/27 01:21:25 EDT : Remote Collection in Progress...
.---------------------------------.
| Collection Summary |
+------+-----------+-------+------+
| Host | Status | Size | Time |
+------+-----------+-------+------+
| rac2 | Completed | 872kB | 34s |
| rac1 | Completed | 897kB | 28s |
'------+-----------+-------+------'
Logs are being collected to: /app/oracle/oracle.ahf/data/repository/srdc_dbrac_collection_Sat_Jun_27_01_20_50_EDT_2020_node_all
/app/oracle/oracle.ahf/data/repository/srdc_dbrac_collection_Sat_Jun_27_01_20_50_EDT_2020_node_all/rac1.tfa_srdc_dbrac_Sat_Jun_27_01_20_50_EDT_2020.zip
/app/oracle/oracle.ahf/data/repository/srdc_dbrac_collection_Sat_Jun_27_01_20_50_EDT_2020_node_all/rac2.tfa_srdc_dbrac_Sat_Jun_27_01_20_50_EDT_2020.zip
SRDC in tfactl stands for Service Request Data Collection.
The output above lists the components included in this SRDC: OS CRS DATABASE CHMOS ASM.
If you enter the time the incident occurred, it reportedly collects data from 4 hours before to 1 hour after that point.
In my case there was no incident, so I collected logs for all time ranges.
The collected logs are bundled into the zip files shown in the last two lines of the output.
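Assuming GNU date is available, the 4-hours-before / 1-hour-after window can also be computed explicitly and passed via the -from/-to flags (documented in the diagcollect help). This is only a sketch: the incident time is illustrative, and the final line prints the tfactl command rather than running it.

```shell
# Sketch (assumes GNU date -d): compute the collection window described
# above (4 hours before to 1 hour after the incident) and print an
# equivalent tfactl command. The command is echoed, not executed.
incident="2020-06-27 01:20:00"   # illustrative incident time
from=$(date -d "$incident 4 hours ago" '+%Y-%m-%d %H:%M:%S')
to=$(date -d "$incident 1 hour" '+%Y-%m-%d %H:%M:%S')
echo "tfactl diagcollect -srdc dbrac -from \"$from\" -to \"$to\""
```

On a real node you would paste the printed command into a shell as the oracle user.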
Check the zip file from node 1:
$ pwd
/home/oracle/
$ unzip /app/oracle/oracle.ahf/data/repository/srdc_dbrac_collection_Sat_Jun_27_01_20_50_EDT_2020_node_all/rac1.tfa_srdc_dbrac_Sat_Jun_27_01_20_50_EDT_2020.zip
$ ls
rac1 rac1.tfa_srdc_dbrac_Sat_Jun_27_01_20_50_EDT_2020.zip.txt rac1.zip_inventory.xml TFA.txt
$ cd rac1/
$ ls -al
total 1060
drwxr-xr-x 5 oracle dba 4096 Jun 27 01:24 .
drwxr-xr-x 3 oracle dba 127 Jun 27 01:24 ..
drwxr-xr-x 4 oracle dba 32 Jun 27 01:24 app
drwxr-xr-x 5 oracle dba 45 Jun 27 01:24 diag
-rw-r--r-- 1 oracle dba 65 Jun 27 01:21 rac1_ACTIVEVERSION
-rw-r--r-- 1 oracle dba 1795 Jun 27 01:21 rac1_afd_collection.err
-rw-r--r-- 1 oracle dba 2641 Jun 27 01:21 rac1_afd_collection.log
-rw-r--r-- 1 oracle dba 1217 Jun 27 01:21 rac1_afd_report
-rw-r--r-- 1 oracle dba 5298 Jun 27 01:21 rac1_asm_collection.err
-rw-r--r-- 1 oracle dba 4502 Jun 27 01:21 rac1_asm_collection.log
-rw-r--r-- 1 oracle dba 21343 Jun 27 01:21 rac1_asm_collection.out
-rw-r--r-- 1 oracle dba 597 Jun 27 01:21 rac1_cha_collection.log
-rw-r--r-- 1 oracle dba 184 Jun 27 01:21 rac1_CHECKCRS
-rw-r--r-- 1 oracle dba 360 Jun 27 01:21 rac1_cloudmetadata.log
-rw-r--r-- 1 oracle dba 231 Jun 27 01:21 rac1_cloudmetadata.out
-rw-r--r-- 1 oracle dba 1753 Jun 27 01:21 rac1_collection.err
-rw-r--r-- 1 oracle dba 6492 Jun 27 01:21 rac1_collection.log
-rw-r--r-- 1 oracle dba 63 Jun 27 01:21 rac1_CONFIGASM
-rw-r--r-- 1 oracle dba 44 Jun 27 01:21 rac1_CONFIGGNS
-rw-r--r-- 1 oracle dba 262 Jun 27 01:21 rac1_CONFIGSCAN
-rw-r--r-- 1 oracle dba 154 Jun 27 01:21 rac1_crs_collection.err
-rw-r--r-- 1 oracle dba 5486 Jun 27 01:21 rac1_crs_collection.log
-rw-r--r-- 1 oracle dba 66 Jun 27 01:21 rac1_crsctl_config_crs
-rw-r--r-- 1 oracle dba 108629 Jun 27 01:21 rac1_dmesg
-rw-r--r-- 1 oracle dba 287 Jun 27 01:21 rac1_DNSSERVERS
-rw-r--r-- 1 oracle dba 391 Jun 27 01:21 rac1_GETCSS
-rw-r--r-- 1 oracle dba 1886 Jun 27 01:21 rac1_gpnp_peer_profile.xml
-rw-r--r-- 1 oracle dba 1021 Jun 27 01:21 rac1_GPNPTOOL
-rw-r--r-- 1 oracle dba 152 Jun 27 01:21 rac1_INITTAB
-rw-r--r-- 1 oracle dba 122 Jun 27 01:21 rac1_IPMI
-rw-r--r-- 1 oracle dba 37795 Jun 27 01:21 rac1_LS
-rw-r--r-- 1 oracle dba 3240 Jun 27 01:21 rac1_LSMOD
-rw-r--r-- 1 oracle dba 684 Jun 27 01:21 rac1_NETSTAT
-rw-r--r-- 1 oracle dba 316 Jun 27 01:21 rac1_NODEAPPS
-rw-r--r-- 1 oracle dba 516 Jun 27 01:21 rac1_NSLOOKUP
-rw-r--r-- 1 oracle dba 1746 Jun 27 01:21 rac1_NSSWITCH_CONF
-rw-r--r-- 1 oracle dba 146 Jun 27 01:21 rac1_OCRBACKUP
-rw-r--r-- 1 oracle dba 203423 Jun 27 01:21 rac1_OCRDUMP
-rw-r--r-- 1 oracle dba 41 Jun 27 01:21 rac1_ocrloc
-rw-r--r-- 1 oracle dba 91 Jun 27 01:21 rac1_ohasdrun
-rw-r--r-- 1 oracle dba 85 Jun 27 01:21 rac1_OIFCFG
-rw-r--r-- 1 oracle dba 165863 Jun 27 01:21 rac1_OLRDUMP
-rw-r--r-- 1 oracle dba 98 Jun 27 01:21 rac1_olrloc
-rw-r--r-- 1 oracle dba 49 Jun 27 01:21 rac1_OLSNODES
-rw-r--r-- 1 oracle dba 324 Jun 27 01:21 rac1_OPATCH_CRS
-rw-r--r-- 1 oracle dba 32 Jun 27 01:21 rac1_oracle-release
-rw-r--r-- 1 oracle dba 961 Jun 27 01:21 rac1_oratab
-rw-r--r-- 1 oracle dba 3775 Jun 27 01:21 rac1_os_collection.log
-rw-r--r-- 1 oracle dba 146020 Jun 27 01:21 rac1_os_report
-rw-r--r-- 1 oracle dba 96 Jun 27 01:21 rac1_PIDS
-rw-r--r-- 1 oracle dba 769 Jun 27 01:21 rac1_PING_INFO
-rw-r--r-- 1 oracle dba 45463 Jun 27 01:21 rac1_PROCDIRINFO
-rw-r--r-- 1 oracle dba 1758 Jun 27 01:21 rac1_PS
-rw-r--r-- 1 oracle dba 381 Jun 27 01:21 rac1_QUERYVOTE
-rw-r--r-- 1 oracle dba 52 Jun 27 01:21 rac1_redhat-release
-rw-r--r-- 1 oracle dba 46704 Jun 27 01:21 rac1_RPMQA
-rw-r--r-- 1 oracle dba 66 Jun 27 01:21 rac1_RUNLEVEL
-rw-r--r-- 1 oracle dba 58 Jun 27 01:21 rac1_SOFTWAREVERSION
-rw-r--r-- 1 oracle dba 2596 Jun 27 01:21 rac1_STATRESCRS
-rw-r--r-- 1 oracle dba 14425 Jun 27 01:21 rac1_STATRESCRSFULL
-rw-r--r-- 1 oracle dba 5006 Jun 27 01:21 rac1_STATRESDEPENDENCY
-rw-r--r-- 1 oracle dba 15699 Jun 27 01:21 rac1_STATRESFULLOHAS
-rw-r--r-- 1 oracle dba 1468 Jun 27 01:21 rac1_STATRESOHAS
-rw-r--r-- 1 oracle dba 98 Jun 27 01:21 rac1_summary
-rw-r--r-- 1 oracle dba 5561 Jun 27 01:21 rac1_TOP_50_MEMORY
-rw-r--r-- 1 oracle dba 3389 Jun 27 01:21 rac1_VARTMPORACLE
-rw-r--r-- 1 oracle dba 39 Jun 27 01:20 skipped_files.txt
-rw-r--r-- 1 oracle dba 1373 Jun 27 01:20 tfa_1593235232_srdc_dbracsrdc_user_log
-rw-r--r-- 1 oracle dba 3923 Jun 27 01:20 tfa_1593235232_srdc_dbracuserenv
-rw-r--r-- 1 oracle dba 3307 Jun 27 01:20 tfa_1593235232_srdc_dbrac.xml
drwxr-xr-x 3 oracle dba 17 Jun 27 01:24 var
There is a variety of information here.
Check a few of the files:
$ cat rac1_CHECKCRS
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
$ cat rac1_summary
GI information
==============
Oracle Clusterware active version on the cluster is [11.2.0.4.0]
$ cat rac1_QUERYVOTE
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 6b3b255e16d54ffdbfb97e35e1a54cf7 (ORCL:OCR_VOTE1) [OCR_VOTE]
2. ONLINE 09a5df48125c4fc8bf3be326452218bc (ORCL:OCR_VOTE2) [OCR_VOTE]
3. ONLINE 5f29017e8a524f79bf90d4f0580d4a52 (ORCL:OCR_VOTE3) [OCR_VOTE]
Located 3 voting disk(s).
$ cat rac1_PING_INFO
#HEADER:Output of /bin/ping -c 2 192.168.137.52
PING 192.168.137.52 (192.168.137.52) 56(84) bytes of data.
64 bytes from 192.168.137.52: icmp_seq=1 ttl=64 time=0.025 ms
64 bytes from 192.168.137.52: icmp_seq=2 ttl=64 time=0.057 ms
--- 192.168.137.52 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.025/0.041/0.057/0.016 ms
#HEADER:Output of /bin/ping -c 2 192.168.137.53
PING 192.168.137.53 (192.168.137.53) 56(84) bytes of data.
64 bytes from 192.168.137.53: icmp_seq=1 ttl=64 time=0.161 ms
64 bytes from 192.168.137.53: icmp_seq=2 ttl=64 time=0.507 ms
--- 192.168.137.53 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.161/0.334/0.507/0.173 ms
As the file names suggest, these logs contain the CRS check, DB version information, voting disk query, ping check, and so on.
When a failure occurs, collect the logs like this and attach the resulting zip files to the SR.
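Before uploading, it can help to record the size and checksum of each collection zip so the SR attachment can be verified. This is a sketch: the repository path is a stand-in for the real one shown in the collection summary, and a demo file is created only so the loop is runnable anywhere.

```shell
# Sketch: list size and md5 checksum of collection zips before SR upload.
# $repo stands in for the TFA repository path from the collection output;
# the demo zip below exists only so this example runs without AHF.
repo=$(mktemp -d)
echo "demo contents" > "$repo/rac1.tfa_srdc_demo.zip"
for f in "$repo"/*.zip; do
  wc -c < "$f"                    # size in bytes
  md5sum "$f" | awk '{print $1}'  # checksum to note in the SR
done
```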
Note 1
To install AHF in a specific location instead of the default path, use -ahf_loc:
$ ./ahf_setup -ahf_loc /home/oracle/ahf
AHF Installer for Platform Linux Architecture x86_64
AHF Installation Log : /tmp/ahf_install_202000_20719_2020_08_12-20_44_14.log
Starting Autonomous Health Framework (AHF) Installation
AHF Version: 20.2.0 Build Date: 202006260723
AHF Location : /home/oracle/ahf/oracle.ahf
AHF Data Directory : /home/oracle/ahf/oracle.ahf/data
Extracting AHF to /home/oracle/ahf/oracle.ahf
Cant open /home/oracle/ahf/oracle.ahf/data/oracle11/tfa/tfa_setup.txt
AHF is deployed at /home/oracle/ahf/oracle.ahf
ORAchk is available at /home/oracle/ahf/oracle.ahf/bin/orachk
AHF binaries are available in /home/oracle/ahf/oracle.ahf/bin
AHF is successfully installed
Moving /tmp/ahf_install_202000_20719_2020_08_12-20_44_14.log to /home/oracle/ahf/oracle.ahf/data/oracle11/diag/ahf/
Note 2
tfactl help:
$ tfactl diagcollect -help
Collect logs from across nodes in cluster
Usage : /opt/oracle.ahf/tfa/bin/tfactl diagcollect [ [component_name1] [component_name2] ... [component_nameN] | [-srdc <srdc_profile>] | [-defips]] [-sr <SR#>] [-node <all|local|n1,n2,..>] [-tag <tagname>] [-z <filename>] [-last <n><m|h|d>| -from <time> -to <time> | -for <time>] [-nocopy] [-notrim] [-silent] [-cores][-collectalldirs][-collectdir <dir1,dir2..>][-examples]
components:-ips|-database|-asm|-crsclient|-dbclient|-dbwlm|-tns|-rhp|-procinfo|-cvu|-afd|-crs|-cha|-wls|-emagent|-oms|-ocm|-emplugins|-em|-acfs|-install|-cfgtools|-os|-ashhtml|-ashtext|-awrhtml|-awrtext|-qos|-rdbms|-asm|-crsclient|-dbclient|-dbwlm|-tns|-rhp|-procinfo|-cvu|-afd|-crs|-cha|-wls|-emagent|-oms|-ocm|-emplugins|-em|-acfs|-install|-cfgtools|-os|-ips|-ashhtml|-ashtext|-awrhtml|-awrtext|-qos
-srdc Service Request Data Collection (SRDC).
-defips Include in the default collection the IPS Packages for:
ASM, CRS and Databases
-sr Enter SR number to which the collection will be uploaded
-node Specify comma separated list of host names for collection
-tag <tagname> The files will be collected into tagname directory inside
repository
-z <zipname> The collection zip file will be given this name within the
TFA collection repository
-last <n><m|h|d> Files from last 'n' [m]inutes, 'n' [d]ays or 'n' [h]ours
-since Same as -last. Kept for backward compatibility.
-from "Mon/dd/yyyy hh:mm:ss" From <time>
or "yyyy-mm-dd hh:mm:ss"
or "yyyy-mm-ddThh:mm:ss"
or "yyyy-mm-dd"
-to "Mon/dd/yyyy hh:mm:ss" To <time>
or "yyyy-mm-dd hh:mm:ss"
or "yyyy-mm-ddThh:mm:ss"
or "yyyy-mm-dd"
-for "Mon/dd/yyyy" For <date>.
or "yyyy-mm-dd"
-nocopy Does not copy back the zip files to initiating node from all nodes
-notrim Does not trim the files collected
-silent This option is used to submit the diagcollection as a background
process
-cores Collect Core files when it would normally not have been
collected
-collectalldirs Collect all files from a directory marked "Collect All"
flag to true
-collectdir Specify comma separated list of directories and collection will
include all files from these irrespective of type and time constraints
in addition to components specified
-examples Show diagcollect usage examples
For detailed help on each component use:
/opt/oracle.ahf/tfa/bin/tfactl diagcollect [component_name1] [component_name2] ... [component_nameN] -help
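A wrapper script might want to validate the -last argument against the <n><m|h|d> shape shown in the help output before invoking tfactl. The check below is pure shell and runs without TFA installed; the function name is my own.

```shell
# Sketch: validate a -last value (<n><m|h|d> per the help output above)
# before handing it to tfactl in a wrapper script. Hypothetical helper.
is_valid_last() {
  printf '%s\n' "$1" | grep -Eq '^[0-9]+[mhd]$'
}
is_valid_last 1h  && echo "1h accepted"
is_valid_last 4d  && echo "4d accepted"
is_valid_last 90x || echo "90x rejected"
```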
Note 3
Removing AHF:
# tfactl uninstall
Starting AHF Uninstall
NOTE : Uninstalling does not return all the space used by the AHF repository
AHF will be uninstalled on:
rac1
rac2
Do you want to continue with AHF uninstall ? [Y]|N : y [enter Y]
Stopping AHF service on local node rac1...
Stopping TFA Support Tools...
Removed symlink /etc/systemd/system/multi-user.target.wants/oracle-tfa.service.
Removed symlink /etc/systemd/system/graphical.target.wants/oracle-tfa.service.
Stopping and removing AHF in rac2...
root@rac2's password: [enter node 2 root password]
Removed symlink /etc/systemd/system/multi-user.target.wants/oracle-tfa.service.
Removed symlink /etc/systemd/system/graphical.target.wants/oracle-tfa.service.
Successfully uninstalled AHF on node rac2
Removing AHF setup on rac1:
Removing /etc/rc.d/rc0.d/K17init.tfa
Removing /etc/rc.d/rc1.d/K17init.tfa
Removing /etc/rc.d/rc2.d/K17init.tfa
Removing /etc/rc.d/rc4.d/K17init.tfa
Removing /etc/rc.d/rc6.d/K17init.tfa
Removing /etc/init.d/init.tfa...
Removing /opt/oracle.ahf/rpms
Removing /opt/oracle.ahf/jre
Removing /opt/oracle.ahf/common
Removing /opt/oracle.ahf/bin
Removing /opt/oracle.ahf/python
Removing /opt/oracle.ahf/analyzer
Removing /opt/oracle.ahf/tfa
Removing /opt/oracle.ahf/orachk
Removing /opt/oracle.ahf/ahf
Removing /app/oracle/oracle.ahf/data/rac1
Removing /opt/oracle.ahf/install.properties
Removal complete.
Note 4
Stopping and starting:
# tfactl stop
# tfactl start
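A minimal restart wrapper around these commands can be sketched as below. With DRY_RUN=1 (the default here) it only prints the tfactl commands it would run, so it is safe to try on a host without AHF installed; set DRY_RUN=0 on a real node as root.

```shell
# Sketch: dry-run wrapper around the stop/start commands above.
# With DRY_RUN=1 commands are only printed, not executed, so this
# script runs anywhere; DRY_RUN=0 would actually invoke tfactl.
DRY_RUN=${DRY_RUN:-1}
run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}
run tfactl stop
run tfactl start
```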
References : Doc ID 2550798.1, Doc ID 2291661.1
https://oracle-base.com/articles/misc/trace-file-analyzer-tfa
https://positivemh.tistory.com/544
https://positivemh.tistory.com/607
https://positivemh.tistory.com/645
https://positivemh.tistory.com/647
https://positivemh.tistory.com/631
https://positivemh.tistory.com/747
https://positivemh.tistory.com/791