Thursday, February 25, 2016

Multiplex Voting Files on Oracle RAC 12c

Voting files are stored on ASM disks starting with 11gR2. The location where Clusterware stores the voting files is called the voting disk. Oracle Clusterware creates multiple voting files during installation to protect against voting disk failures. If we install Clusterware with External redundancy, we get one OCR and one voting file on the disk group. If we choose Normal redundancy during installation, Clusterware requires 3 disks for the disk group; it creates an original copy and a mirror copy of the OCR inside the disk group, plus 3 voting files, one per disk. If we choose High redundancy during installation, we get 3 copies of the OCR and 5 voting files on 5 disks inside the disk group.

Clusterware maintains the voting disk to track the heartbeat of the nodes in the cluster.
Whenever a node joins the cluster, the voting file is updated. Once the voting file is updated, Clusterware starts communicating with the node, i.e., the CRS daemon on one node communicates with the CRS daemons on the other nodes. Then, on each node, Clusterware starts its services and resources.

Technically, the CSSD process updates the voting files on each disk inside the disk group. Every second, CSSD writes node-availability information to the voting files. Each node must be able to access a majority of the voting files (e.g., 2 of 3). If a node cannot access a majority of the voting files, it is evicted from the cluster and fenced to protect data integrity (from 11gR2, the cssdagent/cssdmonitor processes took over this fencing role from OPROCD). Also from 11gR2, the voting files are backed up as part of the OCR, so if the voting files are corrupted they can be recovered from the OCR backup.

Since the voting files are stored on ASM, we can locate them with the following command:
[root@tnc1 ~]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   4bbee2aa15204fb5bf311cc8ebcedd10 (/dev/sdh2) [NEVOT]
Located 1 voting disk(s).

Currently we have only one voting file in the cluster. If this voting file is corrupted, every node will be evicted from the cluster.
To stabilize the RAC environment, multiplex the voting files.
To multiplex the voting files, create a new disk group with Normal or High redundancy using ASMCA (or SQL as SYSASM), then move the voting files from the old disk group to the new disk group:

OLDDISKGROUP: NEVOT 
NEWDISKGROUP: GRID

Make sure the new disk group is mounted on all ASM instances:
SQL> select name,group_number,state from v$asm_diskgroup;
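If the new disk group shows a state of DISMOUNTED on any node, mount it there before proceeding. A minimal sketch, assuming the disk group is named GRID and the environment points at the local ASM instance (it needs a live cluster, so it is illustrative only):

```shell
# Run on each node whose ASM instance has not mounted the disk group
sqlplus -S / as sysasm <<'EOF'
-- Mount the new disk group on this ASM instance
alter diskgroup grid mount;
-- Confirm it is now mounted
select name, state from v$asm_diskgroup where name = 'GRID';
EOF
```

Alternatively, srvctl can mount the disk group cluster-wide; the SQL form above acts on one instance at a time.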
---

Replace the voting files from the old ASM disk group (External redundancy) with the new ASM disk group (High redundancy). Perform the operation as root from any node of the cluster:
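The replacement itself is a single crsctl command; a sketch of the session, using the disk group names above (requires a running cluster):

```shell
# As root, from any node: move the voting files to the new disk group
crsctl replace votedisk +GRID

# Verify: with Normal or High redundancy there should now be one
# voting file per failure group inside +GRID
crsctl query css votedisk
```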
The voting files are now successfully replaced and multiplexed on the Oracle RAC cluster.

--Nikhil Tatineni--
--12c Oracle RAC--


Oracle RAC: Mirror Copy of OCR on a Different ASM Diskgroup


 
From 11gR2, the OCR (Oracle Cluster Registry) and voting files are stored on ASM. When we install Oracle Clusterware with Normal redundancy, we allocate at least 3 disks to store the OCR and voting files. Clusterware creates a disk group from those disks and places the clusterware files, the OCR and the voting files, on it. With Normal redundancy, Clusterware places the original OCR and its mirror in the same disk group and a voting file on each disk inside the disk group. The OCR stores all the clusterware configuration, so if the disk group carrying the OCR crashes, we lose both the original and the mirror copy. (We do take OCR backups using ocrconfig, but in that scenario we would need downtime to restore the OCR from backup.)
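Clusterware also keeps automatic OCR backups; they can be listed, and a manual backup taken before making changes, with ocrconfig. A sketch (run as root on a cluster node):

```shell
# List the automatic OCR backups kept by Clusterware
ocrconfig -showbackup

# Take an on-demand physical backup before changing the OCR configuration
ocrconfig -manualbackup
```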

To stabilize the RAC environment, we mirror the OCR to a newly created disk group.

Prerequisites for adding a mirror copy on another disk group:
a) Make sure the new disk group is mounted on all ASM instances
b) The disk group's ASM and database compatibility attributes are 11.2 or higher

If we ignore the prerequisites, we encounter the following error:


[root@tnc1 ~]# ocrconfig -add +ocnew
PROT-30: The Oracle Cluster Registry location to be added is not usable
PROC-50: The Oracle Cluster Registry location to be added is inaccessible on nodes tnc1.

failed ——

Before —
col COMPATIBILITY form a15
col DATABASE_COMPATIBILITY form a20
col NAME form a20
select group_number, name, compatibility, database_compatibility from v$asm_diskgroup;

GROUP_NUMBER NAME       COMPATIBILITY   DATABASE_COMPATIBILITY
------------ ---------- --------------- ----------------------
           4 OCNEW      12.1.0.0.0      10.1.0.0.0
           1 FRA        12.1.0.0.0      10.1.0.0.0
           2 GRID       12.1.0.0.0      10.1.0.0.0
           3 SDXH       12.1.0.0.0      10.1.0.0.0
 

Changing the compatibility attributes of the new disk group "OCNEW":

SQL> alter diskgroup ocnew set attribute 'compatible.asm'='12.1';
Diskgroup altered.
SQL> alter diskgroup ocnew set attribute 'compatible.rdbms'='12.1';
Diskgroup altered.

After —

col COMPATIBILITY form a15
col DATABASE_COMPATIBILITY form a20
col NAME form a20
select group_number, name, compatibility, database_compatibility from v$asm_diskgroup;

GROUP_NUMBER NAME       COMPATIBILITY   DATABASE_COMPATIBILITY
------------ ---------- --------------- ----------------------
           4 OCNEW      12.1.0.0.0      12.1.0.0.0
 
Before — the original copy and the mirror copy are in the same disk group "+GRID"
[root@tnc1 ~]# grid_env
[root@tnc1 ~]#
[root@tnc1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
     Version                  :          4
     Total space (kbytes)     :     409568
     Used space (kbytes)      :       1716
     Available space (kbytes) :     407852
     ID                       :  143829273
     Device/File Name         :      +GRID
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
     Cluster registry integrity check succeeded
     Logical corruption check succeeded
After meeting all the prerequisites, add a mirror copy of the OCR to the new disk group "OCNEW" as follows.


— Perform the operation as root:

[root@tnc1 ~]# ocrconfig -add +ocnew

The CRS log shows the following:
2016-02-25 00:57:02.743 [OCTSSD(5273)]CRS-2408: The clock on host tnc1 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.
2016-02-25 01:24:23.365 [CRSD(5837)]CRS-1007: The OCR/OCR mirror location was replaced by +OCNEW/tnc-cluster/OCRFILE/registry.255.904699441.


After —


[root@tnc1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
     Version                  :          4
     Total space (kbytes)     :     409568
     Used space (kbytes)      :       1716
     Available space (kbytes) :     407852
     ID                       :  143829273
     Device/File Name         :      +GRID
                                    Device/File integrity check succeeded
     Device/File Name         :     +OCNEW
                                    Device/File integrity check succeeded

                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
     Cluster registry integrity check succeeded
     Logical corruption check succeeded


Finally, the mirror copy is placed in a different disk group.

Location of OCR
[oracle@tnc2 ~]$ cat /etc/oracle/ocr.loc
#Device/file  getting replaced by device +OCNEW/tnc-cluster/OCRFILE/registry.255.904699441 
ocrconfig_loc=+GRID/tnc-cluster/OCRFILE/registry.255.903665167
ocrmirrorconfig_loc=+OCNEW/tnc-cluster/OCRFILE/registry.255.904699441
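If the mirror ever needs to be removed or relocated again, ocrconfig handles that as well; a hedged sketch (run as root, and only while another OCR location remains registered):

```shell
# Remove the mirror copy from +OCNEW
ocrconfig -delete +ocnew

# Confirm the remaining registered OCR locations
ocrcheck
```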

--Nikhil Tatineni--
--12c Oracle RAC Cluster--

Wednesday, February 24, 2016

Resize Online Redologs Files 12c RAC


This is one of the important steps to stabilize your RAC environment. Maintain at least 3 online redo log groups for each instance, with each group consisting of 2 members and each member placed in a different disk group.

[oracle@tnc1 ~]$ db
[oracle@tnc1 ~]$ sql

SQL*Plus: Release 12.1.0.2.0 Production on Wed Feb 24 00:09:06 2016
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
SQL> show parameter db_name;
NAME        TYPE   VALUE
------------------------------------ ----------- ------------------------------
db_name      string   p1tncd


# Script: use the following query to get the size of the redo log groups #

SQL> 
set echo off
set feedback off
set linesize 120
set pagesize 35
set trim on
set trims on
set lines 120
col group# format 999
col thread# format 999
col member format a50 wrap
col status format a10
col archived format a10
col fsize format 999 heading "Size (MB)"

select l.group#, l.thread#,
f.member,
l.archived,
l.status,
(bytes/1024/1024) fsize
from
v$log l, v$logfile f
where f.group# = l.group#
order by 1,2
/

output #

Currently on this RAC database there are 2 instances/threads, and each thread has only 2 redo log groups of 2 members on different disk groups. We now increase the size of the redo log files and add additional redo log groups for each instance/thread as follows:

SQL> alter database add logfile thread 1 group 5 ('+FRA/P1TNCD/ONLINELOG/redo05.log','+SDXH/P1TNCD/ONLINELOG/redo05.log') size 100m;
SQL> alter database add logfile thread 1 group 6 ('+FRA/P1TNCD/ONLINELOG/redo06.log','+SDXH/P1TNCD/ONLINELOG/redo06.log') size 100m;
SQL> alter database add logfile thread 1 group 7 ('+FRA/P1TNCD/ONLINELOG/redo07.log','+SDXH/P1TNCD/ONLINELOG/redo07.log') size 100m;
SQL> alter database add logfile thread 2 group 8 ('+FRA/P1TNCD/ONLINELOG/redo08.log','+SDXH/P1TNCD/ONLINELOG/redo08.log') size 100m;
SQL> alter database add logfile thread 2 group 9 ('+FRA/P1TNCD/ONLINELOG/redo09.log','+SDXH/P1TNCD/ONLINELOG/redo09.log') size 100m;
SQL> alter database add logfile thread 2 group 10 ('+FRA/P1TNCD/ONLINELOG/redo10.log','+SDXH/P1TNCD/ONLINELOG/redo10.log') size 100m;

Issue a global checkpoint, make sure the status of the old online redo log groups is "INACTIVE", and then drop them from the database:

SQL> alter system checkpoint global;

SQL> alter database drop logfile group 1;
SQL> alter database drop logfile group 2;
SQL> alter database drop logfile group 3;
SQL> alter database drop logfile group 4;
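If a drop fails because a group is still CURRENT or ACTIVE, force log switches and checkpoint again until the group reports INACTIVE. A sketch of the check-and-switch steps, run as SYSDBA (requires the running database):

```shell
sqlplus -S / as sysdba <<'EOF'
-- Check the current status of every group
select group#, thread#, status from v$log order by group#;

-- Advance the current log on all threads (RAC-wide) and checkpoint
alter system archive log current;
alter system checkpoint global;

-- Re-check; drop only groups that now report INACTIVE
select group#, thread#, status from v$log order by group#;
EOF
```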

--Nikhil Tatineni--
--12C RAC CLUSTER--

Saturday, February 20, 2016

Applying Database Rolling Patch on 12c RAC Cluster

Applying Rolling Patch "21948354" on RAC ORACLE_HOME's

a) On node 1: stop CRS, which fails over all services running on node 1 to node 2. Make sure the management database has failed over to node 2.
#crsctl stop crs 

b) Update OPatch if its version is lower than 12.1.0.1.7, using patch 6880880 (select the 12.1.0.1.0 release).


c) Set the appropriate environment variables
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/12.1.0.2/db_1
export PATH=$ORACLE_HOME/OPatch:$PATH:.

d) Take a backup of the ORACLE_HOME

$ cp -r db_1 db_1_backup

e) Check whether previously applied patches conflict with the new rolling PSU patch "21948354":

$ cd 21948354
$ opatch prereq CheckConflictAgainstOHWithDetail -ph ./

f) If the conflict check completes without any conflicts, go ahead and apply the rolling patch:
$ opatch apply -local 

g) When patching of the ORACLE_HOME is complete, bring CRS back up on node 1 and make sure all services are running on node 1.
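Bringing the stack back up and confirming its health can be sketched as (run as root on the patched node):

```shell
# Start Clusterware on this node
crsctl start crs

# Once startup completes, confirm the cluster daemons are healthy
crsctl check crs

# Confirm all resources have come back online
crsctl status resource -t
```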

Follow the same steps to apply the patch on node 2.
After patching the ORACLE_HOMEs on both nodes of the cluster, make sure all resources are up and running:

[oracle@tnc1 ~]$ crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.FRA.dg
               ONLINE  ONLINE       tnc1                     STABLE
               ONLINE  ONLINE       tnc2                     STABLE
ora.GRID.dg
               ONLINE  ONLINE       tnc1                     STABLE
               ONLINE  ONLINE       tnc2                     STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       tnc1                     STABLE
               ONLINE  ONLINE       tnc2                     STABLE
ora.SDXH.dg
               ONLINE  ONLINE       tnc1                     STABLE
               ONLINE  ONLINE       tnc2                     STABLE
ora.asm
               ONLINE  ONLINE       tnc1                     Started,STABLE
               ONLINE  ONLINE       tnc2                     Started,STABLE
ora.net1.network
               ONLINE  ONLINE       tnc1                     STABLE
               ONLINE  ONLINE       tnc2                     STABLE
ora.ons
               ONLINE  ONLINE       tnc1                     STABLE
               ONLINE  ONLINE       tnc2                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       tnc2                     STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       tnc1                     169.254.248.60,STABLE                                
ora.cvu
      1        ONLINE  ONLINE       tnc2                     STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       tnc1                     Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       tnc1                     STABLE
ora.p1tncd.db
      1        ONLINE  ONLINE       tnc2                     Open,STABLE
      2        ONLINE  ONLINE       tnc1                     Open,STABLE
ora.p1tncd.srvnsp.svc
      1        ONLINE  ONLINE       tnc2                     STABLE
      2        ONLINE  ONLINE       tnc1                     STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       tnc2                     STABLE
ora.tnc1.vip
      1        ONLINE  ONLINE       tnc1                     STABLE
ora.tnc2.vip
      1        ONLINE  ONLINE       tnc2                     STABLE
--------------------------------------------------------------------------------

When all resources are online, apply the SQL portion of the patch to the database using the new 12c datapatch utility as follows:

[oracle@tnc2 OPatch]$ ./datapatch -verbose
SQL Patching tool version 12.1.0.2.0 on Sat Feb 20 07:15:30 2016
Copyright (c) 2015, Oracle.  All rights reserved.
Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_21603_2016_02_20_07_15_30/sqlpatch_invocation.log
Connecting to database...OK
Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done
Current state of SQL patches:
Bundle series PSU:
  ID 160119 in the binary registry and not installed in any PDB
Adding patches to installation queue and performing prereq checks...
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED CBN
    Nothing to roll back
    The following patches will be applied:
      21948354 (Database Patch Set Update : 12.1.0.2.160119 (21948354))
Installing patches...
Patch installation complete.  Total patches installed: 3
Validating logfiles...
Patch 21948354 apply (pdb CDB$ROOT): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/21948354/19553095/21948354_apply_P1TNCD_CDBROOT_2016Feb20_07_18_04.log (no errors)
Patch 21948354 apply (pdb PDB$SEED): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/21948354/19553095/21948354_apply_P1TNCD_PDBSEED_2016Feb20_07_18_24.log (no errors)
Patch 21948354 apply (pdb CBN): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/21948354/19553095/21948354_apply_P1TNCD_CBN_2016Feb20_07_18_25.log (no errors)

The database is patched successfully :)
Check which patches are installed in the DATABASE_HOME:


[oracle@tnc1 OPatch]$ ./opatch lsinventory
Oracle Interim Patch Installer version 12.1.0.1.10
Copyright (c) 2016, Oracle Corporation.  All rights reserved.
Oracle Home       : /u01/app/oracle/product/12.1.0.2/db_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0.2/db_1/oraInst.loc
OPatch version    : 12.1.0.1.10
OUI version       : 12.1.0.2.0
Log file location : /u01/app/oracle/product/12.1.0.2/db_1/cfgtoollogs/opatch/opatch2016-02-20_19-21-31PM_1.log
Lsinventory Output file location : /u01/app/oracle/product/12.1.0.2/db_1/cfgtoollogs/opatch/lsinv/lsinventory2016-02-20_19-21-31PM.txt
--------------------------------------------------------------------------------
Local Machine Information::
Hostname: tnc1.localdomain
ARU platform id: 226
ARU platform description:: Linux x86-64
Installed Top-level Products (1): 
Oracle Database 12c                                                  12.1.0.2.0
There are 1 products installed in this Oracle Home.
Interim patches (1) :
Patch  21948354     : applied on Sat Feb 20 05:12:02 EST 2016
Unique Patch ID:  19553095
Patch description:  "Database Patch Set Update : 12.1.0.2.160119 (21948354)"
   Created on 20 Dec 2015, 23:39:33 hrs PST8PDT
Sub-patch  21359755; "Database Patch Set Update : 12.1.0.2.5 (21359755)"
Sub-patch  20831110; "Database Patch Set Update : 12.1.0.2.4 (20831110)"
Sub-patch  20299023; "Database Patch Set Update : 12.1.0.2.3 (20299023)"
Sub-patch  19769480; "Database Patch Set Update : 12.1.0.2.2 (19769480)"
   Bugs fixed:
     19189525, 19075256, 19141838, 19865345, 19791273, 19280225, 18845653
     20951038, 19243521, 19248799, 21756699, 18988834, 19238590, 21281532
     20245930, 18921743, 18799063, 19134173, 19571367, 20476175, 20925795
     19018206, 20509482, 20387265, 20588502, 19149990, 18849537, 18886413
     17551063, 19183343, 19703301, 19001390, 18202441, 19189317, 19644859
     19358317, 19390567, 19279273, 19706965, 19068970, 19619732, 20348653
     18607546, 18940497, 19670108, 19649152, 18948177, 19315691, 19676905
     18964978, 19035573, 20165574, 19176326, 20413820, 20558005, 19176223
     19532017, 20134339, 19074147, 18411216, 20361671, 20425790, 18966843
     20294666, 19307662, 19371175, 19195895, 19154375, 19468991, 19174521
     19520602, 19382851, 21875360, 19326908, 19658708, 20093776, 20618595
     21787056, 17835294, 19791377, 19068610, 20048359, 20746251, 19143550
     19185876, 19627012, 20281121, 19577410, 22092979, 19001359, 19518079
     18610915, 19490948, 18674024, 18306996, 19309466, 19081128, 19915271
     20122715, 21188532, 20284155, 18791688, 20890311, 21442094, 18973548
     19303936, 19597439, 20235511, 18964939, 19430401, 19044962, 19409212
     19879746, 20657441, 19684504, 19024808, 18799993, 20877664, 19028800
     19065556, 19723336, 19077215, 19604659, 21421886, 19524384, 19048007
     18288842, 19689979, 20446883, 18952989, 16870214, 19928926, 21526048
     19180770, 19197175, 19902195, 20318889, 19730508, 19012119, 19067244
     20074391, 19512341, 19841800, 14643995, 20331945, 19587324, 19547370
     19065677, 19637186, 21225209, 20397490, 18967382, 19174430, 18674047
     19054077, 19536415, 19708632, 19289642, 20869721, 19335438, 17365043
     18856999, 19869255, 20471920, 19468347, 21620471, 16359751, 18990693
     17890099, 19439759, 19769480, 19272708, 19978542, 20101006, 21300341
     20402832, 19329654, 19873610, 21668627, 21517440, 19304354, 19052488
     20794034, 19291380, 18681056, 19896336, 19076343, 19561643, 18618122
     20440930, 18456643, 19699191, 18909599, 19487147, 18250893, 19016730
     18743542, 20347562, 16619249, 18354830, 19687159, 19174942, 20424899
     19989009, 20688221, 20441797, 19157754, 19032777, 19058490, 19399918
     18885870, 19434529, 19018447, 18417036, 20919320, 19022470, 19284031
     20474192, 20173897, 22062026, 19385656, 19501299, 17274537, 20899461
     19440586, 16887946, 19606174, 18436647, 17655240, 19023822, 19178851
     19124589, 19805359, 19597583, 19155797, 19393542, 19050649
Rac system comprising of multiple nodes
  Local node = tnc1
  Remote node = tnc2
--------------------------------------------------------------------------------
OPatch succeeded.

--Nikhil Tatineni--
--12c RAC Cluster---

Redolog File Advisor

When the "fast_start_mttr_target" parameter is set, the MTTR advisor and the redo log file size advisor are enabled on the database.

Use the following query to check the size and location of the redo log groups on the database:
set linesize 300
column REDOLOG_FILE_NAME format a50
SELECT
    a.GROUP#,
    a.THREAD#,
    a.SEQUENCE#,
    a.ARCHIVED,
    a.STATUS,
    b.MEMBER    AS REDOLOG_FILE_NAME,
    (a.BYTES/1024/1024) AS SIZE_MB
FROM v$log a
JOIN v$logfile b ON a.Group#=b.Group# 
ORDER BY a.GROUP# ASC;

GROUP# THREAD# SEQUENCE# ARC STATUS   REDOLOG_FILE_NAME                             SIZE_MB
------ ------- --------- --- -------- --------------------------------------------- -------
     1       1        29 NO  INACTIVE +FRA/P1TNCD/ONLINELOG/group_1.257.903695363        50
     1       1        29 NO  INACTIVE +SDXH/P1TNCD/ONLINELOG/group_1.262.903695363       50
     2       1        30 NO  CURRENT  +FRA/P1TNCD/ONLINELOG/group_2.258.903695363        50
     2       1        30 NO  CURRENT  +SDXH/P1TNCD/ONLINELOG/group_2.263.903695363       50
     3       2        23 NO  INACTIVE +SDXH/P1TNCD/ONLINELOG/group_3.269.903695731       50
     3       2        23 NO  INACTIVE +FRA/P1TNCD/ONLINELOG/group_3.259.903695731        50
     4       2        24 NO  CURRENT  +SDXH/P1TNCD/ONLINELOG/group_4.270.903695731       50
     4       2        24 NO  CURRENT  +FRA/P1TNCD/ONLINELOG/group_4.260.903695731        50

The column "v$instance_recovery.OPTIMAL_LOGFILE_SIZE" gives the optimal redo log size for the current workload on the database.

desc v$instance_recovery;
SQL> select  OPTIMAL_LOGFILE_SIZE,  TARGET_MTTR from v$instance_recovery;
OPTIMAL_LOGFILE_SIZE TARGET_MTTR
-------------------- -----------
                 236          13

For this database, the suggested optimal redo log size is 236 MB.
The current redo log size is 50 MB, so drop and recreate the logs at (at least) 236 MB, as recommended by the redo log size advisor.
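Following the same add-then-drop procedure from the resize section above, the groups can be rebuilt at or above the advised size. A sketch, assuming the same +FRA and +SDXH disk groups and rounding 236 MB up to 256 MB (group numbers are illustrative):

```shell
sqlplus -S / as sysdba <<'EOF'
-- Add new, larger groups first (repeat per thread/group as needed;
-- bare disk group names let OMF generate the member file names)
alter database add logfile thread 1 group 5 ('+FRA','+SDXH') size 256m;
alter database add logfile thread 2 group 6 ('+FRA','+SDXH') size 256m;

-- Then switch, checkpoint, and drop the old 50 MB groups once INACTIVE
alter system archive log current;
alter system checkpoint global;
EOF
```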


####

select
   Log_Hist.Start_Date,
   Log_Hist.Start_Time,
   Log_Hist.Num_Logs,
   round(Log_Hist.Num_Logs * (Vl.Bytes / (1024 * 1024)), 2) as Mbytes,
   Vdb.Name as Dbname
from
   (select to_char(Vlh.First_Time, 'YYYY-MM-DD') as Start_Date,
           to_char(Vlh.First_Time, 'HH24') || ':00' as Start_Time,
           count(Vlh.Thread#) as Num_Logs
      from V$log_History Vlh
     group by to_char(Vlh.First_Time, 'YYYY-MM-DD'),
              to_char(Vlh.First_Time, 'HH24') || ':00') Log_Hist,
   V$log Vl,
   V$database Vdb
where
   Vl.Group# = 1
order by
   Log_Hist.Start_Date,
   Log_Hist.Start_Time;


START_DATE START   NUM_LOGS     MBYTES DBNAME
---------- ----- ---------- ---------- ---------
2016-02-13 10:00          8        400 P1TNCD
2016-02-13 11:00          1         50 P1TNCD
2016-02-13 14:00          1         50 P1TNCD
2016-02-13 20:00          2        100 P1TNCD
2016-02-13 23:00          1         50 P1TNCD
2016-02-14 00:00          2        100 P1TNCD
2016-02-14 13:00          5        250 P1TNCD
2016-02-14 14:00          1         50 P1TNCD
2016-02-14 17:00          2        100 P1TNCD
2016-02-14 20:00          1         50 P1TNCD
2016-02-14 21:00          1         50 P1TNCD

Oracle Database Checkpoint Tuning

When a checkpoint fires, DBWR writes dirty data from memory to disk; checkpoints also bound the amount of work needed for instance recovery.

Enable the MTTR advisory on the database:
SQL> alter system set fast_start_mttr_target=1800 scope=both;
System altered.

SQL> show parameter statistics
NAME     TYPE VALUE
------------------------------------ ----------- ------------------------------
statistics_level     string TYPICAL

SQL> show parameter fast_start_mttr_target
NAME                              TYPE                 VALUE
------------------------------------ ----------- ------------------------------
fast_start_mttr_target     integer              1800

fast_start_mttr_target: the target number of seconds the database should take to perform crash recovery; it bounds how long dirty buffers can remain unwritten, which drives how aggressively DBWR writes them to disk.

SQL> select TARGET_MTTR, ESTIMATED_MTTR from v$instance_recovery;
TARGET_MTTR ESTIMATED_MTTR
----------- --------------
         13              0

i.e., here TARGET_MTTR < FAST_START_MTTR_TARGET

SQL> show parameter log_checkpoint
NAME                               TYPE              VALUE
------------------------------------ ----------- ------------------------------
log_checkpoint_interval      integer              0
log_checkpoint_timeout     integer             1800
log_checkpoints_to_alert     boolean          FALSE

SQL> show parameter fast
NAME                                TYPE      VALUE
------------------------------------ ----------- ------------------------------
fast_start_io_target             integer                0
fast_start_mttr_target          integer          1800
fast_start_parallel_rollback     string          LOW


SQL> select MTTR_TARGET_FOR_ESTIMATE,ADVICE_STATUS from v$MTTR_TARGET_ADVICE;

MTTR_TARGET_FOR_ESTIMATE ADVICE_STATUS
------------------------ -------------
                       6 ON
                      13 ON
                       8 ON
                      10 ON
                       4 ON

NOTE:
We set "fast_start_mttr_target" to limit the instance recovery time.
"log_checkpoint_timeout" should be greater than "fast_start_mttr_target".
"log_checkpoint_timeout" is the maximum amount of time a dirty buffer can stay in memory.
Remember that less frequent checkpoints make instance recovery take longer, and vice versa: more frequent checkpoints add overhead to database performance. While tuning, set an optimal value for "log_checkpoint_timeout".
Oracle recommends setting "log_checkpoint_interval" rather than "log_checkpoint_timeout".
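To observe checkpoint behavior while tuning, checkpoint begin/end times can be written to the alert log, and the advisor re-queried as the workload runs; a small sketch as SYSDBA (needs the running database):

```shell
sqlplus -S / as sysdba <<'EOF'
-- Record checkpoint start/end in the alert log while tuning
alter system set log_checkpoints_to_alert = true scope=both sid='*';

-- Re-check the advisors under the running workload
select mttr_target_for_estimate, advice_status from v$mttr_target_advice;
select target_mttr, estimated_mttr, optimal_logfile_size from v$instance_recovery;
EOF
```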


--Nikhil Tatineni--
--Performance Tuning--