Wednesday, May 26, 2010

Sizing of Standby redo log in Oracle 10g Dataguard

In Oracle 10g Dataguard replication, standby redo logs let committed changes propagate from the primary to the physical standby database without waiting for an archived log to be shipped. But you need to make sure the standby redo logs are the same size as the online redo logs. For performance reasons, don't multiplex the standby redo logs, and create one more standby redo log group than you have online redo log groups.
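To get the numbers right, read the online redo log size and group count straight from v$log on the primary; a minimal sanity check before creating the standby redo logs:

SQL> -- Online redo log groups and size; standby redo logs should match
SQL> -- this size, with one more group per thread than shown here
SQL> select group#, thread#, bytes/1024/1024 size_mb from v$log;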


If the standby redo logs are a different size on the physical standby, you won't see any error in the alert log. But new records won't appear on the standby, and querying the v$managed_standby view will show the MRP0 process stuck in WAIT_FOR_LOG on the standby instance.

SQL> select process, status, sequence#,block#,blocks,delay_mins from v$managed_standby;
PROCESS   STATUS       SEQUENCE#     BLOCK#     BLOCKS DELAY_MINS
--------- ------------ ---------- ---------- ---------- ----------
ARCH      CONNECTED             0          0          0          0
ARCH      CONNECTED             0          0          0          0
MRP0      WAIT_FOR_LOG       2255          0          0          0
RFS       IDLE               2255     133468          1          0
RFS       IDLE                  0          0          0          0
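You can confirm that a size mismatch is the culprit by comparing the two dictionary views side by side; a quick sketch:

SQL> -- On the primary: online redo log sizes
SQL> select group#, bytes/1024/1024 size_mb from v$log;
SQL> -- On the standby: standby redo log sizes; a difference here is what
SQL> -- keeps MRP0 sitting in WAIT_FOR_LOG
SQL> select group#, bytes/1024/1024 size_mb from v$standby_log;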

In my case I initially created 100 MB standby redo logs while my online redo logs were 200 MB in size. Because of this difference, real-time propagation was not working. So I dropped the initial 100 MB standby redo logs and created 200 MB ones on the primary instance.
SQL> alter database drop logfile group 5;
SQL> alter database drop logfile group 6;
SQL> alter database drop logfile group 7;
SQL> alter database drop logfile group 8;
SQL> alter database add standby logfile group 5 ('/redo1/ptnt/ptnt_srl_01a.log') size 200M reuse ;
SQL> alter database add standby logfile group 6 ('/redo2/ptnt/ptnt_srl_02a.log') size 200M reuse;
SQL> alter database add standby logfile group 7 ('/redo1/ptnt/ptnt_srl_03a.log') size 200M reuse;
SQL> alter database add standby logfile group 8 ('/redo2/ptnt/ptnt_srl_04a.log') size 200M reuse;
SQL> alter database add standby logfile group 9 ('/redo2/ptnt/ptnt_srl_05a.log') size 200M;
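Before moving on, it doesn't hurt to confirm the new groups from v$standby_log:

SQL> -- All standby redo log groups should now report 200 MB
SQL> select group#, thread#, bytes/1024/1024 size_mb, status from v$standby_log;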

Do the same thing on ptnt_stndby, the standby instance.

Don't forget to cancel the running managed recovery on the standby instance before adding the standby redo logs. If you don't, you will get the following error.

SQL> alter database add standby logfile group 5 ('/redo1/ptnt_stndby/ptnt_srl_01a.log') size 200M reuse;
*
ERROR at line 1:
ORA-01156: recovery in progress may need access to files
SQL> alter database recover managed standby database cancel;
Database altered.
SQL> alter database add standby logfile group 5 ('/redo1/ptnt_stndby/ptnt_srl_01a.log') size 200M reuse;

Repeat the steps for the other standby redo log groups too.

Put the standby database instance back in managed recovery mode with real-time apply:

SQL> alter database recover managed standby database using current logfile disconnect;
Database altered.
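To confirm that real-time apply actually kicked in, v$archive_dest_status on the primary reports the standby destination's recovery mode; assuming dest 2 is the standby destination as configured below:

SQL> -- Expect 'MANAGED REAL TIME APPLY' once the standby recovers
SQL> -- using the current logfile
SQL> select dest_id, recovery_mode from v$archive_dest_status where dest_id = 2;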

Modify the primary and standby instances as follows.

On primary
SQL> alter system set log_archive_dest_2='SERVICE=ptnt_standby.unix.gsm1900.org LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=ptnt_standby' scope=both;
System altered.


On secondary
SQL> alter system set log_archive_dest_2='SERVICE=ptnt.unix.gsm1900.org LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=ptnt' scope=both;
System altered.
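A quick way to verify the destination change took effect and is healthy, run on whichever side is currently primary:

SQL> show parameter log_archive_dest_2
SQL> -- STATUS should be VALID and ERROR empty for the standby destination
SQL> select dest_id, status, error from v$archive_dest where dest_id = 2;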

After recreating the standby redo logs with the same size, changes started propagating to the physical standby in near real time.



PROCESS   STATUS       SEQUENCE#     BLOCK#     BLOCKS DELAY_MINS
--------- ------------ ---------- ---------- ---------- ----------
ARCH      CONNECTED             0          0          0          0
ARCH      CLOSING            2270     397313       1636          0
MRP0      APPLYING_LOG       2271      61303     409600          0
RFS       IDLE               2271      61306          5          0
RFS       IDLE                  0          0          0          0
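If you want an actual number behind "near real-time", v$dataguard_stats on the standby reports the lag directly in 10g:

SQL> -- Run on the standby; values are interval strings such as +00 00:00:02
SQL> select name, value from v$dataguard_stats where name in ('apply lag', 'transport lag');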

Friday, May 7, 2010

Managing datapump export jobs

Datapump is a really versatile tool for managing schema-level backups from a DBA perspective. DBAs can now create duplicate schemas with far less effort than the same task took with the old export/import utilities. But there is a small learning curve involved, especially around what happens when you kill a datapump export script because of space issues or for any other reason.
Even if the job is killed and the corresponding dmp files are deleted, you won't see the free space at the file system level. If a job is running and you wish to abort it, the best way to do it is through the datapump interactive commands.


So first, check the datapump job status:

SQL> select * from dba_datapump_jobs;
OWNER_NAME  JOB_NAME            OPERATION  JOB_MODE  STATE        DEGREE  ATTACHED_SESSIONS  DATAPUMP_SESSIONS
----------  ------------------  ---------  --------  -----------  ------  -----------------  -----------------
OPS$ORACLE  SYS_EXPORT_FULL_01  EXPORT     FULL      NOT RUNNING       0                  0                  0

Now if you wish to stop or kill this job, attach to the job session using the following command:

[oracle@prdpxdb003:/backup/mad1/exp] $ expdp / attach=SYS_EXPORT_FULL_01
Export: Release 10.2.0.4.0 - 64bit Production on Thursday, 06 May, 2010 9:11:50
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Job: SYS_EXPORT_FULL_01
  Owner: OPS$ORACLE
  Operation: EXPORT
  Creator Privs: FALSE
  GUID: 858D4FB8E1B280C4E0430A9A1D5980C4
  Start Time: Thursday, 06 May, 2010 9:11:53
  Mode: FULL
  Instance: pjda1
  Max Parallelism: 1
  EXPORT Job Parameters:
  Parameter Name      Parameter Value:
     CLIENT_COMMAND      parfile=/backup/pjda1/exp/pjda1_dp_export_05012010_111300.par
  State: IDLING
  Bytes Processed: 102,211,472,336
  Percent Done: 79
  Current Parallelism: 1
  Job Error Count: 0
  Dump File: /backup/mad1/exp/mad1_dp_export_05012010_111300.dmp
    bytes written: 105,503,084,544

Worker 1 Status:
  State: UNDEFINED
  Object Schema: JDAINTFC
  Object Name: INITIALLOAD_HIST
  Object Type: DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
  Completed Objects: 7
  Total Objects: 3,760
  Completed Rows: 49,405,939
  Worker Parallelism: 1


Export> kill_job
Are you sure you wish to stop this job ([yes]/no): yes
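One distinction worth calling out: kill_job throws everything away, while the interactive stop_job command keeps the master table and dump files so the job can be resumed later. If pausing is what you actually want, the flow looks like this (job name reused from above for illustration):

Export> stop_job=immediate
Are you sure you wish to stop this job ([yes]/no): yes

[oracle@prdpxdb003:/backup/mad1/exp] $ expdp / attach=SYS_EXPORT_FULL_01
Export> start_job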


The kill_job command deletes the dmp file created as part of the expdp run and frees up the disk space.
If you did not follow this method and instead manually killed the running script and deleted the dump files, you will not be able to reclaim the space until you drop the datapump master table, as follows.


Here are the steps to perform that.
1) Find the job_name, either from dba_datapump_jobs or by joining dba_datapump_sessions with v$session:

SQL> select a.owner_name, a.job_name, b.sid, b.serial#, b.username from dba_datapump_sessions a, v$session b where a.saddr = b.saddr;

OWNER_NAME  JOB_NAME            SID  SERIAL#  USERNAME
----------  ------------------  ---  -------  ----------
OPS$ORACLE  SYS_EXPORT_FULL_06  465    54925  OPS$ORACLE
OPS$ORACLE  SYS_EXPORT_FULL_06  478    50840  OPS$ORACLE


SQL> select * from dba_objects where object_name='SYS_EXPORT_FULL_06';

OWNER       OBJECT_NAME         SUBOBJECT_NAME  OBJECT_ID  DATA_OBJECT_ID  OBJECT_TYPE  CREATED    LAST_DDL_  TIMESTAMP            STATUS  T  G  S
----------  ------------------  --------------  ---------  --------------  -----------  ---------  ---------  -------------------  ------  -  -  -
OPS$ORACLE  SYS_EXPORT_FULL_06                      76558           76558  TABLE        30-APR-10  30-APR-10  2010-04-30:11:13:07  VALID   N  N  N

2) For every job_name there is an associated master table in the job owner's schema in the Oracle dictionary. Drop that table.
SQL> drop table SYS_EXPORT_FULL_06 purge;
Table dropped.
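As a final sanity check, the orphaned job should now be gone from the dictionary; the query below should return no row for the dropped job:

SQL> -- The killed job should no longer show up
SQL> select owner_name, job_name, state from dba_datapump_jobs where job_name = 'SYS_EXPORT_FULL_06';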