Sunday, February 13, 2011

ORA-39014: One or more workers have prematurely exited error message with Oracle datapump export

Last week I came across an interesting problem on a 10gR2 (10.2.0.4) database instance on the AIX platform. I have a daily full Data Pump export that has been running for the last year without any errors. But when I tried to export a couple of tables from one schema, I got an ORA-39014 error. Here is the full error message.

Processing object type TABLE_EXPORT/TABLE/TRIGGER
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
ORA-39014: One or more workers have prematurely exited.
ORA-39029: worker 1 with process name "DW01" prematurely terminated
ORA-31672: Worker process DW01 died unexpectedly.
Job "OPS$ORACLE"."SYS_EXPORT_TABLE_01" stopped due to fatal error at 14:05:45
ORA-39014: One or more workers have prematurely exited.

My export par file was as follows

$ cat special_dp_export_pjda1_tables.par
userid=/
DIRECTORY=EXPDP_DIR
DUMPFILE=special_dp_export_pjda1_tables_02082011.dmp
LOGFILE=special_dp_export_pjda1_tables_02082011.log
TABLES=STSC.ITEM,
STSC.DMDUNIT

And the Data Pump export command was

$ expdp parfile=special_dp_export_pjda1_tables.par

I did some Metalink searching and found that this is a bug in 10.2.0.4, and the suggested solution was to upgrade to 11g, which is not possible for me given the production nature of this instance.

I was able to get a successful Data Pump export by modifying my par file as follows

userid='/ as sysdba'
DIRECTORY=EXPDP_DIR
DUMPFILE=special_dp_export_pjda1_tables_02082011.dmp
LOGFILE=special_dp_export_pjda1_tables_02082011.log
TABLES=STSC.ITEM,
STSC.DMDUNIT

At the beginning I mentioned that I have a daily full Data Pump export which has been running without any errors for the last year with userid=/ and not with userid='/ as sysdba'. What a mystery. I was also able to reproduce this error on another 10.2.0.4 Oracle instance. Does anyone have an answer as to why I don't get this error with the full export?
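One way to dig further (a diagnostic sketch, not a confirmed root cause for this bug) is to look at the trace file the dying DWnn worker leaves behind, since it usually records the underlying error beneath ORA-31672:

```
-- Worker trace files (e.g. <sid>_dw01_<pid>.trc) are written to BACKGROUND_DUMP_DEST
SELECT value FROM v$parameter WHERE name = 'background_dump_dest';
```

Grep that directory for the DW01 trace around the failure timestamp to see what actually killed the worker.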

Friday, February 4, 2011

ORA-24247 when trying to send email using UTL_SMTP in Oracle 11gR2

If any of your applications use the SYS.UTL_SMTP package to send emails, you will come across ORA-24247 after upgrading to Oracle 11gR2. The following notes explain the problem and the solution.

I have a procedure that uses the SYS.UTL_SMTP package to send mail from my SQL scripts, and it ran fine in Oracle 10gR2. Here is a simplified version of the code.


create or replace procedure mail_send_test
as
  smtp_server varchar2(50);
  c           utl_smtp.connection;
begin
  smtp_server := 'mailserver.domain.name';
  -- Open an SMTP connection on port 25
  c := utl_smtp.open_connection(smtp_server, 25);
  dbms_output.put_line('connection success');
end;
/


I started getting the following error after upgrading my database instance to 11gR2.
Here is the full work log.


[oracle@prdcxx002:/home/oracle] $ sqlplus /
SQL*Plus: Release 11.2.0.2.0 Production on Wed Feb 2 15:40:14 2011
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> @smtp_test.sql;
Procedure created.
SQL> exec mail_send_test;
BEGIN mail_send_test; END;
*
ERROR at line 1:
ORA-24247: network access denied by access control list (ACL)
ORA-06512: at "SYS.UTL_TCP", line 17
ORA-06512: at "SYS.UTL_TCP", line 267
ORA-06512: at "SYS.UTL_SMTP", line 161
ORA-06512: at "SYS.UTL_SMTP", line 197
ORA-06512: at "OPS$ORACLE.MAIL_SEND_TEST", line 10
ORA-06512: at line 1


After reading Oracle Metalink and the Oracle 11g manual, I understood that this is expected behavior: Oracle started enforcing Access Control Lists (ACLs) on sensitive packages like UTL_SMTP, UTL_HTTP, etc. to close security loopholes. We now need to grant privileges to users who need these packages via the SYS.DBMS_NETWORK_ACL_ADMIN package, which provides security for the network-related PL/SQL packages UTL_TCP, UTL_HTTP, UTL_SMTP, UTL_MAIL, and UTL_INADDR. Here are the steps to grant them. Note that XDB must be installed for ACLs to work.


The solution is to execute the following PL/SQL blocks after connecting as SYSDBA.
1) Create an access control list (ACL) with an initial privilege setting


begin
  dbms_network_acl_admin.create_acl (
    acl         => 'mailserver_acl.xml',
    description => 'ACL for user to connect to SMTP Server',
    principal   => 'ORACLE_USERNAME',
    is_grant    => TRUE,
    privilege   => 'connect');
end;
/
commit;


2) Assign the ACL to a network host, optionally restricted to a TCP port range


begin
  dbms_network_acl_admin.assign_acl (
    acl        => 'mailserver_acl.xml',
    host       => 'mailservername.domain.name',
    lower_port => 25);
end;
/
commit;


3) Add a privilege to grant or deny network access to a user in the ACL


begin
  dbms_network_acl_admin.add_privilege (
    acl       => 'mailserver_acl.xml',
    principal => 'OPS$ORACLE',
    is_grant  => TRUE,
    privilege => 'connect');
end;
/
commit;


After these three steps, I logged in as oracle on prdcmds002 and executed the procedure that connects to the mail server, and it worked without any ORA errors.


SQL> set serveroutput on
SQL> exec mail_send_test;
connection success
PL/SQL procedure successfully completed.


Please note that, due to a bug, these privileges must be granted to individual users and not to a role. See Bug 7010711: ACL PRIVILEGES GRANTED TO ROLES ARE NOT CASCADED TO THE USERS.
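To confirm whether a given user actually ends up with a privilege through an ACL, you can call DBMS_NETWORK_ACL_ADMIN.CHECK_PRIVILEGE. This is just a verification sketch reusing the ACL name from above:

```
-- Returns 1 if 'connect' is granted to OPS$ORACLE in this ACL,
-- 0 if explicitly denied, NULL if there is no matching entry
SELECT dbms_network_acl_admin.check_privilege(
         'mailserver_acl.xml', 'OPS$ORACLE', 'connect') AS is_granted
FROM dual;
```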


References
=======
1. See Oracle Metalink Doc: 754909.1 - ORA-24247 Calling UTL_SMTP or UTL_HTTP or UTL_TCP in a Stored Procedure [ID 754909.1]
2. See Oracle Metalink Doc: 557070.1 - ORA-24247 Trying To Send Email Using UTL_SMTP from 11gR1 (11.1.0.6) or higher

We need to set an ACL for the UTL_INADDR package as well in Oracle 11g. We can check this by issuing
SELECT utl_inaddr.get_host_name FROM dual;
If you get ORA-24247, create an ACL with privilege => 'resolve'.

Here is the SQL I used.

-- Create ACL for using UTL_INADDR
begin
  dbms_network_acl_admin.create_acl (
    acl         => 'resolve_ipaddr_acl.xml',
    description => 'ACL for user to resolve network names and IP addresses',
    principal   => 'ETRADM',
    is_grant    => TRUE,
    privilege   => 'resolve');
end;
/
commit;


-- Assign ACL to one or more network hosts
begin
  dbms_network_acl_admin.assign_acl (
    acl  => 'resolve_ipaddr_acl.xml',
    host => '*');
end;
/
commit;


You need XDB to be installed for ACLs to work, so if you get the following error, verify that XDB is installed.

Duplicate Filename Found for Submission_id: 6011
Standard exception
VALIDATE Submission: 6011 Step: Duplicate Submission Ch
Exception CODE: 1
Exception ERRM: User-Defined Exception
Stop Time: 21-MAR-2011 08:15:04 AM
DECLARE
*
ERROR at line 1:
ORA-24248: XML DB extensible security not installed
ORA-06512: at "SYS.UTL_TCP", line 17
ORA-06512: at "SYS.UTL_TCP", line 246
ORA-06512: at "SYS.UTL_SMTP", line 127
ORA-06512: at "SYS.UTL_SMTP", line 150
ORA-06512: at "FFLADM.EMAIL_HANDLER", line 36
ORA-06512: at "FFLADM.REBATE_PROCESS", line 515
ORA-06510: PL/SQL: unhandled user-defined exception
ORA-06512: at line 5

You can use the following script to install XDB.

SET VERIFY OFF
connect "SYS"/"&&sysPassword" as SYSDBA
set echo on
spool /orasw/app/oracle/admin/scripts/xdb_protocol.log append
@/orasw/app/oracle/product/11.2.0/db/rdbms/admin/catqm.sql change_on_install SYSAUX TEMP YES;
connect "SYS"/"&&sysPassword" as SYSDBA
@/orasw/app/oracle/product/11.2.0/db/rdbms/admin/catxdbj.sql;
@/orasw/app/oracle/product/11.2.0/db/rdbms/admin/catrul.sql;
spool off
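Before or after running the script, you can verify whether XDB is present and VALID with a quick query against the component registry; a standard check, shown here as a sketch:

```
-- XDB should show up as VALID once installed
SELECT comp_id, comp_name, status, version
FROM dba_registry
WHERE comp_id = 'XDB';
```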

You can use the following views to see the current ACL settings.
 
SQL>  select acl,principal from dba_network_acl_privileges;

ACL                                 PRINCIPAL
----------------------------------- -------------------------
/sys/acls/mailserver_acl.xml        OPS$ORACLE
/sys/acls/mailserver_acl.xml        ETRADM
/sys/acls/resolve_ipaddr_acl.xml    ETRADM

SQL> select * from DBA_NETWORK_ACLS;

HOST                           LOWER_PORT UPPER_PORT ACL                              ACLID
------------------------------ ---------- ---------- -------------------------------- --------------------------------
myserver.unix.tarikida.org             25         25 /sys/acls/mailserver_acl.xml     9B6B9AF666988026E0430A9A1D188026
*                                                    /sys/acls/resolve_ipaddr_acl.xml 9B7AC2AFB00B202AE0430A9A1D18202A
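If you ever need to back out an ACL (for example after testing), DBMS_NETWORK_ACL_ADMIN also provides unassign and drop calls. A cleanup sketch using the ACL created above:

```
begin
  -- Remove host assignments first, then drop the ACL itself
  dbms_network_acl_admin.unassign_acl(acl => 'mailserver_acl.xml');
  dbms_network_acl_admin.drop_acl(acl => 'mailserver_acl.xml');
end;
/
commit;
```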

Wednesday, May 26, 2010

Sizing of Standby redo log in Oracle 10g Dataguard

In Oracle 10g Data Guard replication, standby redo logs let committed records propagate from the primary to the physical standby database without having to wait for log shipping. But you need to make sure the standby redo logs are the same size as the online redo logs. For performance reasons, don't multiplex the standby redo logs, and create one extra standby redo log group.
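A quick way to compare sizes before creating anything is to query both log views on the primary; a verification sketch:

```
-- Online redo log sizes
SELECT group#, thread#, bytes/1024/1024 AS size_mb FROM v$log;

-- Standby redo log sizes (should match the online log size exactly)
SELECT group#, thread#, bytes/1024/1024 AS size_mb FROM v$standby_log;
```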


If the standby redo logs are a different size on the physical standby, you won't see any error in the alert log. But new records won't appear on the standby, and querying the v$managed_standby view will show the MRP0 process in WAIT_FOR_LOG status on the standby instance.

SQL> select process, status, sequence#, block#, blocks, delay_mins from v$managed_standby;

PROCESS   STATUS        SEQUENCE#     BLOCK#     BLOCKS DELAY_MINS
--------- ------------ ---------- ---------- ---------- ----------
ARCH      CONNECTED             0          0          0          0
ARCH      CONNECTED             0          0          0          0
MRP0      WAIT_FOR_LOG       2255          0          0          0
RFS       IDLE               2255     133468          1          0
RFS       IDLE                  0          0          0          0

In my case I had initially created 100 MB standby redo logs while my online redo logs were 200 MB. Because of this size difference, real-time propagation was not working. So I dropped the initial 100 MB standby redo logs and created 200 MB ones on the primary instance.
SQL> alter database drop logfile group 5;
SQL> alter database drop logfile group 6;
SQL> alter database drop logfile group 7;
SQL> alter database drop logfile group 8;
SQL> alter database add standby logfile group 5 ('/redo1/ptnt/ptnt_srl_01a.log') size 200M reuse ;
SQL> alter database add standby logfile group 6 ('/redo2/ptnt/ptnt_srl_02a.log') size 200M reuse;
SQL> alter database add standby logfile group 7 ('/redo1/ptnt/ptnt_srl_03a.log') size 200M reuse;
SQL> alter database add standby logfile group 8 ('/redo2/ptnt/ptnt_srl_04a.log') size 200M reuse;
SQL> alter database add standby logfile group 9 ('/redo2/ptnt/ptnt_srl_05a.log') size 200M;
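After adding the groups, it is worth verifying that the new standby redo logs exist with the right size and sit UNASSIGNED until needed; a verification sketch:

```
SELECT group#, bytes/1024/1024 AS size_mb, status
FROM v$standby_log
ORDER BY group#;
```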

Do the same thing on ptnt_stndby, the standby instance.

Don't forget to cancel the current managed recovery on the standby instance before adding standby redo logs; otherwise you will get the following error.

SQL> alter database add standby logfile group 5 ('/redo1/ptnt_stndby/ptnt_srl_01a.log') size 200M reuse;
*

ERROR at line 1:
ORA-01156: recovery in progress may need access to files
SQL> alter database recover managed standby database cancel;
Database altered.
SQL> alter database add standby logfile group 5 ('/redo1/ptnt_stndby/ptnt_srl_01a.log') size 200M reuse;

Repeat the steps for other standby redo log files too.

Put the standby database instance back in recovery mode

SQL> alter database recover managed standby database using current logfile disconnect;
Database altered.

Modify the primary and standby instances as follows.

On primary
SQL> alter system set log_archive_dest_2='SERVICE=ptnt_standby.unix.gsm1900.org LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=ptnt_standby' scope=both;
System altered.


On secondary
SQL> alter system set log_archive_dest_2='SERVICE=ptnt.unix.gsm1900.org LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=ptnt' scope=both;
System altered.

After recreating the standby redo logs with the same size, changes started propagating to the physical standby in near real time.



PROCESS   STATUS        SEQUENCE#     BLOCK#     BLOCKS DELAY_MINS
--------- ------------ ---------- ---------- ---------- ----------
ARCH      CONNECTED             0          0          0          0
ARCH      CLOSING            2270     397313       1636          0
MRP0      APPLYING_LOG       2271      61303     409600          0
RFS       IDLE               2271      61306          5          0
RFS       IDLE                  0          0          0          0
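To quantify how far behind the standby is, the v$dataguard_stats view on the standby reports transport and apply lag; a monitoring sketch:

```
-- Run on the standby instance
SELECT name, value, time_computed
FROM v$dataguard_stats
WHERE name IN ('transport lag', 'apply lag');
```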

Friday, May 7, 2010

Managing datapump export jobs

Data Pump is a really versatile tool for managing schema-level backups from a DBA perspective. DBAs can now create duplicate schemas with far less effort than the same job required with the old export/import. But there is a small learning curve with Data Pump, especially if you kill a Data Pump export script due to space issues or for any other reason.
Even if the job is killed and the corresponding dmp files are deleted, you won't see the free space at the file system level. If a job is running and you wish to abort it, the best way is through Data Pump commands.


So first, check the Data Pump job status.

SQL> select * from dba_datapump_jobs;

OWNER_NAME JOB_NAME           OPERATION JOB_MODE STATE       DEGREE ATTACHED_SESSIONS DATAPUMP_SESSIONS
---------- ------------------ --------- -------- ----------- ------ ----------------- -----------------
OPS$ORACLE SYS_EXPORT_FULL_01 EXPORT    FULL     NOT RUNNING      0                 0                 0

Now if you wish to stop or kill this job, attach to the job session using the following command.

[oracle@prdpxdb003:/backup/mad1/exp] $ expdp / attach=SYS_EXPORT_FULL_01
Export: Release 10.2.0.4.0 - 64bit Production on Thursday, 06 May, 2010 9:11:50
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Job: SYS_EXPORT_FULL_01
  Owner: OPS$ORACLE
  Operation: EXPORT
  Creator Privs: FALSE
  GUID: 858D4FB8E1B280C4E0430A9A1D5980C4
  Start Time: Thursday, 06 May, 2010 9:11:53
  Mode: FULL
  Instance: pjda1
  Max Parallelism: 1
  EXPORT Job Parameters:
  Parameter Name      Parameter Value:
     CLIENT_COMMAND   parfile=/********backup/********pjda1/********exp/********pjda1_dp_export_05012010_111300.par
  State: IDLING
  Bytes Processed: 102,211,472,336
  Percent Done: 79
  Current Parallelism: 1
  Job Error Count: 0
  Dump File: /backup/mad1/exp/mad1_dp_export_05012010_111300.dmp
    bytes written: 105,503,084,544

Worker 1 Status:
  State: UNDEFINED
  Object Schema: JDAINTFC
  Object Name: INITIALLOAD_HIST
  Object Type: DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
  Completed Objects: 7
  Total Objects: 3,760
  Completed Rows: 49,405,939
  Worker Parallelism: 1


Export> kill_job
Are you sure you wish to stop this job ([yes]/no): yes


This will delete the dmp file created as part of the expdp job and will free up disk space.
If you did not follow this method and instead manually killed the running script and deleted the dump files, you will not be able to reclaim the disk space until you drop the Data Pump master table, as follows.


Here are the steps to perform that.
1) Query dba_datapump_jobs to find the job_name.

select a.owner_name, a.job_name, b.sid, b.serial#, b.username
from dba_datapump_sessions a, v$session b
where a.saddr = b.saddr;

OWNER_NAME JOB_NAME           SID SERIAL# USERNAME
---------- ------------------ --- ------- ----------
OPS$ORACLE SYS_EXPORT_FULL_06 465   54925 OPS$ORACLE
OPS$ORACLE SYS_EXPORT_FULL_06 478   50840 OPS$ORACLE


select * from dba_objects where object_name = 'SYS_EXPORT_FULL_06';

OWNER      OBJECT_NAME        SUBOBJECT_NAME OBJECT_ID DATA_OBJECT_ID OBJECT_TYPE CREATED   LAST_DDL_ TIMESTAMP           STATUS T G S
---------- ------------------ -------------- --------- -------------- ----------- --------- --------- ------------------- ------ - - -
OPS$ORACLE SYS_EXPORT_FULL_06                    76558          76558 TABLE       30-APR-10 30-APR-10 2010-04-30:11:13:07 VALID  N N N

2) For every job_name there is an associated master table in the Oracle dictionary. Drop that table.
SQL> drop table SYS_EXPORT_FULL_06 purge;
Table dropped.
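The same cleanup can also be done through the DBMS_DATAPUMP API instead of the interactive expdp client; a sketch that attaches to the orphaned job and kills it (dropping the master table as part of the kill):

```
declare
  h number;
begin
  -- Attach to the existing job by name and owner
  h := dbms_datapump.attach(job_name  => 'SYS_EXPORT_FULL_06',
                            job_owner => 'OPS$ORACLE');
  -- immediate => 1 aborts the workers now; keep_master => 0 drops the master table
  dbms_datapump.stop_job(h, immediate => 1, keep_master => 0);
end;
/
```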