It's All About ORACLE

Oracle - the number one database management system. I hope this blog teaches you a lot about Oracle.

Autonomous Transactions

In Oracle PL/SQL, PRAGMA refers to a compiler directive, or "hint", used to pass an instruction to the compiler. For example, the RESTRICT_REFERENCES pragma restricts packaged subprograms from querying or modifying database tables and packaged variables. Pragma directives are processed at compile time, where they pass the necessary information to the compiler; they are not processed at runtime.
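For illustration, here is a minimal sketch of how a pragma appears in code (the package and function names are purely hypothetical); the RESTRICT_REFERENCES pragma below asserts that the function writes no database state (WNDS) and no package state (WNPS).

CREATE OR REPLACE PACKAGE emp_util AS
  FUNCTION get_bonus (p_sal IN NUMBER) RETURN NUMBER;
  -- Assert purity: Writes No Database State, Writes No Package State.
  PRAGMA RESTRICT_REFERENCES (get_bonus, WNDS, WNPS);
END emp_util;
/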

Autonomous transactions allow you to leave the context of the calling transaction, perform an independent transaction, and return to the calling transaction without affecting its state. The autonomous transaction has no link to the calling transaction, so only committed data can be shared by both transactions.
The following types of PL/SQL blocks can be defined as autonomous transactions:
  • Stored procedures and functions.
  • Local procedures and functions defined in a PL/SQL declaration block.
  • Packaged procedures and functions.
  • Type methods.
  • Top-level anonymous blocks.
The easiest way to understand autonomous transactions is to see them in action. To do this, we create a test table and populate it with two rows. Notice that the data is not committed.
CREATE TABLE at_test (
  id           NUMBER       NOT NULL,
  description  VARCHAR2(50) NOT NULL
);

INSERT INTO at_test (id, description) VALUES (1, 'Description for 1');
INSERT INTO at_test (id, description) VALUES (2, 'Description for 2');

SELECT * FROM at_test;

        ID DESCRIPTION
---------- --------------------------------------------------
         1 Description for 1
         2 Description for 2

2 rows selected.

SQL>
Next, we insert another 8 rows using an anonymous block declared as an autonomous transaction, which contains a commit statement.
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  FOR i IN 3 .. 10 LOOP
    INSERT INTO at_test (id, description)
    VALUES (i, 'Description for ' || i);
  END LOOP;
  COMMIT;
END;
/

PL/SQL procedure successfully completed.

SELECT * FROM at_test;

        ID DESCRIPTION
---------- --------------------------------------------------
         1 Description for 1
         2 Description for 2
         3 Description for 3
         4 Description for 4
         5 Description for 5
         6 Description for 6
         7 Description for 7
         8 Description for 8
         9 Description for 9
        10 Description for 10

10 rows selected.

SQL>
As expected, we now have 10 rows in the table. If we now issue a rollback statement we get the following result.
ROLLBACK;
SELECT * FROM at_test;

        ID DESCRIPTION
---------- --------------------------------------------------
         3 Description for 3
         4 Description for 4
         5 Description for 5
         6 Description for 6
         7 Description for 7
         8 Description for 8
         9 Description for 9
        10 Description for 10

8 rows selected.

SQL>
The 2 rows inserted by our current session (transaction) have been rolled back, while the rows inserted by the autonomous transaction remain. The presence of the PRAGMA AUTONOMOUS_TRANSACTION compiler directive made the anonymous block run in its own transaction, so the internal commit statement did not affect the calling session. As a result, the rollback was still able to undo the DML issued by the current session.
Autonomous transactions are commonly used by error logging routines, where the error messages must be preserved regardless of the commit/rollback status of the transaction. For example, the following table holds basic error messages.
CREATE TABLE error_logs (
  id             NUMBER(10)     NOT NULL,
  log_timestamp  TIMESTAMP      NOT NULL,
  error_message  VARCHAR2(4000),
  CONSTRAINT error_logs_pk PRIMARY KEY (id)
);

CREATE SEQUENCE error_logs_seq;
We define a procedure to log error messages as an autonomous transaction.
CREATE OR REPLACE PROCEDURE log_errors (p_error_message  IN  VARCHAR2) AS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO error_logs (id, log_timestamp, error_message)
  VALUES (error_logs_seq.NEXTVAL, SYSTIMESTAMP, p_error_message);
  COMMIT;
END;
/
The following code forces an error, which is trapped and logged.
BEGIN
  INSERT INTO at_test (id, description)
  VALUES (998, 'Description for 998');

  -- Force invalid insert.
  INSERT INTO at_test (id, description)
  VALUES (999, NULL);
EXCEPTION
  WHEN OTHERS THEN
    log_errors (p_error_message => SQLERRM);
    ROLLBACK;
END;
/

PL/SQL procedure successfully completed.

SELECT * FROM at_test WHERE id >= 998;

no rows selected

SELECT * FROM error_logs;

        ID LOG_TIMESTAMP
---------- ---------------------------------------------------------------------------
ERROR_MESSAGE
----------------------------------------------------------------------------------------------------
         1 28-FEB-2006 11:10:10.107625
ORA-01400: cannot insert NULL into ("TIM_HALL"."AT_TEST"."DESCRIPTION")


1 row selected.

SQL>
From this we can see that the LOG_ERRORS transaction was separate from the anonymous block. If it weren't, we would expect the first insert in the anonymous block to have been preserved by the commit statement in the LOG_ERRORS procedure.
Be careful how you use autonomous transactions. If they are used indiscriminately they can lead to deadlocks, and cause confusion when analyzing session trace.

The Mutation error in Oracle Database Triggers

Most of us who have worked with Oracle have encountered ORA-04091 (table xxx is mutating. Trigger/function might not see it) at some point during development. In this blog post, we will cover why this error occurs and how we can resolve it using different methods.


Case 1: When a trigger on a table refers to the same table: 
----------------------------------------------------------------- 
OPERATION       TYPE                        MUTATING? 
----------------------------------------------------------------- 
insert          before/statement-level      No 
insert          after/statement-level       No 
update          before/statement-level      No 
update          after/statement-level       No 
delete          before/statement-level      No 
delete          after/statement-level       No 

insert          before/row-level            No (single-row insert) 
                                            Yes (multi-row insert) 
insert          after/row-level             Yes 
update          before/row-level            Yes 
update          after/row-level             Yes 
delete          before/row-level            Yes 
delete          after/row-level             Yes 
----------------------------------------------------------------- 

A mutating table error normally occurs when we perform a DML operation and try to select the affected records from within a trigger on the same table. In other words, we are selecting records in the trigger from the table that owns the trigger. This creates inconsistency, and Oracle throws a mutating table error. Let us take a simple scenario in which we want to know the total number of invalid objects after any object's status is updated to 'INVALID'. We will see it with an example. First let us create a table and then the trigger.

SQL> CREATE TABLE TEST
  2  AS SELECT * FROM USER_OBJECTS;

Table created.

CREATE OR REPLACE TRIGGER TUA_TEST
AFTER UPDATE OF STATUS ON TEST
FOR EACH ROW
DECLARE
  v_count NUMBER;
BEGIN
  SELECT count(*)
    INTO v_count
    FROM TEST
   WHERE status = 'INVALID';

  dbms_output.put_line('Total Invalid Objects are ' || v_count);
END;
/

Now if we try to change the status of any object to 'INVALID', we will run into the mutating table error, because we are updating a record while the trigger is trying to select the total number of records in 'INVALID' status from the same table.

SQL> update test
2  set status = 'INVALID'
3  where object_name = 'TEST1';
update test
*
ERROR at line 1:
ORA-04091: table SCOTT.TEST is mutating, trigger/function may not see it


Having said that, there are different ways we can handle the mutating table error. Let us take the scenarios one by one.

The first is to create a statement-level trigger instead of a row-level trigger. If we omit the 'for each row' clause from the above trigger, it becomes a statement-level trigger. Let us create a new statement-level trigger.

CREATE OR REPLACE TRIGGER TUA_TEST
AFTER UPDATE OF STATUS ON TEST
DECLARE
  v_count NUMBER;
BEGIN
  SELECT count(*)
    INTO v_count
    FROM TEST
   WHERE status = 'INVALID';

  dbms_output.put_line('Total Invalid Objects are ' || v_count);
END;
/

Now let us fire the same update statement again.

SQL> UPDATE TEST
2     SET status = 'INVALID'
3   WHERE object_name = 'TEST1';

Total Invalid Objects are 6


When we defined the statement-level trigger, the update went through fine and it displayed the total number of invalid objects.

Why is this a problem when we use the 'FOR EACH ROW' clause? As per the Oracle documentation, the session that issues a triggering statement on the table cannot query the same table, so that the trigger cannot see inconsistent data. This restriction applies to all row-level triggers, and hence we run into the mutating table error.

The second way of dealing with the mutating table issue is to declare the row-level trigger as an autonomous transaction, so that it is not in the same scope as the session issuing the DML statement. Following is the row-level trigger defined with PRAGMA AUTONOMOUS_TRANSACTION.


CREATE OR REPLACE TRIGGER TUA_TEST
AFTER UPDATE OF STATUS ON TEST
FOR EACH ROW
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
  v_count NUMBER;
BEGIN
  SELECT count(*)
    INTO v_count
    FROM TEST
   WHERE status = 'INVALID';

  dbms_output.put_line('Total Invalid Objects are ' || v_count);
END;
/


Now let us issue the update statement again and observe the results.

SQL> UPDATE TEST
2     SET status = 'INVALID'
3   WHERE object_name = 'TEST1';

Total Invalid Objects are 5

1 row updated.

If you look closely at the output, you will see only 5 objects shown in INVALID status, while the statement-level trigger showed 6 objects in INVALID status. Let us try to update multiple objects at the same time.

SQL> UPDATE TEST
2     SET status = 'INVALID'
3   WHERE object_name IN ('T1','T2');

Total Invalid Objects are 6
Total Invalid Objects are 6

2 rows updated.


By defining the row-level trigger as an autonomous transaction, we got rid of the mutating table error, but the result is not correct. The latest updates are not reflected in our result set, as opposed to the statement-level trigger, because the autonomous transaction cannot see the uncommitted changes made by the triggering statement. So one has to be very careful when using this approach.


In version 11g, Oracle made this much easier with the introduction of compound triggers. We have covered compound triggers in a previous blog post. Let us see how a compound trigger can resolve the mutating table error. Let's create a compound trigger first:

CREATE OR REPLACE TRIGGER TEST_TRIG_COMPOUND
FOR UPDATE ON TEST
COMPOUND TRIGGER

  /* Declaration Section */
  v_count NUMBER;

  AFTER EACH ROW IS
  BEGIN
    dbms_output.put_line('Update is done');
  END AFTER EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    SELECT count(*)
      INTO v_count
      FROM TEST
     WHERE status = 'INVALID';

    dbms_output.put_line('Total Invalid Objects are ' || v_count);
  END AFTER STATEMENT;

END TEST_TRIG_COMPOUND;
/

Now let us check how many objects are invalid in the test table.

SQL> select count(*) from test where status = 'INVALID';

  COUNT(*)
----------
         6

Here is the update statement followed by an output.

SQL> UPDATE TEST
2     SET status = 'INVALID'
3   WHERE object_name = 'T2';

Update is done
Total Invalid Objects are 7

1 row updated.

Here we get the correct result without the mutating table error. This is one very good advantage of compound triggers. There are also other ways to resolve the mutating table error, such as using temporary tables, but we have discussed the common ones in this blog post.
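As a hedged sketch of that family of workarounds (using a package collection instead of a temporary table; all object names below are illustrative), the row-level trigger only records the affected keys, and a companion statement-level trigger performs the query once the statement completes:

CREATE OR REPLACE PACKAGE test_trg_pkg AS
  TYPE t_id_tab IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
  g_updated_ids t_id_tab;   -- object ids touched by the current statement
END test_trg_pkg;
/

CREATE OR REPLACE TRIGGER TUA_TEST_ROW
AFTER UPDATE OF STATUS ON TEST
FOR EACH ROW
BEGIN
  -- Only remember the key here; querying TEST at this point would raise ORA-04091.
  test_trg_pkg.g_updated_ids(test_trg_pkg.g_updated_ids.COUNT + 1) := :NEW.object_id;
END;
/

CREATE OR REPLACE TRIGGER TUA_TEST_STMT
AFTER UPDATE OF STATUS ON TEST
DECLARE
  v_count NUMBER;
BEGIN
  -- A statement-level trigger may query TEST safely.
  SELECT count(*)
    INTO v_count
    FROM TEST
   WHERE status = 'INVALID';

  dbms_output.put_line('Total Invalid Objects are ' || v_count);
  test_trg_pkg.g_updated_ids.DELETE;   -- reset for the next statement
END;
/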







Conventional and Direct Path Loads


This chapter describes SQL*Loader's conventional and direct path load methods. The following topics are covered:


  • Data Loading Methods
  • Conventional Path Load
  • Direct Path Load
  • Using Direct Path Load
  • Optimizing Performance of Direct Path Loads
  • Optimizing Direct Path Loads on Multiple-CPU Systems
  • Avoiding Index Maintenance
  • Direct Loads, Integrity Constraints, and Triggers
  • Parallel Data Loading Models
  • General Performance Improvement Hints
Data Loading Methods

SQL*Loader provides two methods for loading data:
A conventional path load executes SQL INSERT statements to populate tables in an Oracle database. A direct path load eliminates much of the Oracle database overhead by formatting Oracle data blocks and writing the data blocks directly to the database files. A direct load does not compete with other users for database resources, so it can usually load data at near disk speed. Considerations inherent to direct path loads, such as restrictions, security, and backup implications, are discussed in this chapter.
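The load method is chosen on the SQL*Loader command line; a minimal, hedged illustration (the user, control file, and log file names here are hypothetical):

Conventional path load (the default):
sqlldr scott/tiger control=emp.ctl log=emp.log

Direct path load:
sqlldr scott/tiger control=emp.ctl log=emp.log direct=true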
The tables to be loaded must already exist in the database. SQL*Loader never creates tables. It loads existing tables that either already contain data or are empty.
The following privileges are required for a load:
  • You must have INSERT privileges on the table to be loaded.
  • You must have DELETE privileges on the table to be loaded, when using the REPLACE or TRUNCATE option to empty old data from the table before loading the new data in its place.
Figure 11-1 shows how conventional and direct path loads perform database writes.
Figure 11-1 Database Writes on SQL*Loader Direct Path and Conventional Path

Conventional Path Load

Conventional path load (the default) uses the SQL INSERT statement and a bind array buffer to load data into database tables. This method is used by all Oracle tools and applications.
When SQL*Loader performs a conventional path load, it competes equally with all other processes for buffer resources. This can slow the load significantly. Extra overhead is added as SQL statements are generated, passed to Oracle, and executed.
The Oracle database looks for partially filled blocks and attempts to fill them on each insert. Although appropriate during normal use, this can slow bulk loads dramatically.

Conventional Path Load of a Single Partition
By definition, a conventional path load uses SQL INSERT statements. During a conventional path load of a single partition, SQL*Loader uses the partition-extended syntax of the INSERT statement, which has the following form:
INSERT INTO TABLE T PARTITION (P) VALUES ... 

The SQL layer of the Oracle kernel determines if the row being inserted maps to the specified partition. If the row does not map to the partition, the row is rejected, and the SQL*Loader log file records an appropriate error message.

When to Use a Conventional Path Load
If load speed is most important to you, you should use direct path load because it is faster than conventional path load. However, certain restrictions on direct path loads may require you to use a conventional path load. You should use a conventional path load in the following situations:
  • When accessing an indexed table concurrently with the load, or when applying inserts or updates to a non-indexed table concurrently with the load
    To use a direct path load (with the exception of parallel loads), SQL*Loader must have exclusive write access to the table and exclusive read/write access to any indexes.
  • When loading data into a clustered table
    A direct path load does not support loading of clustered tables.
  • When loading a relatively small number of rows into a large indexed table
    During a direct path load, the existing index is copied when it is merged with the new index keys. If the existing index is very large and the number of new keys is very small, then the index copy time can offset the time saved by a direct path load.
  • When loading a relatively small number of rows into a large table with referential and column-check integrity constraints. 
    Because these constraints cannot be applied to rows loaded on the direct path, they are disabled for the duration of the load. Then they are applied to the whole table when the load completes. The costs could outweigh the savings for a very large table and a small number of new rows.
  • When loading records and you want to ensure that a record is rejected under any of the following circumstances:
    • If the record, upon insertion, causes an Oracle error
    • If the record is formatted incorrectly, so that SQL*Loader cannot find field boundaries
    • If the record violates a constraint or tries to make a unique index non-unique.

Direct Path Load
Instead of filling a bind array buffer and passing it to the Oracle database with a SQL INSERT statement, a direct path load uses the direct path API to pass the data to be loaded to the load engine in the server. The load engine builds a column array structure from the data passed to it.
The direct path load engine uses the column array structure to format Oracle data blocks and build index keys. The newly formatted database blocks are written directly to the database (multiple blocks per I/O request using asynchronous writes if the host platform supports asynchronous I/O).
Internally, multiple buffers are used for the formatted blocks. While one buffer is being filled, one or more buffers are being written if asynchronous I/O is available on the host platform. Overlapping computation with I/O increases load performance.

Data Conversion During Direct Path Loads
During a direct path load, data conversion occurs on the client side rather than on the server side. This means that NLS parameters in the initialization parameter file (server-side language handle) will not be used. To override this behavior, you can specify a format mask in the SQL*Loader control file that is equivalent to the setting of the NLS parameter in the initialization parameter file, or set the appropriate environment variable. For example, to specify a date format for a field, you can either set the date format in the SQL*Loader control file as shown in Example 11-1 or set an NLS_DATE_FORMAT environment variable as shown in Example 11-2.
Example 11-1 Setting the Date Format in the SQL*Loader Control File
LOAD DATA
INFILE 'data.dat'
INSERT INTO TABLE emp
FIELDS TERMINATED BY "|"
(
EMPNO NUMBER(4) NOT NULL,
ENAME CHAR(10),
JOB CHAR(9),
MGR NUMBER(4),
HIREDATE DATE 'YYYYMMDD',
SAL NUMBER(7,2),
COMM NUMBER(7,2),
DEPTNO NUMBER(2)
)
Example 11-2 Setting an NLS_DATE_FORMAT Environment Variable
On UNIX Bourne or Korn shell:
% NLS_DATE_FORMAT='YYYYMMDD'
% export NLS_DATE_FORMAT
On UNIX csh:
% setenv NLS_DATE_FORMAT 'YYYYMMDD'

Advantages of a Direct Path Load
A direct path load is faster than the conventional path for the following reasons:
  • Partial blocks are not used, so no reads are needed to find them, and fewer writes are performed.
  • SQL*Loader need not execute any SQL INSERT statements; therefore, the processing load on the Oracle database is reduced.
  • A direct path load calls on Oracle to lock tables and indexes at the start of the load and releases them when the load is finished. A conventional path load calls Oracle once for each array of rows to process a SQL INSERT statement.
  • A direct path load uses multiblock asynchronous I/O for writes to the database files.
  • During a direct path load, processes perform their own write I/O, instead of using Oracle's buffer cache. This minimizes contention with other Oracle users.
  • The sorted indexes option available during direct path loads enables you to presort data using high-performance sort routines that are native to your system or installation.
  • When a table to be loaded is empty, the presorting option eliminates the sort and merge phases of index-building. The index is filled in as data arrives.
  • Protection against instance failure does not require redo log file entries during direct path loads. Therefore, no time is required to log the load when:
    • The Oracle database has the SQL NOARCHIVELOG parameter enabled
    • The SQL*Loader UNRECOVERABLE clause is enabled
    • The object being loaded has the SQL NOLOGGING parameter set

Restrictions on Using Direct Path Loads
The following conditions must be satisfied for you to use the direct path load method:
  • Tables are not clustered.
  • Tables to be loaded do not have any active transactions pending.
    To check for this condition, use the Oracle Enterprise Manager command MONITOR TABLE to find the object ID for the tables you want to load. Then use the command MONITOR LOCK to see if there are any locks on the tables.
  • For versions of the database prior to Oracle9i, you can perform a SQL*Loader direct path load only when the client and server are the same version. This also means that you cannot perform a direct path load of Oracle9i data into a database of an earlier version. For example, you cannot use direct path load to load data from a release 9.0.1 database into a release 8.1.7 database.
    Beginning with Oracle9i, you can perform a SQL*Loader direct path load when the client and server are different versions. However, both versions must be at least release 9.0.1 and the client version must be the same as or lower than the server version. For example, you can perform a direct path load from a release 9.0.1 database into a release 9.2 database. However, you cannot use direct path load to load data from a release 10.0.0 database into a release 9.2 database.
  • Tables to be loaded in direct path mode do not have VPD policies active on INSERT.
The following features are not available with direct path load:
  • Loading a parent table together with a child table
  • Loading BFILE columns
  • Use of CREATE SEQUENCE during the load. This is because in direct path loads there is no SQL being generated to fetch the next value since direct path does not generate INSERT statements.

When to Use a Direct Path Load

If none of the previous restrictions apply, you should use a direct path load when:
  • You have a large amount of data to load quickly. A direct path load can quickly load and index large amounts of data. It can also load data into either an empty or nonempty table.
  • You want to load data in parallel for maximum performance.

Integrity Constraints
All integrity constraints are enforced during direct path loads, although not necessarily at the same time. NOT NULL constraints are enforced during the load. Records that fail these constraints are rejected.
UNIQUE constraints are enforced both during and after the load. A record that violates a UNIQUE constraint is not rejected (the record is not available in memory when the constraint violation is detected).
Integrity constraints that depend on other rows or tables, such as referential constraints, are disabled before the direct path load and must be reenabled afterwards. If REENABLE is specified, SQL*Loader can reenable them automatically at the end of the load. When the constraints are reenabled, the entire table is checked. Any rows that fail this check are reported in the specified error log.

Field Defaults on the Direct Path

Default column specifications defined in the database are not available when you use direct path loading. Fields for which default values are desired must be specified with the DEFAULTIF clause. If a DEFAULTIF clause is not specified and the field is NULL, then a null value is inserted into the database.
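As a hedged control-file fragment (the data file, table, and field names are hypothetical), the COMM field below falls back to its default when the input field is blank:

LOAD DATA
INFILE 'emp.dat'
INTO TABLE emp
FIELDS TERMINATED BY "|"
(
  empno  INTEGER EXTERNAL,
  ename  CHAR,
  comm   INTEGER EXTERNAL DEFAULTIF comm = BLANKS
)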

Indexes Left in an Unusable State
SQL*Loader leaves indexes in an Index Unusable state when the data segment being loaded becomes more up-to-date than the index segments that index it.
Any SQL statement that tries to use an index that is in an Index Unusable state returns an error. The following conditions cause a direct path load to leave an index or a partition of a partitioned index in an Index Unusable state:
  • SQL*Loader runs out of space for the index and cannot update the index.
  • The data is not in the order specified by the SORTED INDEXES clause.
  • There is an instance failure, or the Oracle shadow process fails while building the index.
  • There are duplicate keys in a unique index.
  • Data savepoints are being used, and the load fails or is terminated by a keyboard interrupt after a data savepoint occurred.
To determine if an index is in an Index Unusable state, you can execute a simple query:
SELECT INDEX_NAME, STATUS
   FROM USER_INDEXES 
   WHERE TABLE_NAME = 'tablename';
If you are not the owner of the table, then search ALL_INDEXES or DBA_INDEXES instead of USER_INDEXES.
To determine if an index partition is in an unusable state, you can execute the following query:
SELECT INDEX_NAME, 
       PARTITION_NAME,
       STATUS FROM USER_IND_PARTITIONS
       WHERE STATUS != 'VALID';
If you are not the owner of the table, then search ALL_IND_PARTITIONS and DBA_IND_PARTITIONS instead of USER_IND_PARTITIONS.
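Once identified, an unusable index or index partition can usually be rebuilt; a minimal, hedged sketch (the index and partition names are illustrative):

ALTER INDEX emp_name_ix REBUILD;
ALTER INDEX emp_sales_ix REBUILD PARTITION p1;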

Oracle Exp/Imp Utility

Here are the main topics for this article:
  1.  What is the Import/Export utility?
  2.  What are the Import/Export modes?
  3.  Is it possible to exp/imp to multiple files?
  4.  How can we use exp/imp when we have two different Oracle database versions?
  5.  What do we have to do before importing database objects?
  6.  Is it possible to import a table into a different tablespace?
  7.  In which cases is imp/exp used?
  8.  How can we improve EXP performance?
  9.  How can we improve IMP performance?
  10. What are the EXP options?
  11. What are the IMP options?
  12. What are the common IMP/EXP problems?

1.  What is the Import/Export utility?

Export (exp) and Import (imp) are Oracle utilities that allow you to write data from the database into operating system files in an Oracle binary format, and to read the data back from those operating system files.

2.  What are the Import/Export modes?

a) Full
The EXP_FULL_DATABASE and IMP_FULL_DATABASE roles, respectively, are needed to perform a full export/import. Use the full export parameter for a full export.

b) Tablespace
Use the tablespaces export parameter for a tablespace export.

c) User
This mode can be used to export and import all objects that belong to a user. Use the owner export parameter and the fromuser import parameter for a user (owner) export/import.

d) Table
Specific tables (or partitions) can be exported/imported with table export mode. Use the tables export parameter for a table export/import.


3.  Is it possible to exp/imp to multiple files?

Yes, it is possible. Here is an example:
exp SCOTT/TIGER FILE=C:\backup\File1.dmp,C:\backup\File2.dmp LOG=C:\backup\scott.log
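In practice the file list is usually combined with the FILESIZE parameter, so that exp switches to the next file once the current one reaches the given size. A hedged variation of the example above:

exp SCOTT/TIGER FILE=C:\backup\File1.dmp,C:\backup\File2.dmp FILESIZE=650MB LOG=C:\backup\scott.log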


4.  How can we use exp/imp when we have two different Oracle database versions?
  • exp must be of the lower version
  • imp must match the target version

5.  What do we have to do before importing database objects?

Before importing database objects, we have to drop or truncate the objects; if not, the data will be added to the existing objects. If the sequences are not dropped, the sequences will generate inconsistent values. If there are any constraints on the target table, the constraints should be disabled during the import and enabled after the import.
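For instance (the table and constraint names here are hypothetical):

ALTER TABLE emp DISABLE CONSTRAINT emp_dept_fk;   -- before running imp
-- ... run the import ...
ALTER TABLE emp ENABLE CONSTRAINT emp_dept_fk;    -- after the import completes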
  


6.  Is it possible to import a table into a different tablespace?

By default, no, because there is no tablespace parameter for the import operation.
However, this can be done in the following manner:
  • (re)create the table in another tablespace (the table will be empty)
  • import the table using INDEXFILE parameter (the import is not done, but a file which contains the indexes creation is generated)
  • modify this script to create the indexes in the tablespace we want
  • import the table using IGNORE=y option (because the table exists)
  • recreate the indexes
Here is an example of INDEXFILE:
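A hedged sketch of the flow (the user, file, table, index, and tablespace names are hypothetical):

-- 1) Generate the index creation script without importing any rows:
imp scott/tiger FILE=emp.dmp TABLES=emp INDEXFILE=emp_idx.sql

-- 2) emp_idx.sql contains the CREATE INDEX statements (the CREATE TABLE
--    statement appears as a REM comment); edit the TABLESPACE clause as needed:
CREATE INDEX "SCOTT"."EMP_ENAME_IX" ON "EMP" ("ENAME") TABLESPACE "NEW_TBS";

-- 3) Import the data into the pre-created table, ignoring "already exists" errors:
imp scott/tiger FILE=emp.dmp TABLES=emp IGNORE=y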



7.  In which cases is imp/exp used?
  • Eliminate database fragmentation
  • Schema refresh (move the schema from one database to another)
  • Detect database corruption. Ensure that all the data can be read (if the data can be read that means there is no block corruption)
  • Transporting tablespaces between databases
  • Backup database objects

8.  How can we improve EXP performance?
  • Set the BUFFER parameter to a high value (e.g. 2M)
  • If you run multiple export sessions, ensure they write to different physical disks. 

9.  How can we improve IMP performance?
  • Import the table using INDEXFILE parameter (the import is not done, but a file which contains the indexes creation is generated), import the data and recreate the indexes
  • Store the dump file to be imported on a separate physical disk from the oracle data files
  • If there are any constraints on the target table, the constraints should be disabled during the import and enabled after import
  • Set the BUFFER parameter to a high value (e.g. BUFFER=30000000, roughly 30 MB) with COMMIT=y, or leave COMMIT=n (the default behavior: import commits after each table is loaded; however, this uses a lot of rollback segment or undo space for huge tables)
  • Create the dump file with a direct path export (DIRECT=y); the classic imp utility itself has no direct path option
  • (if possible) Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i) considerably in the init.ora file
  • (if possible) Set LOG_BUFFER to a big value and restart Oracle.

10.  What are the EXP options?

EXP option (default value): Description

buffer: Specifies the size, in bytes, of the buffer (array) used to insert the data.
compress (N): When "Y", export will mark the table to be loaded as one extent for the import utility. If "N", the current storage options defined for the table will be used. Although this option is only implemented on import, it can only be specified on export.
consistent (N): Specifies the SET TRANSACTION READ ONLY statement for export, ensuring data consistency. This option should be set to "Y" if activity is anticipated while the exp command is executing. If "Y" is set, confirm that there is sufficient undo segment space to avoid the export session getting the ORA-1555 Snapshot too old error.
constraints (Y): Specifies whether table constraints should be exported with table data.
direct (N): Determines whether to use direct or conventional path export. Direct path exports bypass the SQL command-processing layer, thereby enhancing performance.
feedback (0): Determines how often feedback is displayed. A value of feedback=n displays a dot for every n rows processed. The display applies to all tables exported, not individual ones.
file: The name of the export file. Multiple files can be listed, separated by commas. When export fills the filesize, it will begin writing to the next file in the list.
filesize: The maximum file size, specified in bytes.
flashback_scn: The system change number (SCN) that export uses to enable flashback.
flashback_time: Export will discover the SCN that is closest to the specified time. This SCN is used to enable flashback.
full: The entire database is exported.
grants (Y): Specifies object grants to export.
help: Shows command line options for export.
indexes (Y): Determines whether index definitions are exported. The index data is never exported.
log: The filename used by export to write messages.
object_consistent (N): Specifies whether export uses SET TRANSACTION READ ONLY to ensure that the data being exported is consistent.
owner: Only the specified owner's objects will be exported.
parfile: The name of the file that contains the export parameter options. This file can be used instead of specifying all the options on the command line for each export.
query: Allows a subset of rows from a table to be exported, based on a SQL WHERE clause.
recordlength: Specifies the length of the file record in bytes. This parameter affects the amount of data that accumulates before it is written to disk. If not specified, this parameter defaults to the value specific to that platform. The highest value is 64KB.
resumable (N): Enables and disables resumable space allocation. When "Y", the parameters resumable_name and resumable_timeout are utilized.
resumable_name: User-defined string that helps identify a resumable statement that has been suspended. This parameter is ignored unless resumable=Y.
resumable_timeout (2h): The time period in which an export error must be fixed. This parameter is ignored unless resumable=Y.
rows (Y): Indicates whether or not the table rows should be exported.
statistics (ESTIMATE): Indicates the level of statistics generated when the data is imported. Other options include COMPUTE and NONE.
tables: Indicates that the type of export is table-mode and lists the tables to be exported. Table partitions and subpartitions can also be specified.
tablespaces: Indicates that the type of export is tablespace-mode, in which all tables assigned to the listed tablespaces will be exported. This option requires the EXP_FULL_DATABASE role.
transport_tablespace (N): Enables the export of metadata needed for transportable tablespaces.
triggers (Y): Indicates whether triggers defined on the exported tables will also be exported.
tts_full_check (FALSE): When TRUE, export will verify that when creating a transportable tablespace, a consistent set of objects is exported.
userid: Specifies the userid/password of the user performing the export.
volsize: Specifies the maximum number of bytes in an export file on each tape volume.

Example: exp system/s file=C:\emp.dmp tables=scott.emp log=C:\emp.log   (Windows)
     or: exp userid=system/s file=C:\emp.dmp tables=scott.emp log=C:\emp.log   (Windows)
USERID must be the first parameter on the command line.


11.  What are the IMP options?

IMP option (default value): Description

buffer: Specifies the size, in bytes, of the buffer (array) used to insert the data.
commit (N): Specifies whether import should commit after each array insert. By default, import commits after each table is loaded; however, this uses a lot of rollback segment or undo space for huge tables.
compile (Y): Tells import to compile procedural objects when they are imported.
constraints (Y): Specifies whether table constraints should also be imported with table data.
datafiles (only with transport_tablespace): This parameter lists the data files to be transported to the database.
destroy (N): Overwrite tablespace data files.
feedback (0): Determines how often feedback is displayed. A value of feedback=100 displays a dot for every 100 rows processed. This option applies to the total tables imported, not individual ones. Another way to measure the number of rows that have been processed is to execute the following query while the import is active:

SELECT rows_processed
  FROM v$sqlarea
 WHERE sql_text LIKE 'INSERT %INTO "%'
   AND command_type = 2
   AND open_versions > 0;

file: The name of the export file to import. Multiple files can be listed, separated by commas. When import reaches the filesize, it will begin reading from the next file in the list.
filesize: Maximum size of each dump file.
fromuser: A comma-delimited list of schemas from which to import. If the export file contains many users or even the entire database, the fromuser option enables only a subset of those objects (and data) to be imported.
full: The entire export file is imported.
grants (Y): Specifies whether to import object grants.
help: Shows command line options for import (imp -help or imp help=y).
ignore (N): Specifies how object creation errors should be handled. If a table already exists and ignore=y, then the rows are imported into the existing table; otherwise errors will be reported and no rows are loaded into the table.
indexes (Y): Determines whether indexes are imported.
indexfile: Specifies a filename that contains index creation statements. This file can be used to build the indexes after the import has completed.
log: The filename used by import to write messages.
parfile: The name of the file that contains the import parameter options. This file can be used instead of specifying all the options on the command line.
recordlength: Specifies the length of the file record in bytes. This parameter is only used when transferring export files between operating systems that use different default values.
resumable (N): When "Y", the parameters resumable_name and resumable_timeout are utilized.
resumable_name: User-defined string that helps identify a resumable statement that has been suspended. This parameter is ignored unless resumable=Y.
resumable_timeout (2h): The time period in which an error must be fixed. This parameter is ignored unless resumable=Y.
rows (Y): Indicates whether or not the table rows should be imported.
show (N): When show=y, the DDL within the export file is displayed.
skip_unusable_indexes (N): Determines whether import skips the building of indexes that are in an unusable state.
statistics (ALWAYS): Determines the level of optimizer statistics that are generated on import. The options include ALWAYS, NONE, SAFE and RECALCULATE. ALWAYS imports statistics regardless of their validity. NONE does not import or recalculate any optimizer statistics. SAFE will import the statistics if they appear to be valid; otherwise they will be recomputed after import. RECALCULATE always generates new statistics after import.
streams_configuration (Y): Determines whether or not any Streams metadata present in the export file will be imported.
streams_instantiation (N): Specifies whether or not to import Streams instantiation metadata present in the export file.
tables: Indicates that the type of import is table-mode and lists the tables to be imported. Table partitions and subpartitions can also be specified.
tablespaces: When transport_tablespace=y, this parameter provides a list of tablespaces.
to_user: Specifies a list of user schemas that will be targets for imports.
transport_tablespace (N): When Y, transportable tablespace metadata will be imported from the export file.
tts_owners: When transport_tablespace=Y, this parameter lists the users who own the data in the transportable tablespace set.
userid: Specifies the userid/password of the user performing the import.

Example:  imp system/manager file=/APPS/x.dmp tables=x fromuser=cs touser=cs     (Unix)
   or          imp userid=system/manager file=/APPS/x.dmp tables=x fromuser=cs touser=cs   (Unix)
Note: USERID must be the first parameter on the command line.


12. What are the common IMP/EXP problems?
  • ORA-00001: Unique constraint ... violated - Perhaps you are importing duplicate rows. Use IGNORE=N to skip tables that already exist (imp will give an error if the object is re-created), or the table can be dropped/truncated and re-imported if we need to do a table refresh.
  • IMP-00015: Statement failed ... object already exists... - Use the IGNORE=Y import parameter to ignore these errors, but be careful as you might end up with duplicate rows.
  • ORA-01555: Snapshot too old - Ask your users to STOP working while you are exporting or use parameter CONSISTENT=NO (However this option could create possible referential problems, because the tables are not exported from one snapshot in time).
  • ORA-01562: Failed to extend rollback segment - Create bigger rollback segments or set parameter COMMIT=Y (with an appropriate BUFFER parameter ) while importing.

Difference between Rollback & Undo Segments

Until Oracle 8i, the undo that was generated was handled by rollback segments in a rollback tablespace, which was dictionary managed. In this case we had to first create a rollback tablespace, then (according to our needs) create rollback segments and assign them to the rollback tablespace.
From Oracle 9i, the new concept of the undo tablespace was introduced, which helps DBAs in the following ways:

1) It is locally managed.
2) The undo segments are created by Oracle itself (whereas in the case of rollback segments we create them).
3) The number of undo segments is determined by Oracle itself.

The purpose of undo segments and rollback segments is the same; only the creation and maintenance differ.
Both provide the same functionality, but Oracle 9i simplified things: with rollback segments we need to bring them online via init.ora and take care of space management ourselves.

So in Oracle 9i undo tablespaces were introduced, in which undo segments are created and space management is handled automatically by Oracle, which reduces the burden on the DBA.
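A minimal, hedged sketch of the 9i-style automatic undo configuration (the tablespace name, file path, and sizes are illustrative):

CREATE UNDO TABLESPACE undotbs1
  DATAFILE '/u01/oradata/orcl/undotbs01.dbf' SIZE 500M AUTOEXTEND ON;

-- UNDO_MANAGEMENT is a static parameter, so it takes effect after a restart.
ALTER SYSTEM SET undo_management = AUTO SCOPE=SPFILE;
ALTER SYSTEM SET undo_tablespace = undotbs1;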
