Oracle import dmp file command
Summary: in this tutorial, you will learn how to use Oracle Data Pump Import (impdp) to load an export dump file set into a target Oracle Database system.
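A minimal impdp invocation might look like the following sketch. All names here are illustrative assumptions: DATA_PUMP_DIR is a directory object that already exists, hr.dmp is the dump file placed in it, and HR is the schema being loaded.

```
impdp system/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=hr.dmp SCHEMAS=hr LOGFILE=hr_imp.log
```

The dump file must already reside in the operating system directory that the directory object points to; impdp reads it on the server, not on the client.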
A common pitfall when importing is the error ORA invalid common user or role name. To resolve it, create the user before running the import, then import the data as that user. This eliminates the error because the export file contains permissions and other information related to a user (ABCDE in this example) that does not exist in the target database.
In this tutorial you will also learn how to export a schema using the expdp data pump utility in Oracle Database. The workflow is: Step 1: create a directory anywhere in your system and name it whatever you want. Step 2: create a directory object and grant it the mandatory privileges. Step 3: create a parameter file.
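Steps 1 and 2 above can be sketched in SQL as follows; the path, directory object name, and user are illustrative assumptions, not fixed names.

```sql
-- Run as a privileged user. The OS directory must already exist on the server.
CREATE OR REPLACE DIRECTORY exp_dir AS '/u01/app/exports';

-- Grant the exporting user read/write access to the directory object.
GRANT READ, WRITE ON DIRECTORY exp_dir TO scott;
```

With the directory object in place, the export itself could be run with something like `expdp scott/tiger DIRECTORY=exp_dir DUMPFILE=scott.dmp SCHEMAS=scott`.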
Some prompts show a default answer. If the default is acceptable, press Enter. Entering a null table list causes all tables in the schema to be imported. You can specify only one schema at a time when you use the interactive method. This section describes the different types of messages issued by Import and how to save them in a log file.
You can capture all Import messages in a log file, either by using the LOG parameter or, for those systems that permit it, by redirecting Import's output to a file. The Import utility writes a log of detailed information about successful loads and any errors that may occur.
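For the original imp utility, capturing the log with the LOG parameter might look like this sketch (account, file, and log names are illustrative):

```
imp system/password FILE=expdat.dmp FULL=y LOG=import.log
```

The same messages that appear on screen are written to import.log, so the file can be reviewed after an unattended run.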
Import does not terminate after recoverable errors. For example, if an error occurs while importing a table, Import displays or logs an error message, skips to the next table, and continues processing. These recoverable errors are known as warnings. For example, if a nonexistent table is specified as part of a table-mode import, the Import utility imports all other tables, then issues a warning and terminates successfully. Some errors are nonrecoverable and terminate the Import session. These errors typically occur because of an internal problem or because a resource, such as memory, is not available or has been exhausted. If one or more recoverable errors occur but Import is able to continue to completion, Import displays a message indicating that the import completed with warnings.
If a nonrecoverable error occurs, Import terminates immediately and displays a message indicating that the import terminated unsuccessfully. For details on specific errors, see Oracle9i Database Error Messages and your Oracle operating system-specific documentation. Import provides the results of an import operation immediately upon completion. Depending on the platform, Import may report the outcome in a process exit code as well as recording the results in the log file.
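A wrapper script can branch on that process exit code. The sketch below shows the pattern; the real imp call is left as a comment and replaced by a stub, because the exact command depends on your environment.

```shell
# Sketch of a wrapper that branches on the Import process exit code.
# The real call would be something like:
#   imp system/password FILE=expdat.dmp FULL=y LOG=import.log
# A stub stands in for it here so the control flow itself is visible.
run_import() {
  return 0   # stub: pretend the import succeeded (exit code 0)
}

run_import
rc=$?
if [ "$rc" -eq 0 ]; then
  echo "import succeeded"
else
  echo "import failed with exit code $rc" >&2
fi
```

In a scheduled job, a nonzero code would typically trigger an alert or a retry rather than just a message.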
This enables you to check the outcome from the command line or a script. The exit codes returned distinguish success, success with warnings, and failure. If a row is rejected due to an integrity constraint violation or invalid data, Import displays a warning message but continues processing the rest of the table. Some errors, such as "tablespace full," apply to all subsequent rows in the table. These errors cause Import to stop processing the current table and skip to the next table. A row error is generated if a row violates one of the integrity constraints in force on your system.
Row errors can also occur when the column definition for a table in a database is different from the column definition in the export file. The error is caused by data that is too long to fit into a new table's columns, by invalid datatypes, or by any other INSERT error.
Errors can occur for many reasons when you import database objects, as described in this section. When these errors occur, import of the current database object is discontinued. Import then attempts to continue with the next database object in the export file. If a database object to be imported already exists in the database, an object creation error occurs. The current database object is not replaced.
For tables, this behavior means that rows contained in the export file are not imported. The database object is not replaced. If the object is a table, rows are imported into it.
Note that only object creation errors are ignored; all other errors such as operating system, database, and SQL errors are reported and processing may stop. This could occur, for example, if Import were run twice. If sequence numbers need to be reset to the value in an export file as part of an import, you should drop sequences. If a sequence is not dropped before the import, it is not set to the value captured in the export file, because Import does not drop and re-create a sequence that already exists.
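Resetting a sequence therefore requires an explicit drop before the import; a sketch, with an illustrative sequence name:

```sql
-- Drop the existing sequence so Import can re-create it
-- with the value captured in the export file.
DROP SEQUENCE scott.order_seq;
```

After the drop, re-running the import re-creates the sequence from the dump file with its exported starting value.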
Resource limitations can cause objects to be skipped. When you are importing tables, for example, resource errors can occur as a result of internal problems, or when a resource such as memory has been exhausted. If a resource error occurs while you are importing a row, Import stops processing the current table and skips to the next table. If you specified COMMIT=y, Import commits the partial import of the current table; if not, a rollback of the current table occurs before Import continues. For each specified table, table-level Import imports all rows of the table. With table-level Import:
If the table does not exist, and if the exported table was partitioned, table-level Import creates a partitioned table. If the table creation is successful, table-level Import reads all source data from the export file into the target table. After Import, the target table contains the partition definitions of all partitions and subpartitions associated with the source table in the Export file. This operation ensures that the physical and logical attributes including partition bounds of the source partitions are maintained on Import.
Partition-level Import can only be specified in table mode. It lets you selectively load data from specified partitions or subpartitions in an export file. Keep the following guidelines in mind when using partition-level import.
If you specify a partition name for a composite partition, all subpartitions within the composite partition are used as the source. In the following example, the partition specified by the partition-name is a composite partition.
All of its subpartitions will be imported. The following example causes row data of partitions qc and qd of table scott.e to be imported:
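One plausible form of that command with the original imp utility is sketched below; the account and file name are illustrative.

```
imp system/password FILE=expdat.dmp FROMUSER=scott TABLES=(e:qc,e:qd)
```

The table:partition notation in the TABLES parameter is what restricts the import to the named partitions rather than the whole table.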
If table e does not exist in the Import target database, it is created and data is inserted into the same partitions. If table e existed on the target system before Import, the row data is inserted into the partitions whose range allows insertion. The row data can end up in partitions of names other than qc and qd. This section describes the behavior of Import with respect to index creation and maintenance. Import provides you with the capability of delaying index creation and maintenance services until after completion of the import and insertion of exported data.
Performing index creation, re-creation, or maintenance after Import completes is generally faster than updating the indexes for each row inserted by Import.
Index creation can be time consuming, and therefore can be done more efficiently after the import of all other objects has completed. The index-creation statements that would otherwise be issued by Import are instead stored in the specified file. This approach saves on index updates during import of existing tables. Delayed index maintenance may cause a violation of an existing unique integrity constraint supported by the index.
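The two-pass approach described above, using the INDEXFILE parameter, might look like this sketch (file and account names are illustrative):

```
# Pass 1: write the CREATE INDEX statements to a file instead of importing data
imp system/password FILE=expdat.dmp INDEXFILE=indexes.sql

# Pass 2: import the data without building or maintaining indexes
imp system/password FILE=expdat.dmp FULL=y INDEXES=n

# Afterward, run indexes.sql from SQL*Plus to build all indexes in one pass
```

Building each index once over the fully loaded table is generally much cheaper than maintaining it row by row during the load.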
For example, assume that partitioned table t with partitions p1 and p2 exists on the Import target system. Assume that partition p1 contains a much larger amount of data in the existing table t, compared with the amount of data to be inserted from the export file expdat.dmp, and that the reverse is true for p2. A database with many noncontiguous, small blocks of free space is said to be fragmented.
A fragmented database should be reorganized to make space available in contiguous, larger blocks. You can reduce fragmentation by performing a full database export, re-creating the database, and then performing a full database import. See the Oracle9i Database Administrator's Guide for more information about creating databases. This section describes factors to take into account when using Export and Import across a network.
Because the export file is in binary format, use a protocol that supports binary transfers to prevent corruption of the file when you transfer it across a network. For example, use FTP or a similar file transfer protocol to transmit the file in binary mode. Transmitting export files in character mode causes errors when the file is imported. With Oracle Net, you can perform exports and imports over a network. For example, if you run Export locally, you can write data from a remote Oracle database into a local export file.
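For example, an export over Oracle Net into a local file might look like this sketch, where remote_db is a hypothetical Oracle Net service name defined in your tnsnames.ora:

```
exp system/password@remote_db FILE=local_export.dmp OWNER=scott
```

The connect string after the @ is what directs the utility at the remote database while the dump file is written locally.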
If you run Import locally, you can read data into a remote Oracle database. For the exact syntax of this clause, see the user's guide for your Oracle Net protocol. This section describes the character set conversions that can take place during export and import operations.
The following sections describe character conversion as it applies to user data and DDL. If the character set of the source database differs from the character set of the import database, a single conversion is performed. To minimize data loss due to character set conversions, ensure that the export database, the export user session, the import user session, and the import database all use the same character set. Some 8-bit characters can be lost (that is, converted to 7-bit equivalents) when you import an 8-bit character set export file.
Most often, this is apparent when accented characters lose the accent mark. During character set conversion, any characters in the export file that have no equivalent in the target character set are replaced with a default character.
The default character is defined by the target character set. For details, see the Oracle9i Database Globalization Support Guide. The following sections describe points you should consider when you import particular database objects. The Oracle database server assigns object identifiers to uniquely identify object types, object tables, and rows in object tables.
These object identifiers are preserved by Import. To do this, Import compares the type's unique identifier (TOID) with the identifier stored in the export file. If those match, Import then compares the type's unique hashcode with that stored in the export file. Import will not import table rows if the TOIDs or hashcodes do not match. Be sure you are confident of your knowledge of type validation and how it works before attempting to perform an import operation with this feature disabled.
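If your release of the original imp utility supports the TOID_NOVALIDATE parameter, disabling validation for a specific type might look like this sketch (the type name person_t and the accounts are illustrative assumptions):

```
imp system/password FILE=expdat.dmp FROMUSER=scott TOUSER=scott TOID_NOVALIDATE=(person_t)
```

Skipping validation means rows can be loaded against a type whose definition differs from the exported one, so use it only when you know the definitions are compatible.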
Import uses the following criteria to decide how to handle object types, object tables, and rows in object tables:. Users frequently create tables before importing data to reorganize tablespace usage or to change a table's storage parameters. The tables must be created with the same definitions as were previously used or a compatible format except for storage parameters.
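Pre-creating a table with new storage parameters before the import might look like this sketch; the column list and sizes are illustrative and must match the exported definitions in your case.

```sql
-- Pre-create the table with the same column definitions as the exported
-- table but with new storage parameters (names and sizes are illustrative):
CREATE TABLE scott.emp (
  empno  NUMBER(4),
  ename  VARCHAR2(10),
  sal    NUMBER(7,2)
) TABLESPACE users STORAGE (INITIAL 1M NEXT 1M);
```

You would then run imp with IGNORE=y so that the "object already exists" error is suppressed and the rows are loaded into the pre-created table.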
For object tables and tables that contain columns of object types, format compatibilities are more restrictive. For object tables and for tables containing columns of objects, each object the table references has its name, structure, and version information written out to the Export file. Export also includes object type information from different schemas, as needed.
Import verifies the existence of each object type required by a table prior to importing the table data. This verification consists of a check of the object type's name followed by a comparison of the object type's structure and version from the import system with that found in the Export file.
If an object type name is found on the import system, but the structure or version does not match that in the Export file, an error message is generated and the table data is not imported. Inner nested tables are exported separately from the outer table.
Therefore, situations may arise where data in an inner nested table might not be properly imported:. You should always carefully examine the log file for errors in outer tables and inner tables.
To be consistent, table data may need to be modified or deleted. Because inner nested tables are imported separately from the outer table, attempts to access data from them while importing may produce unexpected results.
For example, if an outer row is accessed before its inner rows are imported, an incomplete row may be returned to the user. Export and Import do not copy data referenced by BFILE columns and attributes from the source database to the target database. Import does not verify that the directory alias or file exists. If the directory alias or file does not exist, an error occurs when the user accesses the BFILE data.
For directory aliases, if the operating system directory syntax used in the export system is not valid on the import system, no error is reported at import time. Subsequent access to the file data receives an error.
It is the responsibility of the DBA or user to ensure the directory alias is valid on the import system. Import does not verify that the location referenced by the foreign function library is correct.
If the formats for directory and filenames used in the library's specification on the export file are invalid on the import system, no error is reported at import time.
Subsequent usage of the callout functions will receive an error. It is the responsibility of the DBA or user to manually move the library and ensure the library's specification is valid on the import system. For procedures, functions, and packages, compilation takes place the next time the object is used; if the compilation is successful, it can be accessed by remote procedures without error.
When you import Java objects into any schema, the Import utility leaves the resolver unchanged.