In VSAM, why do we use the Export and Import utilities?
If needed, define new data files and add them to the data store procedure. The data files are initialized the first time the data store starts up.

The bad file is opened if one or more input records from a data file contain data errors; the discard file is opened if input records are rejected by record selection criteria in the control file. When you supply filespecs for these files and Loader encounters input records that cause them to be opened, the filespecs are subject to extension processing.

The bad file uses the extension .bad and the discard file uses .dsc. When you do not supply filespecs for these files and the load needs to use them, Loader derives the filespecs from the filespec of the associated data file. When the data filespec was itself derived from that of the control file, as described previously, this derivation is based on the derived data filespec. At most 99 bad and discard files are supported by this scheme. Bad and discard filespec derivation works as follows:

All data files in a load, including non-DD types, are counted for purposes of determining the relative data file numbers. If a data filespec is a data set name type, the bad filespec is derived as a data set named ctlhlq.root.BAD and the discard filespec as ctlhlq.root.DSC, where ctlhlq is the "directory" high-level qualifier from the control filespec and root is the data file data set name with any "directory" and extension suffix removed.
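As a hypothetical illustration of this derivation (the data set names here are invented for the example): with a control filespec of 'USER1.LOAD.CTL' and a data filespec of 'USER1.SALES.DATA', ctlhlq would be USER1 and root would be SALES, so the derived bad and discard filespecs would be the data sets USER1.SALES.BAD and USER1.SALES.DSC.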

Use of the control filespec "directory" in these derivations mirrors what is in the control filespec: if the filespec contains a quoted data set name with an explicit high-level qualifier, the same qualifier and quotes are used in the derived filespecs. The derived file is written in the directory associated with the control file, not the directory of the data file. If either your control or data filespec runs counter to the POSIX expectation, the bad and discard filespecs must not be derived; they must be supplied explicitly on the command line or in the Loader control file.

While you can specify an AIX (alternate index) PATH for a load, causing records to be read in alternate key sequence, this impacts load performance and is not recommended. The bad and discard files, when written, contain subsets of the data from the associated data file. When using HFS files, the bad and discard files are written with the same line terminator and related attributes that you specified or defaulted in the file-processing options string for the data file.

When you load data from a data set, you can allow the DCB attributes of the bad and discard files to default, or you can override them by coding them explicitly on a DD statement or by using an existing data set with established DCB attributes, which Loader preserves.

If you override these attributes, you must ensure that the output record length is sufficient. Loader cannot create a VSAM bad or discard file. If you override bad or discard DCB attributes with a fixed (F or FB) record format and a data file record to be written is shorter than the fixed record length, the record is padded with binary zeros to the required length. A sketch of such an override follows.
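The DD statement below is a hypothetical sketch of overriding bad-file DCB attributes; the DD name, data set name, and attribute values are placeholders, not values required by Loader.

//* Hypothetical DD statement for a bad file; all names and values
//* are placeholders. RECFM=FB means short records are padded with
//* binary zeros to LRECL, as described above.
//BADFILE DD DSN=USER1.LOAD.BAD,DISP=(NEW,CATLG),
//           UNIT=SYSDA,SPACE=(TRK,(5,5)),
//           DCB=(RECFM=FB,LRECL=120,BLKSIZE=12000)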

Refer to the Oracle Database Utilities manual for information about the conditions in which return codes are set. It is similar to the previous example except for the environment; all filespecs except the control file have been allowed to default.

Export and Import are complementary Oracle utilities used chiefly to transport Oracle database data between Oracle databases and between systems. Export writes a sequential file that is a "transportable copy" of database tables, indexes, and other objects, along with their metadata descriptions, such as table and column names and data types.

Import reads the sequential file produced by Export, defining the database objects in the target database and then loading the data rows, index entries, or other contents. The Datapump Export and Import utilities, described in the section "Datapump Export and Import", provide functions similar to Export and Import with additional features and capabilities.

If parameters are supplied neither on the command line nor in a parameter file, these utilities prompt for required inputs; prompts are written to C standard output and the responses are read from standard input. The parser that Export and Import use for parameters has its own syntax conventions, described in the generic documentation, and is sensitive to apostrophes (single quotes). If you use apostrophes in a filespec parameter, you must "escape" them with a preceding backslash to keep the parser from trying to interpret them as syntax elements.

DMP' in a batch Export job, code the PARM as shown in the sketch below: each filespec apostrophe must be doubled, which is required to get an apostrophe through the JCL PARM mechanism, as well as escaped with a preceding backslash for Export's parser. A parameter file must not contain record sequence numbers or other data that is not part of Export's or Import's command-line parameters. Refer to the section "Parameters Containing Spaces" for information about how to specify such values.
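Here is a minimal sketch of such a jobstep, assuming Export is installed as a batch program named EXP; the program name, credentials, and data set name are placeholders rather than values taken from this document.

//* Hypothetical Export jobstep. Each apostrophe in the filespec is
//* written as \'' : doubled for the JCL PARM mechanism and preceded
//* by a backslash for Export's parser.
//EXPORT EXEC PGM=EXP,
// PARM='userid/password FILE=\''USER1.EXPORT.DMP\'' OWNER=USER1'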

The file written by Export and read by Import is called an export file or dump file. Filespecs that you supply for this purpose are subject to extension processing with the suffix .dmp.

In batch, the default export filespec is treated as a data set name ending in EXPDAT.DMP, with prefixing implied. In a shell, it is treated as an HFS file named expdat.dmp. When the export file is a data set, DCB attributes must be established when the Export utility opens the file.

Unlike most Oracle tool and utility files, the export file is opened in "binary" mode and is subject to different default attributes than those described in the section "Data Set DCB Attributes". Either the transporting software (for example, FTP) or the Import utility on the target system may have difficulty with the embedded record and block descriptors used by the V and VB record formats.
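One plausible way to avoid V and VB formats, sketched below on the assumption that established DCB attributes are honored when the export file is opened, is to preallocate the export data set with fixed-length records; the DD name, data set name, and attribute values are placeholders.

//* Hypothetical preallocation of the export file with RECFM=FB so
//* that no V/VB record descriptors are embedded in the data.
//EXPDAT DD DSN=USER1.EXPORT.DMP,DISP=(NEW,CATLG),
//          UNIT=SYSDA,SPACE=(CYL,(10,10)),
//          DCB=(RECFM=FB,LRECL=4096,BLKSIZE=4096)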

One of the strengths of Export and Import is that they can be used to move Oracle data between dissimilar platforms without an Oracle Net connection between the two. This may be faster, and in some cases more secure, than running one utility or the other over an Oracle Net connection.

When Import reads data that was created by Export running on a different platform, the data must be unmodified from what was written; translation of data between formats and character sets is handled automatically by Import. If you use something like File Transfer Protocol (FTP) software to move the data to the target system, specify a "binary mode" or similar processing option to prevent attempts to translate character data.
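As an illustration, a z/OS batch FTP step might transfer the dump file in binary mode as sketched below; the host name, credentials, and data set names are placeholders.

//* Hypothetical batch FTP step. The BINARY subcommand prevents
//* EBCDIC/ASCII translation of the dump file during transfer.
//FTPSTEP EXEC PGM=FTP,PARM='target.host.example (EXIT'
//OUTPUT  DD SYSOUT=*
//INPUT   DD *
userid password
BINARY
PUT 'USER1.EXPORT.DMP' expdat.dmp
QUIT
/*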

If you fail to do this and the data is translated, Import typically issues a message indicating that the file is not a valid export file. Refer to Oracle Database Utilities for information about the conditions in which return codes are set.

The following example shows a batch jobstep execution of Export: a simple export of all objects owned by a user ID.
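This is a minimal sketch, again assuming Export is installed as a batch program named EXP; the STEPLIB data set name and the credentials are placeholders for site-specific values.

//* Hypothetical batch Export of all objects owned by USER1.
//EXPORT  EXEC PGM=EXP,
// PARM='userid/password OWNER=USER1 FILE=expdat.dmp'
//STEPLIB DD DISP=SHR,DSN=ORACLE.CMDLOAD
//SYSOUT  DD SYSOUT=*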

Datapump Export and Import provide capabilities that Export and Import do not, in particular an easy way to spread processing over multiple tasks to improve parallelism. The filespec considerations described earlier also apply to the files these utilities use, including the dump files written by Datapump Export or read by Datapump Import and the text log file written by both components.

CICS Option recognizes these resource types and attributes but takes no action on them.

Just as on the mainframe, you can hold some information about your files in the catalog, and the rest within CICS, in the file control table (FCT) entry for the file. For the FILE resource type, there is a set of attributes that are mandatory if you want to use the FCT entry to store all the file information, and not required at all if you want to use the catalog, although in that case you will have to obtain the information to populate the catalog from elsewhere.

There are two methods of populating the catalog. The following table shows the attributes that can be held either in the catalog or in the FCT entry, along with the corresponding fields in the Interactive AMS dialog boxes; there is no exact match between the terminology used, the attributes held, and what they mean.

How exactly you go about this depends on a number of considerations. The supplied programs are for your own use, and you can modify them in any way that you see fit, to suit your installation's requirements or standards. You can find out more about these programs by reading the comments in the source program listings. This is an example of definitions for a KSDS file and its alternate index, with the additional Micro Focus file statements highlighted.

If you want to use the catalog to store the extra information, what you need to do depends on whether or not data set names are stored in the CSD on the mainframe. The CSD will not contain data set names if you use JCL to create your files or if you use a third-party data set management product.

You need the data set name because the catalog links the data set name on the mainframe to the name of the physical file on the PC. If you do not want to use the catalog to store extra file information, you must obtain the information by other means.

You have the following options. Instead, it has a list function. You can modify the supplied source programs (.CBL files) to suit your site's requirements. This section describes some of the modifications you might want to make and how to set about making them.

The method we recommend is to write a small CICS program that creates a file containing the file ID and its associated data set name.

The CICS commands to achieve this are described below.
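One sketch of the approach, assuming the standard CICS SPI browse commands are available (this document does not list the exact commands): EXEC CICS INQUIRE FILE START begins a browse of the installed file definitions, EXEC CICS INQUIRE FILE(name) NEXT DSNAME(dsname) retrieves each file ID together with its associated data set name, and EXEC CICS INQUIRE FILE END ends the browse; the program can then write each file ID and data set name pair to an output file.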

Under certain circumstances, you may want to filter out some types of resource definitions; there are several ways you could achieve this. If you do not intend to use the catalog and you want to import a definition for an alternate index file, you will need to add the BASEFILE statement to the resource definition. This statement names the base file to which the alternate index file is related. However, if your installation has a naming convention that allows easy identification of an alternate index's base file, you may modify the supplied programs to effect this.

As an alternative to obtaining the file information before you import the definitions from the mainframe, you can obtain it on the PC during the import process. The source code file, dfhrdtux, is supplied; you should update it so that it can supply the missing information from a local source, such as a data file on the PC.

You set the Non-mainframe dialect on the Build Settings dialog box. Full details about the interface are provided in comments in the supplied source code file, dfhrdtux. This section describes the stages in the process of extracting the resource definitions using the Micro Focus mainframe utility programs. Typical JCL is as follows:
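The jobstep below is only a hypothetical skeleton of the shape such JCL typically takes; the program name, DD names, and data set names are all placeholders to be replaced with the values your installation and the supplied Micro Focus documentation specify.

//* Hypothetical skeleton of an extract jobstep; replace EXTRPGM and
//* the data set names with the values your installation uses.
//EXTRACT  EXEC PGM=EXTRPGM
//STEPLIB  DD DISP=SHR,DSN=your.utility.loadlib
//SYSPRINT DD SYSOUT=*
//RDTOUT   DD DSN=your.extract.output,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(TRK,(5,5))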


