Oracle Bulkload

The Oracle Bulkload origin reads all available data from multiple Oracle tables, then stops the pipeline. The origin can use multiple threads to enable the parallel processing of data. For information about supported versions, see Supported Systems and Versions in the Data Collector documentation.

Use the Oracle Bulkload origin to quickly read database tables, such as when you want to migrate tables to another database or system. You can use the origin to read from static tables or non-static tables.

When you configure the Oracle Bulkload origin, you specify connection information and the tables to read. You can also configure advanced properties, such as the number of threads to use, the number of batches to fetch in each request, the maximum batch size, and whether to consider the case of schema and table names when executing queries.

By default, the origin generates field attributes that provide additional information about each field.

The origin can generate events for an event stream. For more information about dataflow triggers and the event framework, see Dataflow Triggers Overview.

Before you use the Oracle Bulkload origin, you must complete a prerequisite task.
Note: The Oracle Bulkload origin does not maintain an offset. Each time the pipeline runs, the origin processes all available data. So even if the pipeline stops before completing all processing, the origin processes all available data again when you restart the pipeline.

Prerequisite

Before using the Oracle Bulkload origin, install the Oracle JDBC driver for the database. The origin cannot access the database until you install this driver.

To use the JDBC driver with multiple stage libraries, install the driver into each stage library associated with the stages.
  1. Download the Oracle JDBC driver from the Oracle website.
    Note: Writing XML data to Oracle requires installing the Oracle Data Integrator Driver for XML. For more information, see the Oracle documentation.
  2. Install the driver as an external library into the JDBC Oracle stage library, streamsets-datacollector-jdbc-oracle-lib, which includes the origin.

For information about installing additional drivers, see Install External Libraries in the Data Collector documentation.
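
After installing the driver, you can optionally confirm that the Java environment can load it. The following minimal sketch is a verification aid, not part of the origin; it assumes the standard Oracle driver class name, oracle.jdbc.OracleDriver, and that the ojdbc JAR is on the classpath:

  // Minimal check that the Oracle JDBC driver is available on the classpath.
  public class OracleDriverCheck {
      public static void main(String[] args) throws ClassNotFoundException {
          Class.forName("oracle.jdbc.OracleDriver");
          System.out.println("Oracle JDBC driver loaded successfully");
      }
  }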

Static and Non-Static Tables

You can use the Oracle Bulkload origin to read static tables, which do not change while the pipeline runs, or non-static tables, which can change while the pipeline runs.

When reading from non-static tables that might change as the pipeline runs, configure the stage to use isolation levels. With isolation levels enabled, the origin uses the serializable isolation level and reads only the data committed before the pipeline starts. The origin does not capture changes made to the tables while the pipeline runs. Note that the Oracle consistency checks performed at this isolation level can significantly reduce throughput in an environment with many concurrent transactions.
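
The following sketch illustrates the serializable-read behavior in plain JDBC terms. It is not the origin's internal code, and the connection string, credentials, and table name are placeholders:

  import java.sql.*;

  public class SerializableReadSketch {
      public static void main(String[] args) throws SQLException {
          String url = "jdbc:oracle:thin:@//dbhost.example.com:1521/ORCLPDB1"; // placeholder
          try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
              conn.setAutoCommit(false);
              // Serializable isolation: queries in this transaction see only data that was
              // committed before the transaction began.
              conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
              try (Statement stmt = conn.createStatement();
                   ResultSet rs = stmt.executeQuery("SELECT * FROM sales.SALES_2023")) {
                  while (rs.next()) {
                      // Rows committed after the transaction started are not visible here.
                  }
              }
              conn.commit();
          }
      }
  }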

After using the origin to migrate data from static tables, you can use a separate pipeline to continue processing: include the Oracle CDC or Oracle CDC Client origin to process CDC data from LogMiner redo logs, or the JDBC Multitable Consumer origin to read continuously from the tables.

If you use the origin to migrate data from non-static tables, a subsequent CDC pipeline does not capture changes made while the data was being migrated, which can result in data loss. Therefore, processing CDC data after migrating data from non-static tables is not recommended.

Batch Processing

Unlike most Data Collector origins, the Oracle Bulkload origin performs batch processing only. After processing all data, the origin stops the pipeline, rather than waiting for additional data as with streaming pipelines.

The Oracle Bulkload origin does not maintain an offset during processing. Each time that you run a pipeline that includes the origin, the origin processes the available data in the specified tables. Then the origin stops the pipeline gracefully, allowing processes to complete.

Tip: If the pipeline stops before processing is complete, to avoid duplicate records, clear the destination system of processed records before starting the pipeline again.

Schema and Table Names

When you configure the Oracle Bulkload origin, you specify the tables that you want to read. To specify the tables, you define the schema and a table name pattern.

You can use SQL wildcards to define a set of tables within a schema or across multiple schemas.

For example, say you want to process all tables in the sales schema that start with SALES_. You can use the following configuration to specify the tables to process:
  • Schema: sales
  • Table Name Pattern: SALES_%
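
The following sketch shows how such a pattern behaves as a SQL wildcard match against Oracle's ALL_TABLES view. The exact catalog query that the origin runs is not documented here, and the connection details are placeholders. In SQL patterns, % matches any sequence of characters and _ matches exactly one character:

  import java.sql.*;

  public class TablePatternSketch {
      public static void main(String[] args) throws SQLException {
          String url = "jdbc:oracle:thin:@//dbhost.example.com:1521/ORCLPDB1"; // placeholder
          try (Connection conn = DriverManager.getConnection(url, "user", "password");
               PreparedStatement stmt = conn.prepareStatement(
                   "SELECT owner, table_name FROM all_tables WHERE owner = ? AND table_name LIKE ?")) {
              stmt.setString(1, "SALES");   // schema (unquoted Oracle names are stored in uppercase)
              stmt.setString(2, "SALES_%"); // table name pattern
              try (ResultSet rs = stmt.executeQuery()) {
                  while (rs.next()) {
                      System.out.println(rs.getString("owner") + "." + rs.getString("table_name"));
                  }
              }
          }
      }
  }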

You can configure the origin to consider the case of schema and table names when executing queries.

Multithreaded Processing

The Oracle Bulkload origin performs parallel processing and enables the creation of a multithreaded pipeline.

When you start the pipeline, the Oracle Bulkload origin retrieves the list of tables defined in the table configuration. The origin then uses multiple concurrent threads for processing based on the Maximum Pool Size property on the Advanced tab.

As the pipeline runs, Oracle creates blocks of data in memory. The Oracle Bulkload origin creates a task from a block of data and passes it to an available pipeline runner. The pipeline runner creates batches from the task for processing based on the maximum batch size configured for the origin.

A pipeline runner is a sourceless pipeline instance - an instance of the pipeline that includes all of the processors, executors, and destinations in the pipeline and handles all pipeline processing after the origin. Each pipeline runner processes one batch at a time, just like a pipeline that runs on a single thread.

When tasks created from Oracle blocks are smaller than desired, such as when they are smaller than the maximum batch size, you can configure the origin to merge small tasks. Use the Minimum Task Size property on the Advanced tab to specify the minimum number of records to include in a task. When set, smaller tasks are merged to enable more efficient processing.
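
The following sketch illustrates the merging idea only. The types and names are hypothetical and simplified; they do not reflect the origin's internal API:

  import java.util.*;
  import java.util.concurrent.*;

  public class TaskMergeSketch {
      public static void main(String[] args) {
          int minimumTaskSize = 5000;                          // Minimum Task Size property (illustrative value)
          Queue<List<String>> runnerQueue = new ConcurrentLinkedQueue<>();
          List<List<String>> oracleBlocks = new ArrayList<>(); // rows grouped into blocks by Oracle (hypothetical)

          List<String> task = new ArrayList<>();
          for (List<String> block : oracleBlocks) {
              task.addAll(block);                              // merge small blocks into the current task
              if (task.size() >= minimumTaskSize) {
                  runnerQueue.offer(new ArrayList<>(task));    // hand a full task to an available pipeline runner
                  task.clear();
              }
          }
          if (!task.isEmpty()) {
              runnerQueue.offer(task);                         // flush the final, possibly smaller task
          }
      }
  }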

Multithreaded pipelines preserve the order of records within each batch, just like a single-threaded pipeline. However, because tasks are processed by different pipeline runners, the order in which batches are written to destinations is not guaranteed.

For more information about multithreaded pipelines, see Multithreaded Pipeline Overview.

Field Attributes

The Oracle Bulkload origin generates field attributes for columns converted to the Decimal or Datetime data types in Data Collector. The attributes provide additional information about each field.

The following data type conversions do not include all information in the corresponding Data Collector type:
  • The Oracle Number data type is converted to the Data Collector Decimal data type, which does not store scale and precision.
  • The Oracle Timestamp data type is converted to the Data Collector Datetime data type, which does not store nanoseconds.
To preserve this information during data type conversion, the origin generates the following field attributes for these Data Collector data types:
Data Collector Data Type    Generated Field Attribute    Description
Decimal                     precision                    Provides the original precision for every number column.
Decimal                     scale                        Provides the original scale for every number column.
Datetime                    nanoSeconds                  Provides the original nanoseconds for every timestamp column.

You can use the record:fieldAttribute or record:fieldAttributeOrDefault functions to access the information in the attributes. For more information about working with field attributes, see Field Attributes.
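
For example, assuming a Decimal field named /total that the origin read from an Oracle Number column, the following expression returns the original precision of the column:
  ${record:fieldAttribute('/total', 'precision')}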

Event Generation

The Oracle Bulkload origin can generate events that you can use in an event stream.

Oracle Bulkload events can be used in any logical way. For example, you can use the table-finished event to trigger a task in an event stream, or write it to a destination to track when the origin finishes processing each table.

For more information about dataflow triggers and the event framework, see Dataflow Triggers Overview.

Event Record

Event records generated by the Oracle Bulkload origin have the following event-related record header attributes:
Record Header Attribute         Description
sdc.event.type                  Event type. Uses the following type:
                                • table-finished - Generated when the origin completes processing all rows within a table.
sdc.event.version               Integer that indicates the version of the event record type.
sdc.event.creation_timestamp    Epoch timestamp when the stage created the event.

The Oracle Bulkload origin can generate the following event record:

table-finished
The Oracle Bulkload origin generates a table-finished event record when the origin completes processing all data within a table.
The table-finished event record has the following additional fields:
Event Record Field    Description
schema                Schema associated with the table that has no remaining data to be processed.
table                 Table that has no remaining data to be processed.
record-count          Number of records that were successfully processed.
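
For example, a table-finished event record generated for a SALES_2023 table in the SALES schema might include the following field values. The names and count are illustrative only:
  schema: SALES
  table: SALES_2023
  record-count: 1250000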

Configuring an Oracle Bulkload Origin

Configure an Oracle Bulkload origin to read data from one or more Oracle database tables.

Before you use the origin in a pipeline, complete the prerequisite task.

  1. In the Properties panel, on the General tab, configure the following properties:
    General Property Description
    Name Stage name.
    Description Optional description.
    Produce Events Generates event records when events occur. Use for event handling.
    On Record Error Error record handling for the stage:
    • Discard - Discards the record.
    • Send to Error - Sends the record to the pipeline for error handling.
    • Stop Pipeline - Stops the pipeline.
  2. On the JDBC tab, configure the following JDBC properties:
    JDBC Property Description
    JDBC Connection String
    Connection string to use to connect to the database. For a sample Oracle thin-driver connection string, see the sketch that follows these steps.
    Note: If you include the JDBC credentials in the connection string, use the user account created for the origin.
    Use Credentials Enables entering credentials on the Credentials tab. Select when you do not include credentials in the JDBC connection string.
    Tables Tables to read. Configure properties for each table or set of tables that you want to read.

    Using simple or bulk edit mode, click the Add icon to define another table configuration.

    Schema Name Schema to use. You can enter a schema name or use SQL wildcards to define multiple schemas.
    Table Name A table name pattern that defines the tables to read. You can enter a table name or use SQL wildcards to define multiple tables.
    Additional JDBC Configuration Properties Additional JDBC configuration properties to use. To add properties, click Add and define the JDBC property name and value.

    Use the property names and values as expected by JDBC.

  3. To enter JDBC credentials separately from the JDBC connection string, on the Credentials tab, configure the following properties:
    Credentials Property Description
    Username

    Oracle user name.

    The user must have the following Oracle privileges, which you can verify with the sketch that follows these steps:
    • SELECT on the tables being read.
    • SELECT on the SYS.DBA_EXTENTS system table.
    • READ on the SYS.USER_OBJECTS system table.
    Password Password for the account.
    Tip: To secure sensitive information such as user names and passwords, you can use runtime resources or credential stores. For more information about credential stores, see Credential Stores in the Data Collector documentation.
  4. On the Advanced tab, optionally configure the following properties:
    Advanced Property Description
    Maximum Pool Size Number of threads to use for multithreaded processing.
    Batches per Request Number of batches to fetch from the database in each request.

    Default is 50.

    Max Batch Size (records) Maximum number of records processed at one time. Honors values up to the Data Collector maximum batch size.

    Default is 1000. The Data Collector default is 1000.

    Minimum Idle Connections Minimum number of connections to create and maintain. To define a fixed connection pool, set to the same value as Maximum Pool Size.

    Default is 1.

    Minimum Task Size Minimum number of records to allow in a task.

    This property determines whether smaller tasks should be merged with larger tasks for processing. Tasks are based on chunks of data provided by Oracle.

    Use -1 to opt out of using this property.

    Stop for SQL Exception Stops the pipeline upon encountering a SQL exception.
    Case Sensitive Considers the case of the schemas and tables when executing queries.
    Idle Timeout (seconds) Maximum time to allow a connection to idle. Use a time constant in an expression to define the time increment.

    Use 0 to avoid removing any idle connections.

    When the entered value is close to or more than the maximum lifetime for a connection, Data Collector ignores the idle timeout.

    Default is 10 minutes, defined as follows:
    ${10 * MINUTES}
    Use Isolation Levels Uses the serializable isolation level so that the origin reads only data committed before the pipeline starts, isolating the read from changes made to the tables while the origin runs.

    Only select this option if reading from non-static tables that might change as the pipeline runs.

    Connection Timeout (seconds) Maximum time to wait for a connection. Use a time constant in an expression to define the time increment.
    Default is 30 seconds, defined as follows:
    ${30 * SECONDS}
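
For reference, the following sketch shows a typical Oracle thin-driver connection string and a quick way to verify that the origin's user account has the privileges listed in step 3. It is a verification aid, not part of the origin; the host, service name, credentials, and table name are placeholders:

  import java.sql.*;

  public class OracleBulkloadAccessCheck {
      public static void main(String[] args) throws SQLException {
          // Typical Oracle thin-driver connection string: jdbc:oracle:thin:@//<host>:<port>/<service_name>
          String url = "jdbc:oracle:thin:@//dbhost.example.com:1521/ORCLPDB1";
          try (Connection conn = DriverManager.getConnection(url, "sdc_user", "sdc_password");
               Statement stmt = conn.createStatement()) {
              stmt.executeQuery("SELECT COUNT(*) FROM sales.SALES_2023");  // requires SELECT on the table
              stmt.executeQuery("SELECT COUNT(*) FROM SYS.DBA_EXTENTS");   // requires SELECT on SYS.DBA_EXTENTS
              stmt.executeQuery("SELECT COUNT(*) FROM SYS.USER_OBJECTS");  // requires READ on SYS.USER_OBJECTS
              System.out.println("Connection and privilege check passed");
          }
      }
  }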