Amazon S3

The Amazon S3 origin reads objects stored in Amazon S3. The object names must share a prefix pattern, and the objects should be fully written before the origin reads them. To read messages from Amazon SQS, use the Amazon SQS Consumer origin. The Amazon S3 origin can use multiple threads to process objects in parallel. For information about supported versions, see Supported Systems and Versions in the Data Collector documentation.

Note: The Amazon S3 origin can be used in standalone pipelines only. To use a cluster pipeline to read from Amazon S3, use a Hadoop FS origin in a cluster EMR batch pipeline that runs on an Amazon EMR cluster. Or, use a Hadoop FS origin in a cluster batch pipeline that runs on a Cloudera distribution of Hadoop (CDH) or Hortonworks Data Platform (HDP) cluster. For more information, see Amazon S3 Requirements for cluster pipelines.

With the Amazon S3 origin, you define the region, bucket, prefix pattern, optional common prefix, and read order. These properties determine the objects that the origin processes. You configure the authentication method that the origin uses to connect to Amazon S3. You can optionally include Amazon S3 object metadata in the record as record header attributes.

After processing an object or upon encountering errors, the origin can keep, archive, or delete the object. When archiving, the origin can copy or move the object.

When a pipeline stops, the Amazon S3 origin notes where it stops reading. When the pipeline starts again, the origin continues processing from where it stopped by default. You can reset the origin to process all requested objects.

You can configure the origin to decrypt data stored on Amazon S3 with server-side encryption and customer-provided encryption keys. You can optionally use a proxy to connect to Amazon S3. You can also use a connection to configure the origin.

Note: The origin processes objects based on object key names and locations. Having objects with the same key name in the same location can cause the origin to skip reading the duplicate objects.

The origin can generate events for an event stream. For more information about dataflow triggers and the event framework, see Dataflow Triggers Overview.

Authentication Method

You can configure the Amazon S3 origin to authenticate with Amazon Web Services (AWS) using an instance profile or AWS access keys. When accessing a public bucket, you can connect anonymously using no authentication.

For more information about the authentication methods and details on how to configure each method, see Security in Amazon Stages.
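As a rough illustration only, the following sketch shows what each method looks like with the AWS SDK for Python (boto3). The origin handles authentication internally, so this is not how Data Collector connects; the key values are placeholders.

import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Instance profile: when running on an EC2 instance with an attached IAM role,
# the SDK resolves temporary credentials automatically.
s3_instance_profile = boto3.client("s3")

# AWS access keys: supply an access key pair explicitly (placeholder values).
s3_access_keys = boto3.client(
    "s3",
    aws_access_key_id="<access key ID>",
    aws_secret_access_key="<secret access key>",
)

# No authentication: send unsigned requests to read from a public bucket.
s3_anonymous = boto3.client("s3", config=Config(signature_version=UNSIGNED))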

Common Prefix, Prefix Pattern, and Wildcards

The Amazon S3 origin appends the prefix pattern to the common prefix to define the objects that the origin processes. You can specify an exact prefix pattern or you can use Ant-style path patterns to read multiple objects recursively.

Ant-style path patterns can include the following wildcards:
  • Question mark (?) to match a single character
  • Asterisk (*) to match zero or more characters
  • Double asterisks (**) to match zero or more directories
For example, to process all log files in US/East/MD/ and all nested prefixes, you can use the following common prefix and prefix pattern:
Common Prefix: US/East/MD/
Prefix Pattern: **/*.log
If the unnamed nested prefixes that you want to include appear earlier in the hierarchy, such as US/**/weblogs/, you can include the nested prefixes in the prefix pattern or define the entire hierarchy in the prefix pattern, as follows:
Common Prefix: US/
Prefix Pattern: **/weblogs/*.log

Common Prefix: 
Prefix Pattern: US/**/weblogs/*.log
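Conceptually, the origin matches the part of each object key that follows the common prefix against the prefix pattern. The following Python sketch approximates the wildcard rules with a regular expression; it is an illustration, not the origin's actual matcher.

import re

def ant_to_regex(pattern: str) -> str:
    # Approximate an Ant-style path pattern as a regular expression.
    parts = []
    i = 0
    while i < len(pattern):
        if pattern.startswith("**/", i):
            parts.append(r"(?:[^/]+/)*")   # **/ : zero or more prefix levels
            i += 3
        elif pattern.startswith("**", i):
            parts.append(r".*")            # ** : anything, across prefix levels
            i += 2
        elif pattern[i] == "*":
            parts.append(r"[^/]*")         # * : zero or more characters within one level
            i += 1
        elif pattern[i] == "?":
            parts.append(r"[^/]")          # ? : exactly one character
            i += 1
        else:
            parts.append(re.escape(pattern[i]))
            i += 1
    return "".join(parts) + "$"

common_prefix = "US/East/MD/"
prefix_pattern = "**/*.log"
matcher = re.compile(ant_to_regex(prefix_pattern))

for key in ["US/East/MD/weblogs/app.log", "US/East/MD/app.log", "US/East/MD/app.csv"]:
    relative = key[len(common_prefix):]    # match only the part after the common prefix
    print(key, bool(matcher.match(relative)))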

Multithreaded Processing

The Amazon S3 origin uses multiple concurrent threads to process data based on the Number of Threads property.

Each thread reads data from a single object, and no more than one thread reads from an object at a time. The object read order is based on the configuration for the Read Order property.

As the pipeline runs, each thread connects to the origin system, creates a batch of data, and passes the batch to an available pipeline runner. A pipeline runner is a sourceless pipeline instance - an instance of the pipeline that includes all of the processors, executors, and destinations in the pipeline and handles all pipeline processing after the origin.

Each pipeline runner processes one batch at a time, just like a pipeline that runs on a single thread. When the flow of data slows, the pipeline runners wait idly until they are needed, generating an empty batch at regular intervals. You can configure the Runner Idle Time pipeline property to specify the interval or to opt out of empty batch generation.

Multithreaded pipelines preserve the order of records within each batch, just like a single-threaded pipeline. But because batches are processed by different pipeline runners, the order in which batches are written to destinations is not guaranteed.

For example, suppose you configure the origin to use five threads to read objects in the order of last-modified timestamp. When you start the pipeline, the origin creates five threads, and Data Collector creates a matching number of pipeline runners.

The Amazon S3 origin assigns a thread to each of the five oldest objects. Each thread processes its assigned object, passing batches of data to the origin. Upon receiving data, the origin passes a batch to each of the pipeline runners for processing.

After a thread completes processing an object, the origin assigns the thread to the next object based on the last-modified timestamp, until all objects are processed.
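The assignment described above can be sketched as a thread pool in which each worker handles exactly one object at a time and picks up the next object in read order when it finishes. The process_object helper and the object list below are hypothetical placeholders, not Data Collector internals.

from concurrent.futures import ThreadPoolExecutor

NUM_THREADS = 5  # corresponds to the Number of Threads property

def process_object(key: str) -> None:
    # Hypothetical stand-in: read one object and emit its batches downstream.
    print(f"processing {key}")

# Objects listed in the configured read order (here, oldest first).
objects_in_read_order = [
    "2016/January/web-1.log",
    "2016/January/web-2.log",
    "2016/February/web-5.log",
]

# Each thread reads from a single object; as a thread completes an object,
# it is assigned the next one until all objects are processed.
with ThreadPoolExecutor(max_workers=NUM_THREADS) as pool:
    pool.map(process_object, objects_in_read_order)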

For more information about multithreaded pipelines, see Multithreaded Pipeline Overview.

Record Header Attributes

When the Amazon S3 origin processes Avro data, it includes the Avro schema in an avroSchema record header attribute. When the origin processes Parquet data and Skip Union Indexes is not enabled, it generates an avro.union.typeIndex./id record header attribute identifying the index number of the element in a union the data is read from. You can also configure the origin to include Amazon S3 object metadata in record header attributes.

You can use the record:attribute or record:attributeOrDefault functions to access the information in the attributes. For more information about working with record header attributes, see Working with Header Attributes.

Object Metadata in Record Header Attributes

You can include Amazon S3 object metadata in record header attributes. Include metadata when you want to use the information to help process records. For example, you might include metadata if you want to route records to different branches of a pipeline based on the last-modified timestamp.

Use the Include Metadata property to include metadata in the record header attributes. When you include metadata in record header attributes, the Amazon S3 origin includes the following information:
System-defined metadata
The origin includes the following system-defined metadata:
  • Name - The object name. Bucket and prefix information is included as follows:
    <bucket>/<prefix>/<object_name>
  • Cache-Control
  • Content-Disposition
  • Content-Encoding
  • Content-Length
  • Content-MD5
  • Content-Range
  • Content-Type
  • ETag
  • Expires
  • Last-Modified
For more information about Amazon S3 system-defined metadata, see the Amazon S3 documentation.
User-defined metadata
When available, the Amazon S3 origin also includes user-defined metadata in record header attributes.
Amazon S3 requires user-defined metadata to be named with the following prefix: x-amz-meta-.
When generating the record header attribute, the origin omits the prefix.
For example, if you have user-defined metadata called "x-amz-meta-extraInfo", the origin names the record header attribute as follows: extraInfo.
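To see the metadata that becomes available as record header attributes, you can inspect an object directly. The following boto3 sketch uses placeholder bucket and key names; the origin gathers the equivalent information itself.

import boto3

s3 = boto3.client("s3")

# head_object returns system-defined metadata such as Content-Length, Content-Type,
# ETag, and Last-Modified, plus any user-defined metadata on the object.
response = s3.head_object(Bucket="web-server-logs", Key="US/East/MD/web-0001.log")

print(response["ContentLength"], response["ContentType"],
      response["ETag"], response["LastModified"])

# boto3 returns user-defined metadata with the x-amz-meta- prefix already stripped
# (and the keys lowercased), similar to how the origin names the attributes.
print(response.get("Metadata", {}))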

For more information about record header attributes, see Record Header Attributes.

Read Order

The Amazon S3 origin reads objects in ascending order based on the object key name or the last modified timestamp. For best performance when reading a large number of objects, configure the origin to read objects based on the key name. A short sorting sketch at the end of this section illustrates both orders.

You can configure one of the following read orders:

Lexicographically Ascending Key Names
The Amazon S3 origin can read objects in lexicographically ascending order based on key names. Lexicographically ascending order reads the numbers 1 through 11 as follows:
1, 10, 11, 2, 3, 4, ..., 9
For example, you configure the Amazon S3 origin to read from the following bucket, common prefix, and prefix pattern using lexicographically ascending order based on key names:
Bucket: WebServer
Common Prefix: 2016/
Prefix Pattern: **/web*.log
The origin reads the following objects in the following order:
s3://WebServer/2016/February/web-10.log
s3://WebServer/2016/February/web-11.log
s3://WebServer/2016/February/web-5.log
s3://WebServer/2016/February/web-6.log
s3://WebServer/2016/February/web-7.log
s3://WebServer/2016/February/web-8.log
s3://WebServer/2016/February/web-9.log
s3://WebServer/2016/January/web-1.log
s3://WebServer/2016/January/web-2.log
s3://WebServer/2016/January/web-3.log
s3://WebServer/2016/January/web-4.log
To read these objects in logical and lexicographically ascending order, you might add leading zeros to the file naming convention as follows:
s3://WebServer/2016/February/web-0005.log
s3://WebServer/2016/February/web-0006.log
...
s3://WebServer/2016/February/web-0010.log
s3://WebServer/2016/February/web-0011.log
s3://WebServer/2016/January/web-0001.log
s3://WebServer/2016/January/web-0002.log
s3://WebServer/2016/January/web-0003.log
s3://WebServer/2016/January/web-0004.log
Last Modified Timestamp
The Amazon S3 origin can read objects in ascending order based on the last modified timestamp. When you start a pipeline, the origin starts processing data with the earliest object that matches the common prefix and prefix pattern, and then progresses in chronological order. If two or more objects have the same timestamp, the origin processes the objects in lexicographically increasing order by key name.
To process objects with timestamps earlier than the objects already processed, reset the origin to read all available objects.

For example, you configure the origin to read from the ServerEast bucket, using LogFiles/ as the common prefix and *.log as the prefix pattern. You need to process the following log files from two different servers using ascending order based on the last modified timestamp:

s3://ServerEast/LogFiles/fileA.log        04-30-2016 12:03:23
s3://ServerEast/LogFiles/fileB.log        04-30-2016 15:34:51
s3://ServerEast/LogFiles/file1.log        04-30-2016 12:00:00
s3://ServerEast/LogFiles/file2.log        04-30-2016 18:39:44
The origin reads these objects in order of the timestamp, as follows:
s3://ServerEast/LogFiles/file1.log        04-30-2016 12:00:00
s3://ServerEast/LogFiles/fileA.log        04-30-2016 12:03:23
s3://ServerEast/LogFiles/fileB.log        04-30-2016 15:34:51
s3://ServerEast/LogFiles/file2.log        04-30-2016 18:39:44

If a new object arrives with a timestamp of 04-29-2016 12:00:00, the Amazon S3 origin does not process the object unless you reset the origin.
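Both read orders amount to a sort over the matched object listing. The following boto3 sketch lists the LogFiles/ prefix from the example above and sorts it each way; it illustrates the ordering only and is not the origin's implementation.

import boto3

s3 = boto3.client("s3")
listing = s3.list_objects_v2(Bucket="ServerEast", Prefix="LogFiles/")
objects = listing.get("Contents", [])

# Lexicographically ascending key names: a plain string sort, which is why
# web-10.log sorts before web-5.log unless the names are zero-padded.
by_key = sorted(objects, key=lambda obj: obj["Key"])

# Last modified timestamp, with the key name as a tie-breaker when
# objects share the same timestamp.
by_timestamp = sorted(objects, key=lambda obj: (obj["LastModified"], obj["Key"]))

for obj in by_timestamp:
    print(obj["Key"], obj["LastModified"])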

Buffer Limit and Error Handling

The Amazon S3 origin uses a buffer to read objects into memory to produce records. The size of the buffer determines the maximum size of the record that can be processed.

The buffer limit helps prevent out of memory errors. Decrease the buffer limit when memory on the Data Collector machine is limited. Increase the buffer limit to process larger records when memory is available.

When a record is larger than the specified limit, the origin processes the object based on the stage error handling:
Discard
The origin discards the record and all remaining records in the object, and then continues processing the next object.
Send to Error
With a buffer limit error, the origin cannot send the record to the pipeline for error handling because it is unable to fully process the record.

Instead, the origin displays a message in the pipeline history indicating that a buffer overrun error occurred.

If an error directory is configured for the stage, the origin moves the object to the error directory and continues processing the next object.

Stop Pipeline
The origin stops the pipeline and displays a message indicating that a buffer overrun error occurred. The message includes the object and offset where the buffer overrun error occurred. The information displays in the pipeline history.
Note: You can also check the Data Collector log file for error details.

Server Side Encryption

You can configure the origin to decrypt data stored on Amazon S3 with Amazon Web Services server-side encryption.

When configured for server-side encryption, the origin uses customer-provided encryption keys to decrypt the data. To use server-side encryption, provide the following information:
  • Base64 encoded 256-bit encryption key
  • Base64 encoded 128-bit MD5 digest of the encryption key using RFC 1321

For information about implementing customer-provided encryption keys in the origin system, see the Amazon S3 documentation.
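For reference, both values can be derived from the raw 256-bit key, as in the following Python sketch. The key is generated here only for illustration; in practice you supply the same customer-provided key that was used to encrypt the objects, and the bucket and object names are placeholders.

import base64
import hashlib
import os

import boto3

raw_key = os.urandom(32)   # 256-bit customer-provided key (illustrative only)

# The two values that the stage configuration expects:
customer_encryption_key = base64.b64encode(raw_key).decode()          # Base64 encoded 256-bit key
customer_encryption_key_md5 = base64.b64encode(
    hashlib.md5(raw_key).digest()                                      # RFC 1321 MD5 digest of the key
).decode()

# Illustrative read of an SSE-C encrypted object. boto3 accepts the raw key
# and handles the Base64 encoding and MD5 digest on its own.
s3 = boto3.client("s3")
obj = s3.get_object(
    Bucket="my-bucket",
    Key="path/to/object",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=raw_key,
)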

Event Generation

The Amazon S3 origin can generate events that you can use in an event stream. When you enable event generation, the origin generates event records each time it starts or completes reading an object, and when the configured batch wait time elapses after all available data has been processed.

Amazon S3 events can be used in any logical way. For example:
  • With the Pipeline Finisher executor to stop the pipeline and transition the pipeline to a Finished state when the origin completes processing available data.

    When you restart a pipeline stopped by the Pipeline Finisher executor, the origin continues processing from the last-saved offset unless you reset the origin.

    For an example, see Stopping a Pipeline After Processing All Available Data.

  • With a destination to store event information.

    For an example, see Preserving an Audit Trail of Events.

For more information about dataflow triggers and the event framework, see Dataflow Triggers Overview.

Event Records

Event records generated by the Amazon S3 origin have the following event-related record header attributes. Record header attributes are stored as String values:
Record Header Attribute Description
sdc.event.type Event type. Uses one of the following types:
  • new-file - Generated when the origin starts processing a new object.
  • finished-file - Generated when the origin completes processing an object.
  • no-more-data - Generated after the origin completes processing all available objects and the number of seconds configured for Batch Wait Time has elapsed.
sdc.event.version Integer that indicates the version of the event record type.
sdc.event.creation_timestamp Epoch timestamp when the stage created the event.

The Amazon S3 origin can generate the following types of event records:

new-file
The Amazon S3 origin generates a new-file event record when it starts processing a new object.
New-file event records have the sdc.event.type record header attribute set to new-file and include the following field:
Event Record Field Description
filepath Path and name of the object that the origin started processing.
finished-file
The Amazon S3 origin generates a finished-file event record when it finishes processing an object.
Finished-file event records have the sdc.event.type record header attribute set to finished-file and include the following fields:
Event Record Field Description
filepath Path and name of the object that the origin finished processing.
record-count Number of records successfully generated from the object.
error-count Number of error records generated from the object.
no-more-data
The Amazon S3 origin generates a no-more-data event record when the origin completes processing all available records and the number of seconds configured for Batch Wait Time elapses without any new objects to process.
No-more-data event records have the sdc.event.type record header attribute set to no-more-data and include the following fields:
Event Record Field Description
record-count Number of records successfully generated since the pipeline started or since the last no-more-data event was created.
error-count Number of error records generated since the pipeline started or since the last no-more-data event was created.
file-count Number of objects that the origin attempted to process. Can include objects that could not be processed or were not fully processed.
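Conceptually, a consumer of these events dispatches on the sdc.event.type header attribute. The sketch below models an event as two plain dictionaries for illustration; it is not the Data Collector API.

def handle_event(headers: dict, fields: dict) -> None:
    # Dispatch on the event type header attribute.
    event_type = headers["sdc.event.type"]
    if event_type == "new-file":
        print("started:", fields["filepath"])
    elif event_type == "finished-file":
        print("finished:", fields["filepath"],
              "records:", fields["record-count"],
              "errors:", fields["error-count"])
    elif event_type == "no-more-data":
        # Typically routed to a Pipeline Finisher executor to stop the pipeline.
        print("all available objects processed; objects attempted:", fields["file-count"])

handle_event(
    {"sdc.event.type": "finished-file", "sdc.event.version": "1"},
    {"filepath": "US/East/MD/web-0001.log", "record-count": 250, "error-count": 0},
)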

Data Formats

The Amazon S3 origin processes data differently based on the data format. The origin processes the following types of data:
Avro
Generates a record for every Avro record. Includes a precision and scale field attribute for each Decimal field.
The stage includes the Avro schema in an avroSchema record header attribute. You can use one of the following methods to specify the location of the Avro schema definition:
  • Message/Data Includes Schema - Use the schema in the file.
  • In Pipeline Configuration - Use the schema that you provide in the stage configuration properties.
  • Confluent Schema Registry - Retrieve the schema from Confluent Schema Registry. Confluent Schema Registry is a distributed storage layer for Avro schemas. You can configure the stage to look up the schema in Confluent Schema Registry by the schema ID or subject specified in the stage configuration.
Using a schema in the stage configuration or retrieving a schema from Confluent Schema Registry overrides any schema that might be included in the file and can improve performance.
The stage reads files compressed by Avro-supported compression codecs without requiring additional configuration. To enable the stage to read files compressed by other codecs, use the compression format property in the stage.
Binary
Generates a record with a single byte array field at the root of the record.
When the data exceeds the user-defined maximum data size, the origin cannot process the data. Because the record is not created, the origin cannot pass the record to the pipeline to be written as an error record. Instead, the origin generates a stage error.
Delimited
Generates a record for each delimited line.
The CSV parser that you choose determines the delimiter properties that you configure and how the stage handles parsing errors. You can specify if the data includes a header line and whether to use it. You can define the number of lines to skip before reading, the character set of the data, and the root field type to use for the generated record.
You can also configure the stage to replace a string constant with null values and to ignore control characters.
For more information about reading delimited data, see Reading Delimited Data.
Excel
Generates a record for every row in the file. Can process .xls or .xlsx files.

You can configure the origin to read from all sheets in a workbook or from particular sheets in a workbook. You can specify whether files include a header row and whether to ignore the header row. You can also configure the origin to skip cells that do not have a corresponding header value. A header row must be the first row of a file. Vertical header columns are not recognized.

The origin cannot process Excel files with large numbers of rows. You can save such files as CSV files in Excel, and then use the origin to process them with the delimited data format.

JSON
Generates a record for each JSON object. You can process JSON files that include multiple JSON objects or a single JSON array.
When an object exceeds the maximum object length defined for the origin, the origin cannot continue processing data in the file. Records already processed from the file are passed to the pipeline. The behavior of the origin is then based on the error handling configured for the stage:
  • Discard - The origin continues processing with the next file, leaving the partially-processed file in the directory.
  • To Error - The origin continues processing with the next file. If a post-processing error directory is configured for the stage, the origin moves the partially-processed file to the error directory. Otherwise, it leaves the file in the directory.
  • Stop Pipeline - The origin stops the pipeline.
Parquet
The origin generates a record for every Parquet record in the file. The file must contain the Parquet schema. The origin uses the Parquet schema to generate records.

The stage includes the Parquet schema in a parquetSchema record header attribute.

When Skip Union Indexes is not enabled, the origin generates an avro.union.typeIndex./id record header attribute identifying the index number of the element in the union that the data is read from. If a schema contains many unions and the pipeline does not depend on index information, you can enable Skip Union Indexes to avoid long processing times associated with storing a large number of indexes.

Log
Generates a record for every log line.
When a line exceeds the user-defined maximum line length, the origin truncates it.
You can include the processed log line as a field in the record. If the log line is truncated, and you request the log line in the record, the origin includes the truncated line.
You can define the log format or type to be read.
Protobuf
Generates a record for every protobuf message.
Protobuf messages must match the specified message type and be described in the descriptor file.
When the data for a record exceeds 1 MB, the origin cannot continue processing data in the file. The origin handles the file based on file error handling properties and continues reading the next file.
For information about generating the descriptor file, see Protobuf Data Format Prerequisites.
SDC Record
Generates a record for every record. Use to process records generated by a Data Collector pipeline using the SDC Record data format.
For error records, the origin provides the original record as read from the origin in the original pipeline, as well as error information that you can use to correct the record.
When processing error records, the origin expects the error file names and contents as generated by the original pipeline.
Text
Generates a record for each line of text or for each section of text based on a custom delimiter.
When a line or section exceeds the maximum line length defined for the origin, the origin truncates it. The origin adds a boolean field named Truncated to indicate if the line was truncated.
For more information about processing text with a custom delimiter, see Text Data Format with Custom Delimiters.
Whole File
Streams whole files from the origin system to the destination system. You can specify a transfer rate or use all available resources to perform the transfer.
The origin uses checksums to verify the integrity of data transmission.
The origin generates two fields: one for a file reference and one for file information. For more information, see Whole File Data Format.
XML
Generates records based on a user-defined delimiter element. Use an XML element directly under the root element or define a simplified XPath expression. If you do not define a delimiter element, the origin treats the XML file as a single record.
Generated records include XML attributes and namespace declarations as fields in the record by default. You can configure the stage to include them in the record as field attributes.
You can include XPath information for each parsed XML element and XML attribute in field attributes. This also places each namespace in an xmlns record header attribute.
Note: Field attributes and record header attributes are written to destination systems automatically only when you use the SDC RPC data format in destinations. For more information about working with field attributes and record header attributes, and how to include them in records, see Field Attributes and Record Header Attributes.
When a record exceeds the user-defined maximum record length, the origin cannot continue processing data in the file. Records already processed from the file are passed to the pipeline. The behavior of the origin is then based on the error handling configured for the stage:
  • Discard - The origin continues processing with the next file, leaving the partially-processed file in the directory.
  • To Error - The origin continues processing with the next file. If a post-processing error directory is configured for the stage, the origin moves the partially-processed file to the error directory. Otherwise, it leaves the file in the directory.
  • Stop Pipeline - The origin stops the pipeline.
Use the XML data format to process valid XML documents. For more information about XML processing, see Reading and Processing XML Data.
Tip: If you want to process invalid XML documents, you can try using the text data format with custom delimiters. For more information, see Processing XML Data with Custom Delimiters.
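To picture how a delimiter element splits a document into records, the following sketch uses Python's standard library with msg as the delimiter element directly under the root. It is an illustration of the concept, not the origin's parser.

import xml.etree.ElementTree as ET

document = """
<root>
  <msg><id>1</id><text>first</text></msg>
  <msg><id>2</id><text>second</text></msg>
</root>
"""

root = ET.fromstring(document)

# With msg as the delimiter element, each <msg> under the root becomes one record.
records = [{child.tag: child.text for child in msg} for msg in root.findall("msg")]
print(records)   # [{'id': '1', 'text': 'first'}, {'id': '2', 'text': 'second'}]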

Configuring an Amazon S3 Origin

Configure an Amazon S3 origin to read data from objects in Amazon S3.
  1. In the Properties panel, on the General tab, configure the following properties:
    General Property Description
    Name Stage name.
    Description Optional description.
    Produce Events Generates event records when events occur. Use for event handling.
    On Record Error Error record handling for the stage:
    • Discard - Discards the record.
    • Send to Error - Sends the record to the pipeline for error handling.
    • Stop Pipeline - Stops the pipeline.
  2. On the Amazon S3 tab, configure the following properties:
    Amazon S3 Property Description
    Connection Connection that defines the information required to connect to an external system.

    To connect to an external system, you can select a connection that contains the details, or you can directly enter the details in the pipeline. When you select a connection, Control Hub hides other properties so that you cannot directly enter connection details in the pipeline.

    Authentication Method Authentication method used to connect to Amazon Web Services (AWS):
    • AWS Keys - Authenticates using an AWS access key pair.
    • Instance Profile - Authenticates using an instance profile associated with the Data Collector EC2 instance.
    • None - Connects to a public bucket using no authentication.
    Access Key ID AWS access key ID. Required when using AWS keys to authenticate with AWS.
    Secret Access Key AWS secret access key. Required when using AWS keys to authenticate with AWS.
    Tip: To secure sensitive information such as access key pairs, you can use runtime resources or credential stores. For more information about credential stores, see Credential Stores in the Data Collector documentation.
    Assume Role Temporarily assumes another role to authenticate with AWS.
    Role ARN

    Amazon resource name (ARN) of the role to assume, entered in the following format:

    arn:aws:iam::<account_id>:role/<role_name>

    Where <account_id> is the ID of your AWS account and <role_name> is the name of the role to assume. You must create and attach an IAM trust policy to this role that allows the role to be assumed.

    Available when assuming another role.

    Role Session Name

    Optional name for the session created by assuming a role. Overrides the default unique identifier for the session.

    Available when assuming another role.

    Session Timeout

    Maximum number of seconds for each session created by assuming a role. The session is refreshed if the pipeline continues to run for longer than this amount of time.

    Set to a value between 3,600 seconds and 43,200 seconds.

    Available when assuming another role.

    Set Session Tags

    Sets a session tag to record the name of the currently logged in StreamSets user that starts the pipeline or the job for the pipeline. AWS IAM verifies that the user account set in the session tag can assume the specified role.

    Select only when the IAM trust policy attached to the role to be assumed uses session tags and restricts the session tag values to specific user accounts.

    When cleared, the connection does not set a session tag.

    Available when assuming another role.

    Use Specific Region Specify the AWS region or endpoint to connect to.

    When cleared, the stage uses the Amazon S3 default global endpoint, s3.amazonaws.com.

    Region AWS region to connect to. Select one of the available regions. To specify an endpoint to connect to, select Other.
    Endpoint Endpoint to connect to when you select Other for the region. Enter the endpoint name.
    Use Custom Endpoint Specify a specific signing region when connecting to a custom endpoint.

    When cleared, the stage uses the region specified in the endpoint.

    Signing Region AWS region used by the custom endpoint.
    Bucket Bucket that contains the objects to be read.
    Note: The bucket name must be DNS compliant. For more information about bucket naming conventions, see the Amazon S3 documentation.
    Common Prefix Optional common prefix that describes the location of the objects. When defined, the common prefix acts as a root for the prefix pattern.
    Prefix Pattern Prefix pattern that describes the objects to be processed.

    You can include the entire path to the objects. You can also use Ant-style path patterns to read objects recursively.

    External ID External ID included in an IAM trust policy that allows the specified role to be assumed.

    Available when assuming another role.

    Delimiter Delimiter used by Amazon S3 to define the prefix hierarchy.

    Default is slash ( / ).

    Include Metadata Includes system-defined and user-defined metadata in record header attributes.
    Read Order The order to use when reading objects:
    • Lexicographically Ascending Key Names - Reads objects in lexicographically ascending order based on key name.
    • Last Modified Timestamp - Reads objects in ascending order based on the last-modified timestamp. When objects have matching timestamps, reads objects in lexicographically ascending order based on key names.

    For best performance when reading a large number of objects, use lexicographical order based on key names.

    File Pool Size Maximum number of files that the origin stores in memory for processing after loading and sorting all files present on S3. Increasing this number can improve pipeline performance when Data Collector resources permit.

    Default is 100.

    Buffer Limit (KB) Maximum buffer size. The buffer size determines the size of the record that can be processed.

    Decrease when memory on the Data Collector machine is limited. Increase to process larger records when memory is available.

    Default is 128 KB.

    File Processing Delay (ms)

    The minimum number of milliseconds that must pass from the time a file is created before it is processed.

    Default is 10000 milliseconds.

    Max Batch Size (records) Maximum number of records processed at one time. Honors values up to the Data Collector maximum batch size.

    Default is 1000. The Data Collector maximum batch size defaults to 1000.

    Max Batch Wait Time (ms) Number of milliseconds to wait before sending a partial or empty batch.
  3. To use server-side encryption, on the SSE tab, configure the following properties:
    SSE Property Description
    Use Server-Side Encryption Enables the use of server-side encryption.
    Customer Encryption Key A Base64 encoded 256-bit encryption key.
    Customer Encryption Key MD5 A Base64 encoded 128-bit MD5 digest of the encryption key using RFC 1321.
  4. On the Error Handling tab, configure the following properties:
    Error Handling Property Description
    Error Handling Option Action taken when an error occurs while processing an object:
    • None - Keeps the object in place.
    • Archive - Copies or moves the object to another prefix or bucket.
    • Delete - Deletes the object.
    When defining the error handling, consider the following guidelines:
    • Do not set to None when post processing archives or deletes objects.
    • As a best practice, set to Archive when post processing archives objects.
    Archiving Option Action taken when archiving an object that cannot be processed.

    You can copy or move the object to another prefix or bucket. When you use another prefix, enter the prefix. When you use another bucket, enter a prefix and bucket.

    Copying the object leaves the original object in place.

    Error Prefix Prefix for the objects that cannot be processed.
    Error Bucket Bucket for the objects that cannot be processed.
  5. On the Post Processing tab, configure the following properties:
    Post Processing Property Description
    Post Processing Option Action taken after successfully processing an object:
    • None - Keep the object in place.
    • Archive - Copy or move the object to another location.
    • Delete - Delete the object.
    Archiving Option Action to take when archiving a processed object.

    You can copy or move the object to another prefix or bucket. When you use another prefix, enter the prefix. When you use another bucket, enter a prefix and bucket.

    Copying the object leaves the original object in place.

    Post Process Prefix Prefix for processed objects.
    Post Process Bucket Bucket for processed objects.
  6. On the Advanced tab, optionally configure the number of threads and proxy information:
    Advanced Property Description
    Number of Threads Number of threads the origin generates and uses for multithreaded processing. Default is 1.
    Connection Timeout Seconds to wait for a response before closing the connection.
    Socket Timeout Seconds to wait for a response to a query.
    Retry Count Maximum number of times to retry requests.
    Use Proxy Specifies whether to use a proxy to connect.
    Proxy Host Proxy host.
    Proxy Port Proxy port.
    Proxy User User name for proxy credentials.
    Proxy Password Password for proxy credentials.
    Tip: To secure sensitive information such as user names and passwords, you can use runtime resources or credential stores. For more information about credential stores, see Credential Stores in the Data Collector documentation.
    Proxy Domain Optional domain name for the proxy server.
    Proxy Workstation Optional workstation for the proxy server.
  7. On the Data Format tab, configure the following property:
    Data Format Property Description
    Data Format Data format for source files. Use one of the following formats:
    • Avro
    • Binary
    • Delimited
    • Excel
    • JSON
    • Log
    • Parquet
    • Protobuf
    • SDC Record
    • Text
    • Whole File
    • XML
  8. For Avro data, on the Data Format tab, configure the following properties:
    Avro Property Description
    Avro Schema Location Location of the Avro schema definition to use when processing data:
    • Message/Data Includes Schema - Use the schema in the file.
    • In Pipeline Configuration - Use the schema provided in the stage configuration.
    • Confluent Schema Registry - Retrieve the schema from Confluent Schema Registry.

    Using a schema in the stage configuration or in Confluent Schema Registry can improve performance.

    Avro Schema Avro schema definition used to process the data. Overrides any existing schema definitions associated with the data.

    You can optionally use the runtime:loadResource function to load a schema definition stored in a runtime resource file.

    Schema Registry URLs Confluent Schema Registry URLs used to look up the schema. To add a URL, click Add and then enter the URL in the following format:
    http://<host name>:<port number>
    Basic Auth User Info User information needed to connect to Confluent Schema Registry when using basic authentication.

    Enter the key and secret from the schema.registry.basic.auth.user.info setting in Schema Registry using the following format:

    <key>:<secret>
    Tip: To secure sensitive information such as user names and passwords, you can use runtime resources or credential stores. For more information about credential stores, see Credential Stores in the Data Collector documentation.
    Lookup Schema By Method used to look up the schema in Confluent Schema Registry:
    • Subject - Look up the specified Avro schema subject.
    • Schema ID - Look up the specified Avro schema ID.
    Overrides any existing schema definitions associated with the data.
    Schema Subject Avro schema subject to look up in Confluent Schema Registry.

    If the specified subject has multiple schema versions, the origin uses the latest schema version for that subject. To use an older version, find the corresponding schema ID, and then set the Lookup Schema By property to Schema ID.

    Schema ID Avro schema ID to look up in Confluent Schema Registry.
  9. For binary data, on the Data Format tab, configure the following properties:
    Binary Property Description
    Compression Format The compression format of the files:
    • None - Processes only uncompressed files.
    • Compressed File - Processes files compressed by the supported compression formats.
    • Archive - Processes files archived by the supported archive formats.
    • Compressed Archive - Processes files archived and compressed by the supported archive and compression formats.
    File Name Pattern within Compressed Directory For archive and compressed archive files, file name pattern that represents the files to process within the compressed directory. You can use UNIX-style wildcards, such as an asterisk or question mark. For example, *.json.

    Default is *, which processes all files.

    Max Data Size (bytes) Maximum number of bytes in the message. Larger messages cannot be processed or written to error.
  10. For delimited data, on the Data Format tab, configure the following properties:
    Delimited Property Description
    Header Line Indicates whether a file contains a header line, and whether to use the header line.
    Delimiter Format Type Delimiter format type. Use one of the following options:
    • Default CSV - File that includes comma-separated values. Ignores empty lines in the file.
    • RFC4180 CSV - Comma-separated file that strictly follows RFC4180 guidelines.
    • MS Excel CSV - Microsoft Excel comma-separated file.
    • MySQL CSV - MySQL comma-separated file.
    • Tab-Separated Values - File that includes tab-separated values.
    • PostgreSQL CSV - PostgreSQL comma-separated file.
    • PostgreSQL Text - PostgreSQL text file.
    • Custom - File that uses user-defined delimiter, escape, and quote characters.
    • Multi Character Delimited - File that uses multiple user-defined characters to delimit fields and lines, and single user-defined escape and quote characters.

    Available when using the Apache Commons parser type.

    Multi Character Field Delimiter Characters that delimit fields.

    Default is two pipe characters (||).

    Available when using the Apache Commons parser with the multi-character delimiter format.

    Multi Character Line Delimiter Characters that delimit lines or records.

    Default is the newline character (\n).

    Available when using the Apache Commons parser with the multi-character delimiter format.

    Delimiter Character Delimiter character. Select one of the available options or use Other to enter a custom character.

    You can enter a Unicode control character using the format \uNNNN, where N is a hexadecimal digit from the numbers 0-9 or the letters A-F. For example, enter \u0000 to use the null character as the delimiter or \u2028 to use a line separator as the delimiter.

    Default is the pipe character ( | ).

    Available when using the Apache Commons parser with a custom delimiter format.

    Field Separator One or more characters to use as delimiter characters between columns.

    Available when using the Univocity parser.

    Escape Character Escape character.

    Available when using the Apache Commons parser with the custom or multi-character delimiter format. Also available when using the Univocity parser.

    Quote Character Quote character.

    Available when using the Apache Commons parser with the custom or multi-character delimiter format. Also available when using the Univocity parser.

    Line Separator Line separator.

    Available when using the Univocity parser.

    Allow Comments Allows commented data to be ignored for custom delimiter format.

    Available when using the Univocity parser.

    Comment Character

    Character that marks a comment when comments are enabled for custom delimiter format.

    Available when using the Univocity parser.

    Enable Comments Allows commented data to be ignored for custom delimiter format.

    Available when using the Apache Commons parser.

    Comment Marker Character that marks a comment when comments are enabled for custom delimiter format.

    Available when using the Apache Commons parser.

    Lines to Skip Number of lines to skip before reading data.
    Compression Format The compression format of the files:
    • None - Processes only uncompressed files.
    • Compressed File - Processes files compressed by the supported compression formats.
    • Archive - Processes files archived by the supported archive formats.
    • Compressed Archive - Processes files archived and compressed by the supported archive and compression formats.
    File Name Pattern within Compressed Directory For archive and compressed archive files, file name pattern that represents the files to process within the compressed directory. You can use UNIX-style wildcards, such as an asterisk or question mark. For example, *.json.

    Default is *, which processes all files.

    CSV Parser Parser to use to process delimited data:
    • Apache Commons - Provides robust parsing and a wide range of delimited format types.
    • Univocity - Can provide faster processing for wide delimited files, such as those with over 200 columns.

    Default is Apache Commons.

    Max Columns Maximum number of columns to process per record.

    Available when using the Univocity parser.

    Max Characters per Column Maximum number of characters to process in each column.

    Available when using the Univocity parser.

    Skip Empty Lines Allows skipping empty lines.

    Available when using the Univocity parser.

    Allow Extra Columns Allows processing records with more columns than exist in the header line.

    Available when using the Apache Commons parser to process data with a header line.

    Extra Column Prefix Prefix to use for any additional columns. Extra columns are named using the prefix and sequential increasing integers as follows: <prefix><integer>.

    For example, _extra_1. Default is _extra_.

    Available when using the Apache Commons parser to process data with a header line while allowing extra columns.

    Max Record Length (chars) Maximum length of a record in characters. Longer records are not read.

    This property can be limited by the Data Collector parser buffer size. For more information, see Maximum Record Size.

    Available when using the Apache Commons parser.

    Ignore Empty Lines Allows empty lines to be ignored.

    Available when using the Apache Commons parser with the custom delimiter format.

    Root Field Type Root field type to use:
    • List-Map - Generates an indexed list of data. Enables you to use standard functions to process data. Use for new pipelines.
    • List - Generates a record with an indexed list with a map for header and value. Requires the use of delimited data functions to process data. Use only to maintain pipelines created before 1.1.0.
    Parse NULLs Replaces the specified string constant with null values.
    NULL Constant String constant to replace with null values.
    Charset Character encoding of the files to be processed.
    Ignore Control Characters Removes all ASCII control characters except for the tab, line feed, and carriage return characters.
  11. For Excel files, on the Data Format tab, configure the following properties:
    Excel Property Description
    Excel Header Option Indicates whether files include a header row and whether to ignore the header row. A header row must be the first row of a file.
    Skip Cells With No Header Skips processing cells when they do not have a corresponding header value.

    Available when Excel Header Option is set to With Header Line.

    Include Cells With Empty Value Includes empty cells in records.
    Read All Sheets Reads all sheets in the Excel file.
    Import Sheets Name of sheet to read. Using simple or bulk edit mode, click Add Another to add additional sheets.

    Available when Read All Sheets is not selected.

  12. For JSON data, on the Data Format tab, configure the following properties:
    JSON Property Description
    JSON Content Type of JSON content. Use one of the following options:
    • JSON array of objects
    • Multiple JSON objects
    Compression Format The compression format of the files:
    • None - Processes only uncompressed files.
    • Compressed File - Processes files compressed by the supported compression formats.
    • Archive - Processes files archived by the supported archive formats.
    • Compressed Archive - Processes files archived and compressed by the supported archive and compression formats.
    File Name Pattern within Compressed Directory For archive and compressed archive files, file name pattern that represents the files to process within the compressed directory. You can use UNIX-style wildcards, such as an asterisk or question mark. For example, *.json.

    Default is *, which processes all files.

    Max Object Length (chars) Maximum number of characters in a JSON object.

    Longer objects are diverted to the pipeline for error handling.

    This property can be limited by the Data Collector parser buffer size. For more information, see Maximum Record Size.

    Charset Character encoding of the files to be processed.
    Ignore Control Characters Removes all ASCII control characters except for the tab, line feed, and carriage return characters.
  13. For log data, on the Data Format tab, configure the following properties:
    Log Property Description
    Log Format Format of the log files. Use one of the following options:
    • Common Log Format
    • Combined Log Format
    • Apache Error Log Format
    • Apache Access Log Custom Format
    • Regular Expression
    • Grok Pattern
    • Log4j
    • Common Event Format (CEF)
    • Log Event Extended Format (LEEF)
    Compression Format The compression format of the files:
    • None - Processes only uncompressed files.
    • Compressed File - Processes files compressed by the supported compression formats.
    • Archive - Processes files archived by the supported archive formats.
    • Compressed Archive - Processes files archived and compressed by the supported archive and compression formats.
    File Name Pattern within Compressed Directory For archive and compressed archive files, file name pattern that represents the files to process within the compressed directory. You can use UNIX-style wildcards, such as an asterisk or question mark. For example, *.json.

    Default is *, which processes all files.

    Max Line Length Maximum length of a log line. The origin truncates longer lines.

    This property can be limited by the Data Collector parser buffer size. For more information, see Maximum Record Size.

    Retain Original Line Determines how to treat the original log line. Select to include the original log line as a field in the resulting record.

    By default, the original line is discarded.

    Charset Character encoding of the files to be processed.
    Ignore Control Characters Removes all ASCII control characters except for the tab, line feed, and carriage return characters.
    • When you select Apache Access Log Custom Format, use Apache log format strings to define the Custom Log Format.
    • When you select Regular Expression, enter the regular expression that describes the log format, and then map the fields that you want to include to each regular expression group.
    • When you select Grok Pattern, you can use the Grok Pattern Definition field to define custom grok patterns. You can define a pattern on each line.

      In the Grok Pattern field, enter the pattern to use to parse the log. You can use a predefined grok pattern or create a custom grok pattern using patterns defined in Grok Pattern Definition.

      For more information about defining grok patterns and supported grok patterns, see Defining Grok Patterns.

    • When you select Log4j, define the following properties:
      Log4j Property Description
      On Parse Error Determines how to handle information that cannot be parsed:
      • Skip and Log Error - Skips reading the line and logs a stage error.
      • Skip, No Error - Skips reading the line and does not log an error.
      • Include as Stack Trace - Includes information that cannot be parsed as a stack trace to the previously-read log line. The information is added to the message field for the last valid log line.
      Use Custom Log Format Allows you to define a custom log format.
      Custom Log4J Format Use log4j variables to define a custom log format.
  14. For Parquet data, on the Data Format tab, configure the following property:
    Parquet Property Description
    Skip Union Indexes Omits header attributes identifying the index number of the element in a union that data is read from.

    If a schema contains many unions and the pipeline does not depend on index information, you can enable this property to avoid long processing times associated with storing a large number of indexes.

  15. For protobuf data, on the Data Format tab, configure the following properties:
    Protobuf Property Description
    Protobuf Descriptor File Descriptor file (.desc) to use. The descriptor file must be in the Data Collector resources directory, $SDC_RESOURCES.

    For more information about environment variables, see Data Collector Environment Configuration in the Data Collector documentation. For information about generating the descriptor file, see Protobuf Data Format Prerequisites.

    Message Type The fully-qualified name for the message type to use when reading data.

    Use the following format: <package name>.<message type>.

    Use a message type defined in the descriptor file.
    Delimited Messages Indicates if a file might include more than one protobuf message.
    Compression Format The compression format of the files:
    • None - Processes only uncompressed files.
    • Compressed File - Processes files compressed by the supported compression formats.
    • Archive - Processes files archived by the supported archive formats.
    • Compressed Archive - Processes files archived and compressed by the supported archive and compression formats.
    File Name Pattern within Compressed Directory For archive and compressed archive files, file name pattern that represents the files to process within the compressed directory. You can use UNIX-style wildcards, such as an asterisk or question mark. For example, *.json.

    Default is *, which processes all files.

  16. For SDC Record data, on the Data Format tab, configure the following properties:
    SDC Record Property Description
    Compression Format The compression format of the files:
    • None - Processes only uncompressed files.
    • Compressed File - Processes files compressed by the supported compression formats.
    • Archive - Processes files archived by the supported archive formats.
    • Compressed Archive - Processes files archived and compressed by the supported archive and compression formats.
    File Name Pattern within Compressed Directory For archive and compressed archive files, file name pattern that represents the files to process within the compressed directory. You can use UNIX-style wildcards, such as an asterisk or question mark. For example, *.json.

    Default is *, which processes all files.

  17. For text data, on the Data Format tab, configure the following properties:
    Text Property Description
    Compression Format The compression format of the files:
    • None - Processes only uncompressed files.
    • Compressed File - Processes files compressed by the supported compression formats.
    • Archive - Processes files archived by the supported archive formats.
    • Compressed Archive - Processes files archived and compressed by the supported archive and compression formats.
    File Name Pattern within Compressed Directory For archive and compressed archive files, file name pattern that represents the files to process within the compressed directory. You can use UNIX-style wildcards, such as an asterisk or question mark. For example, *.json.

    Default is *, which processes all files.

    Max Line Length Maximum number of characters allowed for a line. Longer lines are truncated.

    Adds a boolean field to the record to indicate if it was truncated. The field name is Truncated.

    This property can be limited by the Data Collector parser buffer size. For more information, see Maximum Record Size.

    Use Custom Delimiter Uses custom delimiters to define records instead of line breaks.
    Custom Delimiter One or more characters to use to define records.
    Include Custom Delimiter Includes delimiter characters in the record.
    Charset Character encoding of the files to be processed.
    Ignore Control Characters Removes all ASCII control characters except for the tab, line feed, and carriage return characters.
  18. For whole files, on the Data Format tab, configure the following properties:
    Whole File Property Description
    Verify Checksum Verifies the checksum during the read.
    Buffer Size (bytes) Size of the buffer to use to transfer data.
    Rate per Second Transfer rate to use.

    Enter a number to specify a rate in bytes per second. Use an expression to specify a rate that uses a different unit of measure per second, for example, ${5 * MB}. Use -1 to opt out of this property.

    By default, the origin does not use a transfer rate.

  19. For XML data, on the Data Format tab, configure the following properties:
    XML Property Description
    Delimiter Element
    Delimiter to use to generate records. Omit a delimiter to treat the entire XML document as one record. Use one of the following:
    • An XML element directly under the root element.

      Use the XML element name without surrounding angle brackets ( < > ). For example, msg instead of <msg>.

    • A simplified XPath expression that specifies the data to use.

      Use a simplified XPath expression to access data deeper in the XML document or data that requires a more complex access method.

      For more information about valid syntax, see Simplified XPath Syntax.

    Compression Format The compression format of the files:
    • None - Processes only uncompressed files.
    • Compressed File - Processes files compressed by the supported compression formats.
    • Archive - Processes files archived by the supported archive formats.
    • Compressed Archive - Processes files archived and compressed by the supported archive and compression formats.
    File Name Pattern within Compressed Directory For archive and compressed archive files, file name pattern that represents the files to process within the compressed directory. You can use UNIX-style wildcards, such as an asterisk or question mark. For example, *.json.

    Default is *, which processes all files.

    Preserve Root Element Includes the root element in the generated records.

    When omitting a delimiter to generate a single record, the root element is the root element of the XML document.

    When specifying a delimiter to generate multiple records, the root element is the XML element specified as the delimiter element or is the last XML element in the simplified XPath expression specified as the delimiter element.

    Include Field XPaths Includes the XPath to each parsed XML element and XML attribute in field attributes. Also includes each namespace in an xmlns record header attribute.

    When not selected, this information is not included in the record. By default, the property is not selected.

    Note: Field attributes and record header attributes are written to destination systems automatically only when you use the SDC RPC data format in destinations. For more information about working with field attributes and record header attributes, and how to include them in records, see Field Attributes and Record Header Attributes.
    Namespaces Namespace prefix and URI to use when parsing the XML document. Define namespaces when the XML element being used includes a namespace prefix or when the XPath expression includes namespaces.

    For information about using namespaces with an XML element, see Using XML Elements with Namespaces.

    For information about using namespaces with XPath expressions, see Using XPath Expressions with Namespaces.

    Using simple or bulk edit mode, click the Add icon to add additional namespaces.

    Output Field Attributes Includes XML attributes and namespace declarations in the record as field attributes. When not selected, XML attributes and namespace declarations are included in the record as fields.
    Note: Field attributes are automatically included in records written to destination systems only when you use the SDC RPC data format in the destination. For more information about working with field attributes, see Field Attributes.

    By default, the property is not selected.

    Max Record Length (chars)

    The maximum number of characters in a record. Longer records are diverted to the pipeline for error handling.

    This property can be limited by the Data Collector parser buffer size. For more information, see Maximum Record Size.

    Charset Character encoding of the files to be processed.
    Ignore Control Characters Removes all ASCII control characters except for the tab, line feed, and carriage return characters.