Azure Data Lake Storage Gen2
The Azure Data Lake Storage Gen2 origin reads data from Microsoft Azure Data Lake Storage Gen2. The origin can create multiple threads to enable parallel processing in a multithreaded pipeline. Use the origin only in pipelines configured for standalone execution mode. For information about supported versions, see Supported Systems and Versions.
The origin uses the Microsoft Azure Data Lake Storage Gen2 API to request a list of objects that are located in a storage container or file system and that match a pattern in a directory. As Azure returns pages of the requested objects, the origin launches threads to read and process the data. The objects must be fully written.
After processing an object or upon encountering errors, the origin can keep, archive, or delete the object. When archiving after processing, the origin moves the object; when archiving after an error, the origin can copy or move the object.
The origin can generate events for an event stream. For more information about dataflow triggers and the event framework, see Dataflow Triggers Overview.
When you configure the Azure Data Lake Storage Gen2 origin, you specify connection information for Azure Data Lake Storage Gen2, including the storage container or file system and authentication method. You also specify the number of objects for Azure to list on a page and the maximum time to wait for Azure to return the requested objects.
You specify information about the objects to read and how to process them, including the directory that contains the objects, the matching name pattern, the order to read the objects, the number of threads used to process the data, and the batch size.
You also specify what to do with processed objects and how to handle objects that cannot be processed.
When a pipeline stops, the Azure Data Lake Storage Gen2 origin notes where it stops reading. When the pipeline starts again, the origin continues reading from where it stopped by default. You can reset the origin to read and process all requested objects.
Prerequisites
- If necessary, create a new Azure Active Directory application for Data Collector.
For information about creating a new application, see the Azure documentation.
- Ensure that the Azure Active Directory Data Collector application has the appropriate access control to perform the necessary tasks.
The Data Collector application requires Read and Execute permissions to read data in Azure. If also writing to Azure, the application requires Write permission as well.
For information about configuring Gen2 access control, see the Azure documentation.
- Retrieve information from Azure to configure the origin.
After you complete all of the prerequisite tasks, you can configure the Azure Data Lake Storage Gen2 origin.
Retrieve Authentication Information
The Azure Data Lake Storage Gen2 origin can use different methods to authenticate connections with Azure.
- OAuth with Service Principal
- Connections made with OAuth with Service Principal authentication require the following information:
- Application ID - Application ID for the Azure Active Directory Data Collector application. Also known as the client ID.
For information on accessing the application ID from the Azure portal, see the Azure documentation.
- Tenant ID - Tenant ID for the Azure Active Directory Data Collector application. Also known as the directory ID.
For information on accessing the tenant ID from the Azure portal, see the Azure documentation.
- Application Key - Authentication key or client secret for the Azure Active Directory application. Also known as the client secret.
For information on accessing the application key from the Azure portal, see the Azure documentation.
- Azure Managed Identity
- Connections made with Azure Managed Identity authentication require the following information:
- Application ID - Application ID for the Azure Active Directory Data Collector application. Also known as the client ID.
For information on accessing the application ID from the Azure portal, see the Azure documentation.
- Shared Key
- Connections made with Shared Key authentication require the following information:
- Account Shared Key - Shared access key that Azure generated for the storage account.
For more information on accessing the shared access key from the Azure portal, see the Azure documentation.
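The retrieved values map directly to properties on the origin's Azure tab. As an optional check outside Data Collector, the following minimal sketch uses the same OAuth with Service Principal values to connect and list paths. It assumes the azure-identity and azure-storage-file-datalake Python packages, and every placeholder value is an assumption to replace with your own:

```python
# A minimal verification sketch, independent of Data Collector.
# Assumes the azure-identity and azure-storage-file-datalake packages;
# every placeholder value below is an assumption to replace.
from azure.identity import ClientSecretCredential
from azure.storage.filedatalake import DataLakeServiceClient

tenant_id = "<tenant-id>"                # Tenant ID (directory ID)
application_id = "<application-id>"      # Application ID (client ID)
application_key = "<application-key>"    # Application Key (client secret)
account_fqdn = "<storage account name>.dfs.core.windows.net"

credential = ClientSecretCredential(tenant_id, application_id, application_key)
service = DataLakeServiceClient(account_url=f"https://{account_fqdn}",
                                credential=credential)

# List paths from the storage container / file system that the origin will read.
file_system = service.get_file_system_client("<storage-container-or-file-system>")
for path in file_system.get_paths(recursive=True):
    print(path.name)
```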
Common Path, Path Pattern, and Wildcards
The Azure Data Lake Storage Gen2 origin concatenates the common path and the path pattern to define the objects that the origin reads. You can specify an exact path pattern, or you can specify an Ant-style path pattern to have the origin read multiple objects recursively. Ant-style path patterns can include the following wildcards:
- Question mark (?) to match a single character
- Asterisk (*) to match zero or more characters
- Double asterisks (**) to match zero or more directories
- Read all log files in all nested directories
- The following table presents two options for configuring the properties to read all log files in US/East/MD/ and all nested directories:

Property | Option 1 | Option 2 |
---|---|---|
Common Path | US/East/MD/ | US/ |
Path Pattern | **/*.log | East/MD/**/*.log |

- Read all log files in a particular subdirectory
- The following table presents two options for configuring the properties to read all log files in a weblogs subdirectory nested anywhere in the hierarchy under US/West:

Property | Option 1 | Option 2 |
---|---|---|
Common Path | US/West/ | US/ |
Path Pattern | **/weblogs/*.log | West/**/weblogs/*.log |
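For illustration, the following sketch shows how a common path and an Ant-style path pattern combine to select object names. The ant_to_regex helper is an assumption made for the example and approximates, rather than reproduces, the origin's actual matching logic:

```python
# Illustration only: approximates how a common path plus an Ant-style path
# pattern selects objects. The ant_to_regex helper is an assumption for the
# example, not the origin's actual matching logic.
import re

def ant_to_regex(pattern: str) -> str:
    regex, i = "", 0
    while i < len(pattern):
        if pattern.startswith("**/", i):
            regex += "(?:.*/)?"   # ** matches zero or more directories
            i += 3
        elif pattern[i] == "*":
            regex += "[^/]*"      # * matches zero or more characters in a segment
            i += 1
        elif pattern[i] == "?":
            regex += "[^/]"       # ? matches a single character
            i += 1
        else:
            regex += re.escape(pattern[i])
            i += 1
    return regex + "$"

common_path, path_pattern = "US/East/MD/", "**/*.log"
matcher = re.compile(re.escape(common_path) + ant_to_regex(path_pattern))

for name in ("US/East/MD/server.log",          # matches
             "US/East/MD/2023/01/server.log",  # matches (nested directory)
             "US/East/MD/server.txt"):         # does not match
    print(name, bool(matcher.match(name)))
```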
Read Order
The Azure Data Lake Storage Gen2 origin reads objects in ascending order based on the object name or the last modified timestamp. For best performance when reading a large number of objects, configure the origin to read objects based on the object name. Azure always returns objects sorted by name. To read in timestamp order, the origin sorts the returned objects by timestamp. Therefore, timestamp ordering requires that all objects be returned before the origin starts reading.
- Lexicographically Ascending Names
- The origin reads objects in lexicographically ascending order based on
object names. Lexicographically ascending order reads the numbers 1 through
11 as follows:
1, 10, 11, 2, 3, 4... 9
To read objects with names that sort before already processed objects, reset the origin to read all available objects.
- Last Modified Timestamp
- The origin reads objects in ascending order based on the last modified
timestamp. After Azure returns all objects from a request, the origin sorts
the objects by timestamp and then reads them in chronological order. If two
or more objects have the same timestamp, the origin reads those objects in
lexicographically increasing order by object name.
To read objects that include a timestamp earlier than already processed objects, reset the origin to read all available objects.
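As a quick illustration of the two read orders, the following sketch sorts a few sample objects first by name and then by last-modified timestamp; the names and timestamps are invented for the example:

```python
# Illustration only: contrasts the two read orders using invented objects.
objects = [
    {"name": "logs/2.log",  "last_modified": 1700000300},
    {"name": "logs/10.log", "last_modified": 1700000100},
    {"name": "logs/1.log",  "last_modified": 1700000200},
]

# Lexicographically Ascending Names: 1.log, 10.log, 2.log
print([o["name"] for o in sorted(objects, key=lambda o: o["name"])])

# Last Modified Timestamp, ties broken by name: 10.log, 1.log, 2.log
print([o["name"] for o in sorted(objects,
                                 key=lambda o: (o["last_modified"], o["name"]))])
```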
Buffer Limit and Error Handling
The Azure Data Lake Storage Gen2 origin uses a buffer to read objects into memory to produce records. The size of the buffer determines the maximum size of the record that can be processed.
The buffer limit helps prevent out-of-memory errors. Decrease the buffer limit when memory on the Data Collector machine is limited. Increase the buffer limit to process larger records when memory is available.
When a record is larger than the buffer limit, the origin handles the object based on the error handling configured for the stage:
- Discard
- The origin discards the record and all remaining records in the object, and then continues processing the next object.
- Send to Error
- With a buffer limit error, the origin cannot send the record to the pipeline
for error handling because it is unable to fully process the record.
Instead, the origin displays a message in Monitor mode indicating that a buffer overrun error occurred. The message includes the object and offset where the buffer overrun error occurred. The information displays in the pipeline history and displays as an alert when you monitor the pipeline.
If an error container and path are configured for the stage, the origin moves the object to that location and continues processing the next object.
- Stop Pipeline
- The origin stops the pipeline and displays a message in Monitor mode indicating that a buffer overrun error occurred. The message includes the object and offset where the buffer overrun error occurred. The information displays as an alert and in the pipeline history.
Multithreaded Processing
The Azure Data Lake Storage Gen2 origin uses multiple concurrent threads to process data based on the Number of Threads property.
Each thread reads data from a single object, and each object can have a maximum of one thread read from it at a time. The object read order is based on the configuration for the Read Order property.
As the pipeline runs, each thread connects to the origin system, creates a batch of data, and passes the batch to an available pipeline runner. A pipeline runner is a sourceless pipeline instance - an instance of the pipeline that includes all of the processors, executors, and destinations in the pipeline and handles all pipeline processing after the origin.
Each pipeline runner processes one batch at a time, just like a pipeline that runs on a single thread. When the flow of data slows, the pipeline runners wait idly until they are needed, generating an empty batch at regular intervals. You can configure the Runner Idle Time pipeline property to specify the interval or to opt out of empty batch generation.
Multithreaded pipelines preserve the order of records within each batch, just like a single-threaded pipeline. But since batches are processed by different pipeline runners, the order that batches are written to destinations is not ensured.
For example, suppose you configure the origin to use five threads to read objects in the order of last-modified timestamp. When you start the pipeline, the origin creates five threads, and Data Collector creates a matching number of pipeline runners.
The origin assigns a thread to each of the five oldest objects. Each thread processes its assigned object, passing batches of data to the origin. Upon receiving data, the origin passes a batch to each of the pipeline runners for processing.
After a thread completes processing an object, the origin assigns the thread to the next object based on the last-modified timestamp, until all objects are processed.
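The following sketch models this thread-per-object behavior conceptually; it is not Data Collector's implementation, and the object names and read_object function are assumptions made for the example:

```python
# Conceptual model only, not Data Collector internals: five worker threads,
# each reading one object at a time; a finished thread takes the next object
# in read order. Object names and read_object are assumptions for the example.
from concurrent.futures import ThreadPoolExecutor

objects_in_read_order = [f"logs/file-{i}.log" for i in range(12)]

def read_object(name: str) -> str:
    # Placeholder for reading one object and producing batches of records.
    return f"processed {name}"

with ThreadPoolExecutor(max_workers=5) as pool:   # Number of Threads = 5
    for result in pool.map(read_object, objects_in_read_order):
        print(result)
```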
For more information about multithreaded pipelines, see Multithreaded Pipeline Overview.
Record Header Attributes
When the Azure Data Lake Storage Gen2 origin processes Avro data, it includes the Avro schema in an avroSchema record header attribute. When the origin processes Parquet data and Skip Union Indexes is not enabled, it generates an avro.union.typeIndex./id record header attribute identifying the index number of the element in a union that the data is read from.
You can also configure the origin to include Azure Data Lake Storage Gen2 object metadata in record header attributes.
You can use the record:attribute or record:attributeOrDefault functions to access the information in the attributes. For more information about working with record header attributes, see Working with Header Attributes.
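For example, the expression ${record:attribute('avroSchema')} returns the value of the avroSchema record header attribute, and ${record:attributeOrDefault('avroSchema', 'none')} returns the string none when the attribute is not set.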
Object Metadata in Record Header Attributes
You can include Azure Data Lake Storage Gen2 object metadata in record header attributes. Include metadata when you want to use the information to help process records. For example, you might include metadata if you want to route records to different branches of a pipeline based on the last-modified timestamp.
- System-defined metadata
- The origin includes the following system-defined metadata:
- Creation-Time
- Last-Modified
- Etag
- Content-Length
- Content-Encoding
- Content-Language
- Content-MD5
- Content-Disposition
- Cache-Control
- Custom metadata
- The origin includes the following custom metadata:
- container
- objectKey
- file
- filename
- mtime
- size
- owner
- permissions
- continuationToken
For more information about record header attributes, see Record Header Attributes.
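For example, a Stream Selector processor condition such as ${str:startsWith(record:attribute('objectKey'), 'US/East/')} could route records read from objects under a US/East/ prefix to a separate branch. The condition is a hypothetical illustration; adjust the attribute and prefix to fit your data.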
Event Generation
The Azure Data Lake Storage Gen2 origin can generate events that you can use in an event stream. With event generation enabled, the origin generates an event record each time it starts or completes reading an object, and when the configured batch wait time elapses after it processes all available data. You can use the events in any logical way. For example:
- With the Pipeline Finisher executor to
stop the pipeline and transition the pipeline to a Finished state when
the origin completes processing available data.
When you restart a pipeline stopped by the Pipeline Finisher executor, the origin continues processing from the last-saved offset unless you reset the origin.
For an example, see Stopping a Pipeline After Processing All Available Data.
- With a destination to store event information.
For an example, see Preserving an Audit Trail of Events.
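For example, when using the Pipeline Finisher executor, a common pattern is to set a precondition on the executor such as ${record:eventType() == 'no-more-data'} so that only the no-more-data event stops the pipeline.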
Event Records
Event records generated by the Azure Data Lake Storage Gen2 origin have the following event-related record header attributes:

Record Header Attribute | Description |
---|---|
sdc.event.type | Event type. Uses one of the following types: new-file, finished-file, no-more-data. |
sdc.event.version | Integer that indicates the version of the event record type. |
sdc.event.creation_timestamp | Epoch timestamp when the stage created the event. |

The origin can generate the following types of event records:
- new-file
- The Azure Data Lake Storage Gen2 origin generates a new-file event record when it starts processing a new object.
- finished-file
- The Azure Data Lake Storage Gen2 origin generates a finished-file event record when it finishes processing an object.
- no-more-data
- The Azure Data Lake Storage Gen2 origin generates a no-more-data event record when the origin completes processing all available records and the number of seconds configured for Batch Wait Time elapses without any new objects appearing to be processed.
Data Formats
The origin processes data differently based on the data format. The origin can process the following data formats:
- Avro
- Generates a record for every Avro record. Includes precision and scale field attributes for each Decimal field.
- Delimited
- Generates a record for each delimited line.
- Excel
- Generates a record for every row in the file. Can process .xls or .xlsx files.
You can configure the origin to read from all sheets in a workbook or from particular sheets in a workbook. You can specify whether files include a header row and whether to ignore the header row. You can also configure the origin to skip cells that do not have a corresponding header value. A header row must be the first row of a file. Vertical header columns are not recognized.
The origin cannot process Excel files with large numbers of rows. You can save such files as CSV files in Excel, and then use the origin to process them with the delimited data format.
- JSON
- Generates a record for each JSON object. You can process JSON files that include multiple JSON objects or a single JSON array.
- Log
- Generates a record for every log line.
- Parquet
- Generates a record for every Parquet record in the file. The file must contain the Parquet schema. The origin uses the Parquet schema to generate records.
The stage includes the Parquet schema in a parquetSchema record header attribute.
When Skip Union Indexes is not enabled, the origin generates an avro.union.typeIndex./id record header attribute identifying the index number of the element in the union that the data is read from. If a schema contains many unions and the pipeline does not depend on index information, you can enable Skip Union Indexes to avoid long processing times associated with storing a large number of indexes.
- Protobuf
- Generates a record for every protobuf message.
- SDC Record
- Generates a record for every record. Use to process records generated by a Data Collector pipeline using the SDC Record data format.
- Text
- Generates a record for each line of text or for each section of text based on a custom delimiter.
- Whole File
- Streams whole files from the origin system to the destination system. You can specify a transfer rate or use all available resources to perform the transfer.
- XML
- Generates records based on a user-defined delimiter element. Use an XML element directly under the root element or define a simplified XPath expression. If you do not define a delimiter element, the origin treats the XML file as a single record.
Configuring an Azure Data Lake Storage Gen2 Origin
Configure an Azure Data Lake Storage Gen2 origin to read data from Microsoft Azure Data Lake Storage Gen2. Be sure to complete the necessary prerequisites before you configure the origin.
-
In the Properties panel, on the General tab, configure the
following properties:
General Property Description Name Stage name. Description Optional description. Produce Events Generates event records when events occur. Use for event handling. On Record Error Error record handling for the stage: - Discard - Discards the record.
- Send to Error - Sends the record to the pipeline for error handling.
- Stop Pipeline - Stops the pipeline.
-
On the Azure tab, configure the following
properties:
Azure Property Description Account FQDN The host name of the Data Lake Storage Gen2 account. For example: <storage account name>.dfs.core.windows.net
Storage Container / File System Name of the storage container or file system that contains the data. Authentication Method Authentication method used to connect to Azure: - OAuth with Service Principal
- Azure Managed Identity
Application ID Application ID for the Azure Active Directory Data Collector application. Also known as the client ID. For information on accessing the application ID from the Azure portal, see the Azure documentation.
Available when using the OAuth with Service Principal or the Azure Managed Identity authentication method.
Endpoint Type Method to provide endpoint details. Available when using the OAuth with Service Principal authentication method.
Tenant ID Tenant ID for the Azure Active Directory Data Collector application. Also known as the directory ID. For information on accessing the tenant ID from the Azure portal, see the Azure documentation.
Available when Endpoint Type is set to Tenant ID.
Endpoint URL Endpoint URL for the Azure Active Directory Data Collector application. Default is https://login.microsoftonline.com/<tenant-id>/oauth2/token. In the URL, specify the tenant ID for the Azure Active Directory Data Collector application.
For information on accessing the tenant ID from the Azure portal, see the Azure documentation.
Available when Endpoint Type is set to Endpoint URL.
Application Key Authentication key or client secret for the Azure Active Directory application. Also known as the client secret. For information on accessing the application key from the Azure portal, see the Azure documentation.
Available when using the OAuth with Service Principal authentication method.
Account Shared Key Shared access key that Azure generated for the storage account. For more information on accessing the shared access key from the Azure portal, see the Azure documentation.
Available when using the Shared Key authentication method.
Max Results per Page Requested number of objects per page. Enter a smaller value to increase the number of pages but simplify error recovery. Timeout (ms) Maximum number of milliseconds allowed for Azure to return the requested objects. Negative values indicate no limit. Note that more time is required to return more pages. -
On the File Configuration tab, configure the following
properties:
File Configuration Property Description Common Path Common path that describes the location of the objects in the storage container or file system. The common path acts as a root path. Path Pattern Path pattern that describes the objects to be processed within the defined common path. You can include Ant-style path patterns to specify objects in nested directories.
Include Metadata Includes system-defined and custom metadata in record header attributes. Number of Threads Number of threads the origin generates and uses for multithreaded processing. Default is 1. Spooling Period (secs) Number of seconds between requests for new objects. If more time is required to retrieve new objects, the next request is delayed until the previous request is retrieved. Delimiter Character that separates path segments to define a directory hierarchy. Default is slash ( / ).
Read Order Order to read objects: - Lexicographically Ascending Names - Read objects in lexicographically ascending order based on name.
- Last Modified Timestamp - Read objects in ascending order based on the last-modified timestamp. When objects have matching timestamps, read objects in lexicographically ascending order based on names.
For best performance when reading a large number of objects, use lexicographical order based on names.
Object Pool Size Maximum number of objects that the origin stores in memory for processing. Increasing this number can improve pipeline performance when Data Collector resources permit. Default is 100.
Buffer Limit (KB) Maximum buffer size. The buffer size determines the size of the record that can be processed. Decrease when memory on the Data Collector machine is limited. Increase to process larger records when memory is available.
Default is 128 KB.
File Processing Delay (ms) The minimum number of milliseconds that must pass from the time a file is created before it is processed.
Default is 10000 milliseconds.
Max Batch Size (records) Maximum number of records processed at one time. Honors values up to the Data Collector maximum batch size. Default is 1000. The Data Collector default is 1000.
Batch Wait Time (ms) Number of milliseconds to wait before sending a partial or empty batch. -
On the Post Processing tab, configure the following
properties:
Post Processing Property Description Post-Processing Option Action taken after successfully processing an object: - None - Keep the object in place.
- Archive - Move the object to another location.
- Delete - Delete the object.
Archiving Option Method for archiving an object. The origin can move the object to another location. Important: The copy option is not supported at this time. Available when Post-Processing Option is set to Archive.
Post-Processing Path Path where successfully processed objects are archived. Available when Post-Processing Option is set to Archive.
-
On the Error Handling tab, configure the following
properties:
Error Handling Property Description Error Handling Option Action taken when an error occurs while processing an object: - None - Keep the object in place.
- Archive - Copy or move the object to another location.
- Delete - Delete the object.
When archiving processed objects, best practice is to also archive objects that cannot be processed.
Archiving Option Method for archiving an object following an error. The origin can either copy or move the object to another location.
Copying the object leaves the original object in place.
Available when Error Handling Option is set to Archive.
Error Path Path where objects are archived following an error. Available when Error Handling Option is set to Archive.
-
On the Data Format tab, configure the following
property:
Data Format Property Description Data Format Data format for source files. Use one of the following formats: - Avro
- Delimited
- Excel
- JSON
- Log
- Parquet
- Protobuf
- SDC Record
- Text
- Whole File
- XML
-
For Avro data, on the Data Format tab, configure the
following properties:
Avro Property Description Avro Schema Location Location of the Avro schema definition to use when processing data: - Message/Data Includes Schema - Use the schema in the file.
- In Pipeline Configuration - Use the schema provided in the stage configuration.
- Confluent Schema Registry - Retrieve the schema from Confluent Schema Registry.
Using a schema in the stage configuration or in Confluent Schema Registry can improve performance.
Avro Schema Avro schema definition used to process the data. Overrides any existing schema definitions associated with the data. You can optionally use the runtime:loadResource function to load a schema definition stored in a runtime resource file.
Schema Registry URLs Confluent Schema Registry URLs used to look up the schema. To add a URL, click Add and then enter the URL in the following format: http://<host name>:<port number>
Basic Auth User Info User information needed to connect to Confluent Schema Registry when using basic authentication. Enter the key and secret from the schema.registry.basic.auth.user.info setting in Schema Registry using the following format: <key>:<secret>
Tip: To secure sensitive information such as user names and passwords, you can use runtime resources or credential stores. Lookup Schema By Method used to look up the schema in Confluent Schema Registry: - Subject - Look up the specified Avro schema subject.
- Schema ID - Look up the specified Avro schema ID.
Schema Subject Avro schema subject to look up in Confluent Schema Registry. If the specified subject has multiple schema versions, the origin uses the latest schema version for that subject. To use an older version, find the corresponding schema ID, and then set the Look Up Schema By property to Schema ID.
Schema ID Avro schema ID to look up in Confluent Schema Registry. -
For delimited data, on the Data Format tab, configure the
following properties:
Delimited Property Description Header Line Indicates whether a file contains a header line, and whether to use the header line. Delimiter Format Type Delimiter format type. Use one of the following options: - Default CSV - File that includes comma-separated values. Ignores empty lines in the file.
- RFC4180 CSV - Comma-separated file that strictly follows RFC4180 guidelines.
- MS Excel CSV - Microsoft Excel comma-separated file.
- MySQL CSV - MySQL comma-separated file.
- Tab-Separated Values - File that includes tab-separated values.
- PostgreSQL CSV - PostgreSQL comma-separated file.
- PostgreSQL Text - PostgreSQL text file.
- Custom - File that uses user-defined delimiter, escape, and quote characters.
- Multi Character Delimited - File that uses multiple user-defined characters to delimit fields and lines, and single user-defined escape and quote characters.
Available when using the Apache Commons parser type.
Multi Character Field Delimiter Characters that delimit fields. Default is two pipe characters (||).
Available when using the Apache Commons parser with the multi-character delimiter format.
Multi Character Line Delimiter Characters that delimit lines or records. Default is the newline character (\n).
Available when using the Apache Commons parser with the multi-character delimiter format.
Delimiter Character Delimiter character. Select one of the available options or use Other to enter a custom character. You can enter a Unicode control character using the format \uNNNN, where N is a hexadecimal digit from the numbers 0-9 or the letters A-F. For example, enter \u0000 to use the null character as the delimiter or \u2028 to use a line separator as the delimiter.
Default is the pipe character ( | ).
Available when using the Apache Commons parser with a custom delimiter format.
Field Separator One or more characters to use as delimiter characters between columns. Available when using the Univocity parser.
Escape Character Escape character. Available when using the Apache Commons parser with the custom or multi-character delimiter format. Also available when using the Univocity parser.
Quote Character Quote character. Available when using the Apache Commons parser with the custom or multi-character delimiter format. Also available when using the Univocity parser.
Line Separator Line separator. Available when using the Univocity parser.
Allow Comments Allows commented data to be ignored for custom delimiter format. Available when using the Univocity parser.
Comment Character Character that marks a comment when comments are enabled for custom delimiter format.
Available when using the Univocity parser.
Enable Comments Allows commented data to be ignored for custom delimiter format. Available when using the Apache Commons parser.
Comment Marker Character that marks a comment when comments are enabled for custom delimiter format. Available when using the Apache Commons parser.
Lines to Skip Number of lines to skip before reading data. Compression Format The compression format of the files: - None - Processes only uncompressed files.
- Compressed File - Processes files compressed by the supported compression formats.
- Archive - Processes files archived by the supported archive formats.
- Compressed Archive - Processes files archived and compressed by the supported archive and compression formats.
File Name Pattern within Compressed Directory For archive and compressed archive files, file name pattern that represents the files to process within the compressed directory. You can use UNIX-style wildcards, such as an asterisk or question mark. For example, *.json. Default is *, which processes all files.
CSV Parser Parser to use to process delimited data: - Apache Commons - Provides robust parsing and a wide range of delimited format types.
- Univocity - Can provide faster processing for wide delimited files, such as those with over 200 columns.
Default is Apache Commons.
Max Columns Maximum number of columns to process per record. Available when using the Univocity parser.
Max Character per Column Maximum number of characters to process in each column. Available when using the Univocity parser.
Skip Empty Lines Allows skipping empty lines. Available when using the Univocity parser.
Allow Extra Columns Allows processing records with more columns than exist in the header line. Available when using the Apache Commons parser to process data with a header line.
Extra Column Prefix Prefix to use for any additional columns. Extra columns are named using the prefix and sequentially increasing integers as follows: <prefix><integer>. For example, _extra_1. Default is _extra_.
Available when using the Apache Commons parser to process data with a header line while allowing extra columns.
Max Record Length (chars) Maximum length of a record in characters. Longer records are not read. This property can be limited by the Data Collector parser buffer size. For more information, see Maximum Record Size.
Available when using the Apache Commons parser.
Ignore Empty Lines Allows empty lines to be ignored. Available when using the Apache Commons parser with the custom delimiter format.
Root Field Type Root field type to use: - List-Map - Generates an indexed list of data. Enables you to use standard functions to process data. Use for new pipelines.
- List - Generates a record with an indexed list with a map for header and value. Requires the use of delimited data functions to process data. Use only to maintain pipelines created before 1.1.0.
Parse NULLs Replaces the specified string constant with null values. NULL Constant String constant to replace with null values. Charset Character encoding of the files to be processed. Ignore Control Characters Removes all ASCII control characters except for the tab, line feed, and carriage return characters. -
For Excel files, on the Data Format tab, configure the
following properties:
Excel Property Description Excel Header Option Indicates whether files include a header row and whether to ignore the header row. A header row must be the first row of a file. Skip Cells With No Header Skips processing cells when they do not have a corresponding header value. Available when Excel Header Option is set to With Header Line.
Include Cells With Empty Value Includes empty cells in records. Read All Sheets Reads all sheets in the Excel file. Import Sheets Name of sheet to read. Using simple or bulk edit mode, click Add Another to add additional sheets. Available when Read All Sheets is not selected.
-
For JSON data, on the Data Format tab, configure the
following properties:
JSON Property Description JSON Content Type of JSON content. Use one of the following options: - JSON array of objects
- Multiple JSON objects
Compression Format The compression format of the files: - None - Processes only uncompressed files.
- Compressed File - Processes files compressed by the supported compression formats.
- Archive - Processes files archived by the supported archive formats.
- Compressed Archive - Processes files archived and compressed by the supported archive and compression formats.
File Name Pattern within Compressed Directory For archive and compressed archive files, file name pattern that represents the files to process within the compressed directory. You can use UNIX-style wildcards, such as an asterisk or question mark. For example, *.json. Default is *, which processes all files.
Max Object Length (chars) Maximum number of characters in a JSON object. Longer objects are diverted to the pipeline for error handling.
This property can be limited by the Data Collector parser buffer size. For more information, see Maximum Record Size.
Charset Character encoding of the files to be processed. Ignore Control Characters Removes all ASCII control characters except for the tab, line feed, and carriage return characters. -
For log data, on the Data Format tab, configure the
following properties:
Log Property Description Log Format Format of the log files. Use one of the following options: - Common Log Format
- Combined Log Format
- Apache Error Log Format
- Apache Access Log Custom Format
- Regular Expression
- Grok Pattern
- Log4j
- Common Event Format (CEF)
- Log Event Extended Format (LEEF)
Compression Format The compression format of the files: - None - Processes only uncompressed files.
- Compressed File - Processes files compressed by the supported compression formats.
- Archive - Processes files archived by the supported archive formats.
- Compressed Archive - Processes files archived and compressed by the supported archive and compression formats.
File Name Pattern within Compressed Directory For archive and compressed archive files, file name pattern that represents the files to process within the compressed directory. You can use UNIX-style wildcards, such as an asterisk or question mark. For example, *.json. Default is *, which processes all files.
Max Line Length Maximum length of a log line. The origin truncates longer lines. This property can be limited by the Data Collector parser buffer size. For more information, see Maximum Record Size.
Retain Original Line Determines how to treat the original log line. Select to include the original log line as a field in the resulting record. By default, the original line is discarded.
Charset Character encoding of the files to be processed. Ignore Control Characters Removes all ASCII control characters except for the tab, line feed, and carriage return characters. - When you select Apache Access Log Custom Format, use Apache log format strings to define the Custom Log Format.
- When you select Regular Expression, enter the regular expression that describes the log format, and then map the fields that you want to include to each regular expression group.
- When you select Grok Pattern, you can use the
Grok Pattern Definition field to define
custom grok patterns. You can define a pattern on each line.
In the Grok Pattern field, enter the pattern to use to parse the log. You can use a predefined grok patterns or create a custom grok pattern using patterns defined in Grok Pattern Definition.
For more information about defining grok patterns and supported grok patterns, see Defining Grok Patterns.
- When you select Log4j, define the following properties:
Log4j Property Description On Parse Error Determines how to handle information that cannot be parsed: - Skip and Log Error - Skips reading the line and logs a stage error.
- Skip, No Error - Skips reading the line and does not log an error.
- Include as Stack Trace - Includes information that cannot be parsed as a stack trace to the previously-read log line. The information is added to the message field for the last valid log line.
Use Custom Log Format Allows you to define a custom log format. Custom Log4J Format Use log4j variables to define a custom log format.
-
For Parquet data, on the Data Format tab, configure the
following property:
Parquet Property Description Skip Union Indexes Omits header attributes identifying the index number of the element in a union that data is read from. If a schema contains many unions and the pipeline does not depend on index information, you can enable this property to avoid long processing times associated with storing a large number of indexes.
-
For protobuf data, on the Data Format tab, configure the
following properties:
Protobuf Property Description Protobuf Descriptor File Descriptor file (.desc) to use. The descriptor file must be in the Data Collector resources directory, $SDC_RESOURCES.
For more information about environment variables, see Data Collector Environment Configuration. For information about generating the descriptor file, see Protobuf Data Format Prerequisites.
Message Type The fully-qualified name for the message type to use when reading data. Use the following format: <package name>.<message type>. Use a message type defined in the descriptor file.
Delimited Messages Indicates if a file might include more than one protobuf message. Compression Format The compression format of the files: - None - Processes only uncompressed files.
- Compressed File - Processes files compressed by the supported compression formats.
- Archive - Processes files archived by the supported archive formats.
- Compressed Archive - Processes files archived and compressed by the supported archive and compression formats.
File Name Pattern within Compressed Directory For archive and compressed archive files, file name pattern that represents the files to process within the compressed directory. You can use UNIX-style wildcards, such as an asterisk or question mark. For example, *.json. Default is *, which processes all files.
-
For SDC Record data, on the Data Format tab, configure the
following properties:
SDC Record Property Description Compression Format The compression format of the files: - None - Processes only uncompressed files.
- Compressed File - Processes files compressed by the supported compression formats.
- Archive - Processes files archived by the supported archive formats.
- Compressed Archive - Processes files archived and compressed by the supported archive and compression formats.
File Name Pattern within Compressed Directory For archive and compressed archive files, file name pattern that represents the files to process within the compressed directory. You can use UNIX-style wildcards, such as an asterisk or question mark. For example, *.json. Default is *, which processes all files.
-
For text data, on the Data Format tab, configure the
following properties:
Text Property Description Compression Format The compression format of the files: - None - Processes only uncompressed files.
- Compressed File - Processes files compressed by the supported compression formats.
- Archive - Processes files archived by the supported archive formats.
- Compressed Archive - Processes files archived and compressed by the supported archive and compression formats.
File Name Pattern within Compressed Directory For archive and compressed archive files, file name pattern that represents the files to process within the compressed directory. You can use UNIX-style wildcards, such as an asterisk or question mark. For example, *.json. Default is *, which processes all files.
Max Line Length Maximum number of characters allowed for a line. Longer lines are truncated. Adds a boolean field to the record to indicate if it was truncated. The field name is Truncated.
This property can be limited by the Data Collector parser buffer size. For more information, see Maximum Record Size.
Use Custom Delimiter Uses custom delimiters to define records instead of line breaks. Custom Delimiter One or more characters to use to define records. Include Custom Delimiter Includes delimiter characters in the record. Charset Character encoding of the files to be processed. Ignore Control Characters Removes all ASCII control characters except for the tab, line feed, and carriage return characters. -
For whole files, on the Data Format tab, configure the
following properties:
Whole File Property Description Verify Checksum Verifies the checksum during the read. Buffer Size (bytes) Size of the buffer to use to transfer data. Rate per Second Transfer rate to use. Enter a number to specify a rate in bytes per second. Use an expression to specify a rate that uses a different unit of measure per second, e.g. ${5 * MB}. Use -1 to opt out of this property.
By default, the origin does not use a transfer rate.
-
For XML data, on the Data Format tab, configure the
following properties:
XML Property Description Delimiter Element Delimiter to use to generate records. Omit a delimiter to treat the entire XML document as one record. Use one of the following:
- An XML element directly under the root element. Use the XML element name without surrounding angle brackets ( < > ). For example, msg instead of <msg>.
- A simplified XPath expression that specifies the
data to use.
Use a simplified XPath expression to access data deeper in the XML document or data that requires a more complex access method.
For more information about valid syntax, see Simplified XPath Syntax.
Compression Format The compression format of the files: - None - Processes only uncompressed files.
- Compressed File - Processes files compressed by the supported compression formats.
- Archive - Processes files archived by the supported archive formats.
- Compressed Archive - Processes files archived and compressed by the supported archive and compression formats.
File Name Pattern within Compressed Directory For archive and compressed archive files, file name pattern that represents the files to process within the compressed directory. You can use UNIX-style wildcards, such as an asterisk or question mark. For example, *.json. Default is *, which processes all files.
Preserve Root Element Includes the root element in the generated records. When omitting a delimiter to generate a single record, the root element is the root element of the XML document.
When specifying a delimiter to generate multiple records, the root element is the XML element specified as the delimiter element or is the last XML element in the simplified XPath expression specified as the delimiter element.
Include Field XPaths Includes the XPath to each parsed XML element and XML attribute in field attributes. Also includes each namespace in an xmlns record header attribute. When not selected, this information is not included in the record. By default, the property is not selected.
Namespaces Namespace prefix and URI to use when parsing the XML document. Define namespaces when the XML element being used includes a namespace prefix or when the XPath expression includes namespaces. For information about using namespaces with an XML element, see Using XML Elements with Namespaces.
For information about using namespaces with XPath expressions, see Using XPath Expressions with Namespaces.
Using simple or bulk edit mode, click the Add icon to add additional namespaces.
Output Field Attributes Includes XML attributes and namespace declarations in the record as field attributes. When not selected, XML attributes and namespace declarations are included in the record as fields. By default, the property is not selected.
Max Record Length (chars) The maximum number of characters in a record. Longer records are diverted to the pipeline for error handling.
This property can be limited by the Data Collector parser buffer size. For more information, see Maximum Record Size.
Charset Character encoding of the files to be processed. Ignore Control Characters Removes all ASCII control characters except for the tab, line feed, and carriage return characters.