Origins

An origin stage represents the source for the pipeline. You can use a single origin stage in a pipeline.

You can use different origins based on the execution mode of the pipeline: standalone, cluster, or edge. To help create or test pipelines, you can use development origins.

Standalone Pipelines

In standalone pipelines, you can use the following origins:
  • Amazon S3 - Reads objects from Amazon S3. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • Amazon SQS Consumer - Reads data from queues in Amazon Simple Queue Services (SQS). Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • Aurora PostgreSQL CDC Client - Reads Amazon Aurora PostgreSQL WAL data to generate change data capture records.
  • Azure Blob Storage - Reads data from Microsoft Azure Blob Storage. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • Azure Data Lake Storage Gen1 (deprecated) - Reads data from Microsoft Azure Data Lake Storage Gen1. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • Azure Data Lake Storage Gen2 - Reads data from Microsoft Azure Data Lake Storage Gen2. Creates multiple threads to enable parallel processing in a multithreaded pipeline. Use this origin for new development.
  • Azure Data Lake Storage Gen2 (Legacy) - Reads data from Microsoft Azure Data Lake Storage Gen2. Creates multiple threads to enable parallel processing in a multithreaded pipeline. Do not use this origin for new development.
  • Azure IoT/Event Hub Consumer - Reads data from Microsoft Azure Event Hub. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • CoAP Server - Listens on a CoAP endpoint and processes the contents of all authorized CoAP requests. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • Couchbase - Reads JSON data from Couchbase Server. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • Cron Scheduler - Generates a record with the current datetime as scheduled by a cron expression. This is an orchestration stage.
  • Directory - Reads fully-written files from a directory. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • Elasticsearch - Reads data from an Elasticsearch cluster. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • File Tail - Reads lines of data from an active file after reading related archived files in the directory.
  • Google BigQuery - Executes a query job and reads the result from Google BigQuery.
  • Google Cloud Storage - Reads fully written objects from Google Cloud Storage.
  • Google Pub/Sub Subscriber - Consumes messages from a Google Pub/Sub subscription. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • Groovy Scripting - Runs a Groovy script to create Data Collector records. Can create multiple threads to enable parallel processing in a multithreaded pipeline.
  • Hadoop FS Standalone - Reads fully-written files from HDFS or Azure Blob storage. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • HTTP Client - Reads data from a streaming HTTP resource URL.
  • HTTP Server - Listens on an HTTP endpoint and processes the contents of all authorized HTTP POST and PUT requests. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • JavaScript Scripting - Runs a JavaScript script to create Data Collector records. Can create multiple threads to enable parallel processing in a multithreaded pipeline.
  • JDBC Multitable Consumer - Reads database data from multiple tables through a JDBC connection. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • JDBC Query Consumer - Reads database data using a user-defined SQL query through a JDBC connection.
  • Jira - Reads data from a Jira instance.
  • JMS Consumer - Reads messages from JMS.
  • Jython Scripting - Runs a Jython script to create Data Collector records. Can create multiple threads to enable parallel processing in a multithreaded pipeline.
  • Kafka Consumer (deprecated) - Reads messages from a single Kafka topic.
  • Kafka Multitopic Consumer - Reads messages from multiple Kafka topics. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • Kinesis Consumer - Reads data from Kinesis Streams. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • MapR DB CDC - Reads changed MapR DB data that has been written to MapR Streams. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • MapR DB JSON - Reads JSON documents from MapR DB JSON tables.
  • MapR FS Standalone - Reads fully-written files from MapR FS. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • MapR Multitopic Streams Consumer - Reads messages from multiple MapR Streams topics. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • MapR Streams Consumer - Reads messages from MapR Streams.
  • MongoDB - Reads documents from MongoDB.
  • MongoDB Atlas - Reads documents from MongoDB Atlas or MongoDB Enterprise Server.
  • MongoDB Atlas CDC - Reads changes from a MongoDB Change Stream or Oplog.
  • MongoDB Oplog - Reads entries from a MongoDB Oplog.
  • MQTT Subscriber - Subscribes to a topic on an MQTT broker to read messages from the broker.
  • MySQL Binary Log - Reads MySQL binary logs to generate change data capture records.
  • NiFi HTTP Server (deprecated) - Listens for requests from a NiFi PutHTTP processor and processes NiFi FlowFiles.
  • Omniture (deprecated) - Reads web usage reports from the Omniture reporting API.
  • OPC UA Client - Reads data from an OPC UA server.
  • Oracle Bulkload - Reads data from multiple Oracle database tables, then stops the pipeline. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • Oracle CDC - Processes change data capture information stored in redo logs using LogMiner. Use this origin for new development.
  • Oracle CDC Client - Processes change data capture information stored in redo logs using LogMiner. This is the older Oracle origin. Use the Oracle CDC origin for new development.
  • Oracle Multitable Consumer - Reads data from multiple Oracle database tables.
  • PostgreSQL CDC Client - Reads PostgreSQL WAL data to generate change data capture records.
  • Pulsar Consumer - Reads messages from Apache Pulsar topics. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • Pulsar Consumer (Legacy) - Reads messages from Apache Pulsar topics.
  • RabbitMQ Consumer - Reads messages from RabbitMQ.
  • Redis Consumer - Reads messages from Redis.
  • REST Service - Listens on an HTTP endpoint, parses the contents of all authorized requests, and sends responses back to the originating REST API. Creates multiple threads to enable parallel processing in a multithreaded pipeline. Use only in microservice pipelines.
  • Salesforce - Reads data from Salesforce using the SOAP or Bulk API.
  • Salesforce Bulk API 2.0 - Reads data from Salesforce using Salesforce Bulk API 2.0. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • SAP HANA Query Consumer - Reads data from an SAP HANA database using a user-defined SQL query.
  • SDC RPC (deprecated) - Reads data from an SDC RPC destination in an SDC RPC pipeline.
  • SFTP/FTP/FTPS Client - Reads files from an SFTP, FTP, or FTPS server.
  • Snowflake Bulk - Reads data from Snowflake tables, then stops the pipeline. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • SQL Server 2019 BDC Multitable Consumer - Reads data from Microsoft SQL Server 2019 Big Data Cluster (BDC) through a JDBC connection. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • SQL Server CDC Client - Reads data from Microsoft SQL Server CDC tables. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • SQL Server Change Tracking - Reads data from Microsoft SQL Server change tracking tables and generates the latest version of each record. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • Start Jobs - Starts one or more Control Hub jobs in parallel. This is an orchestration stage.
  • Start Pipelines (deprecated) - Starts one or more pipelines in parallel. This is an orchestration stage.
  • TCP Server - Listens at the specified ports and processes incoming data over TCP/IP connections. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • Teradata Consumer (deprecated) - Reads data from Teradata Database tables through a JDBC connection. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • UDP Multithreaded Source - Reads messages from one or more UDP ports. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • UDP Source - Reads messages from one or more UDP ports.
  • Web Client - Reads data from an HTTP endpoint.
  • WebSocket Client - Reads data from a WebSocket server endpoint. Can send responses back to the origin system as part of a microservice pipeline.
  • WebSocket Server - Listens on a WebSocket endpoint and processes the contents of all authorized WebSocket client requests. Creates multiple threads to enable parallel processing in a multithreaded pipeline. Can send responses back to the origin system as part of a microservice pipeline.

Cluster Pipelines (Deprecated)

In cluster pipelines, you can use the following origins:

Edge Pipelines

In edge pipelines, you can use the following origins:
  • Directory - Reads fully-written files from a directory.
  • File Tail - Reads lines of data from an active file after reading related archived files in the directory.
  • gRPC Client - Reads data from a gRPC server.
  • HTTP Client - Reads data from a streaming HTTP resource URL.
  • HTTP Server - Listens on an HTTP endpoint and processes the contents of all authorized HTTP POST and PUT requests.
  • MQTT Subscriber - Subscribes to a topic on an MQTT broker to read messages from the broker.
  • System Metrics - Reads system metrics from the edge device where SDC Edge is installed.
  • WebSocket Client - Reads data from a WebSocket server endpoint.
  • Windows Event Log - Reads data from a Microsoft Windows event log located on a Windows machine.

Development Origins

To help create or test pipelines, you can use the following development origins:
  • Dev Data Generator
  • Dev Random Source
  • Dev Raw Data Source
  • Dev SDC RPC with Buffering
  • Dev Snapshot Replaying
  • Sensor Reader

For more information, see Development Stages.

Comparing Azure Storage Origins

We have several Azure storage origins, so make sure to use the best one for your needs. Here's a quick breakdown of some key differences, followed by an example of the account FQDNs that each origin connects to:

Azure Blob Storage
  • Accesses data using the Microsoft Azure Blob Storage API.
  • Connects to an Azure Blob Storage account using the following format for the Fully Qualified Domain Name (FQDN) of the account:

    <storage account name>.blob.core.windows.net

  • Supports the following Azure authentication methods:
    • OAuth with Service Principal
    • Azure Managed Identity
    • Shared Key
    • SAS Token
  • Processes all data formats, except for Datagram.
  • When archiving successfully processed objects, can copy or move the objects to another container or file system.
  • Can include Azure Blob Storage system-defined and custom metadata in record header attributes. Can also include user-defined metadata.
Azure Data Lake Storage Gen2
  • Accesses data using the Microsoft Azure Data Lake Storage Gen2 API.
  • Connects to an Azure Data Lake Storage Gen2 account using the following format for the Fully Qualified Domain Name (FQDN) of the account:

    <storage account name>.dfs.core.windows.net

  • Supports the following Azure authentication methods:
    • OAuth with Service Principal
    • Azure Managed Identity
    • Shared Key
  • Processes all data formats, except for Binary and Datagram.
  • When archiving successfully processed objects, can only move the objects to the same container or file system.
  • Can include Azure Data Lake Storage Gen2 system-defined and custom metadata in record header attributes.
Azure Data Lake Storage Gen2 (Legacy)
  • Accesses data using the Hadoop FileSystem interface.
  • For all new development, use one of the other Azure storage origins, which provide better performance.
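
For example, assuming a hypothetical storage account named mystorageaccount, the Azure Blob Storage and Azure Data Lake Storage Gen2 origins would connect to the following FQDNs:

    mystorageaccount.blob.core.windows.net    (Azure Blob Storage origin)
    mystorageaccount.dfs.core.windows.net     (Azure Data Lake Storage Gen2 origin)

Only the host suffix differs: the Azure Blob Storage origin uses the blob endpoint, while the Azure Data Lake Storage Gen2 origin uses the dfs endpoint.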

Comparing HTTP Origins

We have several HTTP origins, so make sure to use the best one for your needs. Here's a quick breakdown of some key differences:
HTTP Client
  • Initiates HTTP requests to an external system.
  • Processes data synchronously.
  • Processes JSON, text, and XML data.
  • Can process a range of HTTP requests.
  • Can be used in a pipeline with processors.

HTTP Server
  • Listens for incoming HTTP requests and processes them while the sender waits for confirmation.
  • Processes data synchronously.
  • Creates multithreaded pipelines, so it is suitable for high throughput of incoming data.
  • Processes virtually all data formats. Processes HTTP POST and PUT requests.
  • Can be used in a pipeline with processors.
Web Client
  • Initiates HTTP requests to an external system.
  • Processes data synchronously.
  • Processes virtually all data formats.
  • Can be configured to process different data formats for request data and response data.
  • Can be configured with per-timeout actions.
  • Can process a range of HTTP requests.
  • Can be used in a pipeline with processors.

Comparing MapR Origins

We have several MapR origins, so make sure to use the best one for your needs. Here's a quick breakdown of some key differences:
MapR DB CDC
  • Reads change data capture MapR DB data using MapR Streams.
  • Includes CDC information in record header attributes.
  • Use in standalone execution mode pipelines.
MapR DB JSON
  • Reads JSON documents from MapR DB.
  • Converts each JSON document to a record.
  • Use in standalone execution mode pipelines.
MapR FS
  • Reads files from MapR FS.
  • Can be used with Kerberos Authentication.
  • Use in cluster execution mode pipelines.
MapR FS Standalone
  • Reads files from MapR FS.
  • Can use multiple threads to enable the parallel processing of files.
  • Can be used with Kerberos Authentication.
  • Use in standalone execution mode pipelines.
MapR Multitopic Streams Consumer
  • Streams data from MapR Streams.
  • Can use multiple threads to read from multiple topics.
  • Use in standalone execution mode pipelines.
MapR Streams Consumer
  • Streams data from MapR Streams.
  • Reads from a single topic using a single thread.
  • Use in standalone execution mode pipelines.

Comparing UDP Source Origins

The UDP Source and UDP Multithreaded Source origins are very similar. The main differentiator is that the UDP Multithreaded Source can use multiple threads to process data within the pipeline.

The UDP Multithreaded Source has a processing queue that aids multithreaded processing. But use of this queue can slow processing under certain circumstances.

The following table describes some cases when you might want to use each origin:
UDP Multithreaded Source - ideally used when either of the following is true:
  • Epoll support enables the use of multiple receiver threads to pass data to the pipeline, and a complex pipeline requires longer processing time.
  • Lack of epoll support allows only a single receiver thread to pass data to the pipeline, and the origin must handle high volumes of data.
UDP Source - ideally used when:
  • Epoll support enables the use of multiple receiver threads to pass data to the pipeline.
  • A relatively simple pipeline enables speedy Data Collector processing.

Comparing WebSocket Origins

We have two WebSocket origins, so make sure to use the best one for your needs. Here's a quick breakdown of some key differences:

WebSocket Client
  • Initiates a connection to a WebSocket server endpoint and then waits for the WebSocket server to push data.
WebSocket Server
  • Listens for incoming WebSocket requests and processes them while the sender waits for confirmation.
  • Creates multithreaded pipelines, so it is suitable for high throughput of incoming data.

Batch Size and Wait Time

For origin stages, the batch size determines the maximum number of records sent through the pipeline at one time. The batch wait time determines the time that the origin waits for data before sending a batch. At the end of the wait time, it sends the batch regardless of how many records the batch contains.

For example, say a File Tail origin is configured with a batch size of 20 records and a batch wait time of 240 seconds. When data arrives quickly, File Tail fills a batch with 20 records and sends it through the pipeline immediately, then creates a new batch and sends it as soon as it is full. As incoming data slows, the current batch holds only a few records, gaining an extra record periodically. 240 seconds after creating the batch, File Tail sends the partially full batch through the pipeline, then immediately creates a new batch and starts a new countdown.
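
The following sketch is a conceptual illustration only, not Data Collector code or configuration. It shows the "send the batch when it is full or when the wait time expires" behavior described above; the read_record and send_batch callbacks are hypothetical, and the constants mirror the File Tail example.

    import time

    MAX_BATCH_SIZE = 20        # File Tail example: batch size of 20 records
    MAX_WAIT_SECONDS = 240     # File Tail example: batch wait time of 240 seconds

    def run_origin(read_record, send_batch):
        # read_record() returns the next record, or None when no data is available yet;
        # send_batch() passes a batch through the pipeline. Both are hypothetical callbacks.
        batch = []
        batch_started = time.monotonic()
        while True:
            record = read_record()
            if record is not None:
                batch.append(record)
            else:
                time.sleep(0.1)    # no data yet; avoid busy-waiting
            # Send the batch when it is full, or when the wait time expires even if
            # the batch is only partially full (or empty). Then start a new batch
            # and a new countdown.
            if len(batch) >= MAX_BATCH_SIZE or time.monotonic() - batch_started >= MAX_WAIT_SECONDS:
                send_batch(batch)
                batch = []
                batch_started = time.monotonic()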

Configure the batch wait time based on your processing needs. You might reduce the batch wait time to ensure all data is processed within a specified time frame or to make regular contact with pipeline destinations. Use the default or increase the wait time if you prefer not to process partial or empty batches.

Maximum Record Size

Most data formats have a property that limits the maximum size of the record that an origin can parse. For example, the delimited data format has a Max Record Length property, the JSON data format has Max Object Length, and the text data format has Max Line Length.

When the origin processes data that is larger than the specified length, the behavior differs based on the origin and the data format. For example, with some data formats, oversized records are handled based on the record error handling configured for the origin, while with other data formats, the origin might truncate the data. For details on how an origin handles size overruns for each data format, see the "Data Formats" section of the origin documentation.

When available, the maximum record size properties are limited by the Data Collector parser buffer size, which is 1048576 bytes by default. So, if raising the maximum record size property in the origin does not change the origin's behavior, you might also need to increase the Data Collector parser buffer size by configuring the parser.limit property in the Data Collector configuration file.

Note that most of the maximum record size properties are specified in characters, while the Data Collector limit is defined in bytes.
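
For example, assuming you want to parse records up to roughly 5 MB, you might set a value like the following for the parser.limit property in the Data Collector configuration file, and then raise the corresponding maximum record size property in the origin. The 5242880 value is only an illustration; remember that parser.limit is defined in bytes, while most origin properties are specified in characters.

    parser.limit=5242880

Changes to the Data Collector configuration file typically require a Data Collector restart to take effect.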

Previewing Raw Source Data

Some origins allow you to preview raw source data. Previewing raw source data can help you configure the origin.

When you preview file data, you can use the real directory and actual source file. Or when appropriate, you might use a different file that is similar to the source.

When you preview Kafka data, you enter the connection information for the Kafka cluster.

The data used for the raw source preview in an origin stage is not used when previewing data for the pipeline.

  1. In the Properties panel for the origin stage, click the Raw Preview tab.
  2. For a Directory or File Tail origin, enter a directory and file name.
  3. For a Kafka Consumer or Kafka Multitopic Consumer, enter the following information:
    • Topic - Kafka topic to read.
    • Partition - Partition to read.
    • Broker Host - Broker host name. Use any broker associated with the partition.
    • Broker Port - Broker port number.
    • Max Wait Time (secs) - Maximum amount of time the preview waits to receive data from Kafka.
  4. Click Preview.
The Raw Source Preview area displays the preview.