A processor stage represents
a type of data processing that you want to perform. You can use as many processors in a
pipeline as you need.
You can use different processors based on the execution mode of the pipeline.
In standalone or cluster pipelines, you can use the following processors:
- Aggregator - Performs aggregations and displays the results in Monitor mode; when enabled, it also writes the results to events. This processor does not update the records being evaluated.
- Base64 Field Decoder - Decodes
Base64 encoded data to binary data.
- Base64 Field Encoder - Encodes
binary data using Base64.
- Data Parser - Parses NetFlow or
syslog data embedded in a field.
- Delay - Delays passing a batch to the
rest of the pipeline.
- Expression Evaluator - Performs
calculations on data. Can also add or modify record header attributes.
- Field Flattener - Flattens
nested fields.
- Field Hasher - Uses an algorithm
to encode sensitive data.
- Field Masker - Masks sensitive
string data.
- Field Merger - Merges fields in
complex lists or maps.
- Field Order - Orders fields in a
map or list-map root field type and outputs the fields into a list-map or list root field
type.
- Field Pivoter - Pivots data in a
list, map, or list-map field and creates a record for each item in the field.
- Field Remover - Removes fields
from a record.
- Field Renamer - Renames fields
in a record.
- Field Replacer - Replaces
field values.
- Field Splitter - Splits the
string values in a field into different fields.
- Field Type Converter -
Converts the data types of fields.
- Field Zip - Merges list data from
two fields.
- Geo IP - Returns geolocation and IP
intelligence information for a specified IP address.
- Groovy Evaluator - Processes records
based on custom Groovy code.
- HBase Lookup - Performs
key-value lookups in HBase to enrich records with data.
- Hive Metadata - Works with the
Hive Metastore destination as part of the Drift Synchronization Solution for Hive.
- HTTP Client - Sends requests to an HTTP resource URL and writes the results to a field.
- JavaScript Evaluator - Processes
records based on custom JavaScript code.
- JDBC Lookup - Performs lookups in
a database table through a JDBC connection.
- JDBC Tee - Writes data to a database
table through a JDBC connection, and enriches records with data from generated database
columns.
- JSON Generator - Serializes
data from a field to a JSON-encoded string.
- JSON Parser - Parses a JSON
object embedded in a string field.
- Jython Evaluator - Processes records
based on custom Jython code.
- Kudu Lookup - Performs lookups
in Kudu to enrich records with data.
- Log Parser - Parses log data in a
field based on the specified log format.
- Postgres Metadata - Tracks
structural changes in source data, then creates and alters PostgreSQL tables as part of the
Drift Synchronization Solution for Postgres.
- Redis Lookup - Performs
key-value lookups in Redis to enrich records with data.
- Salesforce Lookup -
Performs lookups in Salesforce to enrich records with data.
- Schema Generator -
Generates a schema for each record and writes the schema to a record header attribute.
- Spark Evaluator - Processes data based
on a custom Spark application.
- Static Lookup - Performs
key-value lookups in local memory.
- Stream Selector - Routes data
to different streams based on conditions.
- Value Replacer (Deprecated) -
Replaces existing nulls or specified values with constants or nulls.
- XML Flattener - Flattens XML
data in a string field.
- XML Parser - Parses XML data in a
string field.
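Several of the field-level processors above have close analogues in ordinary code. The sketch below illustrates, in plain Python rather than the Data Collector API, the kind of per-record manipulation that the Base64 Field Encoder/Decoder, Field Flattener, Field Splitter, and Field Zip processors perform. The record layout and field names here are hypothetical.

```python
import base64

# A hypothetical record, represented as a plain Python dict.
record = {
    "payload": b"raw bytes",
    "user": {"name": "Ada Lovelace", "location": {"city": "London"}},
    "tags": ["a", "b", "c"],
    "ids": [1, 2, 3],
}

# Base64 Field Encoder / Decoder: binary data <-> Base64 text.
encoded = base64.b64encode(record["payload"]).decode("ascii")
decoded = base64.b64decode(encoded)
assert decoded == record["payload"]

# Field Flattener: collapse nested maps into dotted top-level field names.
def flatten(value, prefix=""):
    flat = {}
    for key, item in value.items():
        name = f"{prefix}.{key}" if prefix else key
        if isinstance(item, dict):
            flat.update(flatten(item, name))
        else:
            flat[name] = item
    return flat

flat_user = flatten(record["user"])
# {'name': 'Ada Lovelace', 'location.city': 'London'}

# Field Splitter: split one string value into several new fields.
first, last = record["user"]["name"].split(" ", 1)

# Field Zip: merge list data from two fields into one list of pairs.
zipped = list(zip(record["tags"], record["ids"]))
# [('a', 1), ('b', 2), ('c', 3)]
```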
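The Stream Selector routes each record down one of several streams based on conditions, with a default stream for records that match no condition. As a rough sketch in plain Python (Data Collector itself uses expression-language predicates; the stream names, conditions, and records below are hypothetical):

```python
# Each (name, predicate) pair plays the role of a Stream Selector condition.
# The last stream matches everything, acting as the default stream.
streams = [
    ("errors", lambda r: r.get("status", 0) >= 400),
    ("redirects", lambda r: 300 <= r.get("status", 0) < 400),
    ("default", lambda r: True),
]

def route(record):
    # Return the name of the first stream whose condition matches.
    for name, predicate in streams:
        if predicate(record):
            return name

routed = {}
for record in [{"status": 200}, {"status": 301}, {"status": 404}]:
    routed.setdefault(route(record), []).append(record)
# routed["errors"] == [{"status": 404}], and so on.
```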
In standalone pipelines, you can also use the following processor:
In edge pipelines, you can use the following processors:
To help create or test pipelines, you can use the following development processors:
- Dev Identity
- Dev Random Error
- Dev Record Creator
For more information, see Development Stages.