Named Pipe

The Named Pipe destination writes data to a UNIX named pipe.

A named pipe (or FIFO) can be accessed by two separate processes on a machine - one process opens the pipe as the writer, and the other as the reader. The Named Pipe destination serves as the writer. Another process running on the same Data Collector machine can serve as the reader.

Use the destination to send data to an application that can read data from a named pipe. For example, the Greenplum gpload utility supports loading data from named pipes. You can configure the utility to read from the named pipe written to by the Named Pipe destination. The utility can then load that data into a Greenplum database.
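For example, the input section of a gpload control file can list the named pipe as a source file. The following is a minimal sketch with placeholder database, user, table, and path values - see the Greenplum gpload documentation for the full control file format:

VERSION: 1.0.0.1
DATABASE: mydb                      # placeholder - your Greenplum database
USER: gpadmin                       # placeholder - your Greenplum user
GPLOAD:
   INPUT:
    - SOURCE:
         FILE:
           - /tmp/my_pipe           # the named pipe that the destination writes to
    - FORMAT: text
    - DELIMITER: '|'                # must match the delimiter configured on the destination
   OUTPUT:
    - TABLE: public.target_table    # placeholder - your target table
    - MODE: insert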

Before you use the Named Pipe destination, you must create a named pipe on the machine where Data Collector is installed. You also must configure the named pipe reader - or reading application - that runs on the Data Collector machine to receive the data from the same named pipe.

When you configure the Named Pipe destination, you enter the full path to the local named pipe that you created. You also specify the data format that the destination uses to write data to the named pipe.

Prerequisites

Before you can write to a named pipe, you must complete the following prerequisites:
  • Create the named pipe on the local Data Collector machine.
  • Configure the named pipe reader on the local Data Collector machine to read from the same named pipe.

Create the Named Pipe

Use the mkfifo command to create the named pipe on the same machine where Data Collector is installed.

For example, use the following command to create a local named pipe:
mkfifo /tmp/my_pipe
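
To verify that the path was created as a named pipe, list the file:

ls -l /tmp/my_pipe

The leading p in the permissions string identifies the file as a FIFO.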

Configure the Named Pipe Reader

The named pipe reader - or reading application - must be installed on the Data Collector machine and must be configured to read from the same named pipe.

For example, configure the reader to read from the same named pipe that you created with the mkfifo command:

/tmp/my_pipe

As a best practice, we recommend starting the reader before starting the pipeline that contains the Named Pipe destination.
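
For initial testing, a simple command-line reader can stand in for the reading application. A minimal sketch that appends everything received on the pipe to an illustrative output file:

# Blocks until the Named Pipe destination opens the write end of the
# pipe, then copies all data received on the pipe to a file.
cat /tmp/my_pipe >> /tmp/received_data.txt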

Working with the Named Pipe Reader

The Named Pipe destination writes data to the named pipe, and the named pipe reader - or reading application - then reads the incoming data.

Consider the following ways that the Named Pipe destination and the named pipe reader interact:

  • If you start the pipeline before the named pipe reader is available to read, the pipeline remains in a STARTING state until the reader becomes available.

    As a best practice, we recommend starting the reader before starting the pipeline that contains the Named Pipe destination.

  • When the named pipe reader becomes available, the pipeline transitions to a RUNNING state and the Named Pipe destination begins writing to the named pipe. If the destination writes faster than the reader can process the data, the named pipe might become full. In this case, the destination waits to write additional data until the named pipe can receive more data.
  • If the pipeline stops while the named pipe reader is still available, the reader receives an IOException for the broken pipe.
  • If the named pipe reader stops while the pipeline with the Named Pipe destination is running, Data Collector displays a stage exception about the broken pipe.
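
Because a simple reader such as cat exits when the write end of the pipe closes, a long-running reader typically reopens the pipe in a loop so that it keeps receiving data across pipeline restarts. A minimal sketch, using the same illustrative output file as above:

#!/bin/sh
# Reopen the pipe after each end-of-file so the reader continues
# to receive data when the pipeline stops and restarts.
while true; do
  cat /tmp/my_pipe >> /tmp/received_data.txt
done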

Data Formats

The Named Pipe destination writes data to a named pipe based on the data format that you select. The destination supports data formats that produce a single line of output for each record. Data formats that produce a nested structure with multiple lines of output for each record - such as the XML data format - are not supported.

You can use the following data formats:
Delimited
The destination writes records as delimited data. When you use this data format, the root field must be list or list-map.
You can use the following delimited format types:
  • Default CSV - File that includes comma-separated values. Ignores empty lines in the file.
  • RFC4180 CSV - Comma-separated file that strictly follows RFC4180 guidelines.
  • MS Excel CSV - Microsoft Excel comma-separated file.
  • MySQL CSV - MySQL comma-separated file.
  • Tab-Separated Values - File that includes tab-separated values.
  • PostgreSQL CSV - PostgreSQL comma-separated file.
  • PostgreSQL Text - PostgreSQL text file.
  • Custom - File that uses user-defined delimiter, escape, and quote characters.
  • Multi Character Delimited - File that uses multiple user-defined characters to delimit fields and lines, and single user-defined escape and quote characters.
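For example, with the Default CSV format and a header line, two records with illustrative id and name fields are written as:
id,name
1,alpha
2,beta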
JSON
The destination writes records as JSON data. You can use one of the following formats:
  • Array - Each file includes a single array. In the array, each element is a JSON representation of each record.
  • Multiple objects - Each file includes multiple JSON objects. Each object is a JSON representation of a record.
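For example, with the Multiple objects format, two illustrative records are written as:
{"id":1,"name":"alpha"}
{"id":2,"name":"beta"}
With the Array format, the same records are written as a single array: [{"id":1,"name":"alpha"},{"id":2,"name":"beta"}]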
SDC Record
The destination writes records in the SDC Record data format.
Text
The destination writes data from a single text field to the destination system. When you configure the stage, you select the field to use.
You can configure the characters to use as record separators. By default, the destination uses a UNIX-style line ending (\n) to separate records.
When a record does not contain the selected text field, the destination can report the missing field as an error or ignore the missing field. By default, the destination reports an error.
When configured to ignore a missing text field, the destination can discard the record or write the record separator characters to create an empty line for the record. By default, the destination discards the record.
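For example, if the second of three records lacks the selected text field, and the destination is configured to ignore the missing field and insert the record separator, the output contains an empty line in place of the second record:
first record text

third record text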

Configuring a Named Pipe Destination

Configure a Named Pipe destination to write data to a UNIX named pipe.

  1. In the Properties panel, on the General tab, configure the following properties:
    • Name - Stage name.
    • Description - Optional description.
    • Required Fields - Fields that must include data for the record to be passed into the stage. Records that do not include all required fields are processed based on the error handling configured for the pipeline.
      Tip: You might include fields that the stage uses.
    • Preconditions - Conditions that must evaluate to TRUE to allow a record to enter the stage for processing. Click Add to create additional preconditions. Records that do not meet all preconditions are processed based on the error handling configured for the stage.
    • On Record Error - Error record handling for the stage:
      • Discard - Discards the record.
      • Send to Error - Sends the record to the pipeline for error handling.
      • Stop Pipeline - Stops the pipeline.
  2. On the Named Pipe tab, configure the following property:
    • Named Pipe - Full path to the local named pipe created with the mkfifo command.
  3. On the Data Format tab, configure the following property:
    • Data Format - Data format to write data: Delimited, JSON, SDC Record, or Text.
  4. For delimited data, on the Data Format tab, configure the following properties:
    • Delimiter Format - Format for delimited data:
      • Default CSV - File that includes comma-separated values. Ignores empty lines in the file.
      • RFC4180 CSV - Comma-separated file that strictly follows RFC4180 guidelines.
      • MS Excel CSV - Microsoft Excel comma-separated file.
      • MySQL CSV - MySQL comma-separated file.
      • Tab-Separated Values - File that includes tab-separated values.
      • PostgreSQL CSV - PostgreSQL comma-separated file.
      • PostgreSQL Text - PostgreSQL text file.
      • Custom - File that uses user-defined delimiter, escape, and quote characters.
    • Header Line - Indicates whether to create a header line.
    • Delimiter Character - Delimiter character for a custom delimiter format. Select one of the available options or use Other to enter a custom character. You can enter a Unicode control character using the format \uNNNN, where each N is a hexadecimal digit 0-9 or A-F. For example, enter \u0000 to use the null character as the delimiter or \u2028 to use a line separator as the delimiter. Default is the pipe character ( | ).
    • Record Separator String - Characters to use to separate records. Use any valid Java string literal. For example, when writing to Windows, you might use \r\n to separate records. Available when using a custom delimiter format.
    • Escape Character - Escape character for a custom delimiter format. Select one of the available options or use Other to enter a custom character. Default is the backslash character ( \ ).
    • Quote Character - Quote character for a custom delimiter format. Select one of the available options or use Other to enter a custom character. Default is the quotation mark character ( " ).
    • Replace New Line Characters - Replaces new line characters with the configured string. Recommended when writing data as a single line of text.
    • New Line Character Replacement - String to replace each new line character. For example, enter a space to replace each new line character with a space. Leave empty to remove the new line characters.
    • Charset - Character set to use when writing data.
  5. For JSON data, on the Data Format tab, configure the following properties:
    • JSON Content - Method to write JSON data:
      • JSON Array of Objects - Each file includes a single array. In the array, each element is a JSON representation of each record.
      • Multiple JSON Objects - Each file includes multiple JSON objects. Each object is a JSON representation of a record.
    • Charset - Character set to use when writing data.
  6. For text data, on the Data Format tab, configure the following properties:
    • Text Field Path - Field that contains the text data to be written. All data must be incorporated into the specified field.
    • Record Separator - Characters to use to separate records. Use any valid Java string literal. For example, when writing to Windows, you might use \r\n to separate records. By default, the destination uses \n.
    • On Missing Field - When a record does not include the text field, determines whether the destination reports the missing field as an error or ignores the missing field.
    • Insert Record Separator if No Text - When configured to ignore a missing text field, inserts the configured record separator string to create an empty line. When not selected, discards records without the text field.
    • Charset - Character set to use when writing data.