The Amazon S3 origin processes data
differently based on the data format. The origin processes the following types of data:
- Avro
- Generates a record for every Avro record. Includes precision and
scale field attributes for each Decimal field.
- The stage includes the Avro schema in an
avroSchema
record header attribute. You can use one of the
following methods to specify the location of the Avro schema
definition:
- Message/Data Includes Schema -
Use the schema in the file.
- In Pipeline Configuration - Use
the schema that you provide in the stage
configuration properties.
- Confluent Schema Registry -
Retrieve the schema from Confluent Schema Registry.
Confluent Schema Registry is a distributed storage
layer for Avro schemas. You can configure the stage
to look up the schema in Confluent Schema Registry
by the schema ID or subject specified in the stage
configuration.
- Using a schema in the stage configuration or retrieving a schema
from Confluent Schema Registry overrides any schema that might
be included in the file and can improve performance.
- The stage reads files compressed by Avro-supported compression
codecs without requiring additional configuration. To enable the
stage to read files compressed by other codecs, use the
compression format property in the stage.
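For illustration only, the following sketch shows the kind of Avro schema that could be supplied when using the In Pipeline Configuration option; the record name, field names, precision, and scale are placeholder values, not defaults from the documentation above. The decimal logical type is what surfaces as the precision and scale field attributes on a Decimal field.

```python
# Hypothetical Avro schema for the In Pipeline Configuration option.
# The names, precision, and scale below are assumptions for illustration.
avro_schema = """
{
  "type": "record",
  "name": "payment",
  "fields": [
    {"name": "id", "type": "string"},
    {"name": "amount", "type": {
        "type": "bytes",
        "logicalType": "decimal",
        "precision": 10,
        "scale": 2
    }}
  ]
}
"""
```

Because a schema provided this way is applied from the stage configuration, it overrides any schema embedded in the files being read.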
- Delimited
- Generates a record for each delimited line.
- The CSV parser that you choose
determines the delimiter properties that you configure and
how the stage handles parsing errors. You can specify if
the data includes a header line and whether to use it. You
can define the number of lines to skip before reading, the
character set of the data, and the root field type to use
for the generated record.
- You can also configure the stage
to replace a string constant with null values and to
ignore control characters.
- For more information about reading
delimited data, see Reading Delimited Data.
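As a rough sketch of the behavior described above, the following Python snippet mimics header-line handling and null replacement; the null constant \N and the sample data are assumptions for illustration, not stage defaults.

```python
import csv
import io

# Illustrative input with a header line; "\N" is an assumed null constant.
raw = "id,name,city\n1,alice,\\N\n2,bob,austin\n"
null_constant = "\\N"

reader = csv.DictReader(io.StringIO(raw))  # the header line supplies field names
records = [
    {field: (None if value == null_constant else value) for field, value in row.items()}
    for row in reader
]
# records -> [{'id': '1', 'name': 'alice', 'city': None},
#             {'id': '2', 'name': 'bob', 'city': 'austin'}]
```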
- Excel
- Generates a record for every row in the file. Can process
.xls
or .xlsx
files.
- You can configure the origin to read from all sheets in a workbook
or from particular sheets in a workbook, as shown in the example
after this entry.
- The origin cannot process Excel files with large numbers of rows.
You can save such files as CSV files in Excel, and then use the
origin to process them with the delimited data format.
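The sketch below, which uses the third-party openpyxl library, shows the general idea of generating one record per row from a particular sheet; the file name, sheet name, and library choice are assumptions for illustration and are not part of the origin itself.

```python
from openpyxl import load_workbook  # third-party library, used only for illustration

# Conceptual sketch: one record per row, read from a particular sheet.
workbook = load_workbook("sales.xlsx", read_only=True)  # file name is illustrative
sheet = workbook["Q1"]                                   # a particular sheet in the workbook

rows = sheet.iter_rows(values_only=True)
header = next(rows)                                      # optional header row
records = [dict(zip(header, row)) for row in rows]
```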
- JSON
- Generates a record for each JSON object. You can process JSON
files that include multiple JSON objects or a single JSON
array, as shown in the example after this entry.
- When an object exceeds the maximum object length defined for the
origin, the origin cannot continue processing data in the file.
Records already processed from the file are passed to the
pipeline. The behavior of the origin is then based on the error
handling configured for the stage:
- Discard - The origin continues processing with the
next file, leaving the partially-processed file in
the directory.
- To Error - The origin continues processing with the
next file. If a post-processing error directory is
configured for the stage, the origin moves the
partially-processed file to the error directory.
Otherwise, it leaves the file in the directory.
- Stop Pipeline - The origin stops the pipeline.
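A quick sketch of the two layouts mentioned above; both sample inputs are illustrative and each yields three records, one per JSON object.

```python
import json

# Layout 1: multiple JSON objects, one per line (illustrative data).
multiple_objects = '{"id": 1}\n{"id": 2}\n{"id": 3}'
# Layout 2: a single JSON array containing the same objects.
single_array = '[{"id": 1}, {"id": 2}, {"id": 3}]'

records_from_objects = [json.loads(line) for line in multiple_objects.splitlines()]
records_from_array = json.loads(single_array)

# Either layout produces one record per object.
assert records_from_objects == records_from_array == [{"id": 1}, {"id": 2}, {"id": 3}]
```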
- Log
- Generates a record for every log line.
- When a line exceeds the user-defined maximum line length, the
origin truncates it.
- You can include the processed log line as a field in the record.
If the log line is truncated, and you request the log line in
the record, the origin includes the truncated line.
- You can define the log format or type to be read.
- Protobuf
- Generates a record for every protobuf message.
- Protobuf messages must match the specified message type and be described
in the descriptor file.
- When the data for a record exceeds 1 MB, the origin cannot continue
processing data in the file. The origin handles the file based on file
error handling properties and continues reading the next file.
- For information about generating the descriptor file, see Protobuf Data Format Prerequisites.
- SDC Record
- Generates a record for every record. Use to process records
generated by a Data Collector
pipeline using the SDC Record data format.
- For error records, the origin provides the original record as read
from the origin in the original pipeline, as well as error
information that you can use to correct the record.
- When processing error records, the origin expects the error file
names and contents as generated by the original pipeline.
- Text
- Generates a record for each line of text or for each section of
text based on a custom delimiter.
- When a line or section exceeds the maximum line length defined for
the origin, the origin truncates it. The origin adds a boolean
field named Truncated to indicate if the line was
truncated.
- For more information about processing text with a custom
delimiter, see Text Data Format with Custom Delimiters.
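The following is a conceptual sketch of the truncation and custom-delimiter behavior described above; the text field name, the delimiter, and the length limit are illustrative assumptions, while the Truncated field comes from the description above.

```python
# Illustrative values; the origin's actual defaults may differ.
MAX_LINE_LENGTH = 1024
CUSTOM_DELIMITER = ";"

def to_record(section: str) -> dict:
    """Truncate an over-long section and flag it, mirroring the Truncated field."""
    truncated = len(section) > MAX_LINE_LENGTH
    return {"text": section[:MAX_LINE_LENGTH], "Truncated": truncated}

# With a custom delimiter, each delimited section (not each line) becomes a record.
data = "first section;second section;third section"
records = [to_record(section) for section in data.split(CUSTOM_DELIMITER)]
```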
- Whole File
- Streams whole files from the origin system to the destination
system. You can specify a transfer rate or use all available
resources to perform the transfer.
- The origin uses checksums to verify the integrity of data
transmission.
- The origin generates two fields: one for a file reference and one
for file information. For more information, see Whole File Data Format.
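For orientation, a whole-file record carries a reference to the file and information about it rather than the file contents. The sketch below shows one plausible shape; the field names and the file information attributes are assumptions for illustration, not a documented contract.

```python
# Hypothetical shape of a whole-file record; the field names and the
# file information attributes shown here are assumptions.
whole_file_record = {
    "fileRef": "<opaque reference the destination uses to stream the file>",
    "fileInfo": {
        "file": "invoices/2024/01/report.pdf",  # illustrative path
        "size": 1_048_576,                      # size in bytes
    },
}
```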
- XML
- Generates records based on a user-defined delimiter element, as
shown in the example after this entry. Use an XML element
directly under the root element or define a simplified XPath
expression. If you do not define a delimiter element, the origin
treats the XML file as a single record.
- Generated records include XML attributes and namespace
declarations as fields in the record by default. You can
configure the stage to include them in the record as field
attributes.
- You can include XPath information for each parsed XML element and
XML attribute in field attributes. This also places each
namespace in an xmlns record header attribute.
- When a record exceeds the user-defined maximum record length, the
origin cannot continue processing data in the file. Records
already processed from the file are passed to the pipeline. The
behavior of the origin is then based on the error handling
configured for the stage:
- Discard - The origin continues processing with the
next file, leaving the partially-processed file in
the directory.
- To Error - The origin continues processing with the
next file. If a post-processing error directory is
configured for the stage, the origin moves the
partially-processed file to the error directory.
Otherwise, it leaves the file in the directory.
- Stop Pipeline - The origin stops the pipeline.
- Use the XML data format to process valid XML documents. For more
information about XML processing, see Reading and Processing XML Data.
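To make the delimiter element concrete, the sketch below splits an illustrative document on a msg element directly under the root, producing one record per msg element; the element names and data are assumptions for illustration.

```python
import xml.etree.ElementTree as ET

# Illustrative document; with the delimiter element set to "msg", each <msg>
# directly under the root becomes its own record.
document = """
<root>
  <msg><id>1</id><status>new</status></msg>
  <msg><id>2</id><status>sent</status></msg>
</root>
"""

root = ET.fromstring(document)
records = [{child.tag: child.text for child in msg} for msg in root.findall("msg")]
# records -> [{'id': '1', 'status': 'new'}, {'id': '2', 'status': 'sent'}]
```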