Event Generation
The Hadoop FS destination can generate events that you can use in an event stream. When you enable event generation, the destination generates an event record each time it closes a file or completes streaming a whole file. Use Hadoop FS events in any logical way, for example:
- With the HDFS File Metadata executor to move or change permissions on closed files. For an example, see Managing Output Files.
- With the Hive Query executor to run Hive or Impala queries after closing output files. For an example, see Automating Impala Metadata Updates for Drift Synchronization for Hive.
- With the MapReduce executor to convert completed Avro files to ORC or Parquet files. For an example, see Converting Data to the Parquet Data Format.
- With the Email executor to send a custom email after receiving an event. For an example, see Sending Email During Pipeline Processing.
- With a destination to store event information. For an example, see Preserving an Audit Trail of Events.
For more information about dataflow triggers and the event framework, see Dataflow Triggers Overview.
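The sketch below is a conceptual illustration of how records in an event stream might be inspected and routed to the kinds of actions listed above. It is not Data Collector code: the event type values, record field names, and handler strings are assumptions chosen for illustration, and in a real pipeline this routing is configured by connecting executors or destinations to the event stream, not written by hand.

```python
# Conceptual sketch only: shows one way an event record could drive downstream
# actions. Field names and event type values are hypothetical, not the
# destination's documented event schema.

from dataclasses import dataclass, field
from typing import Any, Dict


@dataclass
class EventRecord:
    """Simplified stand-in for an event record: header attributes plus body fields."""
    header: Dict[str, str]
    body: Dict[str, Any] = field(default_factory=dict)


def route_event(event: EventRecord) -> str:
    """Decide which hypothetical downstream action an event should trigger."""
    event_type = event.header.get("event.type", "")  # hypothetical header attribute
    if event_type == "file-closure":
        # e.g. hand off to an HDFS File Metadata executor to move the closed file,
        # or to a MapReduce executor to convert a completed Avro file.
        return f"post-process closed file {event.body.get('filepath')}"
    if event_type == "whole-file-processed":
        # e.g. send a custom email or write an audit record to a destination.
        return "record whole-file transfer in audit destination"
    return "ignore"


if __name__ == "__main__":
    closed_file = EventRecord(
        header={"event.type": "file-closure"},  # hypothetical value
        body={"filepath": "/tmp/out/part-0001.avro", "length": 10485760},
    )
    print(route_event(closed_file))
```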