Databricks Job Launcher

The Databricks Job Launcher executor starts a Databricks job each time it receives an event. You can run jobs based on notebooks or JARs. For information about supported versions, see Supported Systems and Versions in the Data Collector documentation.

Use the executor to start a Databricks job as part of an event stream. You can use the executor anywhere in an event stream where starting a job makes sense, such as running a Databricks job each time the Hadoop FS, MapR FS, or Amazon S3 destination closes a file.

Note that the Databricks Job Launcher executor starts a job in an external system. It does not monitor the job or wait for it to complete. The executor becomes available for additional processing as soon as it successfully submits a job.
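To illustrate this fire-and-forget behavior, the sketch below submits a run through the Databricks Jobs REST API run-now endpoint (POST /api/2.0/jobs/run-now) and returns as soon as the run is queued. This is a minimal illustration, not the executor's implementation; the base URL, job ID, and credentials are placeholder values.

```python
# Minimal sketch of fire-and-forget job submission against the Databricks
# Jobs REST API. URL, job ID, and credentials are placeholders.
import requests

BASE_URL = "https://example.cloud.databricks.com"  # hypothetical cluster base URL
JOB_ID = 42                                        # hypothetical job ID

response = requests.post(
    f"{BASE_URL}/api/2.0/jobs/run-now",
    auth=("user@example.com", "password"),         # placeholder user credentials
    json={"job_id": JOB_ID},
)
response.raise_for_status()

# run-now responds as soon as the run is queued. Nothing here polls the
# run or waits for it to finish, mirroring the executor's behavior.
run_id = response.json()["run_id"]
print(f"Submitted run {run_id}; not waiting for completion")
```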

Before you use the executor, complete the necessary prerequisites.

When you configure the executor, you specify the cluster base URL, job type, job ID, and user credentials. You can optionally configure job parameters, HTTP proxy settings, and SSL/TLS properties.
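As a rough sketch of how these properties map onto a job submission, the example below passes job parameters in the form the Databricks Jobs API expects for each job type: named parameters (notebook_params) for notebook jobs and positional parameters (jar_params) for JAR jobs. The proxy and trust-store settings shown are the standard requests options, used here only to suggest where HTTP proxy and SSL/TLS details would apply; all names and values are illustrative.

```python
# Sketch of how job type and job parameters might map onto a run-now
# request body. All values are illustrative placeholders.
import requests

BASE_URL = "https://example.cloud.databricks.com"  # hypothetical cluster base URL

def run_job(job_id, job_type, params):
    """Submit a run-now request. Notebook jobs take named parameters;
    JAR jobs take positional ones."""
    body = {"job_id": job_id}
    if job_type == "notebook":
        body["notebook_params"] = params           # e.g. {"input_dir": "/mnt/in"}
    elif job_type == "jar":
        body["jar_params"] = params                # e.g. ["/mnt/in", "--overwrite"]

    return requests.post(
        f"{BASE_URL}/api/2.0/jobs/run-now",
        auth=("user@example.com", "password"),                # placeholder credentials
        json=body,
        proxies={"https": "http://proxy.example.com:8080"},   # optional HTTP proxy
        verify="/etc/ssl/certs/ca-bundle.crt",                # optional SSL/TLS trust store
    )

run_job(42, "notebook", {"input_dir": "/mnt/in"})
```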

You can configure the executor to generate events for another event stream. For more information about dataflow triggers and the event framework, see Dataflow Triggers Overview.