SSL/TLS Encryption

Many stages can use SSL/TLS encryption to securely connect to the external system.

Some stages always use SSL/TLS to securely connect to the external system - you don't need to configure them to do so. For example, most external systems managed by a cloud service provider, such as Amazon S3 or Google Cloud Storage, use well-known public root certificates. As a result, stages can automatically use SSL/TLS encryption to connect to these systems.

However, other stages require that you configure SSL/TLS properties to securely connect to the external system. For example, external systems that you manage, such as a Cassandra or Oracle database or an SFTP server, might use private certificates. As a result, you must configure stages connecting to these systems to use the required SSL/TLS properties. You might be able to enable client certificate authentication or specify a custom truststore file.
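For example, if a database or SFTP server presents a certificate signed by a private certificate authority, that CA certificate must be available in a truststore file that the stage can reference. The following sketch, with placeholder file names and password, shows in Java how such a truststore is assembled from a CA certificate; in practice you might create the same file with a keystore management tool instead:

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.security.KeyStore;
    import java.security.cert.CertificateFactory;
    import java.security.cert.X509Certificate;

    public class BuildTruststore {
        public static void main(String[] args) throws Exception {
            // Load the private CA certificate (DER or PEM encoded).
            CertificateFactory cf = CertificateFactory.getInstance("X.509");
            X509Certificate caCert;
            try (FileInputStream in = new FileInputStream("private-ca.crt")) {
                caCert = (X509Certificate) cf.generateCertificate(in);
            }

            // Create an empty JKS truststore and add the CA certificate.
            KeyStore truststore = KeyStore.getInstance("JKS");
            truststore.load(null, null);
            truststore.setCertificateEntry("private-ca", caCert);

            // Write the truststore to disk. A stage configured with this
            // file and password then trusts certificates issued by the CA.
            try (FileOutputStream out = new FileOutputStream("truststore.jks")) {
                truststore.store(out, "changeit".toCharArray());
            }
        }
    }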

For stages that allow you to customize SSL/TLS properties, you generally configure the properties on the TLS tab of the stage. The available properties depend on the stage that you are configuring. The TLS tab can include the following properties; the sketch after the list shows how they fit together:

  • Keystore properties
  • Truststore properties
  • TLS protocols
  • Cipher suites
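As a rough illustration of how these property groups relate, the following Java sketch builds a client connection from a keystore, a truststore, a restricted set of TLS protocols, and a restricted set of cipher suites. The host, file names, passwords, protocol versions, and cipher suites are placeholders, not Data Collector defaults:

    import java.io.FileInputStream;
    import java.security.KeyStore;
    import javax.net.ssl.KeyManagerFactory;
    import javax.net.ssl.SSLContext;
    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.SSLSocketFactory;
    import javax.net.ssl.TrustManagerFactory;

    public class TlsClientSketch {
        public static void main(String[] args) throws Exception {
            // Keystore: the client's private key and certificate, used when
            // the server requests client certificate authentication.
            KeyStore keystore = KeyStore.getInstance("JKS");
            try (FileInputStream in = new FileInputStream("keystore.jks")) {
                keystore.load(in, "keystorePassword".toCharArray());
            }
            KeyManagerFactory kmf =
                KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
            kmf.init(keystore, "keystorePassword".toCharArray());

            // Truststore: the certificates of the CAs that the client trusts.
            KeyStore truststore = KeyStore.getInstance("JKS");
            try (FileInputStream in = new FileInputStream("truststore.jks")) {
                truststore.load(in, "truststorePassword".toCharArray());
            }
            TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
            tmf.init(truststore);

            // Build an SSL context from the key and trust managers.
            SSLContext context = SSLContext.getInstance("TLS");
            context.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);

            // TLS protocols and cipher suites: restrict the versions and
            // algorithms that the connection is allowed to negotiate.
            SSLSocketFactory factory = context.getSocketFactory();
            try (SSLSocket socket = (SSLSocket) factory.createSocket("example.com", 443)) {
                socket.setEnabledProtocols(new String[] {"TLSv1.2", "TLSv1.3"});
                socket.setEnabledCipherSuites(new String[] {
                    "TLS_AES_256_GCM_SHA384",
                    "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"});
                socket.startHandshake();
            }
        }
    }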
Note: You can also enable HTTPS for Data Collector to secure communication with the Data Collector UI and REST API and to allow the Control Hub web browser to use direct engine REST APIs to communicate with Data Collector. And you can enable HTTPS for cluster pipelines to secure the communication between the gateway and worker nodes in the cluster. For more information, see Enabling HTTPS in the Data Collector documentation.
You can configure SSL/TLS properties in the following stages and locations:
  • Cassandra destination
  • Control Hub API processor
  • Couchbase Lookup processor and Couchbase destination
  • Databricks Job Launcher executor
  • gRPC Client origin
  • HTTP Client origin, processor, and destination
  • HTTP Server origin
  • Kafka Consumer origin, Kafka Multitopic Consumer origin, and Kafka Producer destination - These stages require configuring additional Kafka properties. For more information, see Security in Kafka Stages.
  • MongoDB origin and destination, MongoDB Oplog origin, and MongoDB Lookup processor - These stages require configuring Java options for Data Collector, through the SDC_JAVA_OPTS environment variable or the Java configuration options in the deployment, as shown in the sketch after this list. For more information, see "Enabling SSL/TLS" in the stage documentation.
  • MQTT Subscriber origin and MQTT Publisher destination
  • OPC UA Client origin
  • Pulsar Consumer origin and Pulsar Producer destination - These stages require certificate files rather than keystore and truststore files. For more information, see "Enabling Security" in the stage documentation.
  • RabbitMQ Consumer origin and RabbitMQ Producer destination
  • REST Service origin
  • Salesforce origin, Salesforce Lookup processor, and Salesforce destination, and the Tableau CRM destination
  • SDC RPC origin and destination
  • SFTP/FTP/FTPS Client origin and destination
  • Splunk destination
  • Start Jobs origin and processor
  • Start Pipelines origin and processor
  • Syslog destination - This destination requires configuring Java options for Data Collector, through the SDC_JAVA_OPTS environment variable or the Java configuration options in the deployment, as shown in the sketch after this list. For more information, see Enabling SSL/TLS in the destination documentation.
  • TCP Server origin
  • Wait for Jobs processor
  • Wait for Pipelines processor
  • WebSocket Client origin and destination
  • WebSocket Server origin
  • Pipeline error handling, when writing error records to another pipeline
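For the stages above that are configured through Java options rather than a TLS tab, such as the MongoDB stages and the Syslog destination, the truststore is typically supplied through the standard JSSE system properties. The sketch below sets those properties programmatically with placeholder values; in a Data Collector deployment, the equivalent -D options are added to the SDC_JAVA_OPTS environment variable or to the Java configuration options, as described in the stage documentation:

    public class JsseTruststoreOptions {
        public static void main(String[] args) {
            // Equivalent to passing the following Java options to Data Collector:
            //   -Djavax.net.ssl.trustStore=/path/to/truststore.jks
            //   -Djavax.net.ssl.trustStorePassword=truststorePassword
            // The JVM-wide default SSL context then trusts certificates
            // signed by the CAs in this truststore.
            System.setProperty("javax.net.ssl.trustStore", "/path/to/truststore.jks");
            System.setProperty("javax.net.ssl.trustStorePassword", "truststorePassword");
        }
    }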