Tableau CRM
The Tableau CRM destination writes data to Salesforce Tableau CRM. The destination connects to Tableau CRM to upload external data to a dataset. For information about supported versions, see Supported Systems and Versions in the Data Collector documentation.
When you configure the destination, you define connection information, including the API version and authentication type that the destination uses to connect to Tableau CRM. You can also use a connection to configure the destination.
You specify the edgemart alias or name of the dataset to upload data to. You can also optionally define the name of the edgemart container or app that contains the dataset.
The destination can upload external data to a new dataset or to an existing dataset using an append, delete, overwrite, or upsert operation. Based on the operation type, you define the metadata of the data to be uploaded in JSON format.
The destination performs automatic recovery by default. You can configure the destination to skip recovery.
You can also optionally use an HTTP proxy to connect to Salesforce Tableau CRM. If mutual authentication is enabled in Salesforce, you can configure the destination to use mutual authentication to connect.
Changing the API Version
Data Collector ships with version 57.0.0 of the Salesforce Web Services Connector libraries. You can use a different Salesforce API version if you need to access functionality not present in version 57.0.0.
1. On the Analytics tab, set the API Version property to the version that you want to use.
2. Download the relevant version of the following JAR files from Salesforce Web Services Connector (WSC):
   - WSC JAR file - force-wsc-<version>.0.0.jar
   - Partner API JAR file - force-partner-api-<version>.0.0.jar
   Where <version> is the API version number. For information about downloading libraries from Salesforce WSC, see the Salesforce Developer documentation.
3. In the following Data Collector directory, replace the default force-wsc-57.0.0.jar and force-partner-api-57.0.0.jar files with the versioned JAR files that you downloaded:
   $SDC_DIST/streamsets-libs/streamsets-datacollector-salesforce-lib/lib/
4. Restart Data Collector for the changes to take effect.
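The WSC and Partner API JARs are also published to Maven Central under the com.force.api group ID, so downloading from there is one option. A minimal sketch, assuming you want API version 58.0.0 (the version number and download method are illustrative, not a StreamSets requirement):

# Download the versioned WSC and Partner API JARs from Maven Central.
curl -fLO https://repo1.maven.org/maven2/com/force/api/force-wsc/58.0.0/force-wsc-58.0.0.jar
curl -fLO https://repo1.maven.org/maven2/com/force/api/force-partner-api/58.0.0/force-partner-api-58.0.0.jar
# Remove the default JARs and copy in the downloaded versions.
rm $SDC_DIST/streamsets-libs/streamsets-datacollector-salesforce-lib/lib/force-wsc-57.0.0.jar \
   $SDC_DIST/streamsets-libs/streamsets-datacollector-salesforce-lib/lib/force-partner-api-57.0.0.jar
cp force-wsc-58.0.0.jar force-partner-api-58.0.0.jar \
   $SDC_DIST/streamsets-libs/streamsets-datacollector-salesforce-lib/lib/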
Define the Operation
The Tableau CRM destination can upload external data to a new dataset or to an existing dataset using one of the following operations:
- Append - Appends data to the dataset, creating the dataset if it doesn’t exist.
- Delete - Deletes rows from the dataset. The rows to delete must contain a single field with a unique identifier.
- Overwrite - Replaces data in the dataset, creating the dataset if it doesn’t exist.
- Upsert - Inserts or updates rows in the dataset, creating the dataset if it doesn’t exist. The rows to upsert must contain a single field with a unique identifier.
For more information about unique identifiers, see the Salesforce Developer documentation.
Metadata JSON
When uploading external data, the Tableau CRM destination sends Tableau CRM the following:
- Data file that contains the external data.
- Optional metadata file that describes the schema of the data in JSON format.
The Tableau CRM destination creates the data file based on the incoming record. You define the metadata in JSON format when you configure the destination.
You must define metadata for the append, upsert, and delete operations. For append and upsert, the metadata must match the metadata of the dataset being uploaded to. For delete, the metadata must be a subset of the dataset columns.
You can optionally define metadata for the overwrite operation so that Tableau CRM can correctly interpret the data type of the data. If you do not enter metadata, then Tableau CRM treats every field as text.
For more information about how Tableau CRM handles JSON metadata for uploaded external data, see the Salesforce Developer documentation.
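For illustration, here is a minimal metadata JSON sketch for a hypothetical dataset with three columns, following the Tableau CRM external data format. The sales_data alias, field names, and formats are assumptions for the example, not values that the destination requires:

{
  "fileFormat": {
    "charsetName": "UTF-8",
    "fieldsDelimitedBy": ",",
    "linesTerminatedBy": "\n"
  },
  "objects": [
    {
      "connector": "CSV",
      "fullyQualifiedName": "sales_data",
      "name": "sales_data",
      "label": "Sales Data",
      "fields": [
        {
          "fullyQualifiedName": "sales_data.AccountId",
          "name": "AccountId",
          "label": "Account Id",
          "type": "Text"
        },
        {
          "fullyQualifiedName": "sales_data.Amount",
          "name": "Amount",
          "label": "Amount",
          "type": "Numeric",
          "precision": 18,
          "scale": 2,
          "defaultValue": "0"
        },
        {
          "fullyQualifiedName": "sales_data.CloseDate",
          "name": "CloseDate",
          "label": "Close Date",
          "type": "Date",
          "format": "yyyy-MM-dd"
        }
      ]
    }
  ]
}

In this format, a Text field can be marked as the unique identifier for upsert and delete operations with the "isUniqueId": true attribute. For the full list of field attributes, see the Salesforce Developer documentation.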
Automatic Recovery
By default, the Tableau CRM destination initiates automatic data recovery after an unexpected stop of the pipeline. When the pipeline restarts, the destination directs Tableau CRM to process all incomplete uploads.
You can configure the destination to skip automatic recovery if you prefer to evaluate the need for recovery and perform manual upload recovery, as needed.
To understand the automatic recovery process, you need to know how the Tableau CRM destination typically writes data to Tableau CRM:
1. The destination uses the Salesforce SOAP API to create an InsightsExternalData record containing the configured edgemart alias, metadata JSON, operation, and other metadata. Tableau CRM sets the Action field on the InsightsExternalData record to None.
2. The destination formats each batch of records as CSV and uses the Salesforce SOAP API to create one or more InsightsExternalDataPart records for each batch of CSV-formatted data. The InsightsExternalDataPart records are associated with the InsightsExternalData record created in the previous step.
3. If no additional data arrives before the configured wait time, or if the pipeline comes to an expected graceful stop, the destination sets the Action field of the InsightsExternalData record to Process to indicate that the data is ready for processing.
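The following Java sketch illustrates these three steps with the WSC Partner API libraries that the destination ships with. It is a simplified illustration of the flow, not the destination's actual code; the credentials, edgemart alias, and CSV content are placeholders:

import com.sforce.soap.partner.PartnerConnection;
import com.sforce.soap.partner.SaveResult;
import com.sforce.soap.partner.sobject.SObject;
import com.sforce.ws.ConnectorConfig;
import java.nio.charset.StandardCharsets;

public class InsightsUploadSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder credentials; the destination reads these from stage properties.
        ConnectorConfig config = new ConnectorConfig();
        config.setUsername("user@example.com");
        config.setPassword("passwordPlusSecurityToken");
        config.setAuthEndpoint("https://login.salesforce.com/services/Soap/u/57.0");
        PartnerConnection connection = new PartnerConnection(config);

        // Step 1: create the InsightsExternalData header record.
        // The destination also attaches the configured metadata JSON here (omitted for brevity).
        SObject header = new SObject();
        header.setType("InsightsExternalData");
        header.setField("EdgemartAlias", "sales_data");
        header.setField("Format", "Csv");
        header.setField("Operation", "Append");
        header.setField("Action", "None");
        SaveResult[] created = connection.create(new SObject[]{header});
        String headerId = created[0].getId();

        // Step 2: attach a batch of CSV data as an InsightsExternalDataPart record.
        // byte[] values are base64-encoded on the wire by the SOAP API.
        byte[] csv = "AccountId,Amount,CloseDate\n001xx,100.00,2024-01-31\n"
                .getBytes(StandardCharsets.UTF_8);
        SObject part = new SObject();
        part.setType("InsightsExternalDataPart");
        part.setField("InsightsExternalDataId", headerId);
        part.setField("PartNumber", 1);
        part.setField("DataFile", csv);
        connection.create(new SObject[]{part});

        // Step 3: set Action to Process so Tableau CRM starts processing the upload.
        SObject process = new SObject();
        process.setType("InsightsExternalData");
        process.setId(headerId);
        process.setField("Action", "Process");
        connection.update(new SObject[]{process});
    }
}

Note that the data itself travels in the parts, while the header record carries the metadata and the processing state.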
If the pipeline stops unexpectedly before step 3 is complete, data can remain in Tableau CRM in an unprocessed state.
When performing automatic recovery, as the pipeline restarts, the destination queries Salesforce for InsightsExternalData records with a matching edgemart alias and an Action field set to None.
If the destination is configured to append a timestamp to the edgemart alias, then the destination searches for InsightsExternalData records with the edgemart alias as a prefix. Otherwise, the destination searches for an exact match on the edgemart alias.
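For example, with an edgemart alias of sales_data and timestamp appending enabled, the recovery query is conceptually similar to the following (an illustrative query, not necessarily the destination's exact SOQL):

SELECT Id, EdgemartAlias FROM InsightsExternalData WHERE EdgemartAlias LIKE 'sales_data%' AND Action = 'None'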
The destination then sets the Action field on matching records to Process to indicate that the uploads are ready for processing. After the destination directs Salesforce to process the incomplete uploads, it continues writing new records to Tableau CRM as described above.
Note: Recovery applies to all matching InsightsExternalData records with an Action field set to None, even if they were not written by the destination. So if you have other matching InsightsExternalData records, the destination directs Tableau CRM to process those records as well.
Manual Upload Recovery
You can use the Salesforce Developer Console to search for unprocessed data manually, and either process or delete the data. You might perform manual upload recovery when the Tableau CRM destination is configured to skip automatic recovery and the pipeline comes to an unexpected stop.
Processing unprocessed data
To process unprocessed data, perform the following steps:
1. In the Salesforce Developer Console, select the Query Editor tab and specify the query that you want to use.
   For example, you can use the following query to select unprocessed data, most recent first:
   SELECT Id, EdgemartAlias, Action, CreatedDate FROM InsightsExternalData WHERE Action = 'None' ORDER BY CreatedDate DESC
2. For the records that you wish to process, double-click the Action field and change the value to Process.
3. Click Save Rows.
Deleting unprocessed data
To delete unprocessed data, perform the following steps:
1. In the Salesforce Developer Console, select the Query Editor tab and specify a query for unprocessed InsightsExternalDataPart records.
   For example, the following query selects unprocessed InsightsExternalDataPart records, most recent first:
   SELECT Id, InsightsExternalData.EdgemartAlias, CreatedDate FROM InsightsExternalDataPart WHERE InsightsExternalData.Action = 'None' ORDER BY CreatedDate DESC
2. Select the records that you wish to delete, and click Delete Row.
3. Specify a query for the corresponding unprocessed InsightsExternalData records.
   For example, the following query selects unprocessed InsightsExternalData records, most recent first:
   SELECT Id, EdgemartAlias, CreatedDate FROM InsightsExternalData WHERE Action = 'None' ORDER BY CreatedDate DESC
4. Select the records that you wish to delete, and click Delete Row.
Configuring a Tableau CRM Destination
1. In the Properties panel, on the General tab, configure the following properties:
   - Name - Stage name.
   - Description - Optional description.
   - Required Fields - Fields that must include data for the record to be passed into the stage. Tip: You might include fields that the stage uses. Records that do not include all required fields are processed based on the error handling configured for the pipeline.
   - Preconditions - Conditions that must evaluate to TRUE to allow a record to enter the stage for processing. Click Add to create additional preconditions. Records that do not meet all preconditions are processed based on the error handling configured for the stage.
   - On Record Error - Error record handling for the stage:
     - Discard - Discards the record.
     - Send to Error - Sends the record to the pipeline for error handling.
     - Stop Pipeline - Stops the pipeline.
2. On the Analytics tab, configure the following properties:
   - Connection - Connection that defines the information required to connect to an external system. To connect to an external system, you can select a connection that contains the details, or you can directly enter the details in the pipeline. When you select a connection, Control Hub hides other properties so that you cannot directly enter connection details in the pipeline.
   - Auth Endpoint - Salesforce SOAP API authentication endpoint. For example, you might enter one of the following common values:
     - login.salesforce.com - Use to connect to a Production or Developer Edition organization.
     - test.salesforce.com - Use to connect to a sandbox organization.
     Default is login.salesforce.com.
   - API Version - Salesforce API version used to connect to Salesforce. Default is 57.0.0. If you change the version, you also must download the relevant JAR files from Salesforce Web Services Connector (WSC).
   - Authentication Type - Authentication type to use to connect to Salesforce:
     - Basic Authentication - Specify a user name and password.
     - Connected App with OAuth - Use an OAuth 2.0-enabled connected app to enable machine-to-machine OAuth with JWT Bearer Flow.
   - Username - Salesforce username in the following email format: <text>@<text>.com. When using Connected App with OAuth authentication, the user must be authorized to use the app.
   - Password - Salesforce password. If the Data Collector machine is outside the trusted IP range configured in your Salesforce environment, you must use a security token along with the password. Use Salesforce to generate a security token and then set this property to the password followed by the security token. For example, if the password is abcd and the security token is 1234, then set this property to abcd1234. For more information on generating a security token, see Reset Your Security Token. Tip: To secure sensitive information such as user names and passwords, you can use runtime resources or credential stores. For more information about credential stores, see Credential Stores in the Data Collector documentation.
   - Consumer Key - Consumer key from the connected app. Available when using Connected App with OAuth authentication. Tip: To secure sensitive information such as user names and passwords, you can use runtime resources or credential stores.
   - Private Key - Private key from the public key certificate that you used with the connected app. Ensure that the key is formatted correctly, with no spaces or extra line breaks. Available when using Connected App with OAuth authentication. Tip: To secure sensitive information such as user names and passwords, you can use runtime resources or credential stores.
   - Edgemart Alias - Dataset name. The alias must be unique across an organization.
   - Append Timestamp to Alias - Appends the timestamp of the dataset upload to the edgemart alias or dataset name. To create a new dataset for each upload of data, select this option. To append, delete, overwrite, or upsert data to an existing dataset, clear this option.
   - Edgemart Container - Name of the edgemart container or app that contains the dataset. Enter the developer name or the ID of the app rather than the display label. For example, the developer name of an app is "AnalyticsCloudPublicDatasets", but the display label of the app is "Analytics Cloud Public Datasets". To get the developer name or ID, run the following query in Salesforce:
     SELECT Id, DeveloperName, Name, AccessType, CreatedDate, Type FROM Folder WHERE Type = 'Insights'
     If not defined when the destination creates a new dataset, the destination uses the user's private app. If not defined when the destination uploads to an existing dataset, Tableau CRM resolves the app name. If defined when the destination uploads to an existing dataset, the name must match the name of the app that currently contains the existing dataset.
   - Operation - Operation to perform when uploading external data to a dataset.
   - Use Dataflow - Determines whether to use a Tableau CRM dataflow to combine multiple datasets together. Important: Using dataflows is deprecated and will be removed in a future release. We recommend configuring the destination to use the append operation to combine data into a single dataset.
   - Dataflow Name - Name of the existing dataflow. You must create the dataflow in Tableau CRM.
   - Run Dataflow After Upload - Determines whether the destination runs the dataflow each time that it uploads a dataset to Tableau CRM.
   - Metadata JSON - Metadata in JSON format that describes the schema of the data to be uploaded. Required for the append, upsert, and delete operations. Optional for the overwrite operation.
   - Skip Recovery - Determines whether the destination skips automatic recovery of unprocessed data after an unexpected stop of the pipeline. By default, the destination performs automatic recovery.
   - Dataset Wait Time (secs) - Maximum time in seconds to wait for new data to arrive. After no data has arrived in this amount of time, the destination uploads the data to Tableau CRM. The dataset wait time must be at least as long as the Batch Wait Time configured for the origin in the pipeline.
3. On the Advanced tab, configure the following properties:
   - Use Proxy - Specifies whether to use an HTTP proxy to connect to Salesforce.
   - Proxy Hostname - Proxy host.
   - Proxy Port - Proxy port.
   - Proxy Requires Credentials - Specifies whether the proxy requires a user name and password.
   - Proxy Realm - Authentication realm for the proxy server.
   - Proxy Username - User name for proxy credentials.
   - Proxy Password - Password for proxy credentials. Tip: To secure sensitive information such as user names and passwords, you can use runtime resources or credential stores. For more information about credential stores, see Credential Stores in the Data Collector documentation.
   - Use Mutual Authentication - When enabled in Salesforce, you can use SSL/TLS mutual authentication to connect to Salesforce. Mutual authentication is not enabled in Salesforce by default. To enable mutual authentication, contact Salesforce. Before enabling mutual authentication, you must store a mutual authentication certificate in the Data Collector resources directory. For more information, see Keystore and Truststore Configuration.
   - Use Remote Keystore - Enables loading the contents of the keystore from a remote credential store or from values entered in the stage properties.
   - Private Key - Private key used in the remote keystore. Enter a credential function that returns the key or enter the contents of the key.
   - Certificate Chain - Each PEM certificate used in the remote keystore. Enter a credential function that returns the certificate or enter the contents of the certificate. Using simple or bulk edit mode, click the Add icon to add additional certificates.
   - Keystore File - Path to the local keystore file. Enter an absolute path to the file or enter the following expression to define the file stored in the Data Collector resources directory:
     ${runtime:resourcesDirPath()}/keystore.jks
     By default, no keystore is used.
   - Keystore Type - Type of keystore to use. Use one of the following types:
     - Java Keystore File (JKS)
     - PKCS #12 (p12 file)
     Default is Java Keystore File (JKS).
   - Keystore Password - Password to the keystore file. A password is optional, but recommended. Tip: To secure sensitive information such as passwords, you can use runtime resources or credential stores. For more information about credential stores, see Credential Stores in the Data Collector documentation.
   - Keystore Key Algorithm - Algorithm to manage the keystore. Default is SunX509.
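For reference, one way to package an existing client certificate and private key into the JKS keystore that these properties point to is with the JDK keytool utility. A sketch, assuming the certificate and key were exported to a PKCS #12 file named client-cert.p12 and that the resources directory is $SDC_RESOURCES (both names are placeholders for your environment):

# Convert a PKCS #12 bundle into a JKS keystore in the resources directory;
# keytool prompts for the source and destination keystore passwords.
keytool -importkeystore -srckeystore client-cert.p12 -srcstoretype PKCS12 \
    -destkeystore $SDC_RESOURCES/keystore.jks -deststoretype JKS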