Snowflake
The Snowflake destination writes data to Snowflake. You can use the Snowflake destination to write to any accessible Snowflake database, including those hosted on Amazon Web Services (AWS), Microsoft Azure, and private Snowflake installations.
The Snowflake destination stages data to an internal Snowflake stage before writing it to Snowflake.
When you configure the destination, you specify the Snowflake region, database, table, and schema to use. You also specify the user account and password. You can optionally specify a custom role that overrides the default role for the user account. The user account or the custom role must have the required Snowflake privileges.
You can also use a connection to configure the destination.
You specify the write mode to use: overwrite, append, or merge. When appending data, the destination creates the table if it does not exist. To merge data, you configure additional merge properties.
You specify how to map record fields to table columns and select the behavior for data type mismatches. You can configure the destination to preserve existing capitalization for column names. You can also specify the number of connections to use and configure additional Snowflake properties.
Required Privileges and Custom Roles
The Snowflake destination requires a Snowflake role that grants the following privileges:
Object | Privilege |
---|---|
Schema | CREATE TABLE |
Table | SELECT, INSERT |
If you create a custom role for the destination, use one of the following methods to enable the stage to use the role:
- Assign the custom role as the default role
- In Snowflake, assign the custom role as the default role for the Snowflake user account specified in the stage. A Snowflake user account is associated with a single default role.
- Override the default role with the custom role
- In the stage, select Use Snowflake Role on the Connection tab. Then, specify the name of the custom role. The custom role overrides the default role assigned to the Snowflake user account specified in the stage.
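The following sketch shows how an administrator might set up such a role with the snowflake-connector-python package. It is illustrative only: TRANSFORMER_ROLE, PIPELINE_USER, MYDB, and MYSCHEMA are placeholder names, and your environment may also require USAGE grants on the warehouse, database, and schema.

```python
# Illustrative only: all object names below are placeholders, not values
# defined by this documentation.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myaccount",   # placeholder account identifier
    user="ADMIN_USER",     # a user allowed to manage roles and grants
    password="***",
)
cur = conn.cursor()
try:
    cur.execute("CREATE ROLE IF NOT EXISTS TRANSFORMER_ROLE")
    # Schema privilege required by the destination.
    cur.execute("GRANT CREATE TABLE ON SCHEMA MYDB.MYSCHEMA TO ROLE TRANSFORMER_ROLE")
    # Table privileges required by the destination.
    cur.execute("GRANT SELECT, INSERT ON ALL TABLES IN SCHEMA MYDB.MYSCHEMA TO ROLE TRANSFORMER_ROLE")
    # Grant the role to the pipeline user, then either make it the default
    # role (first method above) or select Use Snowflake Role in the stage
    # (second method above).
    cur.execute("GRANT ROLE TRANSFORMER_ROLE TO USER PIPELINE_USER")
    cur.execute("ALTER USER PIPELINE_USER SET DEFAULT_ROLE = TRANSFORMER_ROLE")
finally:
    cur.close()
    conn.close()
```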
Write Mode
The write mode determines how the Snowflake destination writes data to Snowflake:
- Overwrite existing table
- If the table exists, the destination drops and recreates the table. Then, the destination inserts all data into the table.
- Append rows to existing table or create table if none exists
- If the table exists, the destination appends data to the table. If the table does not exist, the destination creates the table.
- Merge rows to existing table
- The destination merges data with data in an existing table. The destination performs inserts, updates, and deletes based on the specified merge properties.
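Outside of Transformer, the first two modes correspond to the standard Spark DataFrameWriter save modes used with the Snowflake connector for Spark. A minimal sketch, assuming placeholder connection values; merge has no direct DataFrameWriter equivalent and is sketched in the next section:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("snowflake-write-modes").getOrCreate()
df = spark.createDataFrame([(1, "a"), (2, "b")], ["ID", "VAL"])

# Placeholder connection options for the Snowflake connector for Spark.
sf_options = {
    "sfURL": "myaccount.snowflakecomputing.com",
    "sfUser": "PIPELINE_USER",
    "sfPassword": "***",
    "sfDatabase": "MYDB",
    "sfSchema": "MYSCHEMA",
    "sfWarehouse": "MYWH",
}

# Overwrite existing table: drop and recreate the table, then insert all data.
(df.write.format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    .option("dbtable", "MY_TABLE")
    .mode("overwrite")
    .save())

# Append rows to existing table, or create the table if none exists.
(df.write.format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    .option("dbtable", "MY_TABLE")
    .mode("append")
    .save())
```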
Merge Properties
- Join Key
- One or more key columns in the table. Used to perform updates and deletes and to ensure that duplicate rows do not exist for inserts. Pipeline records must include a matching field name.
- When Clause
- Action that the destination performs when a record meets the specified conditions. You can specify multiple merge configurations for the destination to perform.

  Important: The destination performs the writes in the specified order. Best practice is to list merge configurations with the smallest number of affected records first, progressing to the largest number of affected records. When defining multiple merge configurations of the same type, carefully consider the order that you use.
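Conceptually, these properties translate into a Snowflake MERGE statement. A hedged sketch follows, with STAGED_DATA standing in for the staged pipeline data and all object names as placeholders; note how the clause order controls which write happens first:

```python
# Illustrative MERGE corresponding to a join key of ID and three merge
# configurations; clauses execute in the order listed.
merge_sql = """
MERGE INTO MYDB.MYSCHEMA.MY_TABLE AS target
USING MYDB.MYSCHEMA.STAGED_DATA AS input
  ON target.ID = input.ID                      -- join key
WHEN MATCHED AND input.DELETED = TRUE THEN     -- smallest set of records first
  DELETE
WHEN MATCHED THEN                              -- then updates
  UPDATE SET target.VAL = input.VAL
WHEN NOT MATCHED THEN                          -- then inserts
  INSERT (ID, VAL) VALUES (input.ID, input.VAL)
"""
# Run with a snowflake.connector cursor, e.g. cur.execute(merge_sql).
```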
Snowflake Data Types
When writing to Snowflake, the Snowflake destination converts the Spark data types used in the pipeline to Snowflake data types. The following table describes how this conversion occurs.
Spark Data Type | Snowflake Data Type |
---|---|
ArrayType | VARIANT |
BooleanType | BOOLEAN |
ByteType | INTEGER |
DateType | DATE |
DecimalType | DECIMAL |
DoubleType | DOUBLE |
FloatType | FLOAT |
IntegerType | INTEGER |
LongType | INTEGER |
MapType | VARIANT |
ShortType | INTEGER |
StringType | VARCHAR(N) for specified lengths, otherwise VARCHAR |
StructType | VARIANT |
TimestampType | TIMESTAMP |
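As an illustration, the table above can be expressed as a simple lookup, for example when pre-creating a table whose columns must line up with pipeline fields. A sketch; the StringType length handling is simplified:

```python
# Spark-to-Snowflake type mapping from the table above.
SPARK_TO_SNOWFLAKE = {
    "ArrayType": "VARIANT",
    "BooleanType": "BOOLEAN",
    "ByteType": "INTEGER",
    "DateType": "DATE",
    "DecimalType": "DECIMAL",
    "DoubleType": "DOUBLE",
    "FloatType": "FLOAT",
    "IntegerType": "INTEGER",
    "LongType": "INTEGER",
    "MapType": "VARIANT",
    "ShortType": "INTEGER",
    "StringType": "VARCHAR",  # VARCHAR(N) when a length is specified
    "StructType": "VARIANT",
    "TimestampType": "TIMESTAMP",
}

def column_ddl(name: str, spark_type: str) -> str:
    """Return a Snowflake column definition for a Spark field."""
    return f"{name} {SPARK_TO_SNOWFLAKE[spark_type]}"

print(column_ddl("ORDER_TS", "TimestampType"))  # ORDER_TS TIMESTAMP
```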
Configuring a Snowflake Destination
Configure the Snowflake destination to write data to Snowflake.
1. On the Properties panel, on the General tab, configure the following properties:
   - Name
   - Stage name.
   - Description
   - Optional description.
   - Stage Library
   - Stage library to use to connect to Snowflake:
     - Snowflake cluster-provided libraries - The cluster where the pipeline runs has the Snowflake libraries installed, and therefore has all of the necessary libraries to run the pipeline.
     - Snowflake Transformer-provided libraries - Transformer passes the necessary libraries with the pipeline to enable running the pipeline. Use when running the pipeline locally or when the cluster where the pipeline runs does not include the Snowflake libraries.

     Note: When using additional Snowflake stages in the pipeline, ensure that they use the same stage library.
2. On the Connection tab, configure the following properties:
   - Connection
   - Connection that defines the information required to connect to an external system. You can select a connection that contains the details, or you can directly enter the details in the pipeline. When you select a connection, Control Hub hides the other properties so that you cannot directly enter connection details in the pipeline.

     To create a new connection, click the Add New Connection icon. To view and edit the details of the selected connection, click the Edit Connection icon.
   - Snowflake Region
   - Region where the Snowflake warehouse is located. Select one of the following:
     - An available Snowflake region.
     - Other - Enables specifying a Snowflake region not listed in the property.
     - Custom JDBC URL - Enables specifying a virtual private Snowflake installation.

     Available when Include Organization is disabled.
   - Custom Snowflake Region
   - Custom Snowflake region. Available when using Other as the Snowflake region.
   - Virtual Private Snowflake URL
   - Custom JDBC URL to use when using a virtual private Snowflake installation.
   - Account
   - Snowflake account name.
   - User
   - Snowflake user name. The user account, or the custom role that overrides the default role for this user account, must have the required Snowflake privileges. For details, see Required Privileges and Custom Roles.
   - Password
   - Snowflake password.
   - Use Snowflake Role
   - Overrides the default role for the specified user account. The custom role must have the required Snowflake privileges.
   - Snowflake Role Name
   - Name of the custom Snowflake role to use.
   - Connection Properties
   - Additional Snowflake Connection for Spark properties to use. For information about additional options, see the Snowflake documentation. To add properties, click the Add icon and define the property name and value. Specify the property names and values as expected by Snowflake. You can use simple or bulk edit mode to configure the properties.
   - Warehouse
   - Snowflake warehouse.
   - Database
   - Snowflake database.
   - Schema
   - Snowflake schema.
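For reference, a sketch of how these Connection tab properties roughly correspond to the "sf" option names of the Snowflake connector for Spark, which Transformer populates from the stage configuration; all values are placeholders:

```python
# Placeholder values; Transformer builds the equivalent options for you.
sf_options = {
    "sfURL": "myaccount.us-east-1.snowflakecomputing.com",  # region / custom JDBC URL
    "sfUser": "PIPELINE_USER",       # User
    "sfPassword": "***",             # Password
    "sfRole": "TRANSFORMER_ROLE",    # Snowflake Role Name, when Use Snowflake Role is set
    "sfWarehouse": "MYWH",           # Warehouse
    "sfDatabase": "MYDB",            # Database
    "sfSchema": "MYSCHEMA",          # Schema
    # Entries from Connection Properties are passed through as additional
    # connector options.
}
```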
3. On the Table tab, configure the following properties:
   - Table
   - Name of the Snowflake table to write to.
   - Keep Column Case
   - Preserves the case used in field names when writing data to Snowflake. When not selected, field names are converted to all caps when writing to Snowflake, unless the names are enclosed in quotation marks.
   - Column Mapping Mode
   - Mode to map record fields to Snowflake table columns:
     - By Order - Record fields map to table columns based on order.
     - By Name - Record fields map to table columns based on names. The mapping is not case-sensitive.
   - Mismatch Behavior
   - Action to take when field data is the wrong data type for the mapped table column:
     - Error - Generates an error that stops the pipeline.
     - Ignore - Drops the invalid data from the record and inserts a null value in the table column.
   - Write Mode
   - Write mode for the stage:
     - Overwrite existing table - If the table exists, the destination drops and recreates the table. Then, the destination inserts all data into the table.
     - Append rows to existing table or create table if none exists - If the table exists, the destination appends data to the table. If the table does not exist, the destination creates the table.

       Append data to an existing table only when each record has a unique primary key. If a row with the same primary key already exists in the table, the pipeline fails with a primary key violation.
     - Merge rows to existing table - The destination merges data with data in an existing table. The destination performs inserts, updates, and deletes based on the specified merge properties.
   - Join Key
   - One or more key columns in the table. Used to perform updates and deletes and to ensure that duplicate rows do not exist for inserts. Pipeline records must include a matching field name.
   - Merge Configuration
   - Action that the destination performs when a record meets the specified conditions. You can specify multiple merge configurations for the destination to perform.

     Important: The destination performs the writes in the specified order. Best practice is to list merge configurations with the smallest number of affected records first, progressing to the largest number of affected records. When defining multiple merge configurations of the same type, carefully consider the order that you use.

     Configure the following properties:
     - When Clause - Determines when an action is performed. Select one of the following options:
       - When Matched - Use with the Update and Delete operations. Updates or deletes a row upon finding a matching join key.
       - When Not Matched - Use with the Insert operation. Inserts a row when no matching join key exists in the table.
       - When Not Matched by Source - Do not use. Snowflake does not support this option.
     - Additional Condition - An optional additional condition for the merge. Use $input. to prefix field names in records, and $target. to prefix column names in the table.
     - Operation - The operation to perform: Insert, Update, or Delete. The operation that you can use depends on the When Clause option that you select.

     Click the Add icon to specify another merge clause.

     The Join Key and Merge Configuration properties are available with the Merge rows to existing table write mode.
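The column mapping, mismatch, and case properties on this tab mirror options of the Snowflake connector for Spark. A hedged sketch of the underlying behavior, assuming the connector option names column_mapping, column_mismatch_behavior, and keep_column_case, with placeholder connection values:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, "a")], ["Id", "Val"])
sf_options = {
    "sfURL": "myaccount.snowflakecomputing.com",  # placeholders throughout
    "sfUser": "PIPELINE_USER",
    "sfPassword": "***",
    "sfDatabase": "MYDB",
    "sfSchema": "MYSCHEMA",
    "sfWarehouse": "MYWH",
}

(df.write.format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    .option("dbtable", "MY_TABLE")
    .option("column_mapping", "name")              # Column Mapping Mode: By Name
    .option("column_mismatch_behavior", "ignore")  # Mismatch Behavior: Ignore
    .option("keep_column_case", "on")              # Keep Column Case selected
    .mode("append")                                # Append rows write mode
    .save())
```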
4. Optionally, on the Advanced tab, configure the following property:
   - Connection Pool
   - Maximum number of connections to Snowflake that the stage uses. Default is 4.

     Increasing this property can improve performance. However, Snowflake warns that setting this property to an arbitrarily high value can adversely affect performance. The default is the recommended value.