Try Transformer for Snowflake

This tutorial covers the steps needed to try Transformer for Snowflake. You will learn how to work with the Control Hub user interface, build a basic Snowflake pipeline, and preview how the pipeline processes data.

Although the tutorial provides a simple use case, keep in mind that StreamSets is a powerful platform that enables you to build and run large numbers of complex pipelines.

To complete this tutorial, you must have an existing StreamSets account. If you do not have one, use the following URL to sign up for a free account: https://streamsets.com/access/
Note: When you sign up, StreamSets grants your user account all of the roles required to complete the tasks in this tutorial. If you are invited to join an existing organization, your user account requires the Pipeline Editor and Job Operator roles to complete tutorial tasks.
To try Transformer for Snowflake, complete the following steps:
  1. Complete Prerequisite Tasks
  2. Build a Snowflake Pipeline
  3. Run a Job

Complete Prerequisite Tasks

Before you start the tutorial, perform the following tasks:
Verify Snowflake requirement for network policies
If you are just starting to use Transformer for Snowflake, and your Snowflake account uses network policies, complete the Snowflake requirement.
Verify user permissions
Make sure that the user account for the tutorial has the following permissions on the database where you create the tables:
  • Read
  • Write
  • Create Table
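If your tutorial user works through a dedicated role, you can grant roughly equivalent privileges in Snowflake with statements like the following. This is only a sketch: the TUTORIAL_ROLE role and the TUTORIAL_DB.PUBLIC database and schema are placeholders for your own objects.
    -- Sketch only: replace TUTORIAL_ROLE and TUTORIAL_DB.PUBLIC with your own role, database, and schema.
    GRANT USAGE ON DATABASE tutorial_db TO ROLE tutorial_role;
    GRANT USAGE, CREATE TABLE ON SCHEMA tutorial_db.public TO ROLE tutorial_role;
    GRANT SELECT, INSERT ON ALL TABLES IN SCHEMA tutorial_db.public TO ROLE tutorial_role;
    GRANT SELECT, INSERT ON FUTURE TABLES IN SCHEMA tutorial_db.public TO ROLE tutorial_role;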
Create source tables
Use the following SQL queries to create and populate two tables in Snowflake.
Notice that the WAREHOUSE_WEST table has a Bin column that the WAREHOUSE_EAST table does not.
  • WAREHOUSE_EAST table
    CREATE OR REPLACE TABLE warehouse_east (
        id integer,
        name string,
        inventory integer
    );
    INSERT INTO warehouse_east VALUES (1, 'toolbox', 5), (2, 'hammer', 11), (3, 'multitool', 6);

    In the Snowflake console, the resulting WAREHOUSE_EAST table contains three rows: toolbox with an inventory of 5, hammer with 11, and multitool with 6.

  • WAREHOUSE_WEST table
    CREATE OR REPLACE TABLE warehouse_west (
        id integer,
        name string,
        inventory integer,
        bin string
    );
    INSERT INTO warehouse_west VALUES (3, 'multitool', 25, '2-1'), (4, 'wrench', 30, '2-2');

    In the Snowflake console, the resulting WAREHOUSE_WEST table contains two rows: multitool with an inventory of 25 in bin 2-1, and wrench with 30 in bin 2-2.

Build a Snowflake Pipeline

With these steps, you build a Snowflake pipeline that uses two Snowflake Table origins to read from the two source tables that you created, a Union processor to merge the data, and a Snowflake Table destination to write to a new output table.

You also preview the pipeline to verify how the stages process data.

  1. Use the following URL to log in to StreamSets: https://cloud.login.streamsets.com/

    Control Hub displays the Getting Started view.

  2. Click Quick Start > Create a pipeline. Or, in the navigation panel, click Build > Pipelines, then click Create a Pipeline.
  3. Enter the following pipeline name: Snowflake Tutorial.
  4. For Engine Type, select Transformer for Snowflake, start with the default blank pipeline, and then click Next.
  5. In the Share Pipeline step, click Save & Open in Canvas.

    A blank pipeline opens in the canvas.

  6. On the General tab of the pipeline properties, specify the following Snowflake properties:
    • Snowflake URL - Enter the URL of your Snowflake instance. For example:

      https://<yourcompany>.snowflakecomputing.com

    • Role - Enter the name of the role used to access your Snowflake account.
      Note: For this tutorial, the role must have the following permissions:
      • Read
      • Write
      • Create Table
    • Warehouse - Enter the name of the warehouse where you created the tutorial tables.
    • Default Schema - Specify the database and schema where you created the tutorial tables.

      This schema becomes the default schema for the origins and destinations in the pipeline. You can override this property in those stages when necessary.

      Enter the database and schema name, or click the Select Schema icon to explore your Snowflake data for the database and schema to use.

      You should have read, write, and create table permissions for the database.
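
    For example, a hypothetical configuration might use values like the following. All of these names are placeholders for objects in your own Snowflake account:

      Snowflake URL:  https://mycompany.snowflakecomputing.com
      Role:           TUTORIAL_ROLE
      Warehouse:      TUTORIAL_WH
      Default Schema: TUTORIAL_DB.PUBLIC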

  7. In the canvas, click the Add Stage icon to open the stage selector.
  8. Select Snowflake Table.

    The origin is added to the canvas.

  9. In the properties panel below the canvas, click the General tab, then name the origin Reads WAREHOUSE_EAST.
  10. Click the Table tab.
  11. Click the Select Table icon. Navigate to the schema and database where you created the tutorial tables, then select the WAREHOUSE_EAST table.

    We want to read all data in the table, so no other properties are needed.

  12. To add another origin, click the Add Stage icon in the toolbar above the canvas, click Origins, and then select Snowflake Table.

    Select the second origin and drag it below the first origin in the pipeline canvas.

  13. Click the General tab for the origin, then name the origin Reads WAREHOUSE_WEST.
  14. Click the Table tab.
  15. Click the Select Table icon. Navigate to the schema and database where you created the tutorial tables, then select the WAREHOUSE_WEST table.

    We want to read all data in the table, so no other properties are needed.

  16. Click the Add Stage icon connected to the second origin, click Processors in the stage selector, and then select the Union processor.
  17. Click the General tab for the processor, then name the processor Union All.
    Using the stage selector connects the new stage to the selected stage. To connect the other origin, click its output and drag it onto the processor.
    The input order is irrelevant in this case, but if you want to change the order, select the processor, then select the Reorder icon in the pop-up menu.
  18. Click the Union tab, then configure the following properties:
    • Operation: Union
    • Column Handling: Pass all Columns

    This passes all columns from both origins and ensures that all rows have the superset of columns passed into the processor. That is, any columns that exist only in one set of rows are added to the other rows. These new columns are populated with null values.
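
    In plain Snowflake SQL terms, this column handling behaves roughly like the following query. This is just a sketch of the behavior, not something the processor asks you to write:

      SELECT id, name, inventory, CAST(NULL AS STRING) AS bin FROM warehouse_east
      UNION ALL
      SELECT id, name, inventory, bin FROM warehouse_west;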

  19. Click the Add Stage icon connected to the processor, click Destinations in the stage selector, and then select the Snowflake Table destination.
  20. Click the General tab for the destination, then name the stage Tutorial Output.
  21. Click the Table tab.
  22. Click the Table text box.

    The Overwrite Database, Schema, and Table dialog box displays. By default, the destination uses the database and schema that you defined in the pipeline properties. You can optionally override the database and schema to write the output to another schema.

  23. Enter TUTORIAL_OUTPUT for the table, and then click Save.
  24. Configure the following additional properties:
    • Write Mode: Overwrite Table

      This ensures that if you run this pipeline again, you receive the expected results.

    • Overwrite Mode: Drop Table
  25. Use the default values for all remaining properties.

    Note that Create Table is enabled by default so that the destination creates the TUTORIAL_OUTPUT table before writing to it.
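
    With these settings, each pipeline run drops and recreates the output table rather than appending to it. Conceptually, the destination's work resembles a CREATE OR REPLACE TABLE ... AS SELECT statement over the merged rows; this sketch repeats the union query shown earlier for completeness:

      CREATE OR REPLACE TABLE tutorial_output AS
      SELECT id, name, inventory, CAST(NULL AS STRING) AS bin FROM warehouse_east
      UNION ALL
      SELECT id, name, inventory, bin FROM warehouse_west;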

    Now that the pipeline is complete, you can preview how the pipeline works. Previewing a pipeline helps you develop pipeline logic. When you preview the pipeline, you can view how a sample set of rows changes as it passes through the pipeline.
  26. In the toolbar above the pipeline canvas, click the Preview icon.
  27. If you have not yet stored Snowflake credentials with your account, enter the following information in the Snowflake Credentials dialog box and click Save:
    If you have already specified Snowflake credentials, the Snowflake Credentials dialog box does not appear.
    • Username - Snowflake user name.
    • Authentication Method - Authentication method to use: password or private key.
    • Password - Password for the Snowflake account.

      Available when using password authentication.

    • Private Key - Private key for the Snowflake account. Enter a PKCS#1 or PKCS#8 private key and include the key delimiters.

      For example, when entering a PKCS#8 private key, include the -----BEGIN PRIVATE KEY----- and -----END PRIVATE KEY----- key delimiters.

      Available when using private key authentication.

    • Role - Optional role to use. Use to limit access to the Snowflake account.
      Note: For this tutorial, the role must have the following permissions:
      • Read
      • Write
      • Create Table
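
    If you plan to use private key authentication, the matching public key must already be registered with your Snowflake user. As a rough sketch, with my_user and the key value as placeholders, that registration in Snowflake looks like this:

      ALTER USER my_user SET RSA_PUBLIC_KEY='<public key value>';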

    The first origin is selected in the pipeline canvas, and preview displays several rows of output data read by the origin.

  28. In the canvas, select the processor so that you can review the output data.

    Notice how the two input streams have been merged into one. Rows from WAREHOUSE_EAST have a new Bin column with null values, because the Bin column exists only in the WAREHOUSE_WEST data.

  29. Click Close Preview to close the preview.
    Now that you know how the pipeline processes the data, let's create and run a job to see the pipeline in action.

Run a Job

A job is the execution of the dataflow that a pipeline represents.

When pipeline development is complete, you check in the pipeline to indicate that the pipeline is ready to be added to a job and run. When you check in a pipeline, you enter a commit message. StreamSets maintains the commit history of each pipeline.

Because the Snowflake engine is hosted on the StreamSets DataOps Platform, job configuration is simpler than for Data Collector or Transformer pipelines. As a result, you can use the default values when creating the job.

  1. With the pipeline open in the canvas, click the Check In icon.
  2. Enter a commit message. You can use the default: New Pipeline.
    As a best practice, state what changed in this pipeline version so that you can track the commit history of the pipeline.
  3. Click Publish and Next.

    The Share Pipeline step displays. You can skip this step for now. When additional users join your organization, you must share the pipeline to grant them access to it.

  4. Click Save & Create New Job.

    The Create Job Instances wizard appears.

  5. Use the defaults in the Define Job step, and click Next.
  6. In the Select Pipeline step, click Next.
  7. In the Review & Start step, click Start & Monitor Job.

    The job displays in the canvas, and Control Hub indicates that the job is active. When the job completes, the job has an Inactive status and displays the time that the job started and stopped.

  8. Click the Summary tab to view the input and output row count for the completed job.

  9. To view the results of the pipeline, go to your Snowflake console and navigate to the database, warehouse, and schema that you used for the tutorial. Notice that the pipeline created a new TUTORIAL_OUTPUT table.

    When you preview the data in the table, you should see the five rows read from the two source tables.

    Notice that the table includes a Bin column that existed only in the WAREHOUSE_WEST table. As you saw when previewing the pipeline, rows that did not previously have Bin data now have nulls in that column.
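
    To double-check the results from a Snowflake worksheet, you can run a query like the following. The expected rows are shown as comments:

      SELECT * FROM tutorial_output ORDER BY id, inventory;
      -- ID | NAME      | INVENTORY | BIN
      -- 1  | toolbox   | 5         | NULL
      -- 2  | hammer    | 11        | NULL
      -- 3  | multitool | 6         | NULL
      -- 3  | multitool | 25        | 2-1
      -- 4  | wrench    | 30        | 2-2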

    Congratulations! You have built and run your first Snowflake pipeline.

Next Steps

Now that you are familiar with building Snowflake pipelines and running jobs, you might use the following suggestions to deepen your understanding of Snowflake pipelines and Control Hub.
Note: This documentation contains the information that you need to create Snowflake pipelines. Information about general Control Hub features is available in the Control Hub documentation.
Modify the tutorial pipeline
Add a couple of processors to the tutorial pipeline to see how easily you can add a wide range of processing to the pipeline:
  • Add a Filter processor to remove the multitool rows from the data set.
  • Use a Snowflake SQL Evaluator to double the inventory values, and overwrite the existing values.
  • To see how the data drift feature works, change the destination Overwrite Mode property from Drop Table to Truncate Table. Then, add a Column Renamer processor to rename the Bin column.
Remember that you can preview the pipeline to see how each processor does its job. For more information, see the Control Hub documentation.
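For reference, the first two suggestions above amount to something like the following plain SQL. This is only a sketch of the intended result, not the processors' actual configuration:
    SELECT id, name, inventory * 2 AS inventory, bin
    FROM tutorial_output
    WHERE name <> 'multitool';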
Create a new pipeline
Create a new pipeline using your own Snowflake data or the Snowflake sample data. You might explore some of the following functionality:
  • If you have a Snowflake query that you want to enhance, use the Snowflake Query origin to generate data for the pipeline, then add processors to perform additional processing.
  • Use the Join processor to join data from two Snowflake tables or views.
  • As you develop the pipeline, use the Trash destination with data preview to see if the pipeline processes data as expected. For more information about data preview, see the Control Hub documentation.
Explore advanced features
  • Try using an existing user-defined function (UDF) or define one in the pipeline.
  • If you have an entire pipeline that you want to run with small changes, use runtime parameters to easily reuse and adapt pipeline logic.
  • If you have a series of stages that you want to reuse in multiple pipelines, try creating a pipeline fragment.
  • Configure Snowflake pipeline defaults to make configuring pipelines easier when you use the same Snowflake details in all or most of your pipelines.
Learn more about Control Hub
Add users to your organization
  • Invite other users to join your organization and collaboratively manage pipelines as a team.
  • To create a multitenant environment within your organization, create groups of users. Grant roles to these groups and share objects within the groups to grant each group access to the appropriate objects.