

This document outlines the process and requirements for integrating the RevSure marketing attribution platform with a customer's Snowflake instance, enabling attribution data to be written back into that instance. Two primary approaches are covered:
The write-back flow is initiated after the Primary Connection Sync (in this case, Salesforce) completes.
1. RevSure Airflow Spark Job to Snowflake: Data processing is handled by Spark jobs, which directly write data into Snowflake using the Snowflake Spark connector.
2. RevSure Airflow to S3 to Snowflake: Data is first uploaded to an S3 bucket, then loaded into Snowflake using Snowflake's COPY INTO command.
Architecture Overview: In this setup, after the Primary Connection completes the Projection Pipeline, RevSure Airflow triggers a Spark job that processes and writes data directly into Snowflake using the Snowflake Spark connector. This is an efficient solution for handling large datasets and performing complex transformations.
Steps:
Example JDBC URL:
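The original example did not survive extraction; the following is a generic Snowflake JDBC URL sketch, with angle-bracket placeholders standing in for customer-specific values:

```
jdbc:snowflake://<account_identifier>.snowflakecomputing.com/?db=<database>&schema=<schema>&warehouse=<warehouse>&role=<role>
```

The account identifier, database, schema, warehouse, and role would all be supplied by the customer when the connection is configured.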
Example Scala Code:
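The original snippet is missing here; below is a minimal Scala sketch of the direct write-back path using the Snowflake Spark connector. The S3 path, table name, and connection options are illustrative placeholders, not values from the original document:

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

object SnowflakeWriteBack {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("RevSureSnowflakeWriteBack")
      .getOrCreate()

    // Placeholder connection options -- replace with customer-specific values.
    val sfOptions = Map(
      "sfURL"       -> "<account_identifier>.snowflakecomputing.com",
      "sfUser"      -> "<user>",
      "sfPassword"  -> "<password>",
      "sfDatabase"  -> "<database>",
      "sfSchema"    -> "<schema>",
      "sfWarehouse" -> "<warehouse>"
    )

    // Read the processed attribution data produced by the Projection Pipeline
    // (hypothetical S3 location).
    val df = spark.read.parquet("s3://<bucket>/<processed-data-path>/")

    // Write directly into Snowflake via the Snowflake Spark connector.
    df.write
      .format("net.snowflake.spark.snowflake")
      .options(sfOptions)
      .option("dbtable", "<target_table>")
      .mode(SaveMode.Overwrite)
      .save()
  }
}
```

Overwrite mode is shown for simplicity; an append or merge strategy may be preferable depending on how the customer wants write-back data maintained.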
Architecture Overview: Data is written to an intermediate S3 bucket before being loaded into Snowflake using the COPY INTO command. This approach is simpler to manage, especially when dealing with moderate data volumes.
Steps:
Example Python Code for Airflow Task:
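The original Airflow example is missing; the following Python sketch shows the two tasks this approach implies: an upload to the intermediate S3 bucket, followed by a COPY INTO load executed through the Airflow Snowflake provider. DAG id, bucket, stage, file paths, and connection id are all assumed placeholder names:

```python
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator

# Placeholder names -- replace with customer-specific values.
S3_BUCKET = "<staging-bucket>"
S3_KEY = "exports/attribution_data.csv"
LOCAL_FILE = "/tmp/attribution_data.csv"


def upload_to_s3():
    """Upload the processed export file to the intermediate S3 bucket."""
    boto3.client("s3").upload_file(LOCAL_FILE, S3_BUCKET, S3_KEY)


with DAG(
    dag_id="snowflake_write_back",
    start_date=datetime(2024, 1, 1),
    schedule=None,  # triggered after the Primary Connection Sync completes
    catchup=False,
) as dag:
    upload = PythonOperator(task_id="upload_to_s3", python_callable=upload_to_s3)

    load = SnowflakeOperator(
        task_id="copy_into_snowflake",
        snowflake_conn_id="snowflake_default",
        sql=f"""
            COPY INTO <database>.<schema>.<target_table>
            FROM @<external_stage>/{S3_KEY}
            FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        """,
    )

    upload >> load
```

This assumes a Snowflake external stage pointing at the staging bucket and a preconfigured Airflow connection; both would be set up during onboarding.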
Example SQL Command:
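The original command is missing; a representative Snowflake COPY INTO statement for this flow might look like the following, with the table, bucket path, and storage integration as assumed placeholders:

```sql
-- Load staged files from the intermediate S3 bucket into the target table.
COPY INTO <database>.<schema>.<target_table>
FROM 's3://<staging-bucket>/exports/'
STORAGE_INTEGRATION = <snowflake_storage_integration>
FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);
```

A storage integration is shown rather than inline AWS credentials, since that is Snowflake's recommended way to grant stage access to an S3 bucket.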