How to Load Data from Salesforce to Snowflake Step-by-Step

Extract data from Salesforce

You can’t use a Data Warehouse without data, so the first and most important step is to extract the data you want from Salesforce.

Salesforce has many products, and it’s also a pioneer in cloud computing and the API economy. This means that it offers a plethora of APIs to access the services and the underlying data. In this post, we’ll focus only on Salesforce CRM, which again exposes a large number of APIs.

People who write their own scripts to ETL cloud data from their data sources can benefit from this excellent post from Salesforce's Helpdesk about which API to use.

You will have the following options:

  • REST API
  • SOAP API
  • Chatter REST API
  • Bulk API
  • Metadata API
  • Streaming API
  • Apex REST API
  • Apex SOAP API
  • Tooling API

Pull data from the Salesforce REST API

From the above list, the complexity and feature richness of the Salesforce API is more than evident. The REST API and the SOAP API expose the same functionality over different protocols. You can interact with the REST API using tools like cURL or Postman, or using an HTTP client for your favorite language or framework. A few suggestions:

  • Apache HttpClient for Java
  • Spray-client for Scala
  • Hyper for Rust
  • rest-client for Ruby
  • http.client for Python

The Salesforce REST API supports OAuth 2.0 authentication. More information can be found in the Understanding Authentication article. After you successfully authenticate with the REST API, you can start interacting with its resources and fetching data to load into a warehouse.
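
As a rough sketch of what this looks like in practice, here is the username-password OAuth 2.0 flow in Python with the requests library. The connected app credentials and user details below are placeholders you would replace with your own:

PYTHON
import requests

# Placeholder credentials -- substitute the values from your own
# Salesforce connected app and user account.
TOKEN_URL = "https://login.salesforce.com/services/oauth2/token"

payload = {
    "grant_type": "password",  # username-password OAuth 2.0 flow
    "client_id": "YOUR_CONSUMER_KEY",
    "client_secret": "YOUR_CONSUMER_SECRET",
    "username": "user@example.com",
    "password": "PASSWORD_PLUS_SECURITY_TOKEN",
}

response = requests.post(TOKEN_URL, data=payload)
response.raise_for_status()
auth = response.json()

access_token = auth["access_token"]  # the Bearer token used below
instance_url = auth["instance_url"]  # base URL of your Salesforce instance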

It’s easy to get a list of all the resources we have access to. For example, using curl, we can execute the following:

SH
curl https://na1.salesforce.com/services/data/v26.0/ -H "Authorization: Bearer token"

A typical response from the server will be a list of available resources in JSON or XML, depending on what you have asked as part of your request.

JSON
{
  "sobjects" : "/services/data/v26.0/sobjects",
  "licensing" : "/services/data/v26.0/licensing",
  "connect" : "/services/data/v26.0/connect",
  "search" : "/services/data/v26.0/search",
  "query" : "/services/data/v26.0/query",
  "tooling" : "/services/data/v26.0/tooling",
  "chatter" : "/services/data/v26.0/chatter",
  "recent" : "/services/data/v26.0/recent"
}

The Salesforce REST API is very expressive. It also supports a language called Salesforce Object Query Language (SOQL) for executing arbitrarily complex queries. For example, the following curl command will return the Name field of all Account records:

SH
curl https://na1.salesforce.com/services/data/v26.0/query/?q=SELECT+name+from+Account -H "Authorization: Bearer token"

and the result will look like the following:

JSON
{
  "done" : true,
  "totalSize" : 14,
  "records" : [
    {
      "attributes" : {
        "type" : "Account",
        "url" : "/services/data/v26.0/sobjects/Account/001D000000IRFmaIAH"
      },
      "Name" : "Test 1"
    },
    ...
  ]
}
Again, the result can be serialized as either JSON or XML. We recommend using JSON because the most popular data warehousing solutions natively support it, which makes the whole data connection process easier.

With XML, you might have to transform it first into JSON before loading any data to the repository. More information about SOQL can be found on the Salesforce Object Query Language specification page.
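
To illustrate, here is a minimal Python sketch that runs a SOQL query and handles pagination, reusing the access_token and instance_url obtained during authentication. When a result set spans multiple batches, the response sets done to false and carries a nextRecordsUrl pointing at the next page:

PYTHON
import requests

def fetch_all(instance_url, access_token, soql):
    """Run a SOQL query and follow nextRecordsUrl until every page is fetched."""
    headers = {"Authorization": f"Bearer {access_token}"}
    url = f"{instance_url}/services/data/v26.0/query/"
    params = {"q": soql}
    records = []
    while url:
        resp = requests.get(url, headers=headers, params=params)
        resp.raise_for_status()
        body = resp.json()
        records.extend(body["records"])
        # When "done" is false, nextRecordsUrl points at the next batch.
        next_path = body.get("nextRecordsUrl")
        url = f"{instance_url}{next_path}" if next_path else None
        params = None  # the query is already baked into nextRecordsUrl
    return records

accounts = fetch_all(instance_url, access_token, "SELECT Id, Name FROM Account")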

If for any reason you prefer SOAP, you should first create a SOAP client: for example, you can use the Force.com Web Service Connector (WSC), or build your own from the WSDL using the information provided by this guide.

Despite the protocol change, the architecture of the API remains the same, so you will again be able to access the same resources.

After your client is ready and you can connect to Salesforce, you ought to perform the following steps:

  • decide which resources to extract from the API
  • map these resources to the schema of the data warehouse repository you will use
  • transform the data into that schema, and
  • load the transformed data into the repository based on the instructions below

As you can see, accessing the API alone is not enough to ensure a pipeline that safely delivers your data to a data warehousing solution on time for analysis.
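
As a toy illustration of the mapping and transformation steps, the sketch below flattens the Account records fetched earlier into rows for a hypothetical warehouse table with id, name, and loaded_at columns:

PYTHON
from datetime import datetime, timezone

def record_to_row(record):
    """Flatten one raw Account record into a row for a hypothetical table."""
    return {
        "id": record["Id"],
        "name": record.get("Name"),
        # The "attributes" object only describes the sobject type and
        # its URL, not business data, so it is dropped here.
        "loaded_at": datetime.now(timezone.utc).isoformat(),
    }

rows = [record_to_row(r) for r in accounts]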

Pull Data using the Salesforce Streaming API

Another interesting way of interacting with Salesforce is through the Streaming API.

With it, you define queries, and every time the data matching a query changes, you get a notification. So, for example, every time a new account is created, the API will push a notification about the event to your desired service. This is an extremely powerful mechanism that can guarantee almost real-time updates on a Data Warehouse repository.

To implement something like this, though, you must consider the limitations of both ends and ensure the delivery semantics your use case requires from the data management infrastructure you build.

For more information, you can read the documentation of the Streaming API.
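
As a rough sketch of the registration half of this workflow: a PushTopic is just another sobject, so you can create one through the REST API (the topic name below is hypothetical). Keep in mind that actually receiving notifications requires subscribing over the CometD (Bayeux) protocol with a suitable client; creating the topic only registers the query:

PYTHON
import requests

# Creating a PushTopic registers the SOQL query whose matching data
# changes the Streaming API will notify subscribers about.
push_topic = {
    "Name": "NewAccounts",                    # hypothetical topic name
    "Query": "SELECT Id, Name FROM Account",
    "ApiVersion": 26.0,
    "NotifyForFields": "Referenced",
}

resp = requests.post(
    f"{instance_url}/services/data/v26.0/sobjects/PushTopic/",
    headers={"Authorization": f"Bearer {access_token}"},
    json=push_topic,
)
resp.raise_for_status()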


Salesforce Data Preparation for Snowflake

Before you start ingesting your data into a Snowflake data warehouse instance, the first step is to have a well-defined data schema.

Data in Snowflake is organized around tables with a well-defined set of columns, with each one having a specific data type.

Snowflake supports a rich set of data types. It is worth mentioning that a number of semi-structured data types are also supported. With Snowflake, it is possible to directly load data in JSON, Avro, ORC, Parquet, or XML format. Hierarchical data is treated as a first-class citizen, similar to what Google BigQuery offers.
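
For example, here is a minimal sketch using the Snowflake Python connector, with hypothetical connection parameters: raw Salesforce records land in a single VARIANT column, and path notation reaches into the nested JSON and casts values to SQL types:

PYTHON
import snowflake.connector

# Hypothetical connection parameters -- replace with your own account,
# credentials, warehouse, database, and schema.
conn = snowflake.connector.connect(
    account="your_account",
    user="your_user",
    password="your_password",
    warehouse="LOAD_WH",
    database="SALESFORCE",
    schema="PUBLIC",
)
cur = conn.cursor()

# One VARIANT column holds each raw JSON record as-is.
cur.execute("CREATE TABLE IF NOT EXISTS accounts_raw (record VARIANT)")

# Path notation plus a cast turns nested JSON values into SQL types.
cur.execute("""
    SELECT record:Id::STRING   AS id,
           record:Name::STRING AS name
    FROM accounts_raw
""")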

There is one notable common data type that Snowflake does not support: LOB, or large object. Instead, you should use a BINARY or VARCHAR type, although these are less useful for data warehouse use cases.

A typical strategy for loading data from Salesforce to Snowflake is to create a schema where you will map each API endpoint to a table.

Each key inside the Salesforce API endpoint response should be mapped to a column of that table, and you should ensure the right conversion to a Snowflake data type.

Of course, as data types coming from the Salesforce API may change, you must adapt your database tables accordingly; there's no such thing as automatic data type casting.

After you have a complete and well-defined data model or schema for Snowflake, you can move forward and start loading any data into the database.
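
For instance, a typed table for the Account endpoint might look like the following sketch, which reuses the connector cursor from the earlier example; the column set is hypothetical:

PYTHON
# Each key of the Account response maps to a typed column.
cur.execute("""
    CREATE TABLE IF NOT EXISTS accounts (
        id        VARCHAR(18) NOT NULL,  -- Salesforce record Id
        name      VARCHAR,
        loaded_at TIMESTAMP_NTZ          -- when the row was ingested
    )
""")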

Load data from Salesforce to Snowflake

Usually, data is loaded into Snowflake in bulk using the COPY INTO command. Files containing data, usually in JSON format, are stored in a local file system or in Amazon S3 buckets. Next, a COPY INTO command is invoked on the Snowflake instance, and the data is copied into the data warehouse.

The files can be pushed to a Snowflake staging area using the PUT command before the COPY command is invoked.
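
Continuing the connector sketch from above, staging and copying a local JSON file might look like this (the file path is hypothetical, and PUT gzip-compresses the file by default):

PYTHON
# PUT uploads the local file into the table stage (@%accounts_raw)
# that Snowflake maintains automatically for the table.
cur.execute("PUT file:///tmp/accounts.json @%accounts_raw")

# COPY INTO loads every staged file into the table.
cur.execute("""
    COPY INTO accounts_raw
    FROM @%accounts_raw
    FILE_FORMAT = (TYPE = 'JSON')
""")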

Another alternative is to upload data directly into a service like Amazon S3, from where Snowflake can access the data directly.

Finally, Snowflake offers a web interface with a data loading wizard where someone can visually set up and copy data into the data warehouse. Just keep in mind that the functionality of this wizard is limited compared to the other methods.

In contrast to other technologies like Amazon Redshift or PostgreSQL, Snowflake does not require a data schema to be packed together with data that will be copied. Instead, the schema is part of the query that will copy any data into the warehouse. This simplifies the data loading process and offers more flexibility on data type management.


Updating your Salesforce data on Snowflake

As you generate more data on Salesforce, you will need to update your older data on Snowflake. This includes new records as well as older records that have been updated on Salesforce for any reason.

You will have to periodically check Salesforce for new data and repeat the process described previously, updating your currently available data if needed. Updating an already existing row in a Snowflake table is achieved with UPDATE statements.

Snowflake has a great tutorial on the different ways of handling updates, especially using primary keys.
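
A common pattern is to land each new batch in a staging table and upsert it with a single MERGE statement. The sketch below assumes a hypothetical accounts_staging table holding the freshly extracted rows:

PYTHON
# Upsert the latest batch into the main table, keyed on the
# Salesforce record Id.
cur.execute("""
    MERGE INTO accounts AS t
    USING accounts_staging AS s ON t.id = s.id
    WHEN MATCHED THEN UPDATE
        SET t.name = s.name, t.loaded_at = s.loaded_at
    WHEN NOT MATCHED THEN INSERT (id, name, loaded_at)
        VALUES (s.id, s.name, s.loaded_at)
""")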

Another issue you need to take care of is identifying and removing duplicate records from the database. Duplicates can be introduced either because Salesforce lacks a mechanism to identify new and updated records in some cases or because of errors in your data pipelines.
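
One way to deduplicate, sketched below under the same assumptions as before, is to keep only the most recently loaded row per Salesforce Id using a window function:

PYTHON
# Rebuild the table, keeping only the newest copy of each Id;
# QUALIFY filters directly on the window function.
cur.execute("""
    CREATE OR REPLACE TABLE accounts AS
    SELECT * FROM accounts
    QUALIFY ROW_NUMBER() OVER (
        PARTITION BY id ORDER BY loaded_at DESC
    ) = 1
""")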

In general, ensuring the quality of data that is inserted in your database is a big and difficult issue. In such cases, using cloud ETL tools like RudderStack can help you get valuable time back.

The best way to load data from Salesforce to Snowflake

So far, we have just scratched the surface of what you can do with Snowflake and how to load data into it. Things can get even more complicated if you want to integrate data coming from different sources.

Do you need results right now?

Instead of writing, hosting, and maintaining a flexible data infrastructure, you can let RudderStack handle everything for you automatically at the cloud integration level.

RudderStack integrates with sources and services with one click, creates analytics-ready data, and syncs your Salesforce data to Snowflake right away.

Sign Up For Free And Start Sending Data

Test out our event stream, ELT, and reverse-ETL pipelines. Use our HTTP source to send data in less than 5 minutes, or install one of our 12 SDKs in your website or app.

Don't want to go through the pain of direct integration? RudderStack's Salesforce integration makes it easy to send data from Salesforce to Snowflake.