By Rudderstack Team

How to load data from Aftership to Snowflake

This post will help you sync your Aftership data to Snowflake. By doing this, you will be able to perform advanced analytics on a system that is designed for this kind of data payload, like Snowflake. Alternatively, you can simplify the process of syncing data from Aftership to Snowflake by using RudderStack, which handles the whole process so you can focus on what matters: the exploration and analysis of your order tracking data.

More specifically, by doing so you can focus on monitoring the status and accuracy of the orders being shipped to your customers via various carriers worldwide. You can also expand your traditional analysis to include key metrics on regional maps, such as delivery accuracy and the departure and arrival times of orders, as well as capture current trends in the number of products being shipped to each geographic region. This way, your team will be able to respond to upcoming situations and notify your customers of any issues in advance.

Access your data on Aftership

The first step in loading your Aftership data to any kind of data warehouse solution is to access your data and start extracting it.

Using the REST API that Aftership offers, you can programmatically interact with your account in order to gain access to your order tracking data. By doing so, you can:

  • Get the list of all supported couriers
  • Retrieve tracking results
  • Get tracking information for the last checkpoint of a tracking
  • Gain access to contacts (SMS or email) to be notified when the status of a tracking changes

You can also retrieve some basic aggregated metrics about your trackings for any user-defined time period.
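The endpoints above are plain authenticated HTTPS calls. As a minimal sketch (assuming Aftership's v4 REST API, where the private key is sent in an `aftership-api-key` header; verify the exact path and parameter names against the current API docs), building a request for a page of trackings could look like this:

```python
# Build the pieces of a GET /trackings request for the Aftership v4 REST API.
# The base URL and header name are assumptions based on Aftership's public docs.
def build_trackings_request(api_key, page=1, limit=100):
    url = "https://api.aftership.com/v4/trackings"
    headers = {
        "aftership-api-key": api_key,  # private key tied to your account
        "Content-Type": "application/json",
    }
    params = {"page": page, "limit": limit}  # collection endpoints are paginated
    return url, headers, params


url, headers, params = build_trackings_request("YOUR_API_KEY")
# Pass these to any HTTP client, e.g. requests.get(url, headers=headers, params=params)
```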

In addition to the above, the things that you have to keep in mind when dealing with the Aftership API are:

  • Rate limits. In order to guarantee a high quality of service to all users of the API, Aftership applies certain rate limits. Currently, users are limited to 600 requests per minute per account.
  • Authentication. You can authenticate to Aftership using a private API key that is linked to your account.
  • Pagination. API endpoints that return a collection of items are always paginated.

About Aftership

Aftership is a package tracking platform for online retailers and e-commerce businesses. It was first introduced in 2011 and has since been widely adopted by some of the biggest e-commerce companies, like Wish and Etsy. Among the features Aftership offers are the following:

  1. Customer engagement with branded tracking pages: Customers are directed to the company’s website for tracking in order to further engage them after-sales.
  2. Proactive delivery updates: Customers remain informed regarding the latest status of their orders via push notifications, email, or SMS.

Aftership is also one of the top apps and extensions on various shopping cart solutions like Shopify, BigCommerce, eBay, and Magento, with millions of active shipments each month.

Transform and prepare your Aftership data for Snowflake

After you have accessed your data on Aftership, you will have to transform it based on two main factors:

  1. The limitations of the database that the data will be loaded onto
  2. The type of analysis that you plan to perform

Each system has specific limitations on the data types and data structures that it supports. If, for example, you want to push data into Google BigQuery, you can send nested data like JSON directly; other systems may require you to flatten such structures first.

Also, you have to choose the right data types. Again, depending on the system you will send the data to and the data types that the API exposes, you will have to make the right choices. These choices are important because they can limit the expressivity of your queries and restrict what your analysts can do directly out of the database.

Also, you have to consider that the data you extract from the Aftership API comes as semi-structured JSON responses, and you need to decide what to map to which tables and columns in your database, and how.

Data in Snowflake is organized around tables with a well-defined set of columns with each one having a specific data type.

Snowflake supports a rich set of data types. It is worth mentioning that a number of semi-structured data types are also supported. With Snowflake, it is possible to load data directly in JSON, Avro, ORC, Parquet, or XML format. Hierarchical data is treated as a first-class citizen, similar to what Google BigQuery offers.

There is also one notable common data type that is not supported by Snowflake: the LOB, or large object, data type. Instead, you should use a BINARY or VARCHAR type. However, these types are not that useful for data warehouse use cases.

A typical strategy for loading data from Aftership to Snowflake is to create a schema where you will map each API endpoint to a table.

Each key inside the Aftership API endpoint response should be mapped to a column of that table and you should ensure the right conversion to a Snowflake data type.

Of course, you will need to ensure that as the data types from the Aftership API might change, you will adapt your database tables accordingly. There’s no such thing as automatic data typecasting.
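The endpoint-to-table mapping described above boils down to flattening each JSON object into a row and converting each value explicitly. A minimal sketch (the column names and the shape of the tracking object are illustrative assumptions, not Aftership's actual schema):

```python
from datetime import datetime

# Map one tracking object from an API response to a flat row whose keys
# match the columns of a hypothetical TRACKINGS table in Snowflake.
def tracking_to_row(tracking):
    return {
        "tracking_id": str(tracking["id"]),
        "tracking_number": str(tracking["tracking_number"]),
        "slug": str(tracking.get("slug", "")),  # courier identifier
        "tag": str(tracking.get("tag", "")),    # e.g. "InTransit", "Delivered"
        # Explicit conversion to a timestamp, destined for a TIMESTAMP_TZ column
        "updated_at": datetime.fromisoformat(tracking["updated_at"]),
    }
```

Keeping the conversions explicit, rather than relying on whatever a loader infers, is what lets you notice when the API starts returning a different type for a field.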

After you have a complete and well-defined data model or schema for Snowflake, you can move forward and start loading your data into the database.

Load data from Aftership to Snowflake

Usually, data is loaded into Snowflake in a bulk way, using the COPY INTO command. Files containing the data, usually in JSON format, are stored in a local file system or in Amazon S3 buckets. Then a COPY INTO command is invoked on the Snowflake instance, and data is copied into the data warehouse.

The files can be pushed into a staging area in Snowflake using the PUT command before the COPY INTO command is invoked.

Another alternative is to upload the data directly into a service like Amazon S3, from where Snowflake can access the data directly.
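The stage-then-copy flow amounts to two SQL statements. The helper below just assembles them for execution through any Snowflake client; the stage, table, and option values are placeholders to adapt to your own setup.

```python
# Assemble the PUT and COPY INTO statements for loading a local JSON file
# through a Snowflake stage. Stage and table names here are placeholders.
def stage_and_copy_sql(local_path, stage="@my_stage", table="trackings"):
    put_stmt = f"PUT file://{local_path} {stage} AUTO_COMPRESS=TRUE;"
    copy_stmt = (
        f"COPY INTO {table} FROM {stage} "
        "FILE_FORMAT = (TYPE = 'JSON') "
        "ON_ERROR = 'ABORT_STATEMENT';"
    )
    return put_stmt, copy_stmt


put_stmt, copy_stmt = stage_and_copy_sql("/tmp/trackings.json")
# Execute put_stmt, then copy_stmt, through your Snowflake connection.
```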

Updating your Aftership data on Snowflake

As you will be generating more data on Aftership, you will need to update your older data on Snowflake. This includes new records together with updates to older records that, for any reason, have been updated on Aftership.

You will need to periodically check Aftership for new data and repeat the process that has been described previously while updating your currently available data if needed. Updating an already existing row on a Snowflake table is achieved by creating UPDATE statements.

Another issue that you need to take care of is the identification and removal of any duplicate records on your database. Either because Aftership does not have a mechanism to identify new and updated records or because of errors on your data pipelines, duplicate records might be introduced to your database.
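In Snowflake, the update-new-rows-plus-avoid-duplicates logic is commonly expressed as a single MERGE statement: staged rows are matched against the target table on a natural key, matched rows are updated in place, and unmatched rows are inserted, so re-loading the same record never duplicates it. The sketch below assembles such a statement; the table and column names are illustrative, not a real schema.

```python
# Build a Snowflake MERGE that upserts staged rows into the target table,
# matching on a natural key so re-loaded records update in place instead
# of creating duplicates. Table and column names are illustrative.
def build_merge_sql(target="trackings", staging="trackings_staging",
                    key="tracking_id", columns=("tag", "updated_at")):
    set_clause = ", ".join(f"t.{c} = s.{c}" for c in columns)
    col_list = ", ".join((key,) + tuple(columns))
    val_list = ", ".join(f"s.{c}" for c in (key,) + tuple(columns))
    return (
        f"MERGE INTO {target} t USING {staging} s ON t.{key} = s.{key} "
        f"WHEN MATCHED THEN UPDATE SET {set_clause} "
        f"WHEN NOT MATCHED THEN INSERT ({col_list}) VALUES ({val_list});"
    )
```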

In general, ensuring the quality o