How to load data from Mandrill to SQL Data Warehouse


How can I extract my data from Mandrill?

There are two main methods to get our data from Mandrill: the first is to pull data out of it, and the second is to ask Mandrill to push data to us whenever something of importance happens. We will see the difference between these two solutions and how we can access data with each.

In order to pull data from Mandrill, we need to access its HTTP API. As a web API following RESTful architecture principles, it can be accessed with tools like cURL or Postman, or with an HTTP client for your favorite language or framework.

Mandrill also maintains a number of officially supported clients, or SDKs, that you can use with your favorite language to access the API without having to deal with the raw underlying HTTP calls.

There are also a number of unofficial clients that you can use if you prefer. The complete list can be found in the Mandrill documentation.

In this post, we will consider the more generic case of accessing the HTTP endpoints directly for our examples, but of course, you are free to use the client of your choice for your project.

Mandrill API Authentication

In order to use the Mandrill API, you first have to generate an API key through your MandrillApp account. Once you have created the key, you can use it to access the API. You can actually have multiple keys per account, which adds versatility to the platform. In most cases with the Mandrill API, you access an endpoint by making a POST call with a JSON body containing the API key.
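
As an example, here is a minimal sketch in Node.js (18+, for the built-in fetch) that checks whether a key is valid by calling the users/ping endpoint; the key value is a placeholder:

JAVASCRIPT
// Minimal sketch: validate a Mandrill API key via the users/ping endpoint.
// Assumes Node.js 18+ for the built-in fetch; the key is a placeholder.
const BASE_URL = 'https://mandrillapp.com/api/1.0';

async function ping(apiKey) {
  const response = await fetch(`${BASE_URL}/users/ping.json`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ key: apiKey }),
  });
  return response.json(); // "PONG!" when the key is valid
}

ping(process.env.MANDRILL_API_KEY).then(console.log);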

Mandrill rate limiting

API rate limiting with Mandrill is a more complicated matter than with most APIs out there. The reason is that Mandrill is primarily an SMTP-as-a-service platform: in most cases, a call to its API is made in order to send an email to someone, so rate limiting in the typical sense found in web APIs does not apply to Mandrill.

What actually happens is that every Mandrill account has a reputation and an hourly quota. The main reason rate limiting is more complicated in Mandrill is that the platform needs to take special care to identify and handle potential spammers, so the hourly quota is affected by your reputation. If, for example, you have a poor reputation, Mandrill will reduce the number of emails, and consequently the number of API calls, that you can make per hour; on the contrary, if you have an excellent reputation, you will be able to make more calls.

Free accounts can send up to 25 emails per hour. To find your hourly quota and reputation, check your dashboard in MandrillApp.

Endpoints and available resources

Mandrill exposes the following endpoints:

  • Users: Information about your account; for example, here you can validate that your API key is valid.
  • Messages: Send messages through the Mandrill API.
  • Tags: Information and operations about user-defined tags.
  • Rejects: Manage your email rejection list.
  • Whitelists: Manage your rejection whitelists.
  • Senders: Manage senders associated with your Mandrill account.
  • URLs: Get information about the URLs that are included in your emails.
  • Templates: Manage email templates.
  • Webhooks: Manage webhooks for your account.
  • Subaccounts: Manage sub-accounts.
  • Inbound: Information about domains that have been configured for inbound delivery.
  • Exports: Run export jobs to retrieve data from your Mandrill account.
  • IPs: Information and operations about your dedicated IPs.
  • Metadata: Information and operations about your custom metadata fields indexed for the account.

The endpoints above define the complete set of operations we can perform with Mandrill. In our case, we care mainly about what data we can export, so we will work with the Exports endpoint. Export jobs can be executed for the following data:

  • Export your rejection blacklist.
  • Export your rejection whitelist.
  • Export your activity history.

We assume that you would like to export your activity data. In order to do that, you need to perform a POST request to the following endpoint:

HTTP
/exports/activity.json

Keep in mind that only the endpoint paths are mentioned here; you will have to prepend the base URL of the Mandrill API (https://mandrillapp.com/api/1.0) to each of them.

The body that we should POST to the above endpoint should look like this:

JSON
{
    "key": "example key",
    "notify_email": "notify_email@example.com",
    "date_from": "2013-01-01 12:53:01",
    "date_to": "2013-01-06 13:42:18",
    "tags": [
        "example-tag"
    ],
    "senders": [
        "test@example.com"
    ],
    "states": [
        "sent"
    ],
    "api_keys": [
        "ONzNrsmbtNXoIKyfPmjnig"
    ]
}

We need to provide our API key, and we can also define a date range for which the API will collect data. If we want, we can filter the data we get back even further by requesting specific tags, senders, or states.
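
Putting the pieces together, a hedged Node.js sketch that starts an activity export job could look like the following; the helper name exportActivity is ours, not part of an official SDK:

JAVASCRIPT
// Illustrative sketch: start an activity export job with a date range.
// The endpoint and body fields follow the example above; exportActivity
// is our own helper name, not part of an official SDK.
const BASE_URL = 'https://mandrillapp.com/api/1.0';

async function exportActivity(apiKey, dateFrom, dateTo) {
  const response = await fetch(`${BASE_URL}/exports/activity.json`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ key: apiKey, date_from: dateFrom, date_to: dateTo }),
  });
  return response.json(); // export job info, including its id
}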

The results will include fields for:

  • Date
  • Email address
  • Sender
  • Subject
  • Status
  • Tags
  • Opens
  • Clicks
  • Bounce details

When the export job finishes, the data will be available through a URL in a gzipped format. Keep in mind that you will have to poll the Exports endpoint to figure out when the job is finished and to get the exact URL from which you can download the data. To do that, you need to perform a POST request to the following endpoint:

HTTP
/exports/info.json

The body of the POST request should be a JSON document containing your API key and the id of the export job you want to check. You will get back a result like the following:

JSON
{
    "id": "2013-01-01 12:20:28.13842",
    "created_at": "2013-01-01 12:30:28",
    "type": "activity",
    "finished_at": "2013-01-01 12:35:52",
    "state": "working",
    "result_url": "http://exports.mandrillapp.com/example/export.zip"
}

As you can see from the response, we get back a URL from which we can fetch the data, together with information about the state of the job. If the state of the job is "complete", then we can safely download the data and process it further.
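
A simple polling loop, again as a hedged sketch (the helper name is ours, and we assume the info endpoint accepts the export job id along with the key):

JAVASCRIPT
// Illustrative sketch: poll the export job until it is complete,
// then return the URL from which the results can be downloaded.
const BASE_URL = 'https://mandrillapp.com/api/1.0';

async function waitForExport(apiKey, jobId) {
  for (;;) {
    const response = await fetch(`${BASE_URL}/exports/info.json`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ key: apiKey, id: jobId }),
    });
    const job = await response.json();
    if (job.state === 'complete') {
      return job.result_url; // safe to download and process the data now
    }
    // wait a minute between polls to avoid hammering the API
    await new Promise((resolve) => setTimeout(resolve, 60 * 1000));
  }
}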

Another way of getting data from the Mandrill API is to ask it to push events to our system every time something of importance happens. To do that, we need to set up webhooks on our system and provide the relevant URLs to Mandrill. The platform will POST data in JSON format to these URLs every time an event is triggered. The good thing about this mechanism is that the data arrives in our system for analysis as soon as possible.

Every Mandrill webhook uses the same general data format, regardless of the event type. The webhook request is a standard POST request with a single parameter (currently) – mandrill_events.

There are three types of webhooks that Mandrill currently POSTs: Message webhooks (such as when a message is sent, opened, clicked, rejected, deferred, or bounced), Sync webhooks, and Inbound webhooks.

The mandrill_events parameter contains a JSON-encoded array of webhook events, up to a maximum of 1000 events. Each element in the array is a single event, such as an open, click, or blacklist sync event. Examples of each type of event and a description of their keys can be found in the Mandrill documentation.
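
On the receiving side, a minimal sketch of a webhook handler using Express could look like this; the route path and port are arbitrary choices:

JAVASCRIPT
// Minimal sketch of a Mandrill webhook receiver using Express.
// The route path and port are arbitrary; mandrill_events arrives as a
// form-encoded parameter holding a JSON-encoded array of events.
const express = require('express');
const app = express();

app.use(express.urlencoded({ extended: true }));

app.post('/mandrill-webhook', (req, res) => {
  const events = JSON.parse(req.body.mandrill_events || '[]');
  events.forEach((event) => {
    // each event carries a type (e.g. "send", "open", "click") and a timestamp
    console.log(event.event, event.ts);
  });
  res.sendStatus(200);
});

app.listen(3000);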

For more information about webhooks, you can check the Mandrill documentation.

How can I load my data from Mandrill to SQL Data Warehouse?

SQL Data Warehouse supports numerous options for loading data, such as:

  • PolyBase
  • Azure Data Factory
  • BCP command-line utility
  • SQL Server Integration Services

As we are interested in loading data from online services by using their exposed HTTP APIs, we are not going to consider the BCP command-line utility or SQL Server Integration Services in this guide. Instead, we'll consider the case of loading our data as Azure Storage Blobs and then using PolyBase to load the data into SQL Data Warehouse.

Accessing these services happens through HTTP APIs; as we see, APIs play an important role in both the extraction and the loading of data into our data warehouse. You can access these APIs by using a tool like cURL or Postman, or by using the libraries provided by Microsoft for your favorite language.

Before you actually upload any data, you have to create a container, which is similar in concept to an Amazon AWS bucket. Creating a container is a straightforward operation, and you can do it by following the instructions found in the Blob storage documentation from Microsoft. As an example, the following code can create a container in Node.js.

JAVASCRIPT
// Uses the azure-storage package; connection details are read from
// environment variables (e.g. AZURE_STORAGE_CONNECTION_STRING).
var azure = require('azure-storage');
var blobSvc = azure.createBlobService();

blobSvc.createContainerIfNotExists('mycontainer', { publicAccessLevel: 'blob' }, function(error, result, response) {
  if (!error) {
    // Container exists and allows anonymous read access to blob
    // content and metadata within this container
  }
});

After the creation of the container, you can start uploading data to it, again using the SDK of your choice, in a similar fashion:

JAVASCRIPT
blobSvc.createBlockBlobFromLocalFile('mycontainer', 'myblob', 'test.txt', function(error, result, response) {
  if (!error) {
    // file uploaded
  }
});

When you are done putting your data into Azure Blobs, you are ready to load it into SQL Data Warehouse using PolyBase. To do that, you should follow the directions in the Load with PolyBase documentation. In summary, the required steps are the following:

  • create a database master key
  • create a database scoped credential
  • create an external file format
  • create an external data source

PolyBase’s ability to transparently parallelize loads from Azure Blob Storage will make it the fastest tool for loading data. After configuring PolyBase, you can load data directly into your SQL Data Warehouse by simply creating an external table that points to your data in storage and then mapping that data to a new table within SQL Data Warehouse.

Of course, you will need to establish a recurrent process that extracts any newly created data from your service, loads it in the form of Azure Blobs, and initiates the PolyBase process to import the data into SQL Data Warehouse again. One way of doing this is by using the Azure Data Factory service. In case you would like to follow this path, you can read some good documentation on how to move data to and from Azure SQL Data Warehouse using Azure Data Factory.

What is the best way to load data from Mandrill to SQL Data Warehouse? Which are the possible alternatives?

So far we have just scratched the surface of what can be done with Microsoft Azure SQL Data Warehouse and how to load data into it. The way to proceed relies heavily on the data you want to load, the service it comes from, and the requirements of your use case. Things can get even more complicated if you want to integrate data coming from different sources.

A possible alternative, instead of writing, hosting, and maintaining a flexible data infrastructure, is to use a product like RudderStack that can handle this kind of problem automatically for you.

RudderStack integrates with multiple sources or services like databases, CRM, email campaigns, analytics, and more.

Sign Up For Free And Start Sending Data

Test out our event stream, ELT, and reverse-ETL pipelines. Use our HTTP source to send data in less than 5 minutes, or install one of our 12 SDKs in your website or app.