
How to load data from PostgreSQL to Amazon Redshift

Access your data on PostgreSQL

The first step in migrating your PostgreSQL data to any kind of data warehouse solution is to access your data and start extracting it.

There are many ways of doing this. One possibility is the logical replication log: you listen to the log for changes to the database and reflect them on the target system.
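
For example, Python's psycopg2 library can attach to a logical replication slot and stream decoded changes. The following is a minimal sketch, not a production listener: the slot name and credentials are placeholders, `test_decoding` is just PostgreSQL's built-in sample output plugin (wal2json is a common alternative), and the source database must be configured with `wal_level = logical`.

```python
import psycopg2
import psycopg2.extras

# Hypothetical source database; it must run with wal_level = logical.
conn = psycopg2.connect(
    "dbname=mydb user=replicator",
    connection_factory=psycopg2.extras.LogicalReplicationConnection,
)
cur = conn.cursor()

# Create the slot once; 'test_decoding' is PostgreSQL's built-in demo plugin.
cur.create_replication_slot("redshift_sync", output_plugin="test_decoding")
cur.start_replication(slot_name="redshift_sync", decode=True)

def consume(msg):
    # msg.payload holds the decoded change; forward it to the target here.
    print(msg.payload)
    # Acknowledge the WAL position so the server can recycle old segments.
    msg.cursor.send_feedback(flush_lsn=msg.data_start)

cur.consume_stream(consume)  # blocks, calling consume() for each change
```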

When pulling data from a database, you also need to be able to filter tables and columns, identify updates, and replicate the appropriate database schema, keeping in mind that the data will end up in a columnar database built for analytics.

Another possibility is a JDBC importer. In this case, the input configuration will contain all the appropriate values for database authentication and connection. By appropriately configuring the JDBC importer, you can control each table’s behavior during import and alter its schema as well, if desired.

Moreover, you can simulate pagination of the import by querying tables in batches.
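
A minimal sketch of this batching approach with psycopg2, using keyset pagination on a monotonically increasing key. The table and column names are illustrative, and the snippet assumes the key is the first column in the result:

```python
import psycopg2

conn = psycopg2.connect("dbname=mydb user=etl")  # hypothetical credentials

def fetch_in_batches(cur, table, key_column, batch_size=10000):
    """Yield rows in key order, one bounded query per batch."""
    # Table/column names are interpolated for brevity here;
    # validate them against a whitelist in real code.
    last_key = None
    while True:
        if last_key is None:
            cur.execute(
                f"SELECT * FROM {table} ORDER BY {key_column} LIMIT %s",
                (batch_size,))
        else:
            cur.execute(
                f"SELECT * FROM {table} WHERE {key_column} > %s "
                f"ORDER BY {key_column} LIMIT %s",
                (last_key, batch_size))
        rows = cur.fetchall()
        if not rows:
            break
        yield rows
        last_key = rows[-1][0]  # assumes the key is the first column

with conn.cursor() as cur:
    for batch in fetch_in_batches(cur, "orders", "id"):
        print(len(batch), "rows fetched")  # replace with your load step
```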

Transform and prepare your PostgreSQL data

After you have accessed your data on PostgreSQL, you will have to transform it based on two main factors:

  1. The limitations of the database that the data will be loaded onto
  2. The type of analysis that you plan to perform

Each system has specific limitations on the data types and data structures that it supports. You will have to make the right choices for data types depending on the system you plan to send the data to.

While the mapping choices for the most common data types may seem obvious, most database systems also support a set of more “sophisticated,” database-specific types whose mapping requires careful consideration, since the wrong choice can limit the expressivity of your queries and restrict what your analysts can do directly out of the database.

However, if you plan to push the data to another PostgreSQL database, then you probably don’t have to worry about data types at all, unless you have reasons related to the analysis that you will perform.


Transform and prepare your PostgreSQL data for Amazon Redshift

Amazon Redshift is built around industry-standard SQL with added functionality to manage very large data sets and perform high-performance analysis. So, in order to load your data into it, you will have to follow its relational database model. The data you extract from PostgreSQL should be mapped into tables and columns. You can think of a table as representing the resource you want to store, and of its columns as the attributes of that resource. Each attribute should adhere to the data types that Redshift supports.

As your data probably arrives in a representation like JSON, which supports a much smaller range of data types, you have to be careful about what you feed into Redshift and make sure you have mapped each of your types to one of the data types Redshift supports.
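
As a starting point, a type map like the one below can drive that conversion. This is a rough sketch rather than an exhaustive mapping, so verify each choice against the current Redshift data type documentation:

```python
# Rough PostgreSQL-to-Redshift type map; precision, scale, and VARCHAR
# lengths are assumptions you should tune for your own data.
PG_TO_REDSHIFT = {
    "smallint": "SMALLINT",
    "integer": "INTEGER",
    "bigint": "BIGINT",
    "numeric": "DECIMAL(18,4)",
    "real": "REAL",
    "double precision": "DOUBLE PRECISION",
    "boolean": "BOOLEAN",
    "text": "VARCHAR(65535)",  # Redshift VARCHAR tops out at 65535 bytes
    "uuid": "CHAR(36)",        # Redshift has no native UUID type
    "timestamp with time zone": "TIMESTAMPTZ",
    "jsonb": "SUPER",          # or a VARCHAR column if SUPER is not an option
}
```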

Designing a schema for Redshift and mapping the data from PostgreSQL to it is a process that can affect the performance of your cluster and the questions you can answer. It’s always a good idea to keep in mind the best practices that Amazon has published regarding the design of a Redshift database.
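
To make this concrete, here is a minimal table definition with a distribution key and a sort key, issued through psycopg2 (Redshift speaks the PostgreSQL wire protocol, so the same client works). The cluster endpoint, table, and key choices are illustrative assumptions, not recommendations:

```python
import psycopg2

# Hypothetical cluster endpoint and credentials.
rs = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="etl", password="...",
)

ddl = """
CREATE TABLE IF NOT EXISTS orders (
    order_id   BIGINT       NOT NULL,
    user_id    BIGINT       NOT NULL,
    status     VARCHAR(32),
    created_at TIMESTAMPTZ
)
DISTKEY (user_id)      -- co-locates rows that join on user_id
SORTKEY (created_at);  -- speeds up time-range scans
"""

with rs, rs.cursor() as cur:
    cur.execute(ddl)
```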

When you have settled on the design of your database, you need to load your data. These are the data sources Redshift supports as input:

  1. Amazon S3
  2. Amazon DynamoDB
  3. Amazon Kinesis Firehose

Load your PostgreSQL data into Amazon Redshift

To upload your data to Amazon S3, you will have to use the AWS REST API. APIs play an important role in both the extraction and the loading of data into your data warehouse. The first task you have to perform is to create a bucket; to do this, execute an HTTP PUT request against the Amazon S3 REST API Bucket endpoints.

You can do this with a tool like cURL or Postman, or with the libraries Amazon provides for your favorite language. You can find more information in the API reference for Bucket operations in the Amazon AWS documentation.

After you have created your bucket, you can start sending your data to Amazon S3, again using the AWS REST API, this time through the endpoints for Object operations. As with the Bucket operations, you can either access the HTTP endpoints directly or use the library of your preference.
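
Both steps are a couple of calls with the official boto3 SDK, which wraps these REST operations. In this sketch the bucket name, region, and file paths are assumptions; note that outside us-east-1, create_bucket also needs a CreateBucketConfiguration argument:

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# PUT Bucket: create the bucket that will stage the extracted files.
s3.create_bucket(Bucket="my-postgres-export")  # hypothetical name

# PUT Object: upload one extracted batch file under a prefix.
s3.upload_file(
    "orders_batch_0001.csv",                 # local file
    "my-postgres-export",                    # bucket
    "exports/orders/orders_batch_0001.csv",  # object key
)
```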

Amazon Redshift supports two methods for loading data. The first is to invoke an INSERT command: you connect to your Amazon Redshift instance with your client, using either a JDBC or ODBC connection, and then perform an INSERT command for your data.

You invoke the INSERT command just as you would with any other SQL database; for more information, check the INSERT examples page in the Amazon Redshift documentation.
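
A minimal sketch of this path, assuming a psycopg2 connection to a hypothetical cluster and the orders table sketched earlier:

```python
import psycopg2

rs = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="etl", password="...",
)

rows = [(1, 42, "shipped"), (2, 7, "pending")]  # illustrative data
with rs, rs.cursor() as cur:
    cur.executemany(
        "INSERT INTO orders (order_id, user_id, status) VALUES (%s, %s, %s)",
        rows,
    )
```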

Redshift is not designed for frequent, row-by-row INSERT operations, though: the most efficient way to load data is with bulk uploads using a COPY command.

You can run a COPY command for data that lives as flat files on S3 or in an Amazon DynamoDB table. When you run COPY commands, Redshift can read multiple files simultaneously, automatically distributing the workload across the cluster nodes and performing the load in parallel.
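
A minimal COPY sketch, again issued through psycopg2; the S3 prefix and the IAM role ARN are placeholders to replace with your own:

```python
import psycopg2

rs = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="etl", password="...",
)

# COPY reads every file under the given S3 prefix in parallel.
copy_sql = """
COPY orders
FROM 's3://my-postgres-export/exports/orders/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
FORMAT AS CSV;
"""

with rs, rs.cursor() as cur:
    cur.execute(copy_sql)
```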

The best way to load data from PostgreSQL to Amazon Redshift

So far we have just scratched the surface of what can be done with Amazon Redshift. The process for loading data from any source into Redshift depends heavily on the data you want to load, which services it comes from, and the requirements of your use case. Things can get even more complicated if you want to integrate data coming from different sources.

A possible alternative to writing, hosting, and maintaining a flexible data infrastructure is to use a product like RudderStack that can handle this process automatically for you.

RudderStack integrates with multiple sources and services like databases, CRMs, email campaign managers, analytics tools, and more. Quickly and safely move all your data from PostgreSQL to Redshift and start generating insights from it.

Sign Up For Free And Start Sending Data

Test out our event stream, ELT, and reverse-ETL pipelines. Use our HTTP source to send data in less than 5 minutes, or install one of our 12 SDKs in your website or app.

Don't want to go through the pain of direct integration? RudderStack's Reverse ETL connection makes it easy to send data from PostgreSQL to Amazon Redshift.