Customer.io is a popular marketing platform for sending targeted emails, push notifications, and SMS to improve conversions and customer engagement.
This document guides you in setting up Customer.io as a source in RudderStack. Once configured, RudderStack automatically ingests your Customer.io data and routes it to your specified data warehouse destination.
To set up Customer.io as a source in RudderStack, follow these steps:
- Log into your RudderStack dashboard.
- Go to Sources > New source > Cloud Extract and select Customer.io from the list of sources.
- Assign a name to your source and click Continue.
To set up Customer.io as a Cloud Extract source, you need to configure the following settings:
- App API Key: Enter your Customer.io API key, which you can obtain from the Customer.io dashboard by navigating to Settings > Account Settings > API Credentials.
- Cutoff Days: Enter the number of days after which RudderStack fetches the updated data.
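To make the Cutoff Days setting concrete, the sketch below computes the start of the resulting lookback window. The exact semantics RudderStack applies are not spelled out here, so treat this as an illustrative assumption: a Cutoff Days value of 30 would bound the sync to data updated in roughly the last 30 days.

```python
from datetime import datetime, timedelta, timezone

def cutoff_start(cutoff_days: int) -> datetime:
    """Illustrative only: the earliest timestamp a sync would consider,
    assuming Cutoff Days defines a lookback window from the current time."""
    return datetime.now(timezone.utc) - timedelta(days=cutoff_days)

# With Cutoff Days = 30, only records updated after this point would be fetched.
print(cutoff_start(30))
```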
The following settings specify how RudderStack sends the data ingested from Customer.io to the connected warehouse destination:
- Table prefix: RudderStack uses this prefix to create a table in your data warehouse and loads all your Customer.io data into it.
- Schedule Settings: RudderStack gives you three options to ingest the data from Customer.io:
- Basic: Runs the syncs at the specified time interval.
- CRON: Runs the syncs based on the user-defined CRON expression.
- Manual: You run the syncs manually.
You can choose the Customer.io data you want to ingest by selecting the required resources:
The following table lists the sync modes supported by each Customer.io resource when syncing to your warehouse destination:
| Resource | Full Refresh sync | Incremental sync |
| :--- | :--- | :--- |
Customer.io is now configured as a source. RudderStack will start ingesting data from Customer.io as per your specified schedule and frequency.
You can further connect this source to your data warehouse by clicking Add Destination.
Yes, it is possible for multiple Cloud Extract sources to write to the same warehouse schema. RudderStack associates a table prefix with every Cloud Extract source writing to a schema, so multiple sources can write to the same schema using different table prefixes.
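As a minimal sketch of how prefixes keep two sources from colliding in one schema, assume each source's tables are named by prepending its prefix to the resource name (the prefixes and resource names below are hypothetical examples, not values RudderStack prescribes):

```python
def table_name(prefix: str, resource: str) -> str:
    # Hypothetical naming scheme: prefix prepended to the resource name.
    return f"{prefix}{resource}"

# Two Customer.io sources sharing one schema, kept distinct by prefix.
print(table_name("cio_us_", "campaigns"))  # cio_us_campaigns
print(table_name("cio_eu_", "campaigns"))  # cio_eu_campaigns
```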