Sync data from RudderStack to Databricks Delta Lake.
Delta Lake is a popular data lake used for both streaming and batch operations. It lets you store structured, unstructured, and semi-structured data securely and reliably.
Find the open source code for this destination in the GitHub repository.
Configuring Delta Lake destination in RudderStack
Before configuring Delta Lake as a destination in RudderStack, it is highly recommended to go through the following sections to obtain the necessary configuration settings. These sections also contain the steps to grant RudderStack and Databricks the necessary permissions to your preferred storage bucket.
To send event data to Delta Lake, you first need to add it as a destination in RudderStack and connect it to your data source. Once the destination is enabled, events will automatically start flowing to Delta Lake via RudderStack.
To configure Delta Lake as a destination in RudderStack, follow these steps:
In your RudderStack dashboard, set up the data source. Then, select Databricks Delta Lake from the list of destinations.
Assign a name to your destination and then click Next.
Connection settings
Host: Enter your server hostname from the Databricks dashboard.
Port: Enter the port number.
HTTP Path: Enter the cluster’s HTTP path.
For more information on obtaining the server hostname, port, and the cluster’s HTTP path, refer to the Obtaining the JDBC/ODBC configuration section below.
Personal Access Token: Enter your Databricks access token. For more information on generating the access token, refer to the Generating the Databricks access token section below.
Enable delta tables creation in an external location: Enable this setting to specify the external location to create the delta tables. You can specify the external location in the External delta table location setting. When disabled, RudderStack creates the delta tables at a default storage location for the non-external Apache Hive tables.
If you have not configured a Unity catalog, you can access the delta tables at {path_to_table}/{schema}/{table}.
If you have configured a Unity catalog, follow these steps:
Create an external location by following this Databricks documentation. Your location will look something like s3://{bucket_path}/{external_location}.
Specify the absolute location in the External delta table location setting. It will look something like s3://{bucket_path}/{external_location}/{path_to_table}/{schema}/{table}.
Catalog: If you have configured a Unity catalog, enter the catalog name where your data assets are organized. For more information on creating a Unity Catalog, refer to the Databricks documentation.
If you do not specify the catalog name, RudderStack uses the default catalog configured for your workspace.
Namespace: Enter the name of the schema where RudderStack will create the tables. If you don’t specify a namespace in the dashboard settings, RudderStack sets it to the source name by default.
Sync Frequency: Specify how often RudderStack should sync the data to your Delta Lake instance.
Sync Starting At: This optional setting lets you specify the particular time of the day (in UTC) when you want RudderStack to sync the data to the Delta Lake instance.
Exclude Window: This optional setting lets you specify the time window (in UTC) when RudderStack will skip the data sync.
Object Storage Configuration: Use this setting to specify your object storage.
Use RudderStack managed object storage: Enable this setting to use RudderStack-managed buckets for object storage. Note that this option is applicable only for RudderStack-hosted data planes. For self-hosted data planes, you need to specify your own object storage configuration settings.
Choose your storage provider: If Use RudderStack managed object storage is disabled, you can select any one of the following platforms for storing your staging files:
If you select S3 as your storage provider, you can enable the Use STS Token to copy staging files setting and specify either your IAM role ARN or your AWS access key ID and secret access key. For more information, refer to the Amazon S3 storage bucket settings section below.
Granting RudderStack access to your storage bucket
This section contains the steps to edit your bucket policy to grant RudderStack the necessary permissions, depending on your preferred cloud platform.
Amazon S3
Follow these steps to grant RudderStack access to your S3 bucket based on the following two cases:
Case 1: Use STS Token to copy staging files is disabled in the dashboard
Follow the steps listed in this section if the Use STS Token to copy staging files option is disabled, that is, you don’t want to specify the AWS credentials while configuring your Delta Lake destination.
For RudderStack Cloud
If you are using RudderStack Cloud, edit your bucket policy using the following JSON:
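A bucket policy of roughly the following shape grants RudderStack read/write access to the bucket. This is a sketch; verify the exact list of actions against the S3 permissions for warehouse destinations section:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "USER_ARN"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR_BUCKET_NAME",
        "arn:aws:s3:::YOUR_BUCKET_NAME/*"
      ]
    }
  ]
}
```

Note that `s3:ListBucket` applies to the bucket ARN itself, while the object-level actions apply to `YOUR_BUCKET_NAME/*`.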
Make sure you replace USER_ARN with the ARN copied in the previous step. Also, replace ACCOUNT_ID with your AWS account ID and YOUR_BUCKET_NAME with the name of your S3 bucket.
Finally, add the programmatic access credentials to the env file present in your RudderStack installation:
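For an open-source RudderStack installation, the credentials typically go into the env file as shown below. The variable names here are assumed from the open-source rudder-server configuration; confirm them against your installation:

```
RUDDER_AWS_S3_COPY_USER_ACCESS_KEY_ID=<your-access-key-id>
RUDDER_AWS_S3_COPY_USER_ACCESS_KEY=<your-secret-access-key>
```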
Case 2: Use STS Token to copy staging files is enabled in the dashboard
In this case, provide the configuration directly while setting up the Delta Lake destination in RudderStack:
Role-based Authentication: Enable this setting to use the RudderStack IAM role for authentication. For more information on creating an AWS IAM role for RudderStack, refer to this guide.
IAM Role ARN: Enter the ARN of the IAM role.
It is highly recommended to enable this setting, as the access key-based authentication method is now deprecated.
If Role-based Authentication is disabled, you need to enter the AWS Access Key ID and AWS Secret Access Key to authorize RudderStack to write to your S3 bucket.
In both the role-based and access key-based authentication methods, you need to set a policy specifying the required permissions for RudderStack to write to your intermediary S3 bucket. Refer to the S3 permissions for warehouse destinations section for more information.
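For either method, the attached IAM policy needs permissions along the following lines. This is a sketch; the authoritative list is in the S3 permissions for warehouse destinations section:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:ListBucket",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR_BUCKET_NAME",
        "arn:aws:s3:::YOUR_BUCKET_NAME/*"
      ]
    }
  ]
}
```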
Google Cloud Storage
You can provide the necessary GCS bucket configuration while setting up the Delta Lake destination in RudderStack. For more information, refer to the Google Cloud Storage bucket settings.
Azure Blob Storage
You can provide the necessary Blob Storage container configuration while setting up the Delta Lake destination in RudderStack. For more information, refer to the Azure Blob Storage settings.
Granting Databricks access to your staging bucket
This section contains the steps to grant Databricks the necessary permissions to access your staging bucket, depending on your preferred cloud platform.
Amazon S3
Follow these steps to grant Databricks access to your S3 bucket depending on your case:
Case 1: Use STS Token to copy staging files is disabled in the dashboard
Follow the steps listed in this section if the Use STS Token to copy staging files option is disabled, i.e. you don’t want to specify the AWS access key and secret access key while configuring your Delta Lake destination.
In this case, you need to configure your AWS account to create an instance profile, which is then attached to your Databricks cluster.
Case 2: Use STS Token to copy staging files is enabled in the dashboard
Follow the steps listed in this section if the Use STS Token to copy staging files option is enabled, i.e. you are specifying the AWS access key and secret access key in the dashboard while configuring your Delta Lake destination.
Add the following Spark configuration to your Databricks cluster:
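As a sketch, the configuration points the cluster’s S3 filesystem implementation at Databricks’ shaded S3A client. The exact property values are assumptions; confirm them against your Databricks runtime:

```
spark.hadoop.fs.s3.impl shaded.databricks.org.apache.hadoop.fs.s3a.S3AFileSystem
spark.hadoop.fs.s3a.impl shaded.databricks.org.apache.hadoop.fs.s3a.S3AFileSystem
spark.hadoop.fs.s3a.impl.disable.cache true
```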
For more information on adding custom Spark configuration properties in a Databricks cluster, refer to Spark configuration guide.
Google Cloud Storage

To grant Databricks access to your GCS bucket, add the required Spark configuration to your Databricks cluster. Then, replace the following fields with the values obtained from the downloaded service account JSON key: <project_id>, <private_key>, <private_key_id>, <client_email>.
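For reference, the GCS service account Spark configuration typically uses the standard GCS connector properties with the placeholders above. This is a sketch; confirm the property names against the Databricks documentation for GCS:

```
spark.hadoop.google.cloud.auth.service.account.enable true
spark.hadoop.fs.gs.project.id <project_id>
spark.hadoop.fs.gs.auth.service.account.email <client_email>
spark.hadoop.fs.gs.auth.service.account.private.key <private_key>
spark.hadoop.fs.gs.auth.service.account.private.key.id <private_key_id>
```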
Azure Blob Storage
To grant Databricks access to your Azure Blob Storage container, follow these steps:
Add the following Spark configuration to your Databricks cluster.
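For reference, the Blob Storage configuration typically sets the account key for the WASB endpoint, as sketched below; confirm the property name against the Databricks documentation for Azure Blob Storage:

```
spark.hadoop.fs.azure.account.key.<storage-account-name>.blob.core.windows.net <storage-account-access-key>
```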
For more information on adding custom Spark configuration properties in a Databricks cluster, refer to Spark configuration guide.
Replace the following fields with the relevant values from your Blob Storage account settings: <storage-account-name>, <storage-account-access-key>.
Creating a new Databricks cluster
To create a new Databricks cluster, follow these steps:
Sign into your Databricks account. Then, click the Compute option on the dashboard:
Click the Create Cluster option.
Next, enter the cluster details. Fill in the Cluster Name:
Select the Cluster Mode depending on your use case. The following image highlights the three cluster modes:
Then, select the Databricks Runtime Version as 7.1 or higher:
Configure the rest of the settings as per your requirement.
In the Advanced Options section, configure the Instances field as shown in the following image:
In the Instance Profile dropdown menu, select the Databricks instance profile that you added to your account in the previous step.
Finally, click the Create Cluster button to complete the configuration and create the Databricks cluster.
Obtaining the JDBC/ODBC configuration
Follow these steps to get the JDBC/ODBC configuration:
In your Databricks dashboard, click the Compute option:
Then, select the cluster you created in the previous section.
In the Advanced Options section, select the JDBC/ODBC field and copy the Server Hostname, Port, and HTTP Path values:
The Server Hostname, Port, and HTTP Path values are required to configure Delta Lake as a destination in RudderStack.
Generating the Databricks access token
To generate the Databricks access token, follow these steps:
In your Databricks dashboard, go to Settings and click User Settings:
Then, go to the Access Tokens section and click Generate New Token:
Enter your comment in the Comment field and click Generate:
Keep the Lifetime (days) field blank. If you enter a number, your access token will expire after that number of days.
Finally, copy the access token as it will be used during the Delta Lake destination setup in RudderStack.
IPs to be allowlisted
To enable network access to RudderStack, allowlist the following RudderStack IPs depending on your region and RudderStack Cloud plan:
| Plan | US | EU |
|------|----|----|
| Free, Starter, and Growth | 23.20.96.9, 18.214.35.254, 52.38.160.231, 34.211.241.254 | 18.198.90.215, 18.196.167.201 |
| Enterprise | 34.198.90.241, 54.147.40.62, 3.216.35.97, 100.20.239.77, 44.236.60.231 | 3.66.99.198, 3.64.201.167 |
All the outbound traffic is routed through these RudderStack IPs.
FAQ
What are the reserved keys for Delta Lake?
Refer to this documentation for a complete list of the reserved keywords.
Does the Databricks cluster attached to the destination need to be up all time?
No, the Databricks cluster attached to the destination does not need to be up all the time.
What happens if the cluster or the destination service is down? Is there a possibility of data loss?
If a warehouse destination is down or unavailable, RudderStack will continue to retry sending events (on an exponential backoff basis, for up to 3 hours).
RudderStack stores the syncs as staging files and retries sending them at a later time when the cluster is up again. This allows for a successful delivery without any missing data.
After retrying for up to 3 hours, RudderStack marks the syncs as aborted. Once the service is up and running again, you can go to the Syncs tab in the RudderStack dashboard and retry sending the data.
Does RudderStack automatically spin up the Databricks cluster/SQL endpoint every time it needs to write?
No, RudderStack does not spin up the Databricks cluster or the SQL endpoint on its own every time it needs to write to the cluster.
Databricks itself starts up the cluster/endpoint when the connection is established. You just need to configure the automatic termination settings in the Autopilot Options on the cluster creation page:
How does RudderStack handle the reserved words in a column, table, or schema?
There are some limitations when it comes to using reserved words as a schema, table, or column name. If such words are used in event names, traits or properties, they will be prefixed with a _ when RudderStack creates tables or columns for them in your schema.
Also, integers are not allowed at the start of a schema or table name. Hence, such schema, column, or table names will be prefixed with a _. For example, '25dollarpurchase' will be changed to '_25dollarpurchase'.
How can I modify an existing table to a partitioned table?
To modify an existing table to a partitioned table, follow these steps:
Set an exclusion window (using the Exclude Window connection setting) so that RudderStack does not process any data while you make the following changes.
Make the required changes in the connection settings of the configured Delta Lake destination.
Run the following queries in your Databricks cluster or SQL endpoint:
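As an illustrative sketch (the schema, table, and partition column names here are hypothetical; use your own), one way to convert is to rebuild the table with a partition spec and swap it in:

```sql
-- Rebuild the table partitioned by a date column (hypothetical names throughout).
CREATE TABLE rudder_schema.tracks_partitioned
USING DELTA
PARTITIONED BY (event_date)
AS SELECT * FROM rudder_schema.tracks;

-- Once the copy is verified, swap the tables.
ALTER TABLE rudder_schema.tracks RENAME TO rudder_schema.tracks_old;
ALTER TABLE rudder_schema.tracks_partitioned RENAME TO rudder_schema.tracks;
```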
How can I convert an existing managed or unmanaged table at a location to an unmanaged table at a new location?
Set an exclusion window (using the Exclude Window connection setting) so that RudderStack does not process any data while you make the following changes.
Run the following queries in your Databricks cluster or SQL endpoint:
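As a sketch (with hypothetical names and location), a CTAS statement with an explicit LOCATION clause creates an unmanaged (external) Delta table at the new path:

```sql
-- Copy the data into a new unmanaged (external) table at the target location.
CREATE TABLE rudder_schema.tracks_external
USING DELTA
LOCATION 's3://{bucket_path}/{external_location}/rudder_schema/tracks'
AS SELECT * FROM rudder_schema.tracks;

-- Once the copy is verified, drop the old table and take over its name.
DROP TABLE rudder_schema.tracks;
ALTER TABLE rudder_schema.tracks_external RENAME TO rudder_schema.tracks;
```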
How do I convert an existing unmanaged table at a specific location to a managed table (at default location)?
Set an exclusion window (using the Exclude Window connection setting) so that RudderStack does not process any data while you make the following changes.
Run the following queries in your Databricks cluster or SQL endpoint:
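As a sketch (with hypothetical names), a CTAS statement without a LOCATION clause creates a managed table at the default storage location:

```sql
-- Copy the data into a new managed table (no LOCATION clause = default storage).
CREATE TABLE rudder_schema.tracks_managed
USING DELTA
AS SELECT * FROM rudder_schema.tracks;

-- Dropping an unmanaged table removes only the metadata, not the underlying files.
DROP TABLE rudder_schema.tracks;
ALTER TABLE rudder_schema.tracks_managed RENAME TO rudder_schema.tracks;
```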