Google BigQuery is an industry-leading, fully-managed cloud data warehouse that lets you efficiently store and analyze petabytes of data.
RudderStack lets you configure BigQuery as a destination to which you can send your event data seamlessly.
Refer to the Warehouse Schemas guide for more information on how the events are mapped to the tables in BigQuery.
Find the open-source transformer code for this destination in the GitHub repository.
Before you set up BigQuery as a destination in RudderStack, follow these steps to set up your BigQuery project:
- Create a Google Cloud Platform (GCP) project if you don't have one already. For more details, refer to this guide.
- Enable the BigQuery API for your existing project if it's not done already. For more information, refer to this BigQuery guide.
- Log into your BigQuery console. Copy the project ID as shown in the following image:
This project ID is required for configuring BigQuery as a destination in RudderStack.
To enable RudderStack to load data into your BigQuery cluster, make sure that billing is enabled for the project.
- Create a new Google Cloud Storage (GCS) bucket or provide an existing one to store files before loading the data into your BigQuery instance.
To make sure that the data loads from GCS to BigQuery, co-locate your GCS bucket with your BigQuery dataset. For more information, refer to the BigQuery documentation.
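As a sketch, you could create a co-located staging bucket with the `gsutil` CLI. The project ID, bucket name, and `US` location below are placeholders; use your own values and match the location to your BigQuery dataset:

```shell
# Create a GCS staging bucket in the same location as your BigQuery dataset.
# "my-gcp-project-id" and "my-rudderstack-staging" are placeholder names.
gsutil mb -p my-gcp-project-id -l US gs://my-rudderstack-staging
```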
For RudderStack to successfully send events to your BigQuery instance, you also need to set up a service account with the necessary permissions.
Make sure that you create the service account for the same BigQuery project.
Follow the steps below to set up a service account:
- Create a new service account by going to IAM & Admin > Service Accounts.
- Add the service account permissions as specified below:
- Add the Storage Object Creator and Storage Object Viewer roles to the account.
- Add the BigQuery Job User and BigQuery Data Owner roles to the account.
If the dataset (configurable via the Namespace setting in the RudderStack dashboard) already exists, the BigQuery Data Editor role suffices in place of BigQuery Data Owner.
- Create a key for the service account with JSON as the key type, then download the private JSON key. This key is required for configuring BigQuery as a destination in RudderStack, as shown:
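The steps above can also be performed with the `gcloud` CLI. This is a minimal sketch; the project ID and service account name are placeholders, and the roles match those listed above:

```shell
# Placeholders; substitute your own project ID and service account name.
PROJECT=my-gcp-project-id
SA=rudderstack-loader

# Create the service account in the same project as your BigQuery dataset.
gcloud iam service-accounts create "$SA" --project "$PROJECT"

# Grant the GCS and BigQuery roles RudderStack needs.
for ROLE in roles/storage.objectCreator roles/storage.objectViewer \
            roles/bigquery.jobUser roles/bigquery.dataOwner; do
  gcloud projects add-iam-policy-binding "$PROJECT" \
    --member "serviceAccount:${SA}@${PROJECT}.iam.gserviceaccount.com" \
    --role "$ROLE"
done

# Create and download the private JSON key used in the RudderStack dashboard.
gcloud iam service-accounts keys create credentials.json \
  --iam-account "${SA}@${PROJECT}.iam.gserviceaccount.com"
```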
You will need to whitelist the following RudderStack IPs to enable network access:
If you have your deployment in the EU region, you can whitelist only the following two IPs:
All the outbound traffic is routed through these RudderStack IPs.
To send event data to BigQuery, you first need to add it as a destination in RudderStack and connect it to your data source. Once the destination is enabled, events automatically start flowing to BigQuery via RudderStack.
To configure BigQuery as a destination in RudderStack, follow these steps:
- In your RudderStack dashboard, set up the data source. Then, select BigQuery from the list of destinations.
- Assign a name to your destination and then click on Next.
Enter the following credentials in the Connection Credentials page:
- Project: The GCP project ID where the BigQuery database is located.
- Location: The GCP region for your dataset.
- Staging GCS Storage Bucket Name: The name of the storage bucket as specified in the Setting up the BigQuery Project section.
- Prefix: If specified, RudderStack creates a folder in the bucket with this prefix and pushes all the data within that folder.
- Namespace: Enter the schema name where RudderStack will create all the tables. If you don't specify a namespace, RudderStack sets this to the source name by default.
- Credentials: Your GCP service account credentials JSON as created in the Setting up the service account for RudderStack section.
- Sync Frequency: Specify how often RudderStack should sync the data to your BigQuery dataset.
- Sync Starting At: This optional setting lets you specify the particular time of the day (in UTC) when you want RudderStack to sync the data to BigQuery.
- Exclude Window: This optional setting lets you set a time window when RudderStack will not sync the data to your database.
RudderStack uses the source name (written in snake case, for example, `source_name`) to create a dataset in BigQuery.
For more details on the tables and columns created by RudderStack, refer to the Warehouse Schema guide.
RudderStack supports two modes by which data is ingested into BigQuery:
- Partitioned tables (default method)
- Deduplication
By default, RudderStack uses the partitioned tables method to ingest data into BigQuery. In this mode, deduplication is disabled.
If you are using open source RudderStack or hosting the RudderStack data plane, you can manage these modes via the environment variable `RSERVER_WAREHOUSE_BIGQUERY_IS_DEDUP_ENABLED`, as shown in the following table:
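As an illustration, the two modes map to the environment variable like this in a self-hosted data plane:

```shell
# Partitioned tables (default): duplicates are kept, views are created.
export RSERVER_WAREHOUSE_BIGQUERY_IS_DEDUP_ENABLED=false

# Deduplication: duplicates are discarded on load, views are not created.
# export RSERVER_WAREHOUSE_BIGQUERY_IS_DEDUP_ENABLED=true
```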
RudderStack creates ingestion-time partitioned tables based on the load date, so you can take advantage of partitioning to query a subset of the data.
You can find more details on partitioned tables in the BigQuery documentation. For information on how RudderStack creates these tables on load, refer to the Creating partitioned tables section of the BigQuery documentation.
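For example, ingestion-time partitioned tables expose the `_PARTITIONTIME` pseudo-column, which you can filter on to scan only the events loaded on a given day. The project, dataset, and date below are placeholders:

```shell
# Scan only one day's partition of the tracks table instead of the full table.
bq query --use_legacy_sql=false '
  SELECT *
  FROM `my-gcp-project-id.my_source.tracks`
  WHERE _PARTITIONTIME = TIMESTAMP("2024-01-01")'
```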
Duplicate data is not discarded when loading it into BigQuery.
In addition to tables, RudderStack creates a view (`<table_name>_view`) for every table for deduplication purposes.
More information on views can be found in this BigQuery guide.
It is highly recommended that you use the corresponding view (containing the events from the last 60 days) to avoid duplicate events in your query results. Since BigQuery views are merely logical views and are not cached, you can create a native table from the view to reduce costs by avoiding running the query that defines the view every time.
You can modify the view query to change its time window; the default is 60 days.
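One way to materialize such a view into a native table is with the `bq` CLI; the dataset, view, and destination table names here are placeholders:

```shell
# Materialize the deduplication view into a native table so the view's
# underlying query does not run on every read.
bq query --use_legacy_sql=false \
  --destination_table my_source.tracks_deduped \
  'SELECT * FROM `my-gcp-project-id.my_source.tracks_view`'
```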
In this method, RudderStack automatically discards any duplicate events while loading them into the BigQuery tables. To enable this method, set the environment variable `RSERVER_WAREHOUSE_BIGQUERY_IS_DEDUP_ENABLED` in your RudderStack data plane setup to `true`.
Views will not be created for the tables if you choose the deduplication method.
Note that the querying cost will increase if you choose this method, as RudderStack removes the duplicates from the tables through sequential scans that happen on every warehouse upload. For example, if your warehouse upload frequency is set to every 30 minutes, RudderStack performs a full scan on the `tracks` table every 30 minutes to upload any new events.
You can easily switch from the deduplication method back to the partitioned tables method by setting the environment variable `RSERVER_WAREHOUSE_BIGQUERY_IS_DEDUP_ENABLED` in your RudderStack setup to `false`.
RudderStack then does the following:
- Creates the `users_view` while loading the `users` table, if not already present.
- Creates the views for the new tables.
- All the other skipped views are not regenerated. You can either create those views manually or contact us to get them created.
There are some limitations when it comes to using reserved words in schema, table, or column names. If such words are used in event names, traits, or properties, RudderStack prefixes them with a `_` when creating tables or columns for them in your schema.
Also, integers are not allowed at the start of a schema or table name. Such schema, column, or table names are prefixed with a `_`; for example, `25dollarpurchase` is changed to `_25dollarpurchase`.
By default, RudderStack sends the data to the table/dataset based on the source it is connected to. For example, if the source is Google Tag Manager, RudderStack sets the schema name as `gtm_*`. However, you can override this behavior by setting the Namespace field in the BigQuery destination settings, as shown:
I'm looking to send data to BigQuery through RudderStack and I'm trying to understand what data is populated in each column. How do I go about this?
Refer to the RudderStack Warehouse Schema documentation for details on how RudderStack generates the schema in the warehouse and populates the data in each column.
I am trying to load data into my BigQuery destination and I get the error "Cannot read and write in different locations". What should I do?
Make sure that both your BigQuery dataset and the bucket have the same region.
When piping data to a BigQuery destination, I can set the bucket but not a folder within the bucket. Is there a way to put RudderStack data in a specific bucket folder?
Yes, you can set the desired folder name in the Prefix field while setting up your BigQuery destination in RudderStack.
The near-realtime BigQuery syncing feature is currently under development and is planned to be released in the coming months. Unfortunately, Event Replay is not a part of open-source RudderStack currently.
If you're using open source RudderStack, the minimum sync frequency is 30 minutes. If you're self-hosting the data plane or using RudderStack Cloud Pro / Enterprise, you can tweak the config to set this value to 0 and determine the best possible value for near real-time sync.
For more information, refer to this FAQ.
Do I need to stop the running pipeline to change my sync frequency? Or will the new change be effective even without stopping the pipeline?
To change the sync frequency, you need not stop the pipeline.
BigQuery uses the credentials JSON from the dashboard configuration when setting up the destination. For more information, refer to the Setting up the service account for RudderStack section.
When configuring the BigQuery destination, should the user permissions be set for the specific dataset or the whole project?
You need to set the user permissions for the whole project. Otherwise, you may encounter issues.
RudderStack retries the failed syncs for up to 3 hours before aborting them. For more information, refer to this FAQ.