Google BigQuery destination

Sync data from RudderStack to Google BigQuery.

Google BigQuery is an industry-leading, fully-managed cloud data warehouse that lets you efficiently store and analyze petabytes of data.

Refer to the Warehouse Schema guide for more information on how the events are mapped to the tables in BigQuery.
Find the open source code for this destination in the GitHub repository.

Setting up the BigQuery project

Before you set up BigQuery as a destination in RudderStack, follow these steps to set up your BigQuery project:

  1. Create a Google Cloud Platform (GCP) project if you don’t have one already. For more details, refer to this BigQuery documentation.
  2. Enable the BigQuery API for your existing project if it’s not done already. For more information, refer to this BigQuery documentation.
  3. Log in to your BigQuery console and copy the project ID, as shown in the following image:
Copy the Project ID
This project ID is required for configuring BigQuery as a destination in RudderStack.
To enable RudderStack to load data into your BigQuery cluster, make sure that billing is enabled for the project.
  4. Create a new Google Cloud Storage (GCS) bucket or provide an existing one to store files before loading the data into your BigQuery instance.
To make sure that the data loads from GCS to BigQuery, co-locate your GCS bucket with your BigQuery dataset. For more information, refer to the BigQuery documentation.

Setting up the service account for RudderStack

For RudderStack to successfully send events to your BigQuery instance, you also need to set up a service account with the necessary permissions.

Make sure that you create the service account for the same BigQuery project.

Follow the steps below to set up a service account:

  1. Create a new service account by going to IAM & Admin > Service Accounts.
  2. Add the service account permissions as specified below:
  • Add the Storage Object Creator and Storage Object Viewer roles to the account.
  • Add the BigQuery Job User and BigQuery Data Owner roles to the account.
If the dataset (configurable via the Namespace setting in the RudderStack dashboard) already exists, the BigQuery Data Editor role suffices in place of BigQuery Data Owner.
  3. Create a key for the service account with JSON as the type and store it.
Create a key
  4. Create and download the private JSON key required for configuring BigQuery as a destination in RudderStack:
JSON key required for the RudderStack UI
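Before pasting the downloaded key into the RudderStack dashboard, you can sanity-check it locally. The following is a minimal sketch in Python; the `validate_key` helper is illustrative, and the field names follow the standard GCP service account key format:

```python
import json

# Fields a GCP service account key file is expected to contain.
REQUIRED_FIELDS = {"type", "project_id", "private_key", "client_email"}

def validate_key(raw_json: str) -> bool:
    """Return True if the key JSON looks like a valid service account key."""
    key = json.loads(raw_json)
    return key.get("type") == "service_account" and REQUIRED_FIELDS <= key.keys()

sample = json.dumps({
    "type": "service_account",
    "project_id": "my-gcp-project",
    "private_key": "-----BEGIN PRIVATE KEY-----...",
    "client_email": "rudderstack@my-gcp-project.iam.gserviceaccount.com",
})
print(validate_key(sample))  # True
```

A key missing any of these fields (or with a different `type`) typically fails when RudderStack attempts to authenticate, so checking up front saves a failed sync.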

IPs to be allowlisted

By default, you can access BigQuery via the Google APIs, which are publicly accessible. As such, allowlisting any IPs is not required. However, if your VPC service restricts the BigQuery APIs, you will need to allowlist the IPs listed below by setting up network access control for BigQuery.

To enable network access to RudderStack, allowlist the following RudderStack IPs depending on your region and RudderStack Cloud plan:

Free, Starter, and Growth
All the outbound traffic is routed through these RudderStack IPs.

Configuring Google BigQuery destination in RudderStack

To send event data to BigQuery, you first need to add it as a destination in RudderStack and connect it to your data source. Once the destination is enabled, events automatically start flowing to BigQuery via RudderStack.

To configure BigQuery as a destination in RudderStack, follow these steps:

  1. In your RudderStack dashboard, set up the data source. Then, select BigQuery from the list of destinations.
  2. Assign a name to your destination and then click Next.

Connection settings

  • Project: The GCP project ID where the BigQuery database is located.
  • Location: The GCP region for your dataset.
  • Staging GCS Storage Bucket Name: The name of the storage bucket as specified in the Setting up the BigQuery project section.
  • Prefix: If specified, RudderStack creates a folder in the bucket with this prefix and pushes all data within that folder.
  • Namespace: Enter the schema name where RudderStack will create all the tables. If you don’t specify a namespace, RudderStack sets it to the source name by default.
  • Credentials: Your GCP service account credentials JSON as created in the Setting up the service account for RudderStack section.
  • Sync Frequency: Specify how often RudderStack should sync the data to your BigQuery dataset.
  • Sync Starting At: This optional setting lets you specify the particular time of the day (in UTC) when you want RudderStack to sync the data to BigQuery.
  • Exclude Window: This optional setting lets you set a time window when RudderStack will not sync the data to your database.
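To illustrate how the Staging GCS Storage Bucket Name and Prefix settings combine, here is a sketch of the resulting object location. The exact object layout RudderStack uses is an internal detail; this path format is hypothetical:

```python
def staging_path(bucket: str, prefix: str, filename: str) -> str:
    """Build a GCS object URI; with a prefix, files land inside that folder."""
    parts = [p for p in (prefix.strip("/"), filename) if p]
    return f"gs://{bucket}/" + "/".join(parts)

print(staging_path("my-staging-bucket", "rudder", "batch_0001.json.gz"))
# gs://my-staging-bucket/rudder/batch_0001.json.gz
print(staging_path("my-staging-bucket", "", "batch_0001.json.gz"))
# gs://my-staging-bucket/batch_0001.json.gz
```

With an empty Prefix, staged files sit at the bucket root; with a Prefix set, they are grouped under that folder, which is useful when the bucket is shared with other workloads.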

Advanced settings

  • Skip User Table: Toggle on this setting to send events exclusively to the identifies table and skip the users table, eliminating the need for a merge operation on the users table.
  • Skip Tracks Table: Toggle on this setting to skip sending events to the tracks table.
  • JSON Columns: Use this optional setting to specify the required JSON column paths in dot notation, separated by commas. This option applies to all incoming track events for this destination.
With the JSON columns feature, you can now ingest semi-structured event data not defined by a fixed schema. For more information on this feature, refer to the JSON Column Support guide.
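The JSON Columns setting is a comma-separated list of dot-notation paths. A small sketch of how such a setting value breaks down into individual paths (the example setting value and helper are illustrative, not RudderStack's actual parser):

```python
def parse_json_paths(setting: str) -> list[list[str]]:
    """Split a comma-separated setting into individual dot-notation paths."""
    return [path.strip().split(".") for path in setting.split(",") if path.strip()]

# e.g. a hypothetical setting value naming two JSON column paths
print(parse_json_paths("properties.cart, context.payload"))
# [['properties', 'cart'], ['context', 'payload']]
```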

Schema, partitioned tables, views, and deduplication

RudderStack uses the source name (written in snake case, for example, source_name) to create a dataset in BigQuery.
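The snake-case conversion can be sketched as follows. This is a simplified illustration; RudderStack's actual normalization logic may handle additional edge cases:

```python
import re

def to_dataset_name(source_name: str) -> str:
    """Snake-case a source name, e.g. "My Web App" -> "my_web_app"."""
    name = re.sub(r"[^a-zA-Z0-9]+", "_", source_name.strip()).lower()
    return name.strip("_")

print(to_dataset_name("My Web App"))  # my_web_app
```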

For more details on the tables and columns created by RudderStack, refer to the Warehouse Schema guide.

RudderStack supports two modes by which data is ingested into BigQuery:

  • Partitioned tables (default method)
  • Deduplication
By default, RudderStack uses the partitioned tables method to ingest data into BigQuery. Deduplication is disabled.

If you are using RudderStack Open Source or hosting the RudderStack data plane, you can manage these modes via the environment variable RSERVER_WAREHOUSE_BIGQUERY_IS_DEDUP_ENABLED, as shown in the following table:

| Mode | RSERVER_WAREHOUSE_BIGQUERY_IS_DEDUP_ENABLED |
|------|---------------------------------------------|
| Partitioned tables | false |
| Deduplication | true |
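The mode selection implied by this variable can be sketched as follows. This mirrors the documented behavior (deduplication only when the variable is set to true); the actual rudder-server parsing may differ in detail:

```python
import os

def ingestion_mode() -> str:
    """Pick the BigQuery ingestion mode from the data plane environment."""
    flag = os.environ.get("RSERVER_WAREHOUSE_BIGQUERY_IS_DEDUP_ENABLED", "false")
    return "deduplication" if flag.lower() == "true" else "partitioned tables"

os.environ["RSERVER_WAREHOUSE_BIGQUERY_IS_DEDUP_ENABLED"] = "true"
print(ingestion_mode())  # deduplication
```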

Partitioned tables and views

RudderStack creates ingestion-time partitioned tables based on the load date, which you can leverage to query a subset of the data.

You can find more details on the BigQuery partitioned tables in BigQuery documentation. For information on how RudderStack creates these tables on load, refer to the Creating partitioned tables section of the BigQuery documentation.
Duplicate data is not discarded when loading it into BigQuery.

In addition to tables, RudderStack creates a view (<table_name>_view) for every table for de-duplication purposes.

More information on views can be found in this BigQuery guide.

It is highly recommended that you use the corresponding view (containing the events from the last 60 days) to avoid duplicate events in your query results. Since BigQuery views are logical views and are not cached, you can create a native table from the view to save costs by avoiding running the view's underlying query every time.

You can modify the view query to change its time window; the default is 60 days.
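As a rough sketch of what such a deduplication view looks like, the following composes a BigQuery view query that keeps one row per event within the time window. The column names (`id`, `received_at`) and the exact query shape are assumptions for illustration; inspect the view RudderStack created in your dataset for its actual definition:

```python
def dedup_view_sql(dataset: str, table: str, window_days: int = 60) -> str:
    """Compose a BigQuery query that keeps the latest row per event id
    within the last `window_days` days (column names are assumed)."""
    return (
        f"SELECT * EXCEPT (row_num) FROM ("
        f"SELECT *, ROW_NUMBER() OVER (PARTITION BY id ORDER BY received_at DESC) AS row_num "
        f"FROM `{dataset}.{table}` "
        f"WHERE received_at >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL {window_days} DAY)"
        f") WHERE row_num = 1"
    )

print(dedup_view_sql("my_source", "tracks"))
```

Changing the `window_days` argument corresponds to editing the interval in the view query to widen or narrow the deduplication window.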

Deduplication method

In this method, RudderStack automatically discards any duplicate events while loading them into the BigQuery tables. To enable this method, set the environment variable RSERVER_WAREHOUSE_BIGQUERY_IS_DEDUP_ENABLED in your RudderStack data plane setup to true.


Note the following:

  • The RudderStack team handles this setting for the RudderStack Cloud Growth and Enterprise plans.
  • Views will not be created for the tables if you choose the deduplication method.
  • The querying cost will increase if you choose this method, as RudderStack removes the duplicates from the tables through sequential scans that happen on every warehouse upload. For example, if your warehouse upload frequency is set to every 30 minutes, RudderStack performs a full scan on the tracks table every 30 minutes to upload any new events.

Switching from deduplication to partitioned tables

To switch from the deduplication method to partitioned tables, set the environment variable RSERVER_WAREHOUSE_BIGQUERY_IS_DEDUP_ENABLED in your RudderStack setup to false.

RudderStack then does the following:

  • Creates users_view while loading the users table, if it is not already present.
  • Creates views for the new track events.
  • Does not regenerate the other views skipped during deduplication. You can either create those views manually or contact the RudderStack team to create them.


FAQ

Where do I add the allowlisted IPs in BigQuery?

By default, BigQuery is accessible via publicly accessible Google APIs. As such, allowlisting any IPs is not required. However, if your VPC service restricts the BigQuery APIs, you will need to allowlist the IPs by setting up network access control for BigQuery.

How are reserved words handled by RudderStack?

There are some limitations on using reserved words in schema, table, or column names. If such words are used in event names, traits, or properties, RudderStack prefixes them with an underscore (_) when creating the corresponding tables or columns in your schema.

Also, schema, table, or column names cannot start with an integer, so RudderStack prefixes such names with an underscore (_).

For instance, 25dollarpurchase will be changed to _25dollarpurchase.
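The prefixing rule above can be sketched as follows. The reserved word list here is a small illustrative subset, not BigQuery's full list:

```python
RESERVED = {"group", "order", "select"}  # illustrative subset of reserved words

def safe_name(name: str) -> str:
    """Prefix an underscore when a name is reserved or starts with a digit."""
    if name.lower() in RESERVED or name[:1].isdigit():
        return "_" + name
    return name

print(safe_name("25dollarpurchase"))  # _25dollarpurchase
print(safe_name("revenue"))           # revenue
```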

When sending data into a data warehouse, how can I change the table where this data is sent?

By default, RudderStack sends the data to the table/dataset based on the source it is connected to. For example, if the source is Google Tag Manager, RudderStack sets the schema name as gtm_*. However, you can override this behavior by setting the Namespace field in the BigQuery destination settings:


I’m looking to send data to BigQuery through RudderStack and I’m trying to understand what data is populated in each column. How do I go about this?

Refer to the Warehouse Schema documentation for details on how RudderStack generates the schema in the warehouse and populates the data in each column.

I am trying to load data into my BigQuery destination and I get the error “Cannot read and write in different locations”. What should I do?

Make sure that both your BigQuery dataset and the bucket have the same region.

When piping data to a BigQuery destination, I can set the bucket but not a folder within the bucket. Is there a way to put RudderStack data in a specific bucket folder?

Yes, you can set the desired folder name in the Prefix field while setting up your BigQuery destination in RudderStack.

Does open source RudderStack support near real-time syncing to BigQuery and event replay?

The near real-time BigQuery syncing feature is currently under development and is planned for release in the coming months. Event replay is currently not a part of open source RudderStack.

What is the current sync frequency for BigQuery?

If you’re using open source RudderStack, the minimum sync frequency is 30 minutes. If you’re self-hosting the data plane or using RudderStack’s Enterprise plan, you can set the sync frequency via the uploadFreqInS parameter in the config.yaml file. Note that the minimum value is 1800 (30 minutes).
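The 30-minute floor can be sketched as a simple clamp. Whether rudder-server clamps or rejects lower configured values is an assumption here; the sketch only illustrates the documented minimum:

```python
MIN_UPLOAD_FREQ_S = 1800  # 30 minutes, the documented minimum

def effective_upload_freq(configured_s: int) -> int:
    """Apply the 30-minute floor to a configured uploadFreqInS value."""
    return max(configured_s, MIN_UPLOAD_FREQ_S)

print(effective_upload_freq(600))   # 1800
print(effective_upload_freq(3600))  # 3600
```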

For more information, refer to this FAQ.

Do I need to stop the running pipeline to change my sync frequency? Or will the new change be effective even without stopping the pipeline?

No, you do not need to stop the pipeline to change the sync frequency; the new setting takes effect without stopping it.

When configuring the BigQuery destination, where does Google use the credentials JSON from?

BigQuery uses the credentials JSON from the dashboard configuration when setting up the destination. For more information, refer to the Setting up the service account for RudderStack section.

When configuring the BigQuery destination, should the user permissions be set for the specific dataset or the whole project?

You need to set the user permissions for the whole project. Otherwise, you may encounter issues.

How long are the failed syncs retried before being aborted?

RudderStack retries the failed syncs for up to 3 hours before aborting them. For more information, refer to this FAQ.

For a more comprehensive FAQ list, refer to the Warehouse FAQ guide.

Questions? Contact us by email or on Slack