Confluent Cloud is a cloud-native, fully-managed event streaming platform. Powered by Apache Kafka, it is simple and secure, and streamlines data ingestion and processing on all the major clouds. With Confluent Cloud, you can easily handle large-scale data workloads without compromising on performance.
RudderStack allows you to seamlessly configure Confluent Cloud as a destination to send your event data.
To enable sending data to Confluent Cloud, you will first need to add it as a destination for the source from which you are sending your event data. Once the destination is enabled, events from RudderStack will start flowing to Confluent Cloud.
Before configuring your source and destination on RudderStack, check whether Confluent Cloud supports the platform you are working on by referring to the table below:
Once you have confirmed that the platform supports sending events to Confluent Cloud, perform the steps mentioned below:
- Choose a source to which you would like to add Confluent Cloud as a destination.
- Select the destination as Confluent Cloud. Give your destination a name, and then click on Next.
- In the Connection Settings, fill in the required fields with the relevant information and click Next.
The required fields are as follows:
- Bootstrap server: Enter your bootstrap server information here. This is in the format server:port. You will get this information in your cluster settings.
- Topic Name: Enter the name of the Kafka topic in this field.
- API Key: This is the key you need to generate in the Confluent Cloud UI to give RudderStack the required API access. Enter the key in this field.
- API Secret: Enter the API Secret in this field - you can generate this in the Confluent Cloud UI.
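As an illustration, the fields above map onto a standard Kafka client configuration. The sketch below is hypothetical (it is not part of RudderStack and the helper name is invented); it validates the bootstrap server format and assembles the settings a Kafka client would use with Confluent Cloud's SASL/PLAIN authentication over TLS:

```python
# Illustrative sketch: assemble Confluent Cloud connection settings into a
# Kafka client config dictionary. The option names mirror common Kafka client
# settings; the helper itself is a hypothetical example.

def build_kafka_config(bootstrap_server: str, api_key: str, api_secret: str) -> dict:
    # A bootstrap server must be in host:port form.
    host, sep, port = bootstrap_server.rpartition(":")
    if not sep or not host or not port.isdigit():
        raise ValueError(f"expected host:port, got {bootstrap_server!r}")
    return {
        "bootstrap.servers": bootstrap_server,
        # Confluent Cloud authenticates with SASL/PLAIN over TLS; the API key
        # and API secret act as the SASL username and password.
        "security.protocol": "SASL_SSL",
        "sasl.mechanisms": "PLAIN",
        "sasl.username": api_key,
        "sasl.password": api_secret,
    }

# Example values below are placeholders, not real credentials.
config = build_kafka_config(
    "pkc-12345.us-east-1.aws.confluent.cloud:9092",
    "MY_API_KEY",
    "MY_API_SECRET",
)
```

If the bootstrap server string is malformed (for example, missing the port), the helper raises a ValueError instead of producing a config that would fail at connect time.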
RudderStack uses the userId as the partition key of a given message. If userId is not present in the payload, then anonymousId is used.
If you have a multi-partitioned topic, then the records with the same userId (or anonymousId in the absence of userId) will always go to the same partition.
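The partition-key behavior above can be sketched as follows. Kafka's default partitioner hashes the message key and takes it modulo the topic's partition count; the stand-in hash below is only for illustration, but it shows why records sharing a userId (or anonymousId) always land on the same partition:

```python
import hashlib

def partition_for(payload: dict, num_partitions: int) -> int:
    """Illustrative stand-in for key-based partitioning (not RudderStack code)."""
    # The partition key is userId, falling back to anonymousId.
    key = payload.get("userId") or payload.get("anonymousId")
    # Stand-in for Kafka's murmur2-based default partitioner: any stable hash
    # of the key, modulo the partition count, gives the same guarantee that
    # equal keys map to equal partitions.
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

p1 = partition_for({"userId": "u-42", "event": "Order Completed"}, 6)
p2 = partition_for({"userId": "u-42", "event": "Cart Viewed"}, 6)
assert p1 == p2  # same userId -> same partition
```

Per-key ordering is the practical payoff: because all events for a given user hash to one partition, Kafka preserves their relative order within that partition.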