In this guide we’ll learn how to ingest data into Apache Pinot from an Apache Kafka cluster configured with SSL and SASL authentication.
| Pinot Version | 0.10.0 |
|---|---|
| Code | startreedata/pinot-recipes/kafka-ssl-sasl |
To follow the code examples in this guide, you'll need Docker installed, along with a local copy of the startreedata/pinot-recipes repository.
You can spin up a Pinot cluster with Docker Compose.
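A minimal sketch, assuming the recipe's docker-compose.yml is in your working directory:

```bash
docker-compose up
```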
This command will run a single instance each of the Pinot Controller, Pinot Server, Pinot Broker, and ZooKeeper. You can find the docker-compose.yml file on GitHub.
Let's create a Pinot schema and table.
The schema is defined below:
config/schema.json
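A sketch of what this file might contain. The count column is required by the query at the end of this guide; the uuid and ts field names are illustrative:

```json
{
  "schemaName": "events",
  "dimensionFieldSpecs": [
    {"name": "uuid", "dataType": "STRING"}
  ],
  "metricFieldSpecs": [
    {"name": "count", "dataType": "INT"}
  ],
  "dateTimeFieldSpecs": [
    {
      "name": "ts",
      "dataType": "TIMESTAMP",
      "format": "1:MILLISECONDS:EPOCH",
      "granularity": "1:MILLISECONDS"
    }
  ]
}
```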
And the table configuration below:
config/table.json
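A sketch of the realtime table config, assuming the events schema above; the streamConfigs block is where the Kafka SASL settings described below live:

```json
{
  "tableName": "events",
  "tableType": "REALTIME",
  "segmentsConfig": {
    "timeColumnName": "ts",
    "schemaName": "events",
    "replication": "1",
    "replicasPerPartition": "1"
  },
  "tenants": {},
  "tableIndexConfig": {
    "loadMode": "MMAP",
    "streamConfigs": {
      "streamType": "kafka",
      "stream.kafka.topic.name": "events",
      "stream.kafka.broker.list": "<bootstrap-servers>",
      "stream.kafka.consumer.type": "lowlevel",
      "stream.kafka.consumer.prop.auto.offset.reset": "smallest",
      "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
      "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder",
      "security.protocol": "SASL_SSL",
      "sasl.mechanism": "PLAIN",
      "sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"<cluster-api-key>\" password=\"<cluster-api-secret>\";"
    }
  },
  "metadata": {}
}
```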
You'll need to replace `<bootstrap-servers>` with the host and port of your Kafka cluster.

The credentials that we want to use are specified in the `sasl.jaas.config` property. You'll need to replace `<cluster-api-key>` and `<cluster-api-secret>` with your own credentials.
If your Kafka cluster does not have SSL enabled, you would need to specify `security.protocol` as `SASL_PLAINTEXT` instead of `SASL_SSL`. For an example of using SASL without SSL, see Connecting to Kafka with SASL authentication.
Create the table and schema with the AddTable command.
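A sketch, assuming the controller container is named pinot-controller and the recipe's config directory is mounted into it at /config:

```bash
docker exec -it pinot-controller bin/pinot-admin.sh AddTable \
  -schemaFile /config/schema.json \
  -tableConfigFile /config/table.json \
  -exec
```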
Next, ingest a few messages into your Kafka cluster.
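The messages below are illustrative, with count values chosen to line up with the query output later in this guide. You could publish them with kafka-console-producer, assuming a hypothetical config/producer.properties file containing your SASL settings:

```bash
printf '%s\n' \
  '{"ts": 1646486400000, "uuid": "ca4e1c4f", "count": 300}' \
  '{"ts": 1646486401000, "uuid": "9c9f2e1b", "count": 365}' \
  '{"ts": 1646486402000, "uuid": "07c152a2", "count": 300}' |
kafka-console-producer \
  --bootstrap-server <bootstrap-servers> \
  --producer.config config/producer.properties \
  --topic events
```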
If you're using Confluent Cloud, you can ingest these messages via the UI.
Now let's navigate to localhost:9000/#/query and run a query against the table.
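A query consistent with the output shown below, assuming the events table name from the sketches above:

```sql
select count(*), sum(count)
from events
```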
You will see the following output:
| count(*) | sum(count) |
|---|---|
| 3 | 965 |
*Query Results*