Goal

This document explains how to configure the elements needed to use Kafka in an environment secured with Kerberos, and how to connect to it from the Denodo Platform.

The main elements in this configuration are the Kafka nodes, the ZooKeeper server, the Kerberos server (KDC) and the Denodo Platform.

NOTE: The commands used in this document apply to Docker environments, but the concepts also apply to non-Docker environments.

Architecture

The architecture of a Kafka environment inside a Kerberos domain contains the following elements:

  • kafkaN refers to the nodes of the Kafka cluster.
  • ZooKeeper is a centralized service that is essential for the operation of Kafka.
  • KDC refers to the Kerberos service (Key Distribution Center).


krb5.conf file

This is the krb5.conf or krb5.ini file used in the example.

The realm used is TEST.CONFLUENT.IO. The same file is distributed to all servers, since every machine has to know the Kerberos realms.

[libdefaults]
    default_realm = TEST.CONFLUENT.IO
    ticket_lifetime = 24h
    forwardable = true
    rdns = false
    dns_lookup_kdc = no
    dns_lookup_realm = no

[realms]
    TEST.CONFLUENT.IO = {
        kdc = kdc
        admin_server = kadmin
    }

[domain_realm]
    .test.confluent.io = TEST.CONFLUENT.IO
    test.confluent.io = TEST.CONFLUENT.IO
    kerberos-demo.local = TEST.CONFLUENT.IO
    .kerberos-demo.local = TEST.CONFLUENT.IO

[logging]
    kdc = FILE:/var/log/kerberos/krb5kdc.log
    admin_server = FILE:/var/log/kerberos/kadmin.log
    default = FILE:/var/log/kerberos/krb5lib.log


ZooKeeper configuration

We will start by configuring ZooKeeper. This requires some actions on the KDC server, such as the creation of the principals and keytabs, and a second configuration on the ZooKeeper server itself, which consists of specifying the krb5 file, the JAAS configuration and the authentication provider.

Principal and keytab creation

The principals and the necessary keytab files are generated in the KDC to configure the ZooKeeper service within Kerberos.

  • Create a principal in the KDC for the ZooKeeper service.

# Zookeeper service principal:
$ docker exec -i kdc kadmin.local -w password -q "add_principal -randkey zookeeper/zookeeper.kerberos-demo.local@TEST.CONFLUENT.IO" > /dev/null
$ docker exec -i kdc kadmin.local -w password -q "modprinc -maxlife 11days -maxrenewlife 11days +allow_renewable zookeeper/zookeeper.kerberos-demo.local@TEST.CONFLUENT.IO" > /dev/null
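You can optionally verify that the principal was registered. This is a hedged check using the standard listprincs query of kadmin.local, assuming the KDC container is named kdc as in the commands above:

# Optional check: list the ZooKeeper principals registered in the KDC
$ docker exec -i kdc kadmin.local -w password -q "listprincs zookeeper/*"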

  • Create a principal to connect to ZooKeeper from the brokers. IMPORTANT: Use the same credentials for all brokers.

# Create a principal to connect to Zookeeper from brokers - NB use the same credential for all brokers!
$ docker exec -i kdc kadmin.local -w password -q "add_principal -randkey zkclient@TEST.CONFLUENT.IO" > /dev/null
$ docker exec -i kdc kadmin.local -w password -q "modprinc -maxlife 11days -maxrenewlife 11days +allow_renewable zkclient@TEST.CONFLUENT.IO" > /dev/null

  • Create the keytabs in the KDC for the ZooKeeper service principal (zookeeper.key) and for the zkclient principal used by the brokers (zookeeper-client.key).

$ docker exec -i kdc kadmin.local -w password -q "ktadd -k /var/lib/secret/zookeeper.key -norandkey zookeeper/zookeeper.kerberos-demo.local@TEST.CONFLUENT.IO" > /dev/null
$ docker exec -i kdc kadmin.local -w password -q "ktadd -k /var/lib/secret/zookeeper-client.key -norandkey zkclient@TEST.CONFLUENT.IO" > /dev/null

  • Set read permissions on the keytabs in the KDC.

$ docker exec -i kdc chmod a+r /var/lib/secret/zookeeper.key
$ docker exec -i kdc chmod a+r /var/lib/secret/zookeeper-client.key

  • Copy the keytabs and the krb5.conf file from the KDC to ZooKeeper.
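In a Docker environment, one way to copy the files is through the host with docker cp. This is a minimal sketch assuming the containers are named kdc and zookeeper; note that in the compose example below the copy is unnecessary, because the secret volume is shared between containers:

# Illustrative copy through the host; paths follow the examples in this document
$ docker cp kdc:/var/lib/secret/zookeeper.key ./zookeeper.key
$ docker cp ./zookeeper.key zookeeper:/var/lib/secret/zookeeper.key
$ docker cp kdc:/etc/krb5.conf ./krb5.conf
$ docker cp ./krb5.conf zookeeper:/etc/krb5.conf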

ZooKeeper configuration


As shown below in the KAFKA_OPTS parameter, the JAAS configuration file, the krb5.conf file, the authProvider and the requireClientAuthScheme are set.

The JAAS configuration file specifies Kerberos authentication parameters such as the keytab and the principal.

The krb5.conf file is the Kerberos configuration file. It should be the same on all machines so that they all know the realms.

The value of authProvider will always be "org.apache.zookeeper.server.auth.SASLAuthenticationProvider" when SASL authentication is used, for both SSL and plain configurations.

The value of requireClientAuthScheme will always be "sasl" when SASL authentication is used, for both SSL and plain configurations.

zookeeper:
    hostname: zookeeper.kerberos-demo.local
    depends_on:
      - kdc
    # Required to wait for the keytab to get generated
    restart: on-failure
    volumes:
      - secret:/var/lib/secret
      - ../../environment/kerberos/kdc/krb5.conf:/etc/krb5.conf
      - ../../environment/kerberos/zookeeper/zookeeper.sasl.jaas.config:/etc/kafka/zookeeper.sasl.jaas.config
    environment:
      KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/zookeeper.sasl.jaas.config
        -Djava.security.krb5.conf=/etc/krb5.conf
        -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
        -Dsun.security.krb5.debug=true
        -Dzookeeper.allowSaslFailedClients=false
        -Dzookeeper.requireClientAuthScheme=sasl
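In a non-Docker deployment, the same settings can be applied by adding the equivalent entries to the ZooKeeper configuration file and passing the JAAS and krb5 files as JVM options. The following is a sketch under those assumptions; the file paths are illustrative:

# zoo.cfg (or zookeeper.properties) entries
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl

# JVM options for the ZooKeeper process, e.g. exported before zkServer.sh start
SERVER_JVMFLAGS="-Djava.security.auth.login.config=/etc/kafka/zookeeper.sasl.jaas.config -Djava.security.krb5.conf=/etc/krb5.conf"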

JAAS file used by ZooKeeper

The JAAS configuration file contains information specific to the keytab and principal used to authenticate ZooKeeper against the Kerberos service.

In this example the file is mounted at /etc/kafka/zookeeper.sasl.jaas.config, as shown in the volumes section above.

/*
 *   Service principal for Zookeeper server.
 *
 *   Zookeeper server accepts client connections.
 */
Server {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    useTicketCache=false
    keyTab="/var/lib/secret/zookeeper.key"
    principal="zookeeper/zookeeper.kerberos-demo.local@TEST.CONFLUENT.IO";
};

Kafka nodes configuration

To configure the Kafka nodes we will carry out some actions on the KDC server, such as creating the principals and keytabs, and then configure the Kafka nodes themselves: the krb5 file, the JAAS configuration and some other options.

Principal and keytab creation

  • Create the principal in the KDC for the Kafka service.

### Create the required identities:
# Kafka service principal:
docker exec -i kdc kadmin.local -w password -q "add_principal -randkey kafka/broker.kerberos-demo.local@TEST.CONFLUENT.IO" > /dev/null
docker exec -i kdc kadmin.local -w password -q "modprinc -maxlife 11days -maxrenewlife 11days +allow_renewable kafka/broker.kerberos-demo.local@TEST.CONFLUENT.IO" > /dev/null
docker exec -i kdc kadmin.local -w password -q "add_principal -randkey kafka/broker2.kerberos-demo.local@TEST.CONFLUENT.IO" > /dev/null
docker exec -i kdc kadmin.local -w password -q "modprinc -maxlife 11days -maxrenewlife 11days +allow_renewable kafka/broker2.kerberos-demo.local@TEST.CONFLUENT.IO" > /dev/null

  • Create the admin principal in the KDC for ACLs (read and write permissions on topics).

# Create an admin principal for the cluster, which we'll use to set up ACLs.
# Look after this - it's also declared a super user in broker config.
docker exec -i kdc kadmin.local -w password -q "add_principal -randkey admin/for-kafka@TEST.CONFLUENT.IO" > /dev/null
docker exec -i kdc kadmin.local -w password -q "modprinc -maxlife 11days -maxrenewlife 11days +allow_renewable admin/for-kafka@TEST.CONFLUENT.IO" > /dev/null

  • Create the keytabs for the broker and admin principals.

docker exec -i kdc kadmin.local -w password -q "ktadd -k /var/lib/secret/broker.key -norandkey kafka/broker.kerberos-demo.local@TEST.CONFLUENT.IO" > /dev/null
docker exec -i kdc kadmin.local -w password -q "ktadd -k /var/lib/secret/broker2.key -norandkey kafka/broker2.kerberos-demo.local@TEST.CONFLUENT.IO" > /dev/null
docker exec -i kdc kadmin.local -w password -q "ktadd -k /var/lib/secret/kafka-admin.key -norandkey admin/for-kafka@TEST.CONFLUENT.IO" > /dev/null

  • Set read permissions on the keytabs in the KDC.

docker exec -i kdc chmod a+r /var/lib/secret/broker.key
docker exec -i kdc chmod a+r /var/lib/secret/broker2.key
docker exec -i kdc chmod a+r /var/lib/secret/kafka-admin.key

  • Copy the krb5.conf file and the keytabs from the KDC to the Kafka nodes (broker.key to node1, ..., brokerN.key to nodeN). The kafka-admin.key file must be copied to all Kafka nodes.
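Once copied, the keytab entries can be verified with klist. This is an optional check, assuming the first broker container is named broker as in the compose files below:

# Optional check: list the entries of the keytab copied to the first node
docker exec -i broker klist -kt /var/lib/secret/broker.key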

Kafka nodes configuration

In the volumes section, the necessary krb5.conf and jaas.config files are specified, as well as the secret directory that contains the keytabs.

For the Kafka nodes, the SASL mechanism is GSSAPI.

The KAFKA_ADVERTISED_LISTENERS property indicates the protocol, host and port that the broker advertises to clients.

The SASL_PLAINTEXT value indicates that SASL is used for authentication without SSL; in this example the port is 9092.

In the KAFKA_OPTS property, the jaas.config file of each specific node is set.

The metrics reporter properties are an optional configuration used to view metrics from a client.

Node 1

broker:
    hostname: broker.kerberos-demo.local
    depends_on:
      - zookeeper
      - kdc
    # Required to wait for the keytab to get generated
    restart: on-failure
    volumes:
      - secret:/var/lib/secret
      - ../../environment/kerberos/kdc/krb5.conf:/etc/krb5.conf
      - ../../environment/kerberos/kafka/broker.sasl.jaas.config:/etc/kafka/broker.sasl.jaas.config
    environment:
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper.kerberos-demo.local:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: SASL_PLAINTEXT:SASL_PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: SASL_PLAINTEXT://broker.kerberos-demo.local:9092
      KAFKA_LISTENERS: SASL_PLAINTEXT://:9092
      # Kerberos / GSSAPI Authentication mechanism
      KAFKA_SASL_ENABLED_MECHANISMS: GSSAPI
      KAFKA_SASL_KERBEROS_SERVICE_NAME: kafka
      # Configure replication to require Kerberos:
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: GSSAPI
      KAFKA_INTER_BROKER_LISTENER_NAME: SASL_PLAINTEXT
      # Authorization config:
      KAFKA_AUTHORIZER_CLASS_NAME: $KAFKA_AUTHORIZER_CLASS_NAME
      KAFKA_ZOOKEEPER_SET_ACL: "true"
      KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "false"
      KAFKA_LOG4J_LOGGERS: "kafka.authorizer.logger=INFO"
      KAFKA_SUPER_USERS: User:admin;User:kafka;User:schemaregistry;User:connect;User:controlcenter;User:ksqldb
      KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/broker.sasl.jaas.config
                  # -Djdk.security.allowNonCaAnchor=true
                  # -Dsun.security.krb5.disableReferrals=true
      # Metrics reporter
      CONFLUENT_METRICS_REPORTER_SASL_MECHANISM: GSSAPI
      CONFLUENT_METRICS_REPORTER_SECURITY_PROTOCOL: SASL_PLAINTEXT
      CONFLUENT_METRICS_REPORTER_SASL_KERBEROS_SERVICE_NAME: kafka
      CONFLUENT_METRICS_REPORTER_SASL_JAAS_CONFIG: "com.sun.security.auth.module.Krb5LoginModule required \
        useKeyTab=true \
        storeKey=true \
        keyTab=\"/var/lib/secret/kafka-admin.key\" \
        principal=\"admin/for-kafka@TEST.CONFLUENT.IO\";"

Node 2

broker2:
    image: ${CP_KAFKA_IMAGE}:${TAG}
    hostname: broker2.kerberos-demo.local
    container_name: broker2
    depends_on:
      - zookeeper
      - kdc
    # Required to wait for the keytab to get generated
    restart: on-failure
    volumes:
      - secret:/var/lib/secret
      - ../../environment/kerberos/kdc/krb5.conf:/etc/krb5.conf
      - ../../environment/kerberos/kafka/broker2.sasl.jaas.config:/etc/kafka/broker.sasl.jaas.config
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper.kerberos-demo.local:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: SASL_PLAINTEXT:SASL_PLAINTEXT
      KAFKA_LISTENERS: SASL_PLAINTEXT://:9092
      KAFKA_ADVERTISED_LISTENERS: SASL_PLAINTEXT://broker2.kerberos-demo.local:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      # for 5.4.x:
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      # for 6.0.0
      KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
      # Kerberos / GSSAPI Authentication mechanism
      KAFKA_SASL_ENABLED_MECHANISMS: GSSAPI
      KAFKA_SASL_KERBEROS_SERVICE_NAME: kafka
      # Configure replication to require Kerberos:
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: GSSAPI
      KAFKA_INTER_BROKER_LISTENER_NAME: SASL_PLAINTEXT
      # Authorization config:
      KAFKA_AUTHORIZER_CLASS_NAME: $KAFKA_AUTHORIZER_CLASS_NAME
      KAFKA_ZOOKEEPER_SET_ACL: "true"
      KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "false"
      KAFKA_LOG4J_LOGGERS: "kafka.authorizer.logger=INFO"
      KAFKA_SUPER_USERS: User:admin;User:kafka;User:schemaregistry;User:connect;User:controlcenter;User:ksqldb
      KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/broker.sasl.jaas.config
                  # -Djdk.security.allowNonCaAnchor=true
                  # -Dsun.security.krb5.disableReferrals=true
      # Confluent Metrics Reporter for Control Center Cluster Monitoring
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker2:9092
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_REPORTER_SASL_MECHANISM: GSSAPI
      CONFLUENT_METRICS_REPORTER_SECURITY_PROTOCOL: SASL_PLAINTEXT
      CONFLUENT_METRICS_REPORTER_SASL_KERBEROS_SERVICE_NAME: kafka
      CONFLUENT_METRICS_REPORTER_SASL_JAAS_CONFIG: "com.sun.security.auth.module.Krb5LoginModule required \
        useKeyTab=true \
        storeKey=true \
        keyTab=\"/var/lib/secret/kafka-admin.key\" \
        principal=\"admin/for-kafka@TEST.CONFLUENT.IO\";"
      CONFLUENT_METRICS_ENABLE: 'true'

JAAS file used by the Kafka nodes

In the JAAS file, the keytab and the principal are specified.

In addition to the KafkaServer and KafkaClient sections, it is necessary to specify the ZooKeeper Client section, which uses the zkclient principal created in the ZooKeeper configuration.

/*
 * The service principal
 */
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/var/lib/secret/broker.key"
    principal="kafka/broker.kerberos-demo.local@TEST.CONFLUENT.IO";
};

KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/var/lib/secret/broker.key"
    principal="kafka/broker.kerberos-demo.local@TEST.CONFLUENT.IO";
};

/*
 * Zookeeper client principal
 */
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    useTicketCache=false
    keyTab="/var/lib/secret/zookeeper-client.key"
    principal="zkclient@TEST.CONFLUENT.IO";
};

Denodo Platform Configuration

In this section we explain how to configure the Denodo Platform to create a Kafka listener with Kerberos as the authentication system (SASL_GSSAPI).

This requires some actions on the Kerberos server, such as the creation of principals and keytabs, and on the Kafka nodes, such as granting read and write permissions to the principals.

  • Create the principals used to connect from the Denodo Platform as a client.

# Create client principals to connect in to the cluster:
docker exec -i kdc kadmin.local -w password -q "add_principal -randkey kafka_producer@TEST.CONFLUENT.IO" > /dev/null
docker exec -i kdc kadmin.local -w password -q "modprinc -maxlife 11days -maxrenewlife 11days +allow_renewable kafka_producer@TEST.CONFLUENT.IO" > /dev/null
docker exec -i kdc kadmin.local -w password -q "add_principal -randkey kafka_consumer@TEST.CONFLUENT.IO" > /dev/null
docker exec -i kdc kadmin.local -w password -q "modprinc -maxlife 11days -maxrenewlife 11days +allow_renewable kafka_consumer@TEST.CONFLUENT.IO" > /dev/null

  • Create the keytab for the producer and the consumer (both principals are added to the same kafka-client.key file).

docker exec -i kdc kadmin.local -w password -q "ktadd -k /var/lib/secret/kafka-client.key -norandkey kafka_producer@TEST.CONFLUENT.IO" > /dev/null
docker exec -i kdc kadmin.local -w password -q "ktadd -k /var/lib/secret/kafka-client.key -norandkey kafka_consumer@TEST.CONFLUENT.IO" > /dev/null
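Since both ktadd commands write to the same file, kafka-client.key ends up containing both principals. This can optionally be confirmed with the standard klist tool:

# Optional check: both kafka_producer and kafka_consumer should be listed
docker exec -i kdc klist -kt /var/lib/secret/kafka-client.key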

  • Set read permissions on the keytab in the KDC.

docker exec -i kdc chmod a+r /var/lib/secret/kafka-client.key

  • Copy the krb5.conf and kafka-client.key files to the Denodo Platform installation.
  • Create ACLs so that the Kafka producer and consumer get write (producer) and read (consumer) permissions on the topics. These commands must be executed on a Kafka node, or on a client machine with the Kafka tools installed, so that kafka-acls is available.

# Adding ACLs for consumer and producer user:
docker exec client bash -c "kinit -k -t /var/lib/secret/kafka-admin.key admin/for-kafka && kafka-acls --bootstrap-server broker:9092 --command-config /etc/kafka/command.properties --add --allow-principal User:kafka_producer --producer --topic=*"
docker exec client bash -c "kinit -k -t /var/lib/secret/kafka-admin.key admin/for-kafka && kafka-acls --bootstrap-server broker:9092 --command-config /etc/kafka/command.properties --add --allow-principal User:kafka_consumer --consumer --topic=* --group=*"
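The ACLs that were just created can be reviewed with the --list option of kafka-acls. This sketch reuses the admin credentials and the client container from the previous commands:

# Optional check: list all ACLs defined in the cluster
docker exec client bash -c "kinit -k -t /var/lib/secret/kafka-admin.key admin/for-kafka && kafka-acls --bootstrap-server broker:9092 --command-config /etc/kafka/command.properties --list"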

  • Generate the ticket cache for kafka_producer and kafka_consumer on the computer where the Denodo Platform is installed.

kinit -k -t /var/lib/secret/kafka-client.key kafka_producer
kinit -k -t /var/lib/secret/kafka-client.key kafka_consumer
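The resulting ticket cache can be inspected with klist (standard MIT Kerberos tooling; shown here as an optional check):

# Optional check: the current principal and the ticket expiration are displayed
klist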

  • Configure a listener in the Web Design Studio that only reads from a topic (it only needs read permissions). If read and write permissions are needed, these privileges must be added for the principals when creating the ACLs.
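For reference, a Kafka client that authenticates through SASL GSSAPI typically uses properties like the following. This is an illustrative sketch of standard Kafka client properties, not the exact fields of the Denodo listener dialog, which may expose them differently:

# Illustrative Kafka client properties for SASL/GSSAPI
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
# Uses the ticket cache generated above with kinit
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
    useTicketCache=true \
    principal="kafka_consumer@TEST.CONFLUENT.IO";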

Disclaimer
The information provided in the Denodo Knowledge Base is intended to assist our users in advanced uses of Denodo. Please note that the results from the application of processes and configurations detailed in these documents may vary depending on your specific environment. Use them at your own discretion.
For an official guide of supported features, please refer to the User Manuals. For questions on critical systems or complex environments we recommend you to contact your Denodo Customer Success Manager.