
Apache Kafka allows clients to connect over SSL. By default, SSL is disabled but can be turned on as needed.

1. Generating the key and the certificate for each kafka broker

The first step of deploying SSL is to generate the key and the certificate for each machine in the cluster. You can use Java's keytool utility to accomplish this task. Write into a temporary keystore initially so that we can export and sign the certificate later with a CA.

$ keytool -keystore {tmp.server.keystore.jks} -alias localhost -validity {validity} -genkey

 

You need to specify two parameters in the above command:

keystore: the keystore file that stores the certificate. The keystore file contains the private key of the certificate; therefore, it needs to be kept safely.
validity: the valid time of the certificate in days.

Ensure that the common name (CN) matches exactly the fully qualified domain name (FQDN) of the server. The client compares the CN with the DNS domain name to ensure that it is indeed connecting to the desired server, not a malicious one.

2. Creating your own CA

After the first step, each machine in the cluster has a public-private key pair, and a certificate to identify the machine. The certificate, however, is unsigned, which means that an attacker can create such a certificate to pretend to be any machine.

Therefore, it is important to prevent forged certificates by signing them for each machine in the cluster. A certificate authority (CA) is responsible for signing certificates. A CA works like a government that issues passports: the government stamps (signs) each passport so that the passport becomes difficult to forge. Other governments verify the stamps to ensure the passport is authentic. Similarly, the CA signs the certificates, and the cryptography guarantees that a signed certificate is computationally difficult to forge. Thus, as long as the CA is a genuine and trusted authority, the clients have high assurance that they are connecting to the authentic machines.

openssl req -new -x509 -keyout ca-key -out ca-cert -days 365 


The generated CA is simply a public-private key pair and certificate, and it is intended to sign other certificates.
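For a non-interactive variant of the command above, `-subj` supplies the subject and `-passout` the CA key passphrase (the CA name `example-kafka-ca` and password `test1234` are placeholders). The second command inspects the result:

```shell
# Create the CA certificate and its (encrypted) private key without prompts
openssl req -new -x509 -keyout ca-key -out ca-cert -days 365 \
  -subj "/CN=example-kafka-ca" -passout pass:test1234

# Print the subject of the generated CA certificate to confirm it
openssl x509 -in ca-cert -noout -subject
```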

The next step is to add the generated CA to the **clients’ truststore** so that the clients can trust this CA:

keytool -keystore {client.truststore.jks} -alias CARoot -import -file {ca-cert}


**Note: If you enable client authentication by setting ssl.client.auth to requested or required in the Kafka broker config, then you must provide a truststore for the Kafka broker as well, and it should contain all the CA certificates that clients' keys were signed by.**

keytool -keystore {server.truststore.jks} -alias CARoot -import -file {ca-cert}


In contrast to the keystore in step 1, which stores each machine's own identity, the truststore of a client stores all the certificates that the client should trust. Importing a certificate into one's truststore means trusting all certificates that are signed by that certificate. As in the analogy above, trusting the government (CA) also means trusting all passports (certificates) that it has issued. This attribute is called the chain of trust, and it is particularly useful when deploying SSL on a large Kafka cluster. You can sign all certificates in the cluster with a single CA, and have all machines share the same truststore that trusts the CA. That way all machines can authenticate all other machines.

3. Signing the certificate

The next step is to sign all certificates generated by step 1 with the CA generated in step 2. First, you need to export the certificate from the keystore:

keytool -keystore {tmp.server.keystore.jks} -alias localhost -certreq -file {cert-file}


Then sign it with the CA:

openssl x509 -req -CA {ca-cert} -CAkey {ca-key} -in {cert-file} -out {cert-signed} -days {validity} -CAcreateserial -passin pass:{ca-password}


Finally, you need to import both the certificate of the CA and the signed certificate into the keystore:

$ keytool -keystore {server.keystore.jks} -alias CARoot -import -file {ca-cert}
$ keytool -keystore {server.keystore.jks} -alias localhost -import -file {cert-signed}


The definitions of the parameters are the following:

keystore: the location of the keystore
ca-cert: the certificate of the CA
ca-key: the private key of the CA
ca-password: the passphrase of the CA
cert-file: the exported, unsigned certificate of the server
cert-signed: the signed certificate of the server

 

4. Configuring Kafka Broker

The Kafka broker supports listening on multiple ports thanks to [KAFKA-1809](https://issues.apache.org/jira/browse/KAFKA-1809).
We need to configure the following property in server.properties:

listeners

This property must include a PLAINTEXT port along with an SSL port. Since we don't have inter-broker SSL support yet, if we only configure the SSL port then inter-broker communication will not work.

 

listeners=PLAINTEXT://host.name:port,SSL://host.name:port



The following SSL configs are needed on the broker side:

ssl.protocol = TLS
ssl.provider (optional; the name of the security provider used for SSL connections. The default value is the default security provider of the JVM.)
ssl.cipher.suites (optional; a cipher suite is a named combination of authentication, encryption, MAC and key exchange algorithms used to negotiate the security settings for a network connection using the TLS or SSL network protocol.)
ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1 (list the SSL protocols that you are going to accept from clients. Note that SSL is deprecated; using it in production is not recommended.)
ssl.keystore.type = "JKS"
ssl.keystore.location = "/var/private/ssl/kafka.server.keystore.jks"
ssl.keystore.password = "test1234"
ssl.key.password = "test1234"
ssl.truststore.type = "JKS"
ssl.truststore.location = "/var/private/ssl/kafka.server.truststore.jks"
ssl.truststore.password = "test1234"
ssl.client.auth = none ("required" => client authentication is required, "requested" => client authentication is requested)
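Putting the listener and SSL properties together, a minimal server.properties fragment might look like the following (the hostname and passwords are placeholders, and the file paths assume the keystore and truststore locations used above):

```properties
listeners=PLAINTEXT://kafka1.example.com:9092,SSL://kafka1.example.com:9093
ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234
ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
ssl.truststore.password=test1234
ssl.client.auth=none
```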



If you want to enable any cipher suites other than the defaults that come with the JVM, like the ones listed here:
https://docs.oracle.com/javase/7/docs/technotes/guides/security/SunProviders.html,
you need to install the **Unlimited Strength Policy files**: http://www.oracle.com/technetwork/java/javase/downloads/jce-7-download-432124.html


Once you start the broker, you should be able to see the following in server.log:

with addresses: PLAINTEXT -> EndPoint(192.168.64.1,9092,PLAINTEXT),SSL -> EndPoint(192.168.64.1,9093,SSL)

 

5. Configuring Kafka Producer & Kafka Consumer

SSL is supported only for the new Kafka producer and consumer; the older APIs are not supported.
The SSL configs are the same for both producer and consumer.

security.protocol = SSL 
ssl.provider (Optional. The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.)
ssl.cipher.suites (Optional) ."A cipher suite is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol." 
ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1 **It should list at least one of the protocols configured on the broker side.**

If you are configuring client authentication, then you must create a keystore as in step 1; otherwise the keystore config is optional for the client.
ssl.keystore.type = "JKS"
ssl.keystore.location = "/var/private/ssl/kafka.client.keystore.jks"
ssl.keystore.password = "test1234"
ssl.key.password = "test1234"
ssl.truststore.type = "JKS"
ssl.truststore.location = "/var/private/ssl/kafka.client.truststore.jks"
ssl.truststore.password = "test1234"
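For convenience, the client SSL settings can also be collected in a properties file (the paths and passwords below are placeholders) and passed to clients that accept a config file, e.g. via the console producer's --producer.config option:

```properties
security.protocol=SSL
ssl.truststore.location=/var/private/ssl/kafka.client.truststore.jks
ssl.truststore.password=test1234
```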



Example using console-producer:

kafka-console-producer.sh --broker-list localhost:9093 --topic test --new-producer --producer-property "security.protocol=SSL"  --producer-property "ssl.truststore.location=client.truststore.jks" --producer-property "ssl.truststore.password=test1234"