

Status

Current state: "Under Discussion"

Discussion thread: here

JIRA: KAFKA-3751

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

Kafka currently supports two SASL mechanisms out-of-the-box. SASL/GSSAPI enables authentication using Kerberos and SASL/PLAIN enables simple username-password authentication. Support for more mechanisms will provide Kafka users more choice and the option to use the same security infrastructure for different services. Salted Challenge Response Authentication Mechanism (SCRAM) is a family of SASL mechanisms that addresses the security concerns with traditional mechanisms like PLAIN and DIGEST-MD5. The mechanism is defined in RFC 5802 (https://tools.ietf.org/html/rfc5802).

This KIP proposes to add support for SCRAM SASL mechanisms to Kafka clients and brokers:

  • SCRAM-SHA-1
  • SCRAM-SHA-224
  • SCRAM-SHA-256
  • SCRAM-SHA-384
  • SCRAM-SHA-512

Public Interfaces

SaslHandshakeRequest version will be increased from 0 to 1 so that clients can determine if the broker is capable of supporting SCRAM mechanisms using ApiVersionsRequest. Java clients will not be updated to use ApiVersionsRequest to choose SASL mechanism under this KIP. Java clients will continue to use their configured SASL mechanism and will fail connection if the requested mechanism is not enabled in the broker. No other public interface changes or new configuration options are required for this KIP.

Since the Java runtime does not provide SASL/SCRAM server or client implementations, a new login module class will be added that loads and installs the SASL server and client implementations for SCRAM as Java security providers (similar to the existing SASL/PLAIN server support in Kafka). SCRAM is enabled by specifying one of the SCRAM mechanisms as the SASL mechanism (e.g. sasl.mechanism=SCRAM-SHA-256) along with the new login module in the JAAS configuration. The login module and the underlying implementations can be overridden if required, for example, to integrate with existing authentication servers.

The implementation included in Kafka will store user credentials in Zookeeper as dynamically configurable properties. The credentials include a randomly generated salt, the salted hashes of the password (StoredKey and ServerKey), and the iteration count for each SCRAM mechanism that is enabled. These are stored as properties for each user under /config/users/<user>. The stored credentials are not sufficient to impersonate a client, but a brute-force attack could recover the password if a strong cryptographic hash function and high iteration count are not used. In installations where Zookeeper is not secure, an alternative secure SASL server implementation may therefore be used. Zookeeper remains a suitable store for short-lived credentials like delegation tokens.

Proposed Changes

ScramLoginModule

The static initializer of the SCRAM login module installs the SASL/SCRAM server and client implementations as security providers for the supported SASL/SCRAM mechanisms. The module obtains the username and password for client connections from the JAAS configuration options "username" and "password", and these are set as the public and private credentials of the Subject respectively.

ScramSaslClientProvider/ScramSaslClient

ScramSaslClient implements the client-side SCRAM algorithm defined in RFC 5802.

Username and password are obtained from the Subject's public and private credentials using existing callback handlers. No other shared secrets are required.

ScramSaslServerProvider/ScramSaslServer

ScramSaslServer implements the server-side SCRAM algorithm defined in RFC 5802.

The implementation included in Kafka will obtain user credentials from Zookeeper. Dynamic config update handlers will be used to maintain a cache of valid credentials in the broker.

For production use, the login modules and server/client implementations can be replaced if required with an alternative implementation that stores credentials more securely.
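The core of the server-side check in RFC 5802 is that the broker never needs the password: it recovers ClientKey from the client's proof and compares its hash against the stored StoredKey. The sketch below is an illustrative Python rendering of that check (assuming SHA-256; the helper names are not Kafka's actual ScramSaslServer code):

```python
import hashlib
import hmac


def _hi(password: bytes, salt: bytes, iterations: int) -> bytes:
    # Hi() in RFC 5802 is PBKDF2 with HMAC-<hash> as the pseudo-random function.
    return hashlib.pbkdf2_hmac("sha256", password, salt, iterations)


def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


def client_proof(password: bytes, salt: bytes, iterations: int,
                 auth_message: bytes) -> bytes:
    # Client side: ClientProof := ClientKey XOR HMAC(StoredKey, AuthMessage)
    salted = _hi(password, salt, iterations)
    client_key = hmac.new(salted, b"Client Key", "sha256").digest()
    stored_key = hashlib.sha256(client_key).digest()
    client_signature = hmac.new(stored_key, auth_message, "sha256").digest()
    return _xor(client_key, client_signature)


def verify_proof(stored_key: bytes, auth_message: bytes, proof: bytes) -> bool:
    # Server side: recover ClientKey from the proof, then check H(ClientKey) == StoredKey.
    client_signature = hmac.new(stored_key, auth_message, "sha256").digest()
    recovered_client_key = _xor(proof, client_signature)
    return hmac.compare_digest(hashlib.sha256(recovered_client_key).digest(),
                               stored_key)
```

Because only StoredKey (not the password) is needed for verification, this is what allows the broker to validate clients against the hashed values kept in Zookeeper.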

JAAS configuration

The login context KafkaClient is used by clients and the context KafkaServer is used by brokers. The username/password specified in KafkaClient is used for client connections and the username/password in KafkaServer is used for inter-broker connections. Credentials supplied by the client are validated by the SASL server in the broker against the salted, hashed passwords stored in Zookeeper using the SCRAM algorithm.

JAAS configuration for clients and brokers
KafkaClient {
	org.apache.kafka.common.security.scram.ScramLoginModule required
	username="alice"
	password="alice-secret";
};

KafkaServer {
	org.apache.kafka.common.security.scram.ScramLoginModule required
	username="admin"
	password="admin-secret";
};

 

Credential configuration in Zookeeper

User credentials are stored in Zookeeper as dynamically configurable properties in the path /config/users/<encoded-user>. User names will be URL-encoded using the same encoding scheme as KIP-55.
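As an illustration of why encoding is needed, a Python sketch of mapping a user principal to its Zookeeper path (assuming plain percent-encoding; the exact KIP-55 scheme may differ in detail, and the helper name is hypothetical):

```python
from urllib.parse import quote


def zk_user_path(user: str) -> str:
    # Percent-encode so principals containing '/', '=', etc. remain
    # valid single-level znode names under /config/users.
    return "/config/users/" + quote(user, safe="")


print(zk_user_path("alice"))            # /config/users/alice
print(zk_user_path("CN=kafka-client"))  # /config/users/CN%3Dkafka-client
```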

Sample configuration for user credentials
// SCRAM credentials for user alice: Zookeeper persistence path /config/users/alice
{
  "version": 1,
  "config": {
    "scram-sha-1": "s=ejlmaTYxemJtMTF6ZnJvaGhiOWkxYTQ2eQ==,t=QPIPb541liI8JKRwO3X/iei6cQk=,k=ArO8uZvH2PQEh2u30/OcxzkTTwE=,i=4096",
    "scram-sha-256": "s=10ibs0z7xzlu6w5ns0n188sis5,t=+Acl/wi1vLZ95Uqj8rRHVcSp6qrdfQIwZbaZBwM0yvo=,k=nN+fZauE6vG0hmFAEj/49+2yk0803y67WSXMYkgh77k=,i=4096"
  }
}
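The per-mechanism value above is a comma-separated list of single-letter fields. A short illustrative Python sketch of unpacking it (the mapping of t to StoredKey and k to ServerKey is an assumption based on the key order in RFC 5802, and the helper names are hypothetical):

```python
import base64


def _b64decode(s: str) -> bytes:
    # Tolerate missing '=' padding in stored values.
    return base64.b64decode(s + "=" * (-len(s) % 4))


def parse_scram_credential(value: str) -> dict:
    # Each comma-separated field is "<letter>=<base64 or int>"; split on the
    # first '=' so '=' padding inside base64 values is preserved.
    fields = dict(f.split("=", 1) for f in value.split(","))
    return {
        "salt": _b64decode(fields["s"]),
        "stored_key": _b64decode(fields["t"]),   # assumed: t = StoredKey
        "server_key": _b64decode(fields["k"]),   # assumed: k = ServerKey
        "iterations": int(fields["i"]),
    }
```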

 

Tools

kafka-configs.sh will be extended to support management of credentials in Zookeeper as dynamic properties of users. For ease of use, the tool will take a password and an optional iteration count and generate a random salt, ServerKey and StoredKey as specified in RFC 5802. For example:

bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'scram-sha-256=[iterations=4096,password=alice-secret],scram-sha-1=[password=alice-secret]' --entity-type users --entity-name alice

When the above config command is run, the tool generates a random salt for each requested SCRAM mechanism (SCRAM-SHA-256 and SCRAM-SHA-1 in the example). The tool then generates the stored key and server key as described in the SCRAM Algorithm Overview (RFC 5802, Section 3), using the same SCRAM message formatter implementation that salts/hashes during SCRAM exchanges.

  • SaltedPassword  := Hi(Normalize(password), salt, i)
  • ClientKey       := HMAC(SaltedPassword, "Client Key")
  • StoredKey       := H(ClientKey)
  • ServerKey       := HMAC(SaltedPassword, "Server Key")
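The derivation above can be sketched in Python (illustrative only, assuming SHA-256; Hi is PBKDF2 with HMAC as the pseudo-random function per RFC 5802, and the s/t/k/i field mapping is assumed from the Zookeeper sample earlier):

```python
import base64
import hashlib
import hmac
import os


def generate_scram_credential(password: str, iterations: int = 4096,
                              hash_name: str = "sha256") -> str:
    # SaltedPassword := Hi(Normalize(password), salt, i), i.e. PBKDF2-HMAC.
    salt = os.urandom(16)
    salted = hashlib.pbkdf2_hmac(hash_name, password.encode("utf-8"),
                                 salt, iterations)
    client_key = hmac.new(salted, b"Client Key", hash_name).digest()
    stored_key = hashlib.new(hash_name, client_key).digest()  # H(ClientKey)
    server_key = hmac.new(salted, b"Server Key", hash_name).digest()

    def b64(b: bytes) -> str:
        return base64.b64encode(b).decode("ascii")

    # Assumed mapping of the single-letter keys from the sample earlier:
    # s=salt, t=StoredKey, k=ServerKey, i=iteration count.
    return "s=%s,t=%s,k=%s,i=%d" % (b64(salt), b64(stored_key),
                                    b64(server_key), iterations)
```

Note that only StoredKey and ServerKey are persisted; the password itself never reaches Zookeeper, matching the tool behaviour described below.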

Default iteration count will be 4096. The actual password "alice-secret" is not stored in Zookeeper and is not known to Zookeeper or Kafka brokers. The hashed properties stored in Zookeeper can be retrieved using the --describe option of kafka-configs.sh. For example:

bin/kafka-configs.sh --zookeeper localhost:2181 --describe --entity-type users --entity-name alice

Configs for user-principal 'alice' are scram-sha-1=[s=ejlmaTYxemJtMTF6ZnJvaGhiOWkxYTQ2eQ==,t=QPIPb541liI8JKRwO3X/iei6cQk=,k=ArO8uZvH2PQEh2u30/OcxzkTTwE=,i=4096],scram-sha-256=[s=10ibs0z7xzlu6w5ns0n188sis5,t=+Acl/wi1vLZ95Uqj8rRHVcSp6qrdfQIwZbaZBwM0yvo=,k=nN+fZauE6vG0hmFAEj/49+2yk0803y67WSXMYkgh77k=,i=4096]

Credentials can be deleted using the --delete option. For example:

bin/kafka-configs.sh --zookeeper localhost:2181 --alter --delete-config 'scram-sha-256,scram-sha-1' --entity-type users --entity-name alice

Compatibility, Deprecation, and Migration Plan

  • What impact (if any) will there be on existing users?

None

  • If we are changing behavior how will we phase out the older behavior?

Existing mechanisms will continue to be supported. The new mechanisms can be enabled in the broker along with SASL/GSSAPI and SASL/PLAIN. Existing upgrade procedures for new SASL mechanisms (as currently described in the documentation) can be used to switch to SCRAM.

Test Plan

One integration test and a system test will be added to test the good path for SASL/SCRAM. A system test will also be added for the upgrade scenario to test rolling upgrade and multiple broker mechanisms that include SCRAM. Unit tests will be added for failure scenarios and to test all supported SCRAM mechanisms.

Rejected Alternatives

Specify username, password as Kafka client properties instead of the JAAS configuration 

JAAS configuration is the standard Java way of specifying security properties, and since Kafka already relies on JAAS configuration for SASL, it makes sense to store the options in jaas.conf. This is also consistent with the SASL/PLAIN implementation in Kafka and similar mechanisms in Zookeeper. However, JAAS configuration is not particularly flexible, and providing credentials as client properties may offer a simpler interface. This should be considered in the context of all SASL mechanisms rather than just SCRAM.

Make the credential provider in ScramSaslServer pluggable

Some Kafka users may want to replace the Zookeeper-based credential store with an external secure store. It may be useful to make the credential provider in ScramSaslServer pluggable to enable this easily. Since it is possible to plug in new login modules and SaslServer implementations using standard Java security extension mechanisms, this KIP does not propose to make the credential provider a pluggable public interface. A generic solution to configure callback handlers for any mechanism is being addressed in KIP-86.

 

