

Status

Current state: "Under Discussion"

Discussion thread: here

JIRA: KAFKA-3751

Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).

Motivation

Kafka currently supports two SASL mechanisms out-of-the-box: SASL/GSSAPI enables authentication using Kerberos, and SASL/PLAIN enables simple username/password authentication. Support for more mechanisms will give Kafka users more choice and the option to use the same security infrastructure for different services. Salted Challenge Response Authentication Mechanism (SCRAM) is a family of SASL mechanisms that addresses the security concerns with traditional mechanisms like PLAIN and DIGEST-MD5. The mechanism is defined in RFC 5802 (https://tools.ietf.org/html/rfc5802).

This KIP proposes to add support for two new SASL mechanisms to Kafka clients and brokers: SCRAM-SHA-1 and SCRAM-SHA-256. 

Public Interfaces

No public interface changes or new configuration options are required for this KIP.

Since support for SASL/SCRAM servers and clients is not available in Java, a new login module class will be added that loads and installs the SASL server and client implementations for SCRAM as Java security providers (similar to the existing SASL/PLAIN server support in Kafka). SCRAM is enabled by specifying one of the SCRAM mechanisms as the SASL mechanism (e.g. sasl.mechanism=SCRAM-SHA-256) along with the new login module in the JAAS configuration. The login module and the underlying implementations can be overridden if required, for example, to integrate with existing authentication servers.
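For illustration, a client would combine the mechanism setting with a SASL-enabled security protocol in its properties (the protocol choice here is an example, not mandated by this KIP):

```properties
# Illustrative client properties: enable SASL over TLS with a SCRAM mechanism
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
```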

The implementation included in Kafka will store user credentials in Zookeeper as dynamically configurable properties. The credentials include a randomly generated salt, salted hashes of the password (StoredKey and ServerKey), and the iteration count. These are stored as properties for each user under /config/users/&lt;user&gt;. The stored credentials are not sufficient to impersonate a client, but a brute-force attack on them may recover the password if a strong cryptographic hash function and a high iteration count are not used. In installations where Zookeeper is not secure, an alternative SASL server implementation backed by a more secure credential store may be used instead. Zookeeper remains a suitable store for short-lived credentials like delegation tokens.

Proposed Changes

ScramLoginModule

The static initializer of the SCRAM login module installs the SASL/SCRAM server and client implementations as security providers for the SASL mechanisms SCRAM-SHA-1 and SCRAM-SHA-256. The module obtains the username and password for client connections from the JAAS configuration options "username" and "password", and sets them as the public and private credentials of the Subject respectively.
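A minimal sketch of such a provider registration is shown below, mirroring how the existing SASL/PLAIN server support is installed. The provider name and factory class mapping are illustrative assumptions, not the actual Kafka implementation:

```java
import java.security.Provider;
import java.security.Security;

// Hypothetical sketch: registers a SaslClientFactory for the SCRAM mechanisms
// so that Sasl.createSaslClient() can locate them by mechanism name.
public class ScramSaslClientProvider extends Provider {

    public ScramSaslClientProvider() {
        super("SASL/SCRAM Client Provider", 1.0,
              "SCRAM-SHA-1 and SCRAM-SHA-256 client support");
        // The factory class names are illustrative; the real classes live in Kafka.
        put("SaslClientFactory.SCRAM-SHA-256", "ScramSaslClient$ScramSaslClientFactory");
        put("SaslClientFactory.SCRAM-SHA-1", "ScramSaslClient$ScramSaslClientFactory");
    }

    // Called from the login module's static initializer
    public static void initialize() {
        Security.addProvider(new ScramSaslClientProvider());
    }
}
```

A corresponding server-side provider would register `SaslServerFactory.<mechanism>` entries in the same way.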

ScramSaslClientProvider/ScramSaslClient

ScramSaslClient implements the client-side SCRAM algorithm defined in RFC 5802.

Username and password are obtained from the Subject's public and private credentials using existing callback handlers. No other shared secrets are required.
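To make the exchange concrete, the sketch below builds the client-first-message of the SCRAM handshake as defined in RFC 5802; the class name and nonce encoding are illustrative assumptions, while the real ScramSaslClient obtains the username via the callback handlers described above:

```java
import java.security.SecureRandom;
import java.util.Base64;

// Illustrative sketch of the SCRAM client-first-message (RFC 5802, section 5.1).
public class ClientFirstMessage {

    public static String forUser(String username) {
        // Random printable nonce; base64 avoids the forbidden ',' character
        byte[] nonceBytes = new byte[24];
        new SecureRandom().nextBytes(nonceBytes);
        String nonce = Base64.getEncoder().encodeToString(nonceBytes);

        String gs2Header = "n,,"; // no channel binding, no authorization identity
        return gs2Header + "n=" + username + ",r=" + nonce;
    }

    public static void main(String[] args) {
        // Prints something like: n,,n=alice,r=<base64 nonce>
        System.out.println(forUser("alice"));
    }
}
```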

ScramSaslServerProvider/ScramSaslServer

ScramSaslServer implements the server-side SCRAM algorithm defined in RFC 5802.

The implementation included in Kafka will obtain user credentials from Zookeeper. Credentials will not be cached in the broker, since they are only required to authenticate new client connections. For production use, the login modules and server/client implementations can be replaced if required with an alternative implementation that stores credentials more securely.
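The key property that makes the Zookeeper-stored credentials usable here is that the broker can verify a client's proof using only the StoredKey, without ever seeing the password. A sketch of that check per RFC 5802, using SHA-256 and assumed class/method names for illustration:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.security.MessageDigest;

// Sketch of the server-side proof check in RFC 5802 (SCRAM-SHA-256 variant):
//   ClientSignature := HMAC(StoredKey, AuthMessage)
//   ClientKey       := ClientProof XOR ClientSignature
//   verified iff       H(ClientKey) == StoredKey
public class ScramServerCheck {

    public static byte[] hmac(byte[] key, byte[] msg) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(msg);
    }

    public static boolean verify(byte[] storedKey, byte[] clientProof, byte[] authMessage)
            throws Exception {
        byte[] clientSignature = hmac(storedKey, authMessage);
        byte[] clientKey = new byte[clientProof.length];
        for (int i = 0; i < clientKey.length; i++)
            clientKey[i] = (byte) (clientProof[i] ^ clientSignature[i]);
        // Constant-time comparison of H(ClientKey) with the stored credential
        return MessageDigest.isEqual(
            MessageDigest.getInstance("SHA-256").digest(clientKey), storedKey);
    }
}
```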

JAAS configuration

The login context KafkaClient is used by clients and the context KafkaServer is used by brokers. The username/password specified in KafkaClient is used for client connections, and the username/password in KafkaServer is used for inter-broker connections. Credentials supplied by the client are validated by the SASL server in the broker against the salted, hashed passwords stored in Zookeeper using the SCRAM algorithm.

JAAS configuration for clients and brokers
KafkaClient {
	org.apache.kafka.common.security.scram.ScramLoginModule required
	username="alice"
	password="alice-secret";
};

KafkaServer {
	org.apache.kafka.common.security.scram.ScramLoginModule required
	username="admin"
	password="admin-secret";
};


Credential configuration in Zookeeper

User credentials are stored in Zookeeper as dynamically configurable properties in the path /config/users/<user>.

Sample configuration for user credentials
// SCRAM credentials for user alice: Zookeeper persistence path /config/users/alice
{
  "version": 1,
  "config": {
    "scram_salt": "10ibs0z7xzlu6w5ns0n188sis5",
    "scram_server_key": "nN+fZauE6vG0hmFAEj/49+2yk0803y67WSXMYkgh77k=",
    "scram_stored_key": "+Acl/wi1vLZ95Uqj8rRHVcSp6qrdfQIwZbaZBwM0yvo=",
    "scram_iteration": "4096"
  }
}


Tools

kafka-configs.sh will be extended to support management of credentials in Zookeeper as dynamic properties of users. For ease of use, the tool will take a password and an optional iteration count and generate a random salt, ServerKey and StoredKey as specified in RFC 5802. For example:

bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'scram_password=alice-secret,scram_iteration=4096' --entity-type users --entity-name alice


The actual password "alice-secret" is never stored in Zookeeper and is therefore not known to Zookeeper or Kafka brokers. The properties stored in Zookeeper can be retrieved using the --describe option of kafka-configs.sh.

Compatibility, Deprecation, and Migration Plan

  • What impact (if any) will there be on existing users?

None

  • If we are changing behavior how will we phase out the older behavior?

Existing mechanisms will continue to be supported. The new mechanisms can be enabled in the broker along with SASL/GSSAPI and SASL/PLAIN. Existing upgrade procedures for new SASL mechanisms (as currently described in the documentation) can be used to switch to SCRAM.

Test Plan

One integration test and a system test will be added to test the good path for SCRAM-SHA-1 and SCRAM-SHA-256. A system test will also be added for the upgrade scenario to test rolling upgrade and multiple broker mechanisms that include SCRAM. Unit tests will be added for failure scenarios.

Rejected Alternatives

Specify username, password as Kafka client properties instead of the JAAS configuration 

JAAS configuration is the standard Java way of specifying security properties, and since Kafka already relies on JAAS configuration for SASL, it makes sense to store the options in jaas.conf. This is also consistent with the SASL/PLAIN implementation in Kafka and similar mechanisms in Zookeeper. However, JAAS configuration is not particularly flexible, so providing credentials as client properties might offer a simpler interface. But this should be considered in the context of all SASL mechanisms rather than just SCRAM.

Make the credential provider in ScramSaslServer pluggable

Some Kafka users may want to replace the Zookeeper-based credential store with an external secure store. It may be useful to make the credential provider in ScramSaslServer pluggable to enable this easily. Since it is already possible to plug in new login modules and SaslServer implementations using standard Java security extension mechanisms, this KIP does not propose to make the credential provider a pluggable public interface.

 

