...

The implementation included in Kafka will obtain user credentials from Zookeeper. Credentials will not be cached in the broker in the initial implementation, since they are only required to authenticate new client connections. Caching may be added in the future if required.

For production use, the login modules and server/client implementations can be replaced if required with an alternative implementation that stores credentials more securely.

...

User credentials are stored in Zookeeper as dynamically configurable properties in the path /config/users/<encoded-user>. User names will be URL-encoded using the same encoding scheme as KIP-55.
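As an illustration of the encoding step, a user name containing special characters maps to a Zookeeper node name roughly as below. This is a sketch that assumes the encoding behaves like Java's `URLEncoder`; the exact scheme is the one defined in KIP-55, and `configPath` is a hypothetical helper, not a Kafka API.

```java
import java.net.URLEncoder;

public class UserPathEncoder {
    // Hypothetical helper: builds the Zookeeper config path for a user,
    // URL-encoding the user name (encoding assumed to match KIP-55).
    static String configPath(String userName) throws Exception {
        return "/config/users/" + URLEncoder.encode(userName, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(configPath("alice"));            // plain names are unchanged
        System.out.println(configPath("user@example.com")); // '@' becomes %40
    }
}
```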

Code Block: Sample configuration for user credentials (JSON)

// SCRAM credentials for user alice: Zookeeper persistence path /config/users/alice
{
        "version":1,
        "config": {
          "scram-sha-1" : "s=ejlmaTYxemJtMTF6ZnJvaGhiOWkxYTQ2eQ==,t=QPIPb541liI8JKRwO3X/iei6cQk=,k=ArO8uZvH2PQEh2u30/OcxzkTTwE=,i=4096",
          "scram-sha-256" : "s=10ibs0z7xzlu6w5ns0n188sis5,t=+Acl/wi1vLZ95Uqj8rRHVcSp6qrdfQIwZbaZBwM0yvo=,k=nN+fZauE6vG0hmFAEj/49+2yk0803y67WSXMYkgh77k=,i=4096"
        }
}

 

Tools

kafka-configs.sh will be extended to support management of credentials in Zookeeper as dynamic properties of users. For ease of use, the tool will take a password and an optional iteration count and generate a random salt, ServerKey and StoredKey as specified in RFC 5802. For example:

bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'sha-256-password=alice-secret,sha-256-iterations=4096,sha-1-password=alice-secret' --entity-type users --entity-name alice


The actual password "alice-secret" is not stored in Zookeeper and is not known to Zookeeper or Kafka brokers. The hashed properties stored in Zookeeper can be retrieved using the --describe option of kafka-configs.sh.
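For example, the stored (hashed) properties for a user could be listed with the following invocation; this sketch assumes a local Zookeeper listening on port 2181 and a user named alice, matching the earlier example:

```shell
bin/kafka-configs.sh --zookeeper localhost:2181 --describe --entity-type users --entity-name alice
```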

When the above config command is run, the tool generates a random salt for each requested SCRAM mechanism (SCRAM-SHA-256 and SCRAM-SHA-1 in the example). The tool then generates the stored key and server key as described in the SCRAM Algorithm Overview, using the same SCRAM message formatter implementation that performs salting and hashing during SCRAM exchanges.

  • SaltedPassword  := Hi(Normalize(password), salt, i)
  • ClientKey       := HMAC(SaltedPassword, "Client Key")
  • StoredKey       := H(ClientKey)
  • ServerKey       := HMAC(SaltedPassword, "Server Key")

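The derivation above can be sketched in Java for SCRAM-SHA-256. Per RFC 5802, Hi() is PBKDF2 with the mechanism's HMAC. This is an illustrative sketch, not the tool's actual code: the class and method names are hypothetical, and Normalize() (SASLprep) is omitted, which is a no-op for ASCII passwords.

```java
import javax.crypto.Mac;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class ScramKeys {
    // Hi(password, salt, i) from RFC 5802 is PBKDF2 with the same HMAC.
    static byte[] hi(String password, byte[] salt, int iterations) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password.toCharArray(), salt, iterations, 256);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(spec).getEncoded();
    }

    static byte[] hmac(byte[] key, String message) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(message.getBytes(StandardCharsets.UTF_8));
    }

    static byte[] h(byte[] data) throws Exception {
        return MessageDigest.getInstance("SHA-256").digest(data);
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = new byte[]{1, 2, 3, 4}; // the tool uses a random salt per mechanism
        byte[] saltedPassword = hi("alice-secret", salt, 4096);
        byte[] clientKey = hmac(saltedPassword, "Client Key");
        byte[] storedKey = h(clientKey);                       // StoredKey := H(ClientKey)
        byte[] serverKey = hmac(saltedPassword, "Server Key"); // ServerKey := HMAC(SaltedPassword, "Server Key")
        System.out.println("t=" + Base64.getEncoder().encodeToString(storedKey));
        System.out.println("k=" + Base64.getEncoder().encodeToString(serverKey));
    }
}
```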

Compatibility, Deprecation, and Migration Plan

...

One integration test and a system test will be added to test the good path for SCRAM-SHA-1 and SCRAM-SHA-256. A system test will also be added for the upgrade scenario to test rolling upgrade and multiple broker mechanisms that include SCRAM. Unit tests will be added for failure scenarios and to test all supported SCRAM mechanisms.

Rejected Alternatives

Specify username, password as Kafka client properties instead of the JAAS configuration 

...

Some Kafka users may want to replace the Zookeeper-based credential store with an external secure store. It may be useful to make the credential provider in ScramSaslServer pluggable to enable this easily. Since it is possible to plug in new login modules and SaslServer implementations using standard Java security extension mechanisms, this KIP does not propose to make the credential provider a pluggable public interface. A generic solution to configure callback handlers for any mechanism is being addressed in KIP-86.