...
The original proposal was to add raw records to the __cluster_metadata topic. Each record would be described as a single key-value pair, where the key is the name of the ApiMessageAndVersion record and the value is a JSON encoding of that record's fields. For example, a SCRAM credential record would look like the following:
UserScramCredentialsRecord={"name":"alice","mechanism":1,"salt":"MWx2NHBkbnc0ZndxN25vdGN4bTB5eTFrN3E=","SaltedPassword":"mT0yyUUxnlJaC99HXgRTSYlbuqa4FSGtJCJfTMvjYCE=","iterations":8192}
This proposal is not very user friendly, and it has the issue that the underlying record format could change in the future, which would require the command line usage to change with it. It is desired that even if the record format changes, the argument parsing should not be affected.
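For illustration, here is a minimal sketch of how such a raw key=JSON argument would have to be parsed. The `parse_bootstrap_record` helper is hypothetical (not part of Kafka); the point is that the command line becomes directly coupled to the record schema:

```python
import json

def parse_bootstrap_record(arg: str):
    """Split a raw 'RecordName={json fields}' argument into the record
    name and a dict of its fields. Hypothetical helper, not Kafka code."""
    name, _, payload = arg.partition("=")
    if not payload:
        raise ValueError("expected RecordName={...}")
    return name, json.loads(payload)

# The rejected proposal would have users pass the record verbatim:
record_type, fields = parse_bootstrap_record(
    'UserScramCredentialsRecord={"name":"alice","mechanism":1,'
    '"salt":"MWx2NHBkbnc0ZndxN25vdGN4bTB5eTFrN3E=","iterations":8192}'
)
# Any rename of a field in the record schema (e.g. "iterations") would
# silently change what users must type on the command line.
```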
- Update kafka-configs to take a format directory option and use the same arguments for altering SCRAM credentials to add them to the __cluster_metadata topic for bootstrap. The issue with this is that it requires multiple commands to format each node in a cluster. It also adds a whole new block of code to kafka-configs just to handle the bootstrap.checkpoint file, and that code would need logic to determine whether the bootstrap had completed.
- Update kafka-storage to append records to bootstrap.checkpoint across multiple invocations of the tool. This would allow the same command line arguments as kafka-configs to be used. However, it was deemed a requirement that a single invocation of kafka-storage format all the records for bootstrap.
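The trouble with the append-based alternative can be sketched as follows. The `BootstrapCheckpoint` class below is hypothetical (not Kafka code) and models only the shape of the problem: each invocation appends more records, so nothing in the file itself indicates when bootstrap is complete.

```python
import json
import os
import tempfile

class BootstrapCheckpoint:
    """Hypothetical append-only bootstrap file, modeling the rejected
    multi-invocation design: each tool run appends more records."""
    def __init__(self, path):
        self.path = path

    def append(self, record_type, fields):
        # Each invocation of the tool would append one more record.
        with open(self.path, "a") as f:
            f.write(json.dumps({"type": record_type, "fields": fields}) + "\n")

    def records(self):
        with open(self.path) as f:
            return [json.loads(line) for line in f]

# Two separate invocations would each append one record...
path = os.path.join(tempfile.mkdtemp(), "bootstrap.checkpoint")
ckpt = BootstrapCheckpoint(path)
ckpt.append("UserScramCredentialsRecord", {"name": "alice", "mechanism": 1})
ckpt.append("UserScramCredentialsRecord", {"name": "bob", "mechanism": 2})
# ...but nothing in the file says whether bootstrap is finished, which is
# why a single formatting invocation was required instead.
```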