...

# Kafka logging related configuration
isRunningOnAws= false - Set to true if you are running Airavata on AWS.
kafka.broker.list= localhost:9092 - One or more Kafka broker addresses with port. Giving one is enough because the KafkaProducer discovers the addresses of the other nodes.
kafka.topic.prefix= localstaging - Topic prefix you want to use; Airavata creates the topic names for you from it.
enable.kafka.logging= true - Enable the Kafka appender so it registers as a log appender.
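
As a rough illustration of how these settings are used, here is a minimal, hypothetical Java producer wired with the same broker list. This is not Airavata's internal Kafka appender; the class name, payload, and topic name are assumptions for the example (the topic shows what the prefix expands to, see the naming rules below).

Code Block
// Hypothetical sketch only: a plain Kafka producer configured like the
// properties above. Not Airavata's internal appender implementation.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KafkaLogPublisherSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Maps to kafka.broker.list; one broker is enough, the producer
        // discovers the rest of the cluster from it.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Example topic derived from kafka.topic.prefix=localstaging.
            String topic = "localstaging_gfac_logs";
            String jsonLogEvent = "{\"message\":\"Skipping Zookeeper embedded startup ...\"}";
            producer.send(new ProducerRecord<>(topic, jsonLogEvent));
        }
    }
}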

...

Code Block
{
        "serverId" => {
        "serverId" => "192.168.59.3",
        "hostName" => "192.168.59.3",
         "version" => "airavata-0.16-135-gac0cae6",
           "roles" => [
            [0] "gfac"
        ]
    },
           "message" => "Skipping Zookeeper embedded startup ...",
         "timestamp" => "2016-09-09T20:57:08.329Z",
             "level" => "INFO",
        "loggerName" => "org.apache.airavata.common.utils.AiravataZKUtils",
              "mdc"  => {
      				"gateway_id": "21845d02-7d2c-11e6-ae22-562311499611",
      				"experiment_id": "21845d02-7d2c-11e6-ae22-34b6b6499611",
     				"process_id": "21845d02-7d2c-11e6-ae22-56b6b6499611",
      				"token_id": "21845d02-7d2c-11e6-ae22-56b6b6499611"
    			},
        "threadName" => "main",
          "@version" => "1",
        "@timestamp" => "2016-09-09T20:57:11.678Z",
              "type" => "gfac_logs",
              "tags" => [
        [0] "local",
        [1] "CoreOS-899.13.0"
    ],
    "timestamp_usec" => 0
}
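
If you want to inspect these JSON log events directly, a plain Kafka consumer is enough. The sketch below is a hypothetical example; the topic, group id, and broker address are assumptions based on the configuration above, and it is not part of Airavata.

Code Block
// Hypothetical sketch: tail the gfac log topic and print the raw JSON events.
// Topic, group id, and broker address are assumptions for the example.
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class GfacLogTailSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "log-tail-example");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        props.put("auto.offset.reset", "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("localstaging_gfac_logs"));
            while (true) {
                // poll(Duration) requires kafka-clients 2.0+; older clients use poll(long).
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value()); // one JSON log event per record
                }
            }
        }
    }
}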

How Airavata creates Kafka topic names from the given topic prefix

Airavata has a few services, and you are completely free to deploy them however you like: you can run all the services (Apache Thrift services) in one JVM, create one JVM per component, or merge just a few of them into one JVM. In the log above, the roles section contains only gfac, which means that log was taken from a GFac server node and no other component was running in that JVM. The topic creation logic is therefore based on the roles of the JVM. To keep the deployment clean, we recommend deploying one component per JVM so that the system is easier to scale and diagnose. During topic creation we check the number of roles configured for the JVM:

If the number of roles is greater than 4  => <kafka_topic_prefix>_all_logs           ex: staging_all_logs (kafka.topic.prefix = staging)

Otherwise we pick the first role          => <kafka_topic_prefix>_<first role>_logs  ex: staging_gfac_logs (kafka.topic.prefix = staging)
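
A minimal sketch of this naming rule, assuming the threshold and fallback exactly as described above; it is an illustration of the text, not Airavata's actual implementation, and the role names in the example are hypothetical.

Code Block
// Minimal sketch of the topic-naming rule described above.
// Not Airavata's actual implementation; role names are illustrative.
import java.util.Arrays;
import java.util.List;

public class TopicNameSketch {

    static String deriveTopicName(String topicPrefix, List<String> serverRoles) {
        if (serverRoles.size() > 4) {
            // Many roles configured in one JVM: use the combined topic.
            return topicPrefix + "_all_logs";
        }
        // Otherwise the first configured role names the topic.
        return topicPrefix + "_" + serverRoles.get(0) + "_logs";
    }

    public static void main(String[] args) {
        // Prints staging_gfac_logs
        System.out.println(deriveTopicName("staging", Arrays.asList("gfac")));
        // Prints staging_all_logs (more than four roles in one JVM)
        System.out.println(deriveTopicName("staging",
                Arrays.asList("apiserver", "orchestrator", "gfac", "credentialstore", "registry")));
    }
}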

http://kafka.apache.org/documentation.html

...