
Bug Reference

https://issues.apache.org/jira/browse/CLOUDSTACK-3471

Branch

master

Introduction

As of cloudstack 4.3 there is no api that can aggregate log messages by the job id. An api to extract logs by the jobid would make it easier to identify the sequence of steps that have been executed to complete a particular job. In case of failures it would aid in quickly identifying the associated commands/steps that have resulted in the failure.

Purpose

Since logging is typically a service that can be, and usually is, available outside the IaaS core, it was felt that instead of integrating this as an api within cloudstack, it would be better to provide a generic logsearch service that cloudstack can invoke to retrieve log messages. The following sections describe the mechanism for achieving this.

In terms of the functionality available to end users, this will provide a cloudstack api called extractLogsByJobid() which will be available only as a ROOT admin API.

References

Document History

Author          Description                                                     Date
Saurav Lahiri   Initial Draft                                                   12/14/2013
Saurav Lahiri   Changes to describe how the service will be deployed and used   2/11/2013

Glossary

Feature Specifications

Use cases

  1. Root Admin users can query this service to quickly identify the sequence of steps related to a particular Job.

  2. QA can use this service to link the log messages related to automated test failures.

Layout Description

The system will comprise a log shipping layer. This layer will be responsible for collecting logs from each of the management servers and shipping them to a centralized place. We describe how logstash can be used as the shipping layer. It will be configured to use redis to ship individual log files to a centralized location. Fluentd could be another option.

The shipping phase will interact with another layer called the indexer/search layer. This layer will also enable storing the logs in a format that will help in writing search queries. Here we describe the use of logstash to receive the individual log files and elasticsearch to search through these. Before logstash outputs the received messages to elasticsearch, it will apply a specific grok filter that will split the input messages into key value pairs. The key value pairs will allow creation of search queries by (key, value). Via the elasticsearch REST api, search queries can be constructed for the required jobid.
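
For illustration, consider a hypothetical management-server log line such as the one below. The grok filter shown later in this document would extract jobid and uuid from it as separate searchable fields; the exact line format and the field values here are illustrative, not taken from a real deployment.

Code Block
2013-12-14 10:22:33,123 INFO [o.a.c.f.j.i.AsyncJobManagerImpl] (Job-Executor-5:ctx) Complete async job-2005 = [ 8fde6c27-36b4-4364-9bb6-b7e5f11a6fe7 ]

# fields produced by the grok filter for this line:
#   jobid => "2005"
#   uuid  => "8fde6c27-36b4-4364-9bb6-b7e5f11a6fe7"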

Instances of Logstash:

Logstash can aggregate log messages from multiple nodes and multiple log files. In a typical production environment, cloudstack is configured with multiple management server instances for scalability and redundancy. One instance of logstash will be configured to run on each of the management servers and will ship the logs to redis. The logstash process is reasonably light in terms of memory consumption and should not impact the management server.

Instances of elasticsearch and redis:

Elasticsearch runs as a horizontal scale out cluster. The cluster nodes can be created in two different ways, both described below.

We first describe the process of creating and using separate elasticsearch nodes spawned from a user template:

In this configuration any linux user template can be used to spawn elasticsearch nodes. The number of such nodes should be configurable via a global parameter. One of the nodes will be designated as the master node, which will also run the redis instance.

Using systemvm for elasticsearch nodes:

Currently cloudstack does not allow deployment of default system vms. The only supported types are (virtual router, secondary storage, consoleproxy, internal and external load balancer). These specific types are handled in their specific Manager code. To enable systemvms to be started by the admin, a Default System VM manager and VO class will need to be implemented.

Logstash Configuration on the log shipping layer.

 

Code Block
input {
  file {
    type => "apache"
    path => [ "/var/log/cloudstack/management/management-server.log" ]
  }
}
output {
  stdout { codec => rubydebug }
  redis { host => "192.168.56.100" data_type => "list" key => "logstash" }
}
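
As a hedged usage note, with the logstash 1.3.x flatjar distribution the shipping agent can be started with something like the following; the jar name and config file path are illustrative and depend on how logstash was installed:

Code Block
# start the logstash agent with the shipper configuration (paths are illustrative)
java -jar logstash-1.3.3-flatjar.jar agent -f shipper.conf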



Logstash configuration on the index/search layer.

Code Block
input {
  redis {
    host => "<host>"
    # these settings should match the output of the agent
    data_type => "list"
    key => "logstash"
    # We use the 'json' codec here because we expect to read
    # json events from redis.
    codec => json
  }
}
filter {
  grok {
    match => [ "message", "%{YEAR:year}-%{MONTHNUM:month}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}:?%{SECOND}[T ]INFO%{GREEDYDATA}job[\-]+%{INT:jobid}\s*=\s*\[\s*%{UUID:uuid}\s*\]%{GREEDYDATA}" ]
    named_captures_only => true
  }
}
output {
  stdout { debug => true debug_format => "json" }
  elasticsearch {
  }
}
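
Once events are flowing through this pipeline, a quick way to verify that the grok filter is populating the jobid field is a simple query against elasticsearch (a sketch; the host and the jobid value 2005 are illustrative):

Code Block
curl 'http://<elasticsearch_master>:9200/_search?q=jobid:2005&pretty'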

 

Steps in setting up the service:

  1. Deploy a vm instance from any existing linux based user template
  2. Setup logstash on this guest instance (steps 2 through 4 are sketched in the code block after this list)
    1. There is an apt repository and a yum repository from which logstash can be deployed.
  3. Setup elasticsearch on this guest instance
    1. There is an apt repository and a yum repository from which elasticsearch can be deployed.
  4. Setup redis on this guest instance
    1. For yum: This is available from the REMI repository
    2. For apt: This is available from the ppa:rwky/redis PPA
  5. Create a template from this virtual machine
  6. Subsequently this template can be used to spawn elasticsearch master and slave nodes.
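
As an illustration, the package installation in steps 2 through 4 might look as follows on a CentOS based guest. This is a minimal sketch: the repository URLs are taken from the default setup.conf shown below, while the exact repo file layout and package names are assumptions.

Code Block
# Steps 2 and 3: add the yum repositories for logstash and elasticsearch
# (baseurl/gpgkey values taken from setup.conf)
cat > /etc/yum.repos.d/logstash.repo <<'EOF'
[logstash-1.3]
name=logstash repository for 1.3.x packages
baseurl=http://packages.elasticsearch.org/logstash/1.3
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
gpgcheck=1
enabled=1
EOF

cat > /etc/yum.repos.d/elasticsearch.repo <<'EOF'
[elasticsearch-0.90]
name=Elasticsearch repository for 0.90.x packages
baseurl=http://packages.elasticsearch.org/elasticsearch/0.90/centos
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
gpgcheck=1
enabled=1
EOF

yum install -y logstash elasticsearch

# Step 4: redis comes from the REMI repository (rpm URLs from setup.conf)
rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm -Uvh http://rpms.famillecollet.com/enterprise/remi-release-6.rpm
yum install -y --enablerepo=remi redis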

 

Config for example elasticsearch cluster:

This section shows how to configure unicast discovery of master nodes. Multicast discovery can also be used but is not described here. By default all nodes are enabled to function as master. The actual master is elected through an elasticsearch master election process.

 

On each elasticsearch node:

  • Edit the file /etc/elasticsearch/elasticsearch.yml, replacing host1, host2 and so on with the actual IP addresses of the nodes.

discovery.zen.ping.unicast.hosts: ["host1", "host2"]

  • Edit the file /etc/elasticsearch/elasticsearch.yml, replacing host1 with the node's own IP address.

network.publish_host: host1
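
Putting the two settings together, /etc/elasticsearch/elasticsearch.yml on the node with address host1 in a two node cluster would contain something like the following. The cluster name is an illustrative assumption; it just needs to be the same on every node.

Code Block
cluster.name: cloudstack-logsearch
network.publish_host: host1
discovery.zen.ping.unicast.hosts: ["host1", "host2"]
# optional: rely on unicast only by disabling multicast discovery
discovery.zen.ping.multicast.enabled: false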

Automation:

The following two scripts automate steps 1 through 5 of the process above.

  • setup_es_template.sh
  • linux_flow.sh

setup_es_template.sh takes all of its parameters from a setup config file; the only command line param it requires is the path to that config file. The default setup.conf is adequately commented
to describe the required params.

~# ./setup_es_template.sh --config setup.conf

Default setup.conf:

#########################################################################################################
#This is the base template which will be used to provision a guest with logstash,elasticsearch and redis
templateid=

#The diskoffering fo the guest
diskofferingid=

#The serviceofferingid
serviceofferingid=

#The zoneid
zoneid=

#Currently only linux, don't change this
ostype=linux

#The default is to use the username/pass, fill this up if the ssh keyfile will be used
keyfile=""

#Username with which the various steps will be carried out
user=root
#Default is to use the password to automate the provisioning into the vm
passwd=

#The yum repo url for logstash
logstash_baseurl="http://packages.elasticsearch.org/logstash/1.3"
#Description for logstash software
logstash_desc="logstash repository for 1.3.x packages"
#Gpg key for the logstash repo
logstash_gpgkey="http://packages.elasticsearch.org/GPG-KEY-elasticsearch"
#Logstash reponame
logstash_reponame="\[logstash-1.3\]"
#The conf file which will be used to specify the cloudstack specific config info
logstash_conf=./logstash-indexer.conf


#The yum repo url for elasticsearch
elasticsearch_baseurl="http://packages.elasticsearch.org/elasticsearch/0.90/centos"

#Description for elasticsearch software
elasticsearch_desc="Elasticsearch repository for 0.90.x packages"

#The gpg key for repo
elasticsearch_gpgkey="http://packages.elasticsearch.org/GPG-KEY-elasticsearch"
#The elasticsearch reponame
elasticsearch_reponame="\[elasticsearch-0.90\]"

#The epel repo rpm
epelrpm=http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
#The remi repo rpm
remirpm=http://rpms.famillecollet.com/enterprise/remi-release-6.rpm

#########################################################################################################

The logstash-indexer.conf referenced in setup.conf should point its elasticsearch output at the master node:

output {
  elasticsearch {
    host => "<elasticsearch_master>"
  }
}

API Command:

A new API command ExtractLogByJobIdCmd will be introduced. This will be implemented as a synchronous command.
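
A minimal sketch of what the command class might look like, following the usual cloudstack BaseCmd pattern for synchronous commands. All names here (the response class, the injected service, the annotation values) are assumptions for illustration, not the final implementation:

Code Block
import org.apache.cloudstack.api.APICommand;
import org.apache.cloudstack.api.BaseCmd;
import org.apache.cloudstack.api.Parameter;

@APICommand(name = "extractLogByJobId",
            description = "Extracts log messages for the given job id",
            responseObject = ExtractLogResponse.class)   // ExtractLogResponse is hypothetical
public class ExtractLogByJobIdCmd extends BaseCmd {

    @Parameter(name = "jobid", type = CommandType.STRING, required = true,
               description = "the job id whose log messages should be extracted")
    private String jobId;

    @Override
    public String getCommandName() {
        return "extractlogbyjobidresponse";
    }

    @Override
    public void execute() {
        // Delegate to a manager that runs the elasticsearch DSL query described below
        // (_logSearchMgr is a hypothetical injected service).
        ExtractLogResponse response = _logSearchMgr.extractLogByJobId(jobId);
        response.setResponseName(getCommandName());
        setResponseObject(response);
        // other BaseCmd overrides (getEntityOwnerId, etc.) omitted for brevity
    }
}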

Manager:

The manager class will implement the actual functionality of querying elasticsearch for log messages that match the specified filters. For this, the elasticsearch REST api will be used: a POST request carrying an elasticsearch DSL body specifies the required query. The DSL is quite flexible, and if support is later required to filter by timestamp and other values, the DSL would make that easy to achieve.
DSL query for searching logs by jobid

{
    "query": {
        "query_string": {
            "query": "<jobid>",
            "fields": ["jobid"]
        }
    }
}
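
For example, the query can be issued against the elasticsearch REST endpoint with curl (the host and the jobid value 2005 are illustrative):

Code Block
curl -XPOST 'http://<elasticsearch_master>:9200/_search' -d '{
    "query": {
        "query_string": {
            "query": "2005",
            "fields": ["jobid"]
        }
    }
}'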

Web Services APIs

A new API will be introduced which can be accessed as
http://<host>:8080/client/api?command=extractLogByJobId&jobid=<jobid>

 

UI flow

Sample Java Code to retrieve log statements:

 

Code Block
import org.apache.http.HttpStatus;
import org.apache.http.entity.StringEntity;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.HttpResponse;
import org.apache.http.impl.client.DefaultHttpClient;
import java.io.*;

class SampleSearch
{
        public static void main(String[] args)
        {
                try {
                        DefaultHttpClient httpClient = new DefaultHttpClient();
                        String url = "http://192.168.56.100:9200/_search";
                        // Query elasticsearch for all log events whose jobid field matches 2005
                        StringEntity data = new StringEntity("{ \"query\": { \"query_string\": { \"query\": \"2005\", \"fields\" : [\"jobid\"] } } }");
                        HttpPost request = new HttpPost(url);
                        request.addHeader("content-type", "application/json");
                        request.setEntity(data);
                        HttpResponse response = httpClient.execute(request);
                        if (response.getStatusLine().getStatusCode() == HttpStatus.SC_OK)
                        {
                                InputStream is = response.getEntity().getContent();
                                BufferedReader in = new BufferedReader(new InputStreamReader(is));
                                String line = null;
                                StringBuilder responseData = new StringBuilder();
                                while ((line = in.readLine()) != null)
                                {
                                        responseData.append(line);
                                }
                                System.out.println("RESPONSE : " + responseData.toString());
                        }
                }
                catch (Exception exp)
                {
                        System.out.println("Error occurred: " + exp.toString());
                }
        }
}


IP Clearance

  • Logstash, which is an opensource log management tool, is covered under the Apache 2.0 license.
  • Elasticsearch, which is a search and analytics tool, is likewise covered under the Apache 2.0 license.

Appendix

Appendix A:

Appendix B:
