...
https://issues.apache.org/jira/browse/CLOUDSTACK-3471
As of CloudStack 4.3 there is no API that can aggregate log messages by job id. Since logging is typically a service that is available outside the IaaS core, it was felt that, instead of integrating this as an API within CloudStack, it would be better to provide a generic log search service that CloudStack can invoke to retrieve log messages. The mechanism for achieving this is described in what follows.
...
In this configuration any Linux user template can be used to spawn elasticsearch nodes. The number of such nodes should be configurable via a global parameter. One of the nodes will be designated as the master node, which will also run the redis instance.
Logstash configuration on the log shipping layer
...
Code Block |
---|
input {
  redis {
    host => "<host>"
    # these settings should match the output of the agent
    data_type => "list"
    key => "logstash"
    # We use the 'json' codec here because we expect to read
    # json events from redis.
    codec => json
  }
}
filter {
  grok {
    match => [ "message", "%{YEAR:year}-%{MONTHNUM:month}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}:?%{SECOND}[T ]INFO%{GREEDYDATA}job[\-]+%{INT:jobid}\s*=\s*\[\s*%{UUID:uuid}\s*\]%{GREEDYDATA}" ]
    named_captures_only => true
  }
}
output {
  stdout { debug => true debug_format => "json" }
  elasticsearch {
    host => "<host>"
  }
}
|
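As a sanity check, the jobid and uuid captures in the grok filter above can be approximated with plain regular expressions and tried against a sample line. The log line, job id, and UUID below are made-up values for illustration only:

```shell
# Hypothetical management-server log line; all values are invented for illustration.
line='2013-07-12 10:15:30,123 INFO [cloud.async.AsyncJobManagerImpl] job-2005 = [ 3b9aca00-1dd2-11b2-8080-808080808080 ] completed'

# Simplified equivalents of the %{INT:jobid} and %{UUID:uuid} captures.
jobid=$(printf '%s\n' "$line" | sed -nE 's/.*job-+([0-9]+)[[:space:]]*=.*/\1/p')
uuid=$(printf '%s\n' "$line" | sed -nE 's/.*\[[[:space:]]*([0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12})[[:space:]]*\].*/\1/p')

echo "jobid=$jobid uuid=$uuid"
```

Any log line that does not carry a `job-<id> = [ <uuid> ]` pair simply produces no captures, which is also how the grok filter behaves.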
The following settings configure unicast discovery of master nodes. Multicast discovery can also be used but is not described here. By default all nodes are eligible to act as master; the actual master is chosen through the elasticsearch master election process.
On each elasticsearch node:
discovery.zen.ping.unicast.hosts: ["host1", "host2"]
network.publish_host: host1
The following two scripts can automate steps 1 to 5 of the above process.
...
#The yum repo url for elasticsearch
elasticsearch_baseurl="http://packages.elasticsearch.org/elasticsearch/0.90/centos"
#Description for the elasticsearch software
elasticsearch_desc="Elasticsearch repository for 0.90.x packages"
#The gpg key for the repo
elasticsearch_gpgkey="http://packages.elasticsearch.org/GPG-KEY-elasticsearch"
#The elasticsearch reponame
elasticsearch_reponame="\[elasticsearch-0.90\]"
#The epel repo rpm
epelrpm=http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
#The remi repo rpm
remirpm=http://rpms.famillecollet.com/enterprise/remi-release-6.rpm
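For reference, variables like these are typically rendered into a yum repository definition. The sketch below assumes the 0.90.x elasticsearch repository; the output path and the exact rendering are illustrative, not the literal contents of the scripts:

```shell
# Repo settings, assuming the elasticsearch 0.90.x yum repository.
elasticsearch_baseurl="http://packages.elasticsearch.org/elasticsearch/0.90/centos"
elasticsearch_desc="Elasticsearch repository for 0.90.x packages"
elasticsearch_gpgkey="http://packages.elasticsearch.org/GPG-KEY-elasticsearch"
elasticsearch_reponame="elasticsearch-0.90"

# In a real deployment this would be /etc/yum.repos.d/elasticsearch.repo;
# a local file is used here so the sketch can run without root.
repo_file="elasticsearch.repo"
cat > "$repo_file" <<EOF
[${elasticsearch_reponame}]
name=${elasticsearch_desc}
baseurl=${elasticsearch_baseurl}
gpgcheck=1
gpgkey=${elasticsearch_gpgkey}
enabled=1
EOF
```

After writing the file, `yum install elasticsearch` picks the package up from the configured baseurl.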
...
Code Block |
---|
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;

import org.apache.commons.httpclient.HttpStatus;
import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.DefaultHttpClient;

class SampleSearch {
    public static void main(String[] args) {
        try {
            DefaultHttpClient httpClient = new DefaultHttpClient();
            String url = "http://192.168.56.100:9200/_search";
            // Query string search on the 'jobid' field extracted by the grok filter.
            StringEntity data = new StringEntity(
                "{ \"query\": { \"query_string\": { \"query\": \"2005\", \"fields\" : [\"jobid\"] } } }");
            HttpPost request = new HttpPost(url);
            // The request body is JSON, so declare it as such.
            request.addHeader("content-type", "application/json");
            request.setEntity(data);
            HttpResponse response = httpClient.execute(request);
            // Check the HTTP status code rather than the string form of the response.
            if (response.getStatusLine().getStatusCode() == HttpStatus.SC_OK) {
                InputStream is = response.getEntity().getContent();
                BufferedReader in = new BufferedReader(new InputStreamReader(is));
                String line;
                StringBuilder responseData = new StringBuilder();
                while ((line = in.readLine()) != null) {
                    responseData.append(line);
                }
                System.out.println("RESPONSE : " + responseData.toString());
            }
        } catch (Exception exp) {
            System.out.println("Error occurred: " + exp.toString());
        }
    }
}
|
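For quick manual testing, an equivalent search can be issued with curl. The host, port, and job id below are the sample values from the Java code and will need to be adjusted for a real deployment:

```shell
# Sample search body: match documents whose 'jobid' field equals 2005.
query='{ "query": { "query_string": { "query": "2005", "fields": ["jobid"] } } }'

# POST it to the Elasticsearch HTTP API; if the sample host is not
# reachable this just prints a note instead of failing.
curl -s -XPOST "http://192.168.56.100:9200/_search" -d "$query" \
  || echo "Elasticsearch not reachable at 192.168.56.100:9200"
```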
...