{"id":288,"date":"2015-09-07T16:26:30","date_gmt":"2015-09-07T16:26:30","guid":{"rendered":"http:\/\/onlinelab.info\/?p=288"},"modified":"2015-09-07T16:26:30","modified_gmt":"2015-09-07T16:26:30","slug":"how-to-install-elasticsearch-logstash-and-kibana-4-on-ubuntu-14-04-15-04","status":"publish","type":"post","link":"https:\/\/www.asianux.org.vn\/index.php\/2015\/09\/07\/how-to-install-elasticsearch-logstash-and-kibana-4-on-ubuntu-14-04-15-04\/","title":{"rendered":"How to install Elasticsearch, Logstash and Kibana 4 on Ubuntu 14.04 \/ 15.04"},"content":{"rendered":"<figure id=\"attachment_8588\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-thumbnail wp-image-8588\" src=\"http:\/\/www.onlinelab.info\/wp-content\/uploads\/2015\/04\/ELK-150x145.jpg\" alt=\"ELK Stack\" width=\"150\" height=\"145\" title=\"\"><figcaption class=\"wp-caption-text\">ELK Stack<\/figcaption><\/figure>\n<p>In this post, we will look at how to install Elasticsearch, Logstash and Kibana 4 on <a href=\"http:\/\/www.onlinelab.info\/tag\/ubuntu-14.04\" target=\"_blank\" rel=\"noopener\">Ubuntu 14.04 \/ 15.04<\/a>. The ELK stack helps us store and manage logs in a centralized location. The stack consists of four components that together make it easy to analyze problems by correlating events at a particular point in time.<\/p>\n<p>Centralizing logs makes a system administrator\u2019s life easier: problems can be analyzed without visiting each machine for its logs, and the logs can be visualized for management and business requirements.<\/p>\n<h2>Components:<\/h2>\n<p>Logstash \u2013 processes incoming logs (collects, parses and sends them to Elasticsearch).<\/p>\n<p><a href=\"http:\/\/www.onlinelab.info\/tag\/analytics\" target=\"_blank\" rel=\"noopener\">Elasticsearch<\/a> \u2013 stores the logs coming from Logstash.<\/p>\n<p>Kibana 4 \u2013 web interface for visualizing the logs (runs as its own standalone application). 
The above three components are installed on the server.<\/p>\n<p>Logstash-forwarder \u2013 installed on client machines; sends logs to Logstash over the lumberjack protocol.<\/p>\n<h2>Application versions:<\/h2>\n<p>This article uses the following software versions for the ELK stack.<\/p>\n<p>Elasticsearch 1.7.0<\/p>\n<p>logstash-1.5.3<\/p>\n<p>Kibana 4.1.1<\/p>\n<p>logstash-forwarder-0.4.0<\/p>\n<h2>Prerequisites:<\/h2>\n<p>1. We need to install either OpenJDK or Oracle JDK; it is recommended to <a title=\"How to install Java SDK 1.8 on RHEL 7\/ CentOS 7\" href=\"http:\/\/www.onlinelab.info\/how-tos\/linux\/ubuntu-how-tos\/install-java-jdk-8-on-ubuntu-14-10-linux-mint-17-1.html\" target=\"_blank\" rel=\"noopener\">install Oracle JDK<\/a>. Verify the Java version by using the following command.<\/p>\n<pre>$ java -version\n\njava version \"1.8.0_11\"\nJava(TM) SE Runtime Environment (build 1.8.0_11-b12)\nJava HotSpot(TM) 64-Bit Server VM (build 25.11-b03, mixed mode)<\/pre>\n<p>2. Install wget.<\/p>\n<pre>$ sudo su -\n# apt-get update\n# apt-get install wget<\/pre>\n<h2>Install Elasticsearch:<\/h2>\n<p>Elasticsearch is an open source search server; it offers real-time distributed search and analytics with a RESTful web interface. 
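<\/p>\n<p>As a quick illustration of that RESTful interface (a sketch using a hypothetical index name; it assumes Elasticsearch is already installed and running as described below), you can index a document and search for it with curl:<\/p>\n<pre># curl -XPUT 'http:\/\/localhost:9200\/testindex\/testlog\/1' -d '{\"message\": \"hello ELK\"}'\n\n# curl -XGET 'http:\/\/localhost:9200\/testindex\/_search?q=message:hello'<\/pre>\n<p>After the index refresh (about a second), the second command returns the stored document in the hits section.<\/p>\n<p>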
Elasticsearch stores all the logs sent by the Logstash server and serves the messages that Kibana 4 requests in order to fulfill user queries over the web interface.<\/p>\n<p>This topic covers only the configuration settings required for ELK; you can also take a look at <a title=\"Install Elasticsearch on CentOS 7 \/ Ubuntu 14.10 \/ Linux Mint 17.1\" href=\"http:\/\/www.onlinelab.info\/how-tos\/linux\/ubuntu-how-tos\/install-elasticsearch-on-centos-7-ubuntu-14-10-linux-mint-17-1.html\" target=\"_blank\" rel=\"noopener\">Install Elasticsearch on CentOS 7 \/ Ubuntu 14.10 \/ Linux Mint 17.1<\/a> for detailed instructions.<\/p>\n<p>Let\u2019s install Elasticsearch; it can be downloaded from the <a href=\"https:\/\/www.elastic.co\/downloads\/elasticsearch\" target=\"_blank\" rel=\"noopener\">official website<\/a>. Set up the repository and install the latest version of Elasticsearch.<\/p>\n<pre># wget -qO - https:\/\/packages.elastic.co\/GPG-KEY-elasticsearch | sudo apt-key add -\n\n# echo \"deb http:\/\/packages.elastic.co\/elasticsearch\/1.7\/debian stable main\" | sudo tee -a \/etc\/apt\/sources.list.d\/elasticsearch-1.7.list\n\n# apt-get update &amp;&amp; apt-get install elasticsearch<\/pre>\n<p>Configure Elasticsearch to start during system startup. Note: Ubuntu 14.04 does not ship systemd; there, run \u201cupdate-rc.d elasticsearch defaults\u201d and \u201cservice elasticsearch start\u201d instead of the systemctl commands.<\/p>\n<pre># systemctl daemon-reload\n# systemctl enable elasticsearch.service\n# systemctl start elasticsearch.service<\/pre>\n<p>Wait at least a minute to let Elasticsearch start up fully; otherwise, testing will fail. 
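<\/p>\n<p>Rather than sleeping for a fixed time, you can poll until Elasticsearch answers; a small sketch:<\/p>\n<pre># until curl -s -o \/dev\/null http:\/\/localhost:9200; do echo \"waiting for elasticsearch...\"; sleep 2; done<\/pre>\n<p>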
Elasticsearch should now be listening on port 9200 for HTTP requests; we can use curl to get a response.<\/p>\n<pre># curl -X GET http:\/\/localhost:9200\n{\n  \"status\" : 200,\n  \"name\" : \"Thermo\",\n  \"cluster_name\" : \"elasticsearch\",\n  \"version\" : {\n    \"number\" : \"1.7.0\",\n    \"build_hash\" : \"929b9739cae115e73c346cb5f9a6f24ba735a743\",\n    \"build_timestamp\" : \"2015-07-16T14:31:07Z\",\n    \"build_snapshot\" : false,\n    \"lucene_version\" : \"4.10.4\"\n  },\n  \"tagline\" : \"You Know, for Search\"\n}<\/pre>\n<h2>Install Logstash:<\/h2>\n<p>Logstash is an open source tool for collecting logs, parsing them and storing them for searching. Logstash ships with a built-in web interface (Kibana 3) for visualizing logs, which we are not going to discuss here; we will use Kibana 4 instead. Processing of various types of events can be extended by adding plugins; over 160 plugins are available as of now. Let\u2019s go directly to the installation.<\/p>\n<pre># echo \"deb http:\/\/packages.elasticsearch.org\/logstash\/1.5\/debian stable main\" | sudo tee -a \/etc\/apt\/sources.list\n\n# apt-get update &amp;&amp; apt-get install logstash<\/pre>\n<p>Once the Logstash server is installed, let\u2019s move on to the next section.<\/p>\n<h2>Create SSL certificate:<\/h2>\n<p>Logstash-forwarder, which will be installed on the client servers to ship logs, requires an SSL certificate to validate the identity of the Logstash server. There are two ways to create the SSL certificate, and the choice depends on the logstash-forwarder configuration: if you use a hostname (\u201cservers\u201d: [ \u201cserver.itzgeek.local:5050\u201d ]), the subject name of the SSL certificate should match \u201cserver.itzgeek.local\u201d. 
If you use an IP address (\u201cservers\u201d: [ \u201c192.168.12.10:5050\u201d ]), you must create an SSL certificate with an IP SAN whose value is 192.168.12.10.<\/p>\n<p>Follow either one of the methods below to create the SSL certificate.<\/p>\n<h4>Option 1: (Hostname FQDN)<\/h4>\n<p>Before creating the certificate, make sure you have an A record for the Logstash server; ensure that the client servers are able to resolve its hostname. If you do not have DNS, add a host entry for the Logstash server, where 192.168.12.10 is the IP address of the Logstash server and server.itzgeek.local is its hostname.<\/p>\n<pre># vi \/etc\/hosts\n\n192.168.12.10 server.itzgeek.local<\/pre>\n<p>Let\u2019s create the SSL certificate. Go to the OpenSSL directory.<\/p>\n<pre># cd \/etc\/ssl\/<\/pre>\n<p>Execute the following command to create the SSL certificate, replacing the highlighted name with your real Logstash server hostname.<\/p>\n<pre># openssl req -x509 -nodes -newkey rsa:2048 -days 365 -keyout logstash-forwarder.key -out logstash-forwarder.crt -subj \/CN=<strong>server.itzgeek.local<\/strong><\/pre>\n<h4>Option 2: (IP Address)<\/h4>\n<p>Before creating the SSL certificate, we need to add the IP address of the Logstash server to the SubjectAltName entry in the OpenSSL config file (on Ubuntu this file is \/etc\/ssl\/openssl.cnf).<\/p>\n<pre># vi \/etc\/ssl\/openssl.cnf<\/pre>\n<p>Go to the \u201c[ v3_ca ]\u201d section and replace the IP with your Logstash server\u2019s IP.<\/p>\n<pre>subjectAltName = IP:192.168.12.10<\/pre>\n<p>Go to the OpenSSL directory.<\/p>\n<pre># cd \/etc\/ssl\/<\/pre>\n<p>Execute the following command to create the SSL certificate.<\/p>\n<pre># openssl req -x509 -days 365 -batch -nodes -newkey rsa:2048 -keyout logstash-forwarder.key -out logstash-forwarder.crt<\/pre>\n<p>This logstash-forwarder.crt should be copied to all client servers that send logs to the Logstash server.<\/p>\n<h2>Configure Logstash:<\/h2>\n<p>Logstash configuration files live in \/etc\/logstash\/conf.d\/, which is initially just an empty folder. 
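<\/p>\n<p>Before referencing the certificate from Logstash, you can sanity-check its subject and any IP SAN with openssl:<\/p>\n<pre># openssl x509 -in \/etc\/ssl\/logstash-forwarder.crt -noout -subject\n\n# openssl x509 -in \/etc\/ssl\/logstash-forwarder.crt -noout -text | grep -A1 \"Subject Alternative Name\"<\/pre>\n<p>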
We need to create a configuration file there. Logstash configuration consists of three sections \u2013 input, filter and output; all three can live in a single file, or each section can go in a separate file ending in .conf.<\/p>\n<p>Here we will use a single file to hold the input, filter and output sections.<\/p>\n<pre># vi \/etc\/logstash\/conf.d\/logstash.conf<\/pre>\n<p>In the first section, we will put an entry for the input configuration. The following configuration sets up a lumberjack input listening on port 5050 for incoming logs from the logstash-forwarder instances that sit on the client servers; it also uses the SSL certificate that we created earlier.<\/p>\n<pre>input {\n  lumberjack {\n    port =&gt; 5050\n    type =&gt; \"logs\"\n    ssl_certificate =&gt; \"\/etc\/ssl\/logstash-forwarder.crt\"\n    ssl_key =&gt; \"\/etc\/ssl\/logstash-forwarder.key\"\n  }\n}<\/pre>\n<p>In the second section, we will put an entry for the filter configuration. Grok is a Logstash filter that parses logs before they are sent to Elasticsearch for storage. The following grok filter looks for logs that are labeled as \u201csyslog\u201d and tries to parse them into a structured index.<\/p>\n<pre>filter {\n  if [type] == \"syslog\" {\n    grok {\n      match =&gt; { \"message\" =&gt; \"%{SYSLOGLINE}\" }\n    }\n\n    date {\n      match =&gt; [ \"timestamp\", \"MMM  d HH:mm:ss\", \"MMM dd HH:mm:ss\" ]\n    }\n  }\n}<\/pre>\n<p>Consider visiting the <a href=\"http:\/\/grokdebug.herokuapp.com\/\" target=\"_blank\" rel=\"noopener\">grok debugger<\/a> to test filter patterns.<\/p>\n<p>In the third section, we will put an entry for the output configuration. 
This section defines where the logs get stored; in our case that is Elasticsearch.<\/p>\n<pre>output {\n  elasticsearch { host =&gt; \"localhost\" index =&gt; \"logstash-%{+YYYY.MM.dd}\" }\n  stdout { codec =&gt; rubydebug }\n}<\/pre>\n<p>Now start the Logstash service.<\/p>\n<pre># systemctl start logstash.service<\/pre>\n<p>Logstash server logs are stored in the following file and will help us troubleshoot issues.<\/p>\n<pre># cat \/var\/log\/logstash\/logstash.log<\/pre>\n<p>Next we will configure logstash-forwarder to ship logs to the Logstash server.<\/p>\n<h2>Configure Logstash-forwarder:<\/h2>\n<p>Logstash-forwarder is client software that ships logs to a Logstash server; it should be installed on all client servers. Logstash-forwarder can be downloaded from the <a href=\"https:\/\/www.elastic.co\/downloads\/logstash\" target=\"_blank\" rel=\"noopener\">official website<\/a>, or you can use the following commands to download and install it in the terminal.<\/p>\n<pre># wget https:\/\/download.elastic.co\/logstash-forwarder\/binaries\/logstash-forwarder_0.4.0_amd64.deb\n\n# dpkg -i logstash-forwarder_0.4.0_amd64.deb<\/pre>\n<p>Logstash-forwarder uses the SSL certificate to validate the Logstash server\u2019s identity, so copy the logstash-forwarder.crt that we created earlier from the Logstash server to the client.<\/p>\n<pre># scp -pr root@192.168.12.10:\/etc\/ssl\/logstash-forwarder.crt \/etc\/ssl<\/pre>\n<p>Open up the configuration file.<\/p>\n<pre># vi \/etc\/logstash-forwarder.conf<\/pre>\n<p>In the \u201cnetwork\u201d section, specify the Logstash server with its port number and the path to the logstash-forwarder certificate that you copied from the Logstash server.<\/p>\n<p>This section tells logstash-forwarder to send logs to the Logstash server \u201cserver.itzgeek.local\u201d on port 5050; the client validates the server identity with the help of the SSL certificate. 
Note: replace \u201cserver.itzgeek.local\u201d with the IP address in case you are using an IP SAN.<\/p>\n<pre>\"servers\": [ \"server.itzgeek.local:5050\" ],\n\n\"ssl ca\": \"\/etc\/ssl\/logstash-forwarder.crt\",\n\n\"timeout\": 15<\/pre>\n<p>The \u201cfiles\u201d section configures which files are to be shipped. In this article we will configure logstash-forwarder to send logs (\/var\/log\/syslog) to the Logstash server with \u201csyslog\u201d as the type.<\/p>\n<pre>{\n  \"paths\": [\n    \"\/var\/log\/syslog\"\n  ],\n\n  \"fields\": { \"type\": \"syslog\" }\n}<\/pre>\n<p>Start the service.<\/p>\n<pre># systemctl start logstash-forwarder.service<\/pre>\n<p>You can look at the log file in case of any issue.<\/p>\n<pre># cat \/var\/log\/logstash-forwarder\/logstash-forwarder.err<\/pre>\n<h2>Configure Kibana 4:<\/h2>\n<p>Kibana provides visualization of the logs; download it from the <a href=\"https:\/\/www.elastic.co\/downloads\/kibana\" target=\"_blank\" rel=\"noopener\">official website<\/a>. Use the following command to download it in the terminal.<\/p>\n<pre># wget https:\/\/download.elastic.co\/kibana\/kibana\/kibana-4.1.1-linux-x64.tar.gz<\/pre>\n<p>Extract it and move it to \/opt\/.<\/p>\n<pre># tar -zxvf kibana-4.1.1-linux-x64.tar.gz\n\n# mv kibana-4.1.1-linux-x64 \/opt\/kibana4<\/pre>\n<p>Enable the PID file for Kibana; this is required for the systemd unit file below.<\/p>\n<pre># sed -i 's\/#pid_file\/pid_file\/g' \/opt\/kibana4\/config\/kibana.yml<\/pre>\n<p>Kibana can be started by running \/opt\/kibana4\/bin\/kibana; to run Kibana as a service we will create a systemd unit file. Note that systemd does not interpret shell syntax in Exec lines, so the stop command is wrapped in \/bin\/sh -c.<\/p>\n<pre># vi \/etc\/systemd\/system\/kibana4.service\n\n[Unit]\nDescription=Kibana 4 Web Interface\nAfter=elasticsearch.service\nAfter=logstash.service\n\n[Service]\nExecStartPre=\/bin\/rm -rf \/var\/run\/kibana.pid\nExecStart=\/opt\/kibana4\/bin\/kibana\nExecStop=\/bin\/sh -c 'kill $(cat \/var\/run\/kibana.pid)'\n\n[Install]\nWantedBy=multi-user.target<\/pre>\n<p>Start Kibana and enable it to start automatically at system startup.<\/p>\n<pre># systemctl start kibana4.service\n\n# systemctl enable kibana4.service<\/pre>\n<p>Access your Kibana portal by visiting the following link.<\/p>\n<pre>http:\/\/your-ip-address:5601\/<\/pre>\n<p>You will get the following page, where you have to map the Logstash index before using Kibana. In the Time-field name drop-down, select<\/p>\n<pre>@timestamp<\/pre>\n<figure id=\"attachment_9041\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-9041\" src=\"http:\/\/www.onlinelab.info\/wp-content\/uploads\/2015\/07\/install-Elasticsearch-Logstash-and-Kibana-4-on-Ubuntu-14.04-Index-Pattern.png\" alt=\"Install Elasticsearch, Logstash and Kibana 4 on Ubuntu 14.04 - Index Pattern\" width=\"640\" height=\"341\" title=\"\"><figcaption class=\"wp-caption-text\">Install Elasticsearch, Logstash and Kibana 4 on Ubuntu 14.04 \u2013 Index Pattern<\/figcaption><\/figure>\n<p>Once you have selected it, you will be redirected to the Kibana main page.<\/p>\n<figure id=\"attachment_9042\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-9042\" src=\"http:\/\/www.onlinelab.info\/wp-content\/uploads\/2015\/07\/install-Elasticsearch-Logstash-and-Kibana-4-on-Ubuntu-14.04-Kibana-Discover-the-Logs.png\" alt=\"Install Elasticsearch, Logstash and Kibana 4 on Ubuntu 14.04 - Kibana Discover the Logs\" width=\"640\" height=\"343\" title=\"\"><figcaption class=\"wp-caption-text\">Install Elasticsearch, Logstash and Kibana 4 on Ubuntu 14.04 \u2013 Kibana Discover the Logs<\/figcaption><\/figure>\n<p>Kibana does not come with any kind of password-protected access to the portal. 
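<\/p>\n<p>One common approach (a sketch, assuming Nginx is installed and Kibana listens on 127.0.0.1:5601; the server name kibana.example.com is a placeholder) is to front Kibana with an Nginx reverse proxy that enforces HTTP basic authentication:<\/p>\n<pre>server {\n    listen 80;\n    server_name kibana.example.com;\n\n    auth_basic \"Restricted Access\";\n    auth_basic_user_file \/etc\/nginx\/htpasswd.users;\n\n    location \/ {\n        proxy_pass http:\/\/127.0.0.1:5601;\n    }\n}<\/pre>\n<p>The htpasswd.users file can be created with the htpasswd utility from the apache2-utils package, for example: htpasswd -c \/etc\/nginx\/htpasswd.users kibanaadmin.<\/p>\n<p>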
With Nginx, we can configure access so that users must authenticate before reaching the portal.<\/p>\n<p>That\u2019s all; you have successfully configured an ELK stack for centralized log management.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>ELK Stack In this post, we will look at how to install Elasticsearch, Logstash and Kibana 4 on Ubuntu 14.04 \/ 15.04. This ELK stack helps us store and manage the logs in a centralized location.&hellip;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[9],"tags":[],"class_list":["post-288","post","type-post","status-publish","format-standard","hentry","category-solution"],"_links":{"self":[{"href":"https:\/\/www.asianux.org.vn\/index.php\/wp-json\/wp\/v2\/posts\/288","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.asianux.org.vn\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.asianux.org.vn\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.asianux.org.vn\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.asianux.org.vn\/index.php\/wp-json\/wp\/v2\/comments?post=288"}],"version-history":[{"count":0,"href":"https:\/\/www.asianux.org.vn\/index.php\/wp-json\/wp\/v2\/posts\/288\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.asianux.org.vn\/index.php\/wp-json\/wp\/v2\/media?parent=288"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.asianux.org.vn\/index.php\/wp-json\/wp\/v2\/categories?post=288"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.asianux.org.vn\/index.php\/wp-json\/wp\/v2\/tags?post=288"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}