
Building a log analysis system with ELK + Redis


Introduction

At work, whether you are in development or operations, you run into all kinds of logs: system logs, application logs, and security logs. For developers, reading logs means watching a program's runtime errors as they happen and analyzing its performance. A medium-to-large application is typically deployed across multiple servers, so its log files end up scattered over different machines. Checking logs box by box is obviously far too tedious; the open-source log analysis stack ELK solves this problem nicely.
ELK is not a single standalone system; it is a combination of three open-source tools: ElasticSearch, Logstash, and Kibana.

  • ElasticSearch — a search server based on Lucene. It provides a distributed, multi-tenant full-text search engine behind a RESTful web interface. Elasticsearch is written in Java, released as open source under the Apache License, and is one of the most popular enterprise search engines today. It is designed for cloud environments: near-real-time search that is stable, reliable, fast, and easy to install and use.
  • Logstash — an open-source tool for collecting and processing logs, and for storing them for later use.
  • Kibana — a web interface for log analysis on top of Logstash and ElasticSearch. It lets you search, visualize, and analyze your logs efficiently.

(figure: ELK architecture)

Setup

This walkthrough uses a single host rather than a multi-machine cluster. Logstash could write collected logs straight into Elasticsearch; here, Redis is used as a log queue in between.
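The flow just described — a Logstash agent pushes events onto a Redis list, and a separate indexer drains that list into Elasticsearch — can be sketched in plain Python. Everything here is illustrative (a deque stands in for the Redis key `logstash:redis`, a Python list for the Elasticsearch index); the point is only to show how the queue decouples collection from indexing:

```python
import json
from collections import deque

queue = deque()  # stands in for the Redis list "logstash:redis"
index = []       # stands in for the "nginxaccesslog" index in Elasticsearch

def agent_ship(line):
    """Agent side: wrap a raw log line in an event and push it onto the queue."""
    event = {"type": "nginx-access-log", "message": line}
    queue.append(json.dumps(event))

def indexer_drain():
    """Indexer side: pop events off the other end (FIFO) and index them."""
    while queue:
        index.append(json.loads(queue.popleft()))

agent_ship('10.0.0.1 - - [22/Sep/2016:01:51:45 +0000] "GET /api HTTP/1.1" 200 512')
indexer_drain()
print(len(index), "event(s) indexed")
```

Because the agent and indexer only share the queue, bursts of log traffic pile up in Redis instead of overwhelming Elasticsearch.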

  • Install the JDK

    Install JDK 1.8.

  • Install Elasticsearch

    Download: https://www.elastic.co/downloads and pick the matching version.
    My version here is elasticsearch-2.4.0. Note: do not install it under your own home directory.

    Extract it:

    [user@localhost elk]$ tar -zxf elasticsearch-2.4.0.tar.gz
    [user@localhost elk]$ cd elasticsearch-2.4.0
    [user@localhost elasticsearch-2.4.0]$
    

    Install the head plugin:

    [user@localhost elasticsearch-2.4.0]$ ./bin/plugin install mobz/elasticsearch-head
    [user@localhost elasticsearch-2.4.0]$ ls plugins/
    head
    

    Edit the ElasticSearch configuration file:

[user@localhost elasticsearch-2.4.0]$ vim config/elasticsearch.yml

# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: es_cluster            # name of your Elasticsearch cluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node0                    # name of this node within the cluster
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /tmp/elasticseach/data   # data directory
#
# Path to log files:
#
path.logs: /tmp/elasticseach/logs   # log directory
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
# bootstrap.memory_lock: true
#
# Make sure that the `ES_HEAP_SIZE` environment variable is set to about half the memory
# available on the system and that the owner of the process is allowed to use this limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
# network.host: 192.168.0.1
network.host: 10.69.218.61          # this machine's IP (my VM's address)
#
# Set a custom port for HTTP:
#
http.port: 9200                     # default port

The remaining settings can be left at their defaults for now.
Start ElasticSearch:

[root@localhost elasticsearch-2.4.0]# ./bin/elasticsearch

Note: this is guaranteed to fail here:

[root@localhost elasticsearch-2.4.0]# ./bin/elasticsearch
Exception in thread "main" java.lang.RuntimeException: don't run elasticsearch as root.
        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:94)
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:160)
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:286)
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
Refer to the log for complete error details.

The message says it plainly: ElasticSearch must not be run as root.
Create a dedicated user and group for ElasticSearch:

[root@localhost elasticsearch-2.4.0]# groupadd elseach
[root@localhost elasticsearch-2.4.0]# adduser -G elseach elaseach
[root@localhost elasticsearch-2.4.0]# passwd elaseach    # prompts for the new password

Make the elaseach user and elseach group the owner of the ElasticSearch install directory:

 [root@localhost elk]# chown -R elaseach:elseach elasticsearch-2.4.0/

Don't forget to also give the /tmp/elasticseach/data and /tmp/elasticseach/logs directories to this user, or it will lack read/write permission there:

 [root@localhost tmp]# chown -R elaseach:elseach elasticseach/

Now switch to the elaseach user and start again. If ElasticSearch was installed inside another user's home directory, elaseach will not have permission to execute it, and you will get: Could not find or load the main class: org.elasticsearch.bootstrap.Elasticsearch

  [elaseach@localhost elasticsearch-2.4.0]$ ./bin/elasticsearch

[2016-09-22 01:51:42,102][WARN ][bootstrap] unable to install syscall filter: seccomp unavailable: requires kernel 3.5+ with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in
[2016-09-22 01:51:42,496][INFO ][node] [node0] version[2.4.0], pid[4205], build[ce9f0c7/2016-08-29T09:14:17Z]
[2016-09-22 01:51:42,496][INFO ][node] [node0] initializing ...
[2016-09-22 01:51:43,266][INFO ][plugins] [node0] modules [reindex, lang-expression, lang-groovy], plugins [head], sites [head]
[2016-09-22 01:51:43,290][INFO ][env] [node0] using [1] data paths, mounts [[/ (/dev/sda5)]], net usable_space [8.4gb], net total_space [14.6gb], spins? [possibly], types [ext4]
[2016-09-22 01:51:43,290][INFO ][env] [node0] heap size [998.4mb], compressed ordinary object pointers [unknown]
[2016-09-22 01:51:43,290][WARN ][env] [node0] max file descriptors [4096] for elasticsearch process likely too low, consider increasing to at least [65536]
[2016-09-22 01:51:45,697][INFO ][node] [node0] initialized
[2016-09-22 01:51:45,697][INFO ][node] [node0] starting ...
[2016-09-22 01:51:45,832][INFO ][transport] [node0] publish_address {10.69.218.61:9300}, bound_addresses {10.69.218.61:9300}
[2016-09-22 01:51:45,839][INFO ][discovery] [node0] es_cluster/kJMDfFMwQXGrigfknNs-_g
[2016-09-22 01:51:49,039][INFO ][cluster.service] [node0] new_master {node0}{kJMDfFMwQXGrigfknNs-_g}{10.69.218.61}{10.69.218.61:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-09-22 01:51:49,109][INFO ][http] [node0] publish_address {10.69.218.61:9200}, bound_addresses {10.69.218.61:9200}
[2016-09-22 01:51:49,109][INFO ][node] [node0] started
[2016-09-22 01:51:49,232][INFO ][gateway] [node0] recovered [2] indices into cluster_state

Startup succeeded.
Open http://10.69.218.61:9200 in a browser on your machine:

(screenshot: Elasticsearch JSON response)

This means the search engine API is responding normally. Note that port 9200 must be open on the server's firewall, or the request will fail.
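For reference, the page shown is just a small JSON banner. The sample below approximates what Elasticsearch 2.4 returns at the root endpoint for this configuration (the field layout is abbreviated and from memory, so treat it as illustrative):

```python
import json

# Roughly the body returned by `curl http://10.69.218.61:9200`
# (abbreviated; exact fields vary by Elasticsearch version).
banner = json.loads("""
{
  "name": "node0",
  "cluster_name": "es_cluster",
  "version": {"number": "2.4.0"},
  "tagline": "You Know, for Search"
}
""")

# The node and cluster names match what we set in elasticsearch.yml.
print(banner["cluster_name"], banner["version"]["number"])
```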

Open the head plugin we installed earlier:

http://10.69.218.61:9200/_plugin/head/

(screenshot: elasticsearch-head UI)

On a brand-new setup there is no data yet, and the node0 node shows no cluster information. I took this screenshot after already adding data, which is why information shows up.

    • Install Logstash

      Download: https://www.elastic.co/downloads and pick the matching version.
      My version here is logstash-2.4.0.tar.gz.
      Extract it:

      [root@localhost elk]# tar -zxvf logstash-2.4.0.tar.gz
      [root@localhost elk]# cd logstash-2.4.0
      

      Edit the logstash_agent configuration file (config/logstash_agent.conf, used when starting the agent below) to collect logs and write them into the Redis queue:

      input {
      # For detail config for log4j as input,
      # See: https://www.elastic.co/guide/en/logstash/
        file {
          type => "nginx-access-log" # log name
          path => "/server/tengine/logs/api-v.dev.gomeplus.com.access.log" # log path
        }
      }
      #filter {
      # #Only matched data are sent to output. This is where events would be filtered.
      #}
      output {
          redis {
            host => "10.10.1.34"
            port => "6879"
            data_type => "list" # push events onto a Redis list
            key => "logstash:redis"
          }
      }
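The filter block above is commented out, so each event keeps the raw nginx line in its message field. What a filter would extract from a combined-format access-log line can be sketched with a hand-rolled regex in Python (the pattern below is my own approximation, not Logstash's actual grok implementation):

```python
import re

# Approximate parse of an nginx "combined" access-log line,
# similar in spirit to a %{COMBINEDAPACHELOG}-style grok filter.
PATTERN = re.compile(
    r'(?P<clientip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) \S+" '
    r'(?P<response>\d{3}) (?P<bytes>\d+)'
)

line = '10.0.0.1 - - [22/Sep/2016:01:51:45 +0000] "GET /v1/items HTTP/1.1" 200 512'
fields = PATTERN.match(line).groupdict()
print(fields["verb"], fields["request"], fields["response"])
```

With structured fields like these, Kibana can later aggregate by status code or URL instead of searching raw text.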
      
      

Start logstash_agent to collect logs:

 [root@localhost logstash-2.4.0]# ./bin/logstash -f config/logstash_agent.conf
 Settings: Default pipeline workers: 1
 Pipeline main started

Next, configure the Logstash indexer, which reads messages off the Redis queue and writes them into the ElasticSearch search engine, building the index.

input {
# For detail config for log4j as input,
# See: https://www.elastic.co/guide/en/logstash/
  redis {
         host => "10.10.1.34"
         port => 6879
         data_type => "list"
         key => "logstash:redis"
         type => "redis-input"
  }
}
filter {
# Only matched data are sent to output. This is where events would be filtered.
}
output {
# For detail config for elasticsearch as output,
# See: https://www.elastic.co/guide/en/logstash/current
     elasticsearch {
       action => "index"
       hosts => "10.69.218.61:9200" # ElasticSearch host; can be an array
       index => "nginxaccesslog" # the index to write data to
     }
}
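Between the two configs, each element in the logstash:redis list is one JSON-serialized event. The Python sketch below shows an illustrative event and the index action the elasticsearch output effectively performs (the field set only approximates what Logstash 2.x's file input emits; exact fields may differ):

```python
import json

# An illustrative event as it might sit in the "logstash:redis" list.
raw = json.dumps({
    "@timestamp": "2016-09-22T01:51:45.000Z",
    "@version": "1",
    "type": "nginx-access-log",
    "path": "/server/tengine/logs/api-v.dev.gomeplus.com.access.log",
    "message": '10.0.0.1 - - [22/Sep/2016:01:51:45 +0000] "GET /api HTTP/1.1" 200 512',
})

# Indexer side: decode the event and build the kind of index action the
# elasticsearch output performs (action => "index", index => "nginxaccesslog").
event = json.loads(raw)
action = {"index": {"_index": "nginxaccesslog", "_type": event["type"]}}
print(action["index"]["_index"], event["type"])
```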

Start logstash_indexer to index the queued logs:

 [root@localhost logstash-2.4.0]# ./bin/logstash -f config/logstash_indexer.conf
 Settings: Default pipeline workers: 1
 Pipeline main started

Refresh http://10.69.218.61:9200/_plugin/head/ and the nginxaccesslog index from the screenshot above should now appear. At this point we can collect logs and search them; next, let's turn the search data into visual charts.

  • Install Kibana

    Download: https://www.elastic.co/downloads, matching your setup.
    My version here is kibana-4.6.1-linux-x86.

    Extract it:

    [root@localhost elk]# tar -zxvf kibana-4.6.1-linux-x86.tar.gz
    [root@localhost elk]# cd kibana-4.6.1-linux-x86
    

    Edit the configuration file:

     [root@localhost kibana-4.6.1-linux-x86]# vim config/kibana.yml
     # Kibana is served by a back end server. This controls which port to use.
     server.port: 5601  # port for the Kibana server
     # The host to bind the server to.
     server.host: "10.69.218.61"  # host for your Kibana server
     # If you are running kibana behind a proxy, and want to mount it at a path,
     # specify that path here. The basePath can't end in a slash.
     # server.basePath: ""
     # The maximum payload size in bytes on incoming server requests.
     # server.maxPayloadBytes: 1048576
     # The Elasticsearch instance to use for all your queries.
     elasticsearch.url: "http://10.69.218.61:9200"  # the Elasticsearch host
     # preserve_elasticsearch_host true will send the hostname specified in `elasticsearch`. If you set it to false,
     # then the host you use to connect to *this* Kibana instance will be sent.
     # elasticsearch.preserveHost: true

     # Kibana uses an index in Elasticsearch to store saved searches, visualizations
     # and dashboards. It will create a new index if it doesn't already exist.
     kibana.index: ".kibana"  # index Kibana stores its own data in

     # The default application to load.
     # kibana.defaultAppId: "discover"

     # If your Elasticsearch is protected with basic auth, these are the user credentials
     # used by the Kibana server to perform maintenance on the kibana_index at startup.

    The configuration is straightforward.
    Once it's done, start it up:

    [root@localhost kibana-4.6.1-linux-x86]# ./bin/kibana
    log   [02:48:34.732] [info][status][plugin:kibana@1.0.0] Status changed from uninitialized to green - Ready
    log   [02:48:34.771] [info][status][plugin:elasticsearch@1.0.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
    log   [02:48:34.803] [info][status][plugin:kbn_vislib_vis_types@1.0.0] Status changed from uninitialized to green - Ready
    log   [02:48:34.823] [info][status][plugin:markdown_vis@1.0.0] Status changed from uninitialized to green - Ready
    log   [02:48:34.827] [info][status][plugin:metric_vis@1.0.0] Status changed from uninitialized to green - Ready
    log   [02:48:34.835] [info][status][plugin:elasticsearch@1.0.0] Status changed from yellow to green - Kibana index ready
    log   [02:48:34.840] [info][status][plugin:spyModes@1.0.0] Status changed from uninitialized to green - Ready
    log   [02:48:34.847] [info][status][plugin:statusPage@1.0.0] Status changed from uninitialized to green - Ready
    log   [02:48:34.857] [info][status][plugin:table_vis@1.0.0] Status changed from uninitialized to green - Ready
    log   [02:48:34.867] [info][listening] Server running at http://10.69.218.61:5601
    

    Open http://10.69.218.61:5601 in a browser:

    (screenshot: Kibana UI)

  • And that's it! From here on it's up to us to build customized charts. Here is one analyzing how frequently each API endpoint is called:

(screenshot: API request-frequency chart)

A little exciting, isn't it?

Please credit the source when reposting: IT世界 » Building a log analysis system with ELK + Redis