Building a Real-Time Log Collection System with ElasticSearch, LogStash, and Kibana

zhoulilan · 8 years ago

From: http://blog.csdn.net//jiao_fuyou/article/details/46694125


Introduction

  • In this system, logstash collects and processes log file content and stores it in the elasticsearch search-engine database, while kibana queries elasticsearch and presents the results on the web.
  • After the logstash shipper process harvests log file content, it first writes it to a redis buffer; a second logstash indexer process reads from redis and forwards the content to elasticsearch. This decouples the fast producers from the slower elasticsearch writes (see the sketch after this list).
  • Official online documentation: https://www.elastic.co/guide/index.html
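
A minimal sketch of that buffering step, driving redis-cli by hand (the payload is a made-up example; logstash's redis output and input perform the equivalent push and pop internally):

    # shipper side: append an event to the redis list acting as the buffer
    redis-cli rpush logstash:redis '{"message":"example event"}'
    # indexer side: block until an event is available, then pop it
    redis-cli blpop logstash:redis 0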

1. Install JDK 7

  • ElasticSearch and LogStash are both Java programs, so a JDK is required.
    Note that nodes that communicate with each other must run the same JDK version, otherwise connections between them may fail (see the check after this list).

  • Download: jdk-7u71-linux-x64.rpm
    http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html

  • rpm -ivh jdk-7u71-linux-x64.rpm

  • Configure the JDK
    Edit /etc/profile and add at the top:

    export JAVA_HOME=/usr/java/jdk1.7.0_71
    export JRE_HOME=$JAVA_HOME/jre
    export CLASSPATH=$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
    export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
  • Check the JDK environment
    Run source /etc/profile so the new environment variables take effect immediately.
    Check the installed JDK version: java -version
    Inspect the environment variables: echo $PATH
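
Since the JDK versions must match across nodes, a quick comparison can be scripted, a sketch ("es-node1" and "es-node2" are placeholder hostnames):

    # print the JDK version on each node; java -version writes to stderr
    for h in es-node1 es-node2; do
        echo -n "$h: "; ssh "$h" 'java -version 2>&1 | head -1'
    done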

2. Install elasticsearch

  • Edit config/elasticsearch.yml:

    bootstrap.mlockall: true

    index.number_of_shards: 1
    index.number_of_replicas: 0
    #index.translog.flush_threshold_ops: 100000
    #index.refresh_interval: -1
    index.translog.flush_threshold_ops: 5000
    index.refresh_interval: 1

    network.bind_host: 172.16.18.114
    # IP address published to other nodes for inter-node communication.
    # If left unset, ES picks an address by itself; other nodes may not be able
    # to reach it, and inter-node communication will then fail.
    network.publish_host: 172.16.18.114

    # Security: allow all HTTP requests
    http.cors.enabled: true
    http.cors.allow-origin: "/.*/"
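
Once the node is running (started below), the settings it actually loaded can be read back over HTTP, a sketch using the standard nodes-info API:

    # read back the effective node settings
    curl 'http://172.16.18.114:9200/_nodes/settings?pretty'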
  • Edit bin/elasticsearch:

    # make the JVM pick up the OS max-open-files limit
    es_parms="-Delasticsearch -Des.max-open-files=true"

    # Start up the service
    # raise the OS limits: open files, and locked memory (needed for mlockall)
    ulimit -n 1000000
    ulimit -l unlimited
    launch_service "$pidfile" "$daemonized" "$properties"
  • Edit bin/elasticsearch.in.sh (setting min and max to the same value allocates the whole heap up front and avoids resizing pauses):

    ......

    if [ "x$ES_MIN_MEM" = "x" ]; then
        ES_MIN_MEM=256m
    fi
    if [ "x$ES_MAX_MEM" = "x" ]; then
        ES_MAX_MEM=1g
    fi
    if [ "x$ES_HEAP_SIZE" != "x" ]; then
        ES_MIN_MEM=$ES_HEAP_SIZE
        ES_MAX_MEM=$ES_HEAP_SIZE
    fi

    # set min memory as 2g
    ES_MIN_MEM=2g
    # set max memory as 2g
    ES_MAX_MEM=2g

    ......
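
Once the node is started (next step), it is worth verifying that the JVM really picked up the 2g heap, a sketch using the nodes-info API:

    # heap_max should report roughly 2g
    curl 'http://localhost:9200/_nodes/jvm?pretty' | grep heap_max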
  • Run
    ./bin/elasticsearch -d
    Log files are written under ./logs.
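
A quick liveness check once the daemon is up (the health API is standard; with number_of_replicas set to 0 above, even a single node can reach green):

    # expect "status" : "green"
    curl 'http://localhost:9200/_cluster/health?pretty'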

  • Check node status
    curl -XGET 'http://localhost:9200/_nodes?os=true&process=true&pretty=true'

    {
      "cluster_name" : "elasticsearch",
      "nodes" : {
        "7PEaZbvxToCL2O2KuMGRYQ" : {
          "name" : "Gertrude Yorkes",
          "transport_address" : "inet[/172.16.18.116:9300]",
          "host" : "casimbak",
          "ip" : "172.16.18.116",
          "version" : "1.4.4",
          "build" : "c88f77f",
          "http_address" : "inet[/172.16.18.116:9200]",
          "settings" : {
            "index" : {
              "number_of_replicas" : "0",
              "translog" : {
                "flush_threshold_ops" : "5000"
              },
              "number_of_shards" : "1",
              "refresh_interval" : "1"
            },
            "path" : {
              "logs" : "/home/jfy/soft/elasticsearch-1.4.4/logs",
              "home" : "/home/jfy/soft/elasticsearch-1.4.4"
            },
            "cluster" : {
              "name" : "elasticsearch"
            },
            "bootstrap" : {
              "mlockall" : "true"
            },
            "client" : {
              "type" : "node"
            },
            "http" : {
              "cors" : {
                "enabled" : "true",
                "allow-origin" : "/.*/"
              }
            },
            "foreground" : "yes",
            "name" : "Gertrude Yorkes",
            "max-open-files" : "true"
          },
          "process" : {
            "refresh_interval_in_millis" : 1000,
            "id" : 13896,
            "max_file_descriptors" : 1000000,
            "mlockall" : true
          },
          ...
        }
      }
    }
  • This shows that ElasticSearch is running and that its reported state matches the configuration:

    "index": {
        "number_of_replicas": "0",
        "translog": {
            "flush_threshold_ops": "5000"
        },
        "number_of_shards": "1",
        "refresh_interval": "1"
    },

    "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 13896,
        "max_file_descriptors" : 1000000,
        "mlockall" : true
    },
  • Install the head plugin to browse and manipulate elasticsearch
    elasticsearch/bin/plugin -install mobz/elasticsearch-head
    http://172.16.18.116:9200/_plugin/head/
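
To have something to look at in head, a throwaway document can be indexed first (a sketch; "test" is a scratch index invented for this example):

    # index a test document, then fetch it back
    curl -XPUT 'http://172.16.18.116:9200/test/doc/1' -d '{"msg": "hello head"}'
    curl 'http://172.16.18.116:9200/test/doc/1?pretty'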

  • Install the marvel plugin to monitor elasticsearch status
    elasticsearch/bin/plugin -i elasticsearch/marvel/latest
    http://172.16.18.116:9200/_plugin/marvel/

3. Install logstash

  • logstash is a program for collecting, processing, and filtering logs.

  • LogStash runs here as two kinds of processes: a shipper and an indexer. The shipper watches multiple log files and pushes their content to a redis queue in real time; the indexer pops the buffered content from the redis queue and writes it to ElasticSearch for storage. Shipper processes run on the servers that produce the log files; the indexer process runs on the same server as redis and elasticsearch.

  • Download
    wget https://download.elasticsearch.org/logstash/logstash/logstash-1.4.2.tar.gz

  • Installing and configuring redis is omitted here, but be sure to monitor the length of the redis queue: if entries keep piling up for a long time, elasticsearch is in trouble.
    Check the length of the redis list every 2 seconds, 100 times:
    redis-cli -r 100 -i 2 llen logstash:redis
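
For unattended monitoring, the same check can be wrapped in a small watchdog, a sketch (the 10000 threshold is an arbitrary example value):

    #!/bin/bash
    # warn when the logstash:redis backlog exceeds a threshold
    THRESHOLD=10000   # example value; tune to your traffic
    LEN=$(redis-cli llen logstash:redis)
    if [ "$LEN" -gt "$THRESHOLD" ]; then
        echo "$(date) logstash:redis backlog is $LEN entries; check elasticsearch" >&2
    fi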

  • Configure the Logstash shipper process
    vi ./lib/logstash/config/shipper.conf

input {
    #file {
    #    type => "mysql_log"
    #    path => "/usr/local/mysql/data/localhost.log"
    #    codec => plain{
    #        charset => "GBK"
    #    }
    #}
    file {
        type => "hostapd_log"
        path => "/root/hostapd/hostapd.log"
        sincedb_path => "/home/jfy/soft/logstash-1.4.2/sincedb_hostapd.access"
        #start_position => "beginning"
        #http://logstash.net/docs/1.4.2/codecs/plain
        codec => plain{
            charset => "GBK"
        }
    }
    file {
        type => "hkt_log"
        path => "/usr1/app/log/bsapp.tr"
        sincedb_path => "/home/jfy/soft/logstash-1.4.2/sincedb_hkt.access"
        start_position => "beginning"
        codec => plain{
            charset => "GBK"
        }
    }
    #stdin {
    #    type => "hostapd_log"
    #}
}

#filter {
#    grep {
#        match => [ "@message", "mysql|GET|error" ]
#    }
#}

output {
    redis {
        host => '172.16.18.116'
        data_type => 'list'
        key => 'logstash:redis'
        #codec => plain{
        #    charset => "UTF-8"
        #}
    }
    #elasticsearch {
    #    #embedded => true
    #    host => "172.16.18.116"
    #}
}
  • Run the shipper process
    ./bin/logstash agent -f ./lib/logstash/config/shipper.conf
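
Before pointing the shipper at real files, logstash can be smoke-tested with an inline config, a sketch (-e passes the config as a string):

    # type one line on stdin; the parsed event is printed to stdout
    echo 'hello logstash' | ./bin/logstash agent -e 'input { stdin { } } output { stdout { codec => rubydebug } }'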

  • Configure the Logstash indexer process
    vi ./lib/logstash/config/indexer.conf

    input {
        redis {
            host => '127.0.0.1'
            data_type => 'list'
            key => 'logstash:redis'
            #threads => 10
            #batch_count => 1000
        }
    }

    output {
        elasticsearch {
            #embedded => true
            host => "localhost"
            #workers => 10
        }
    }
  • Run the indexer process
    ./bin/logstash agent -f ./lib/logstash/config/indexer.conf
    The indexer pops the buffered log content from redis and writes it to ElasticSearch for storage.
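
To confirm events are flowing end to end, watch the queue drain while the stored-event count grows (a sketch; logstash writes to daily logstash-YYYY.MM.dd indices by default):

    # the redis backlog should drain toward 0 while the indexer runs
    redis-cli llen logstash:redis
    # the number of events stored in elasticsearch should keep growing
    curl 'http://localhost:9200/logstash-*/_count?pretty'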

4. Install kibana

  • kibana is the web front end for the elasticsearch search engine: a set of JavaScript served by a webserver. It can build complex query filters against elasticsearch and display the results in several forms (tables, charts).
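
Under the hood kibana just queries elasticsearch; a hand-written rough equivalent of a simple type filter, as a sketch (field names taken from the shipper config above):

    # search events of one type, like typing "type:hostapd_log" in kibana's query box
    curl 'http://172.16.18.116:9200/logstash-*/_search?q=type:hostapd_log&pretty'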

  • Download
    wget https://download.elasticsearch.org/kibana/kibana/kibana-3.1.2.tar.gz
    After unpacking, put the kibana directory somewhere the webserver can serve it.

  • Configure
    Edit kibana/config.js:

If kibana and elasticsearch are not on the same machine, change:

    elasticsearch: "http://192.168.91.128:9200",

This is the address the browser itself connects to in order to reach elasticsearch. Otherwise keep the default; do not change it.

If you see "connection failed" errors, edit elasticsearch/config/elasticsearch.yml and add:

    http.cors.enabled: true
    http.cors.allow-origin: "/.*/"

For details on these settings, see:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-http.html
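
Whether CORS is actually enabled can be checked from the command line, a sketch ("http://my-kibana-host" is a placeholder origin):

    # with CORS enabled, the response includes an Access-Control-Allow-Origin header
    curl -i -H 'Origin: http://my-kibana-host' 'http://192.168.91.128:9200/'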