Deploying Logstash on Linux
2021/8/23 7:28:37
This article walks through deploying Logstash on Linux and should serve as a practical reference for anyone setting up the ELK stack.
1 Download
[root@localhost ~]# cd /home/elk
1.1 ELK 7.8.1
[root@localhost elk]# wget https://artifacts.elastic.co/downloads/logstash/logstash-7.8.1.tar.gz
[root@localhost elk]# wget https://artifacts.elastic.co/downloads/kibana/kibana-7.8.1-linux-x86_64.tar.gz
[root@localhost elk]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.8.1-linux-x86_64.tar.gz
1.2 ELK 7.6.2
[root@localhost elk]# wget https://artifacts.elastic.co/downloads/logstash/logstash-7.6.2.tar.gz
[root@localhost elk]# wget https://artifacts.elastic.co/downloads/kibana/kibana-7.6.2-linux-x86_64.tar.gz
[root@localhost elk]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.6.2-linux-x86_64.tar.gz
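Elastic publishes a .sha512 file alongside each tarball, so the downloads can be verified before extraction. A minimal sketch for the Logstash archive (the same pattern applies to the Kibana and Elasticsearch tarballs):

```shell
# Fetch the published checksum and verify the tarball against it
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.8.1.tar.gz.sha512
sha512sum -c logstash-7.8.1.tar.gz.sha512   # prints "logstash-7.8.1.tar.gz: OK" on success
```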
2 Install Logstash
2.1 Extract
[root@localhost elk]# tar -zxvf logstash-7.8.1.tar.gz -C /home/elk/
[root@localhost elk]# cd logstash-7.8.1/
2.2 Modify the configuration
2.2.1 Adjust the heap size in jvm.options
[root@localhost logstash-7.8.1]# vi config/jvm.options
#-Xms1g
#-Xmx1g
-Xms512m
-Xmx512m
2.2.2 Enable remote access in logstash.yml
[root@localhost logstash-7.8.1]# vi config/logstash.yml
Note: this is the http.host setting under Metrics Settings.
# ------------ Metrics Settings --------------
#
# Bind address for the metrics REST endpoint
#
# http.host: "127.0.0.1"
http.host: "0.0.0.0"
#
# Bind port for the metrics REST endpoint, this option also accept a range
# (9600-9700) and logstash will pick up the first available ports.
#
# http.port: 9600-9700
2.2.3 Query the Kafka topics to subscribe to
# Create a topic with kafka-topics.sh
[root@localhost kafka]# docker exec -it kafka_kafka_1 /bin/bash
bash-4.4# cd bin/
bash-4.4# kafka-topics.sh --create --zookeeper 192.168.56.13:2181 --replication-factor 1 --partitions 1 --topic springboot-lifecycle

# List the topics of the containerized Kafka
[root@localhost kafka]# docker exec -it kafka_kafka_1 /bin/bash
bash-4.4# cd bin/
bash-4.4# kafka-topics.sh --zookeeper 192.168.56.13:2181 --list
gsdss-boss
gsdss-test
bash-4.4# exit
exit
[root@localhost kafka]#
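Before wiring a topic into Logstash, it can help to confirm it actually accepts messages. A sketch using the console producer/consumer that ship with Kafka; the container name and broker address come from this setup, the message text is hypothetical, and the scripts are assumed to be on the container's PATH (otherwise cd into bin/ first as above):

```shell
# Send one test message to the topic the pipeline will consume
echo "test message" | docker exec -i kafka_kafka_1 \
  kafka-console-producer.sh --broker-list 192.168.56.13:9092 --topic gsdss-test

# Read it back (Ctrl+C to stop the consumer)
docker exec -it kafka_kafka_1 \
  kafka-console-consumer.sh --bootstrap-server 192.168.56.13:9092 --topic gsdss-test --from-beginning
```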
2.2.4 Create the pipeline
[root@localhost logstash-7.8.1]# vi bin/logstash.conf
input {
  kafka {
    type => "gsdss-test"
    id => "gsdss-test"
    bootstrap_servers => ["192.168.56.13:9092"]
    topics => ["gsdss-test"]
    auto_offset_reset => "latest"
  }
  kafka {
    type => "springboot-lifecycle"
    id => "springboot-lifecycle"
    bootstrap_servers => ["192.168.56.13:9092"]
    topics => ["springboot-lifecycle"]
    auto_offset_reset => "latest"
  }
}
filter {
  if [type] == "springboot-lifecycle" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:logTime} %{GREEDYDATA:logThread} %{LOGLEVEL:logLevel} %{GREEDYDATA:loggerClass} - %{GREEDYDATA:logContent}" }
    }
  }
}
output {
  if [type] == "gsdss-test" {
    elasticsearch {
      hosts => ["192.168.56.13:9200", "192.168.56.13:9201", "192.168.56.13:9202"]
      index => "gsdss-test-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "springboot-lifecycle" {
    elasticsearch {
      hosts => ["192.168.56.13:9200", "192.168.56.13:9201", "192.168.56.13:9202"]
      index => "springboot-lifecycle-%{+YYYY.MM.dd}"
    }
  }
}
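Before starting, the pipeline can be syntax-checked without running it (--config.test_and_exit is a standard Logstash flag). A rough local grep can also sanity-check that log lines have the shape the grok pattern expects; the sample line and the approximating regex are assumptions, not the grok engine itself:

```shell
# Syntax-check the pipeline without starting it
./bin/logstash -f ./bin/logstash.conf --config.test_and_exit

# Rough local check that a log line matches the grok layout
# "%{TIMESTAMP_ISO8601} thread LEVEL logger - message" (sample line is hypothetical)
line='2020-08-18 09:44:48,529 [main] INFO com.example.Demo - application started'
echo "$line" | grep -Eq '^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9:,.]+ .+ (TRACE|DEBUG|INFO|WARN|ERROR) .+ - .+' \
  && echo "line matches expected layout"
```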
2.3 Start
# Start in the background
[root@localhost logstash-7.8.1]# nohup ./bin/logstash -f ./bin/logstash.conf &
# Watch the console output
[root@localhost logstash-7.8.1]# tail -f nohup.out
# Stop Logstash by sending SIGTERM to its process
[root@localhost logstash-7.8.1]# kill -TERM {logstash_pid}
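The kill command above needs the Logstash PID filled in. One way to locate it; the pgrep pattern is an assumption and may need tightening if other processes mention "logstash":

```shell
# Locate the Logstash java process and stop it gracefully
logstash_pid=$(pgrep -f 'logstash' | head -n 1)
[ -n "$logstash_pid" ] && kill -TERM "$logstash_pid"
```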
2.4 Access
Open http://192.168.56.13:9600 in a browser:
{
  "host": "localhost.localdomain",
  "version": "7.8.1",
  "http_address": "0.0.0.0:9600",
  "id": "19b40881-c2f0-4e21-8ef5-74446632ea98",
  "name": "localhost.localdomain",
  "ephemeral_id": "b1cf5bed-14c4-4cf1-90d5-4b3b437e8656",
  "status": "green",
  "snapshot": false,
  "pipeline": {
    "workers": 2,
    "batch_size": 125,
    "batch_delay": 50
  },
  "build_date": "2020-07-21T19:19:46+00:00",
  "build_sha": "5dcccb963be4c163647232fe4b67bdf4b8efc2cb",
  "build_snapshot": false
}
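The same endpoint is scriptable from the shell; grep keeps it dependency-free. The host and port are taken from this setup, and /_node/stats/pipelines is part of Logstash's standard monitoring API:

```shell
# Overall node status from the root endpoint
curl -s http://192.168.56.13:9600 | grep -o '"status":"[a-z]*"'

# Per-pipeline event counters via the monitoring API
curl -s 'http://192.168.56.13:9600/_node/stats/pipelines?pretty'
```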
2.5 Fix insufficient Elasticsearch shards
If the following error appears, the Elasticsearch cluster has run out of shards: Elasticsearch 7 and later allows only 1000 shards per node by default, and the cluster has hit that limit:
[2020-08-18T09:44:48,529][WARN ][logstash.outputs.elasticsearch][main] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"gsdss-boss-2020.08.18", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x16168ab8>], :response=>{"index"=>{"_index"=>"gsdss-boss-2020.08.18", "_type"=>"_doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"validation_exception", "reason"=>"Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [1243]/[1000] maximum shards open;"}}}}
Solution:
Raise the cluster's shard limit by running the following command in the Head plugin or Kibana's Dev Tools:
PUT /_cluster/settings
{
  "transient": {
    "cluster": {
      "max_shards_per_node": 10000
    }
  }
}

Or via curl:

[root@elk logstash-7.8.1]# curl -XPUT -H "Content-Type:application/json" http://172.18.56.13:9200/_cluster/settings -d '{"transient":{"cluster":{"max_shards_per_node":10000}}}'
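After raising the limit, the standard cluster APIs can confirm both the current shard count and the applied setting (host taken from the curl example above):

```shell
# How many shards are currently open
curl -s 'http://172.18.56.13:9200/_cluster/health?pretty' | grep active_shards

# Confirm the transient setting took effect
curl -s 'http://172.18.56.13:9200/_cluster/settings?pretty' | grep max_shards_per_node
```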