Collecting k8s Logs with ELFK (Part 2)

2021/4/25 10:56:35


This article describes collecting k8s logs with ELK + Filebeat, with Filebeat deployed as a sidecar. ELFK version used (latest at the time of writing): 7.6.2.


k8s Log Collection Options

  • Three log collection options:

1. Deploy one log collector per node

    The collector is deployed as a DaemonSet and harvests the logs under each node's /var/log and /var/lib/docker/containers directories.

2. Deploy the log collector as a sidecar

    A log-collecting container is attached to every application pod, and an emptyDir volume shares the log directory between the application and the collector (see the sketch after this list).

3. Have the application push its logs directly

    A common example is graylog: the application code is modified to push logs straight to ES, and the logs are then displayed in graylog.
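A minimal sketch of option 2's shared-emptyDir pattern (the pod name and application image are placeholders for illustration; complete, working manifests follow later in this article):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar        # hypothetical name, for illustration only
spec:
  containers:
  - name: app
    image: my-app:latest            # placeholder application image
    volumeMounts:
    - name: logs
      mountPath: /var/log/app       # the application writes its log files here
  - name: log-collector
    image: docker.elastic.co/beats/filebeat-oss:7.6.2
    volumeMounts:
    - name: logs
      mountPath: /logdata           # the collector sees the same files here
  volumes:
  - name: logs
    emptyDir: {}                    # shared scratch volume, removed together with the pod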

  • Pros and cons of the three options:

| Option | Pros | Cons |
| --- | --- | --- |
| 1. One log collector per node | A single collector per node, low resource consumption, no intrusion into the application | Application logs must be written to stdout/stderr; multi-line logs are not supported |
| 2. Log-collecting container attached to each pod | Low coupling | One extra collection container per pod, higher resource consumption |
| 3. Application pushes logs directly | No extra collection tool needed | Intrudes into the application and increases its complexity |

Below we test option 2: attaching a filebeat sidecar container to each application pod. Note that the ELFK component versions must stay consistent.


Collecting k8s Logs with a Sidecar

  • Hosts:

| OS | IP | Role | CPU | Memory | Hostname |
| --- | --- | --- | --- | --- | --- |
| CentOS 7.8 | 192.168.30.128 | master, deploy | >=2 | >=2G | master1 |
| CentOS 7.8 | 192.168.30.129 | master | >=2 | >=2G | master2 |
| CentOS 7.8 | 192.168.30.130 | node | >=2 | >=2G | node1 |
| CentOS 7.8 | 192.168.30.131 | node | >=2 | >=2G | node2 |
| CentOS 7.8 | 192.168.30.132 | node | >=2 | >=2G | node3 |
  • Set up the k8s cluster:

The setup itself is omitted here; see the earlier articles on setting up a k8s cluster with kubeadm or from binaries.

Once the setup completes, check the cluster:

kubectl get nodes

NAME      STATUS   ROLES    AGE     VERSION
master1   Ready    master   4d16h   v1.14.0
master2   Ready    master   4d16h   v1.14.0
node1     Ready    <none>   4d16h   v1.14.0
node2     Ready    <none>   4d16h   v1.14.0
node3     Ready    <none>   4d16h   v1.14.0

For convenience, the k8s cluster from earlier articles is reused here; be sure to delete the k8s resource objects left over from previous experiments.
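If the manifests from those experiments are still on disk, cleanup is one command per file; otherwise delete leftover objects by name (a sketch; old-elk.yaml is a hypothetical file name):

kubectl delete -f old-elk.yaml          # remove everything a previous manifest created
kubectl delete deploy,svc,cm <name>     # or delete individual leftover objects by name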

  • Deploy the ES cluster:

mkdir /elfk && cd /elfk

vim elasticsearch.yaml

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: default
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
  - name: api
    port: 9200
  - name: discovery
    port: 9300
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: default
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.6.2
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: api
          protocol: TCP
        - containerPort: 9300
          name: discovery
          protocol: TCP
        env:
        - name: "http.host"
          value: "0.0.0.0"
        - name: "network.host"
          value: "_eth0_"
        - name: "cluster.name"
          value: "es-cluster"
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: "bootstrap.memory_lock"
          value: "false"
        - name: "discovery.seed_hosts"
          value: "elasticsearch"
        - name: "cluster.initial_master_nodes"
          value: "elasticsearch-0,elasticsearch-1,elasticsearch-2"
        - name: "discovery.seed_resolver.timeout"
          value: "10s"
        - name: "discovery.zen.minimum_master_nodes"
          value: "2"
        - name: "ES_JAVA_OPTS"
          value: "-Xms512m -Xmx512m"
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      terminationGracePeriodSeconds: 30
      volumes:
      - name: data
        hostPath:
          path: /home/elasticsearch/data    # ES data directory on the host; created automatically
      initContainers:
      - name: fix-permissions
        image: busybox
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      - name: increase-vm-max-map
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true

To save time, pulling the elasticsearch image on all node machines in advance is recommended: docker pull docker.elastic.co/elasticsearch/elasticsearch-oss:7.6.2
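A small loop from the deploy machine covers all nodes at once; a sketch, assuming passwordless ssh from master1 to the nodes:

for node in node1 node2 node3; do
  ssh $node "docker pull docker.elastic.co/elasticsearch/elasticsearch-oss:7.6.2"
done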

kubectl apply -f elasticsearch.yaml

kubectl get pod

NAME              READY   STATUS    RESTARTS   AGE
elasticsearch-0   1/1     Running   0          110s
elasticsearch-1   1/1     Running   0          97s
elasticsearch-2   1/1     Running   0          83s

kubectl get sts

NAME            READY   AGE
elasticsearch   3/3     2m13s
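At this point the three pods should have discovered each other and formed the es-cluster named in cluster.initial_master_nodes; a quick health check, assuming curl is available inside the elasticsearch image:

kubectl exec elasticsearch-0 -- curl -s 'http://localhost:9200/_cluster/health?pretty'
# expect "cluster_name" : "es-cluster", "status" : "green" and "number_of_nodes" : 3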

  • Deploy kibana:

For convenience, a NodePort Service is used here to expose the kibana port.

vim kibana.yaml

apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: default
  labels:
    app: kibana
spec:
  selector:
    app: kibana
  ports:
  - port: 5601
    nodePort: 30080
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: default
  labels:
    app: kibana
spec:
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana-oss:7.6.2
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
        - name: ELASTICSEARCH_HOSTS
          value: "http://elasticsearch:9200"
        ports:
        - containerPort: 5601

kubectl apply -f kibana.yaml

kubectl get pod |grep kibana

kibana-84d7449d95-jg5nt   1/1     Running   0          29s

kubectl get deploy |grep kibana

kibana   1/1     1            1           53s

If the deployment went well, the kibana page should be reachable; open ip:30080 in a browser.

(screenshot: the kibana start page)
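kibana can also be probed from the command line through its status API; a quick check, assuming 192.168.30.130 (node1 from the host table) as the node IP:

curl -s http://192.168.30.130:30080/api/status
# JSON containing "state":"green" indicates kibana is up and connected to ES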

Next, deploy an application and collect its logs.

  • Deploy filebeat as a sidecar to collect nginx logs:
vim filebeat-nginx.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    nodePort: 30090
  type: NodePort
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: default
  labels:
    app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        path: ${path.config}/inputs.d/*.yml
        reload.enabled: false

      modules:
        path: ${path.config}/modules.d/*.yml
        reload.enabled: false
    filebeat.inputs:
    - type: log
      paths:
        - /logdata/*.log
      tail_files: true
      fields:
        pod_name: '${pod_name}'
        POD_IP: '${POD_IP}'
    setup.template.name: "nginx-logs"
    setup.template.pattern: "nginx-logs-*"

    output.elasticsearch:
      hosts: ["elasticsearch:9200"]
      index: "nginx-logs"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
spec:
  replicas: 1
  minReadySeconds: 15
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat-oss:7.6.2
        args: [
          "-c", "/etc/filebeat/filebeat.yml",
          "-e",
        ]
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: pod_name
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 200m
            memory: 200Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat/
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: logdata
          mountPath: /logdata
      - name: nginx
        image: nginx:1.17.0
        ports:
        - containerPort: 80
        volumeMounts:
        - name: logdata
          mountPath: /var/log/nginx
      volumes:
      - name: data
        emptyDir: {}
      - name: logdata
        emptyDir: {}
      - name: config
        configMap:
          name: filebeat-config
          items:
          - key: filebeat.yml
            path: filebeat.yml

kubectl apply -f filebeat-nginx.yaml

kubectl get pod |grep nginx

nginx-865f745bdd-q6xwm    2/2     Running   0          16s

kubectl describe pod nginx-865f745bdd-q6xwm

  Normal  Scheduled  51s   default-scheduler  Successfully assigned default/nginx-865f745bdd-q6xwm to node2
  Normal  Pulled     50s   kubelet, node2     Container image "docker.elastic.co/beats/filebeat-oss:7.6.2" already present on machine
  Normal  Created    50s   kubelet, node2     Created container filebeat
  Normal  Started    50s   kubelet, node2     Started container filebeat
  Normal  Pulled     50s   kubelet, node2     Container image "nginx:1.17.0" already present on machine
  Normal  Created    50s   kubelet, node2     Created container nginx
  Normal  Started    50s   kubelet, node2     Started container nginx
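Before generating traffic, it is worth confirming that the sidecar actually started shipping; check the logs of the filebeat container (the container name comes from the Deployment above):

kubectl logs nginx-865f745bdd-q6xwm -c filebeat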

Access the nginx page to generate some logs: ip:30090

(screenshot: the nginx welcome page)
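The ES side can confirm that the nginx-logs index was created; a minimal check, again assuming curl is available inside the elasticsearch image:

kubectl exec elasticsearch-0 -- curl -s 'http://localhost:9200/_cat/indices?v'
# expect an nginx-logs line whose docs.count grows as the page is refreshed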

  • Create the index pattern in kibana and view the nginx logs:

(screenshot: creating the nginx-logs index pattern in kibana)

You can see the index is exactly the one specified in the filebeat config, nginx-logs. After adding the available field log.file.path, the source file of each log line is displayed.

(screenshot: nginx log entries in kibana with log.file.path shown)

Because the filebeat config above collects the whole directory (/var/log/nginx mounted at /logdata, pattern /logdata/*.log), you can also point an input at a single log file (a concrete path) and give it its own index.
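For example, an input that picks up only the access log under a dedicated index might look like this (a sketch; nginx-access is a hypothetical index name):

filebeat.inputs:
- type: log
  paths:
    - /logdata/access.log            # only nginx's access.log, mounted from /var/log/nginx
  tail_files: true

setup.template.name: "nginx-access"
setup.template.pattern: "nginx-access-*"

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  index: "nginx-access"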

  • Deploy filebeat as a sidecar to collect tomcat logs:
vim filebeat-tomcat.yaml

apiVersion: v1
kind: Service
metadata:
  name: tomcat
  namespace: default
  labels:
    app: tomcat
spec:
  selector:
    app: tomcat
  ports:
  - port: 8080
    nodePort: 30100
  type: NodePort
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config-tomcat
  namespace: default
  labels:
    app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        path: ${path.config}/inputs.d/*.yml
        reload.enabled: false

      modules:
        path: ${path.config}/modules.d/*.yml
        reload.enabled: false
    filebeat.inputs:
    - type: log
      paths:
        - /logdata/*.log
      tail_files: true
      fields:
        pod_name: '${pod_name}'
        POD_IP: '${POD_IP}'
    setup.template.name: "tomcat-logs"
    setup.template.pattern: "tomcat-logs-*"

    output.elasticsearch:
      hosts: ["elasticsearch:9200"]
      index: "tomcat-logs"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
  namespace: default
spec:
  replicas: 1
  minReadySeconds: 15
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat-oss:7.6.2
        args: [
          "-c", "/etc/filebeat/filebeat.yml",
          "-e",
        ]
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: pod_name
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 200m
            memory: 200Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat/
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: logdata
          mountPath: /logdata
      - name: tomcat
        image: tomcat:8.0.51-alpine
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: logdata
          mountPath: /usr/local/tomcat/logs
      volumes:
      - name: data
        emptyDir: {}
      - name: logdata
        emptyDir: {}
      - name: config
        configMap:
          name: filebeat-config-tomcat
          items:
          - key: filebeat.yml
            path: filebeat.yml

kubectl apply -f filebeat-tomcat.yaml

kubectl get pod |grep tomcat

tomcat-5c7b6644f4-9hslh   2/2     Running   0          16s

kubectl describe pod tomcat-5c7b6644f4-9hslh

  Normal  Scheduled  34s   default-scheduler  Successfully assigned default/tomcat-5c7b6644f4-9hslh to node1
  Normal  Pulled     33s   kubelet, node1     Container image "docker.elastic.co/beats/filebeat-oss:7.6.2" already present on machine
  Normal  Created    33s   kubelet, node1     Created container filebeat
  Normal  Started    33s   kubelet, node1     Started container filebeat
  Normal  Pulled     33s   kubelet, node1     Container image "tomcat:8.0.51-alpine" already present on machine
  Normal  Created    33s   kubelet, node1     Created container tomcat
  Normal  Started    33s   kubelet, node1     Started container tomcat

Access the tomcat page to generate some logs: ip:30100

(screenshot: the tomcat welcome page)

  • View the tomcat logs in kibana:

(screenshot: creating the tomcat-logs index pattern in kibana)

Again, the index is exactly the one specified in the filebeat config, tomcat-logs. After adding the available field log.file.path, the source file of each log line is displayed.

(screenshot: tomcat log entries in kibana with log.file.path shown)

Because the filebeat config above collects /usr/local/tomcat/logs/*.log, you can likewise point an input at a single log file (a concrete path) and give it its own index.
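Tomcat is also where filebeat's multiline support pays off: Java stack traces span many lines, and a sidecar filebeat can stitch them into a single event. A sketch, assuming catalina log lines start with a date such as 17-May-2020:

filebeat.inputs:
- type: log
  paths:
    - /logdata/catalina.*.log                     # hypothetical: only the catalina logs
  multiline.pattern: '^\d{2}-[A-Za-z]{3}-\d{4}'   # lines not starting with a date...
  multiline.negate: true
  multiline.match: after                          # ...are appended to the previous event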

  • Summary:

Collecting k8s logs with a filebeat sidecar, as above, is more direct than deploying filebeat as a per-node log agent: concrete paths and custom indexes can be specified per application, which also makes the logs easier to browse. The downside is the extra resource consumption, which stays within an acceptable range.

Note that the ConfigMaps filebeat uses for different applications should not share a name, to avoid conflicts and accidental deletion.

For convenience, filebeat shipped logs straight to ES above. You could instead deploy a logstash in the cluster and have filebeat send to logstash for processing before the logs reach ES; that pipeline is not demonstrated in full here.
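In that setup only filebeat's output section changes; filebeat supports a single output, so output.elasticsearch is replaced rather than supplemented:

output.logstash:
  hosts: ["logstash:5044"]     # the ClusterIP Service defined in the logstash.yaml below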

logstash.yaml (for reference):

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  namespace: default
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }

    filter {
      #multiline {
        #pattern => "^\d{4}-\d{1,2}-\d{1,2}\s\d{1,2}:\d{1,2}:\d{1,2}"
        #negate => true
        #what => "previous"
      #}
      grok {
        match => [ "message", "%{TIMESTAMP_ISO8601:logtime} %{LOGLEVEL:level}" ]
      }
    }

    output {
      elasticsearch {
        hosts => ["elasticsearch:9200"]
        index => "your-index-%{+YYYY.MM.dd}"
      }
    }
---
kind: Service
apiVersion: v1
metadata:
  name: logstash
  namespace: default
spec:
  selector:
    app: logstash
  ports:
  - protocol: TCP
    port: 5044
    targetPort: 5044
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
  namespace: default
spec:
  selector:
    matchLabels:
      app: logstash
  replicas: 1
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash-oss:7.6.2
        ports:
        - containerPort: 5044
        volumeMounts:
        - name: config
          mountPath: /usr/share/logstash/config
        - name: pipeline
          mountPath: /usr/share/logstash/pipeline
      volumes:
      - name: config
        configMap:
          name: logstash-config
          items:
          - key: logstash.yml
            path: logstash.yml
      - name: pipeline
        configMap:
          name: logstash-config
          items:
          - key: logstash.conf
            path: logstash.conf

kubectl get pod |grep logstash

logstash-6b475db5f6-mxxsj   1/1     Running   0          26s

kubectl logs -f logstash-6b475db5f6-mxxsj

[INFO ] 2020-05-17 04:46:22.488 [[main]-pipeline-manager] beats - Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[INFO ] 2020-05-17 04:46:22.501 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2020-05-17 04:46:22.532 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2020-05-17 04:46:22.615 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2020-05-17 04:46:22.639 [[main]<beats] Server - Starting server on port: 5044

kubectl get svc |grep logstash

logstash        ClusterIP   10.96.103.207    <none>        5044/TCP            1m34s

kubectl get pod |grep nginx

nginx-865f745bdd-q6xwm      2/2     Running   0          62m

kubectl exec -it nginx-865f745bdd-q6xwm bash

[root@nginx-865f745bdd-q6xwm filebeat]# yum install -y telnet
[root@nginx-865f745bdd-q6xwm filebeat]# telnet logstash 5044
Trying 10.96.103.207...
Connected to logstash.
Escape character is '^]'.

The basic deployment works, and the port test from the filebeat container succeeds; the further logstash configuration for filtering and processing the logs is omitted here.
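As one possible next step for that filtering, a date filter could map the grok-extracted logtime onto @timestamp, so kibana orders events by the application's own time (a sketch, not tested here):

filter {
  grok {
    match => [ "message", "%{TIMESTAMP_ISO8601:logtime} %{LOGLEVEL:level}" ]
  }
  date {
    match  => [ "logtime", "ISO8601" ]
    target => "@timestamp"
  }
}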

As mentioned before, with filebeat deployed as a sidecar it does not matter whether the elk components live inside or outside the k8s cluster, as long as they are reachable. The elk components above ran inside the cluster; next, elk will be deployed outside the k8s cluster to test log collection.
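With elk outside the cluster, the only change on the filebeat side is again the output address; a sketch, reusing one of the host IPs from the table above as a stand-in for an external ES:

output.elasticsearch:
  hosts: ["192.168.30.128:9200"]   # example: an ES instance reachable outside the cluster
  index: "nginx-logs"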





