Kubernetes StatefulSet with ZooKeeper


This article describes how to deploy a ZooKeeper ensemble on Kubernetes using a StatefulSet, and is intended as a practical reference for running ZooKeeper on a Kubernetes cluster.

Introduction

Official documentation: https://kubernetes.io/zh/docs/tutorials/stateful-application/zookeeper/

Before deploying, you should be familiar with the following Kubernetes concepts:

    • Pods
    • Cluster DNS
    • Headless Services
    • PersistentVolumes
    • PersistentVolume Provisioning
    • StatefulSets
    • PodDisruptionBudgets
    • PodAntiAffinity
    • kubectl CLI

According to the official tutorial, deploying ZooKeeper on a Kubernetes cluster has the following requirement:

    • A cluster of at least four nodes, each with at least 2 CPUs and 4 GiB of memory

If you deploy the ZooKeeper cluster on a cloud platform, you also need to understand the cloud vendor's persistent storage offering and how it integrates with the Kubernetes cluster.
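
For step 1 of the deployment below (creating the StorageClass), the manifest in this article references a class named alicloud-disk-essd, which on Alibaba Cloud ACK is usually available out of the box. If you need to define it yourself, a minimal sketch might look like the following; the provisioner name and parameters are assumptions based on the Alibaba Cloud disk CSI driver, so check your own cloud vendor's documentation for the exact values.

    # storageclass.yaml -- illustrative sketch only
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: alicloud-disk-essd
    provisioner: diskplugin.csi.alibabacloud.com   # assumed: Alibaba Cloud disk CSI driver
    parameters:
      type: cloud_essd                             # assumed: ESSD cloud disk type
    reclaimPolicy: Retain
    allowVolumeExpansion: true
    volumeBindingMode: WaitForFirstConsumer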

Configuration Process

The overall deployment is broken down into the following steps (split out here to make the official walkthrough more granular). The example.yaml provided in the official documentation is missing some pieces, which are pointed out one by one in the manifest below (step 1 is covered by the StorageClass sketch above, step 2 is handled by the volumeClaimTemplates inside the StatefulSet, and steps 3–5 are combined in the single zookeeper.yaml below):

  1. Create the StorageClass
  2. Create the PersistentVolumeClaim
  3. Create the Headless Service
  4. Create the PodDisruptionBudget
  5. Create the StatefulSet
    # zookeeper.yaml
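    # pre-zk-hs: headless Service (clusterIP: None) that gives each ZooKeeper Pod a stable
    # DNS name for quorum (2888) and leader-election (3888) traffic between ensemble members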
    apiVersion: v1
    kind: Service
    metadata:
      name: pre-zk-hs
      namespace: pre
      labels:
        app: pre-zk
    spec:
      ports:
      - port: 2888
        name: server
      - port: 3888
        name: leader-election
      clusterIP: None
      selector:
        app: pre-zk
    ---
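    # pre-zk-cs: regular ClusterIP Service that client applications use to reach ZooKeeper on 2181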
    apiVersion: v1
    kind: Service
    metadata:
      name: pre-zk-cs
      namespace: pre
      labels:
        app: pre-zk
    spec:
      ports:
      - port: 2181
        name: client
      selector:
        app: pre-zk
    ---
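    # PodDisruptionBudget: allow at most one pre-zk Pod to be unavailable during voluntary
    # disruptions (e.g. node drains), so the three-member ensemble always keeps quorum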
    apiVersion: policy/v1beta1
    kind: PodDisruptionBudget
    metadata:
      name: pre-zk-pdb
      namespace: pre
    spec:
      selector:
        matchLabels:
          app: pre-zk
      maxUnavailable: 1
    ---
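    # StatefulSet: a three-replica ZooKeeper ensemble with ordered Pod creation (OrderedReady),
    # rolling updates, and podAntiAffinity to spread the members across different nodes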
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: pre-zk
      namespace: pre
    spec:
      selector:
        matchLabels:
          app: pre-zk
      serviceName: pre-zk-hs
      replicas: 3
      updateStrategy:
        type: RollingUpdate
      podManagementPolicy: OrderedReady
      template:
        metadata:
          labels:
            app: pre-zk
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: kubernetes.io/resource
                    operator: In
                    values:
                    - pre-base
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                - labelSelector:
                    matchExpressions:
                      - key: "app"
                        operator: In
                        values:
                        - pre-zk
                  topologyKey: "kubernetes.io/hostname"
          containers:
          - name: pre-zk
            imagePullPolicy: Always
            image: "k8s.gcr.io/kubernetes-zookeeper:1.0-3.4.10"
            resources:
              requests:
                memory: "1Gi"
                cpu: "0.5"
              limits:
                cpu: "1"
                memory: "1Gi"
            ports:
            - containerPort: 2181
              name: client
            - containerPort: 2888
              name: server
            - containerPort: 3888
              name: leader-election
            command:
            - sh
            - -c
            - "start-zookeeper \
              --servers=3 \
              --data_dir=/var/lib/zookeeper/data \
              --data_log_dir=/var/lib/zookeeper/data/log \
              --conf_dir=/opt/zookeeper/conf \
              --client_port=2181 \
              --election_port=3888 \
              --server_port=2888 \
              --tick_time=2000 \
              --init_limit=10 \
              --sync_limit=5 \
              --heap=512M \
              --max_client_cnxns=60 \
              --snap_retain_count=3 \
              --purge_interval=12 \
              --max_session_timeout=40000 \
              --min_session_timeout=4000 \
              --log_level=INFO"
            readinessProbe:
              exec:
                command:
                - sh
                - -c
                - "zookeeper-ready 2181"
              initialDelaySeconds: 10
              timeoutSeconds: 5
            livenessProbe:
              exec:
                command:
                - sh
                - -c
                - "zookeeper-ready 2181"
              initialDelaySeconds: 10
              timeoutSeconds: 5
            volumeMounts:
            - name: datadir
              mountPath: /var/lib/zookeeper
          # For a quick test install of a ZooKeeper cluster via StatefulSet, ephemeral storage is enough
          # The StatefulSet controller generates one PersistentVolumeClaim for each Pod in the StatefulSet
          #volumes:
          #- name: datadir
          #  emptyDir: {}
          securityContext:
            runAsUser: 1000
            fsGroup: 1000
      # In production, persistent storage is required (see the sketch of the generated claims after this manifest)
      volumeClaimTemplates:
      - metadata:
          name: datadir
          annotations:
            volume.alpha.kubernetes.io/storage-class: anything
        spec:
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 20Gi
          storageClassName: alicloud-disk-essd
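
Because of the volumeClaimTemplates above, the StatefulSet controller creates one PersistentVolumeClaim per Pod, named <template-name>-<pod-name> (datadir-pre-zk-0, datadir-pre-zk-1, datadir-pre-zk-2), so each member's data disk survives Pod rescheduling. You do not apply these claims yourself; the sketch below only illustrates roughly what each generated claim looks like.

    # Created automatically by the StatefulSet controller -- shown here only for illustration
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: datadir-pre-zk-0        # likewise datadir-pre-zk-1 and datadir-pre-zk-2
      namespace: pre
      labels:
        app: pre-zk
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: alicloud-disk-essd
      resources:
        requests:
          storage: 20Gi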
