Installing Sentry with Helm

2021/5/18 10:57:38

This post walks through installing Sentry on Kubernetes with Helm, including the pitfalls hit along the way.

The --kubeconfig ~/.kube/sentry flag used throughout points at a specific k8s config file, so each command targets that cluster. Drop the flag if you don't need it.

1. Install Helm
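The original post gives no commands for this step. A minimal sketch, assuming a Linux host with curl, using Helm's official install script:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

# confirm the client works
helm version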

2. Add chart repository mirrors

helm repo add stable http://mirror.azure.cn/kubernetes/charts
helm repo add incubator http://mirror.azure.cn/kubernetes/charts-incubator
helm repo update
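A quick sanity check (not in the original post) that both repos registered:

helm repo list
#NAME         URL
#stable       http://mirror.azure.cn/kubernetes/charts
#incubator    http://mirror.azure.cn/kubernetes/charts-incubator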

3. Check the chart is available

helm search repo sentry
#NAME                                 CHART VERSION    APP VERSION    DESCRIPTION
#stable/sentry                        4.2.0            9.1.2          Sentry is a cross-platform crash reporting and ...
#If stable/sentry appears, the repo is working

4. Create the k8s namespace

kubectl create namespace sentry

5. Install

helm --kubeconfig ~/.kube/sentry install sentry stable/sentry \
-n sentry \
--set persistence.enabled=true,user.email=ltz@qq.com,user.password=ltz \
--set ingress.enabled=true,ingress.hostname=sentry.ltz.com,service.type=ClusterIP \
--set email.host=smtp.exmail.qq.com,email.port=465 \
--set email.user=ltz@ltz.com,email.password=ltz,email.use_tls=false \
--wait
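--wait blocks until the release's resources are ready (Helm's default timeout is 5 minutes). From a second terminal you can watch the pods come up:

kubectl --kubeconfig ~/.kube/sentry get pods -n sentry -w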

Parameter reference

Parameter | Description | Required
--kubeconfig ~/.kube/sentry | kube config file; selects which k8s cluster commands target | yes
user.email | admin email | yes
user.password | admin password | yes
ingress.hostname | Sentry's domain (event reporting must use this domain) | yes
email.host, email.port | outgoing SMTP host and port | yes
email.user, email.password | your own mailbox (Sentry sends mail through this account) | yes
email.use_tls | whether to use TLS; check your mail provider's settings | yes
redis.primary.persistence.storageClass | StorageClass for Redis (optional; I set it only because I had no PV/PVC) | no
postgresql.persistence.storageClass | StorageClass for PostgreSQL (optional; same reason) | no

If the install succeeds, three Deployments and three StatefulSets should now be running. After a short wait, the domain will be reachable.
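To see them:

kubectl --kubeconfig ~/.kube/sentry get deployment,statefulset -n sentry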

6. Uninstall Sentry

helm --kubeconfig ~/.kube/sentry uninstall sentry -n sentry
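Note that helm uninstall leaves behind the PVCs created through the StatefulSets' volumeClaimTemplates. If you want a clean slate before reinstalling (as in the next section), delete them explicitly:

kubectl --kubeconfig ~/.kube/sentry delete pvc -n sentry --all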

7. An installation pitfall

After installing, my Redis and PostgreSQL pods never started, reporting:

Pending: pod has unbound immediate PersistentVolumeClaims

In other words, the PVCs could not bind, so the pods could not start.
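To confirm, list the claims; an unbound PVC stays Pending, and its events usually name the missing StorageClass (replace <pvc-name> with one from the first command):

kubectl --kubeconfig ~/.kube/sentry get pvc -n sentry
kubectl --kubeconfig ~/.kube/sentry describe pvc <pvc-name> -n sentry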

Solution

1. Uninstall Sentry first

2. Install a StorageClass

The YAML is long, so it is pasted at the end (section 11).

Run this from the directory containing the YAML:

kubectl --kubeconfig ~/.kube/sentry apply -f local-path-storage.yaml 

Set local-path as the default StorageClass:
kubectl --kubeconfig ~/.kube/sentry patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
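Verify that the provisioner pod is running and local-path is now marked as the default:

kubectl --kubeconfig ~/.kube/sentry get pods -n local-path-storage
kubectl --kubeconfig ~/.kube/sentry get storageclass
#NAME                   PROVISIONER
#local-path (default)   rancher.io/local-path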

3. Reinstall Sentry

This time, add the storageClass parameters:
helm --kubeconfig ~/.kube/sentry install sentry stable/sentry \
-n sentry \
--set persistence.enabled=true,user.email=ltz@qq.com,user.password=ltz \
--set ingress.enabled=true,ingress.hostname=sentry.ltz.com,service.type=ClusterIP \
--set email.host=smtp.exmail.qq.com,email.port=465 \
--set email.user=ltz@ltz.com,email.password=ltz,email.use_tls=false \
--set redis.primary.persistence.storageClass=local-path \
--set postgresql.persistence.storageClass=local-path \
--wait
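The claims should now bind:

kubectl --kubeconfig ~/.kube/sentry get pvc -n sentry
# every PVC should show STATUS Bound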

4. Visit the domain; it now renders correctly.

8. A data pitfall

Normally the database schema is initialized automatically at startup. Mine was not, so I had to open a shell in the sentry-web pod and run the initialization by hand.

kubectl --kubeconfig ~/.kube/sentry exec -it -n sentry $(kubectl --kubeconfig ~/.kube/sentry get pods -n sentry | grep sentry-web | awk '{print $1}') -- bash
sentry upgrade
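sentry upgrade runs the migrations and may prompt interactively (for example, to create an initial user). In Sentry 9.x the prompts can be skipped, though verify the flag on your version:

sentry upgrade --noinput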

9. An admin-user pitfall

Likewise, if the admin user was not created automatically, create it manually in the sentry-web pod.

kubectl --kubeconfig ~/.kube/sentry exec -it -n sentry $(kubectl --kubeconfig ~/.kube/sentry get pods -n sentry | grep sentry-web | awk '{print $1}') -- bash
sentry createuser
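createuser prompts for the details interactively; it also accepts them as flags, matching the values passed to helm above (check sentry createuser --help on your version):

sentry createuser --email ltz@qq.com --password ltz --superuser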

10. An email pitfall

The email parameters in the install command above must be correct, and so must the corresponding environment variables inside the pods.

sentry-web environment variables:

- name: SENTRY_EMAIL_HOST
  value: smtp.exmail.qq.com
- name: SENTRY_EMAIL_PORT
  value: "465"
- name: SENTRY_EMAIL_USER
  value: ltz@ltz.com
- name: SENTRY_EMAIL_PASSWORD
  valueFrom:
    secretKeyRef:
      key: smtp-password
      name: sentry
      optional: false
- name: SENTRY_EMAIL_USE_TLS
  value: "false"
- name: SENTRY_SERVER_EMAIL
  value: ltz@ltz.com

sentry-worker environment variables:

- name: SENTRY_EMAIL_HOST
  value: smtp.exmail.qq.com
- name: SENTRY_EMAIL_PORT
  value: "587"
- name: SENTRY_EMAIL_USER
  value: ltz@ltz.com
- name: SENTRY_EMAIL_PASSWORD
  valueFrom:
    secretKeyRef:
      key: smtp-password
      name: sentry
      optional: false
- name: SENTRY_EMAIL_USE_TLS
  value: "true"
- name: SENTRY_SERVER_EMAIL
  value: ltz@ltz.com
- name: SENTRY_EMAIL_USE_SSL
  value: "false"

Once everything is configured, send a test email; if it never arrives, check the sentry-worker logs.
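For example:

kubectl --kubeconfig ~/.kube/sentry logs -n sentry deployment/sentry-worker --tail=100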

In my testing, the SENTRY_SERVER_EMAIL that takes effect is the one in sentry-web's environment. After changing any of these variables, restart both deployments!
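One way to do both, assuming the chart's default deployment names sentry-web and sentry-worker (kubectl rollout restart requires kubectl 1.15+; variables sourced from a Secret should be changed in the Secret instead):

# example: change one plain variable, then bounce both deployments
kubectl --kubeconfig ~/.kube/sentry set env deployment/sentry-web SENTRY_EMAIL_PORT=465 -n sentry
kubectl --kubeconfig ~/.kube/sentry rollout restart deployment/sentry-web deployment/sentry-worker -n sentry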

11. local-path-storage.yaml (replace name and namespace as needed)

apiVersion: v1
kind: Namespace
metadata:
  name: local-path-storage

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-path-provisioner-service-account
  namespace: local-path-storage

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-path-provisioner-role
rules:
  - apiGroups: [ "" ]
    resources: [ "nodes", "persistentvolumeclaims", "configmaps" ]
    verbs: [ "get", "list", "watch" ]
  - apiGroups: [ "" ]
    resources: [ "endpoints", "persistentvolumes", "pods" ]
    verbs: [ "*" ]
  - apiGroups: [ "" ]
    resources: [ "events" ]
    verbs: [ "create", "patch" ]
  - apiGroups: [ "storage.k8s.io" ]
    resources: [ "storageclasses" ]
    verbs: [ "get", "list", "watch" ]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-path-provisioner-bind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: local-path-provisioner-role
subjects:
  - kind: ServiceAccount
    name: local-path-provisioner-service-account
    namespace: local-path-storage

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-path-provisioner
  namespace: local-path-storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-path-provisioner
  template:
    metadata:
      labels:
        app: local-path-provisioner
    spec:
      serviceAccountName: local-path-provisioner-service-account
      containers:
        - name: local-path-provisioner
          image: rancher/local-path-provisioner:v0.0.19
          imagePullPolicy: IfNotPresent
          command:
            - local-path-provisioner
            - --debug
            - start
            - --config
            - /etc/config/config.json
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config/
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      volumes:
        - name: config-volume
          configMap:
            name: local-path-config

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
    {
            "nodePathMap":[
            {
                    "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
                    "paths":["/opt/local-path-provisioner"]
            }
            ]
    }
  setup: |-
    #!/bin/sh
    while getopts "m:s:p:" opt
    do
        case $opt in
            p)
            absolutePath=$OPTARG
            ;;
            s)
            sizeInBytes=$OPTARG
            ;;
            m)
            volMode=$OPTARG
            ;;
        esac
    done

    mkdir -m 0777 -p ${absolutePath}
  teardown: |-
    #!/bin/sh
    while getopts "m:s:p:" opt
    do
        case $opt in
            p)
            absolutePath=$OPTARG
            ;;
            s)
            sizeInBytes=$OPTARG
            ;;
            m)
            volMode=$OPTARG
            ;;
        esac
    done

    rm -rf ${absolutePath}
  helperPod.yaml: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: helper-pod
    spec:
      containers:
      - name: helper-pod
        image: busybox
        imagePullPolicy: IfNotPresent

