Getting Started with Kubernetes (12): Installing ingress-nginx and Configuring a Layer-4 Proxy
2021/7/16 7:08:12
Official Ingress documentation: http://docs.kubernetes.org.cn/ and https://feisky.gitbooks.io/kubernetes/content/plugins/ingress.html
What is Ingress?
Normally, Service and Pod IPs are only reachable from inside the cluster. Requests from outside have to be forwarded by a load balancer to a NodePort that the Service exposes on a Node, and kube-proxy then forwards them to the backing Pods.
An Ingress is a collection of routing rules for requests entering the cluster, as shown below:
    internet
        |
   [ Ingress ]
   --|-----|--
   [ Services ]
An Ingress can give Services externally reachable URLs, load balancing, SSL termination and HTTP routing. To make these rules take effect, the cluster administrator deploys an Ingress controller, which watches Ingress and Service objects and configures the load balancer accordingly to provide the entry point.
New-style manifest (networking.k8s.io/v1)
ssl.yaml:

#ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: springboot-ssl
  namespace: default
spec:
  tls:
    - hosts:
        - csk8s.mingcloud.net
      secretName: zs-tls
  rules:
    - host: csk8s.mingcloud.net
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: springboot-ssl
                port:
                  number: 80
Ingress format
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
    - http:
        paths:
          - path: /testpath
            backend:
              serviceName: test
              servicePort: 80
Every Ingress needs rules; currently Kubernetes only supports HTTP rules here. The example above forwards requests for /testpath to port 80 of the service test.
Depending on how the Ingress spec is configured, Ingresses fall into the following types:
Note: a single service can also be exposed externally by setting Service.Type=NodePort or Service.Type=LoadBalancer, without an Ingress.
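As a hedged illustration of that note, a minimal NodePort Service sketch (the service name test and the pod label are assumptions; type: LoadBalancer works the same way where the cluster supports it):

apiVersion: v1
kind: Service
metadata:
  name: test                # hypothetical service name
spec:
  type: NodePort            # exposes the service on a port of every node
  selector:
    app: test               # assumed pod label
  ports:
    - port: 80              # cluster-internal service port
      targetPort: 8080      # assumed container port
      nodePort: 30080       # optional; must fall in the NodePort range (default 30000-32767)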
Ingress routing to multiple services
An Ingress that routes to multiple services forwards requests to different backends depending on the request path, for example:
foo.bar.com -> 178.91.123.132 -> /foo    s1:80
                                 /bar    s2:80
It can be defined with the following Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
    - host: foo.bar.com
      http:
        paths:
          - path: /foo
            backend:
              serviceName: s1
              servicePort: 80
          - path: /bar
            backend:
              serviceName: s2
              servicePort: 80
After creating the Ingress with kubectl create -f:
$ kubectl get ing
NAME      RULE          BACKEND   ADDRESS
test      -
          foo.bar.com
          /foo          s1:80
          /bar          s2:80
Virtual-host Ingress
A virtual-host Ingress forwards requests to different backend services based on the host name, while all hosts share the same IP address, as shown below:
foo.bar.com --|                 |-> foo.bar.com s1:80
              | 178.91.123.132  |
bar.foo.com --|                 |-> bar.foo.com s2:80
The following Ingress routes requests based on the Host header:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
    - host: foo.bar.com
      http:
        paths:
          - backend:
              serviceName: s1
              servicePort: 80
    - host: bar.foo.com
      http:
        paths:
          - backend:
              serviceName: s2
              servicePort: 80
Note: a backend that is not covered by any rule is called the default backend; it is a convenient place to handle 404 pages.
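In the newer networking.k8s.io/v1 API this is expressed with spec.defaultBackend; a minimal hedged sketch (default-http-backend is a hypothetical catch-all service):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: default-backend-demo
spec:
  defaultBackend:                      # requests matching no rule land here
    service:
      name: default-http-backend       # hypothetical service serving the 404 page
      port:
        number: 80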
TLS Ingress
A TLS Ingress terminates TLS using a Secret that holds the TLS private key and certificate (keys named tls.crt and tls.key). If the TLS section of the Ingress lists different hosts, they are multiplexed on the same port based on the host name negotiated through the SNI TLS extension (provided the Ingress controller supports SNI).
Define a secret containing tls.crt and tls.key:
apiVersion: v1
kind: Secret
metadata:
  name: testsecret
  namespace: default
type: Opaque
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
Reference the secret in the Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: no-rules-map
spec:
  tls:
    - secretName: testsecret
  backend:
    serviceName: s1
    servicePort: 80
Updating an Ingress
An Ingress can be updated with kubectl edit ing <name>:
$ kubectl get ing
NAME      RULE          BACKEND   ADDRESS
test      -                       178.91.123.132
          foo.bar.com
          /foo          s1:80
$ kubectl edit ing test
This opens an editor with the existing Ingress spec in YAML; saving the changes pushes them to the Kubernetes API server, which in turn triggers the Ingress controller to reconfigure the load balancer:
spec:
  rules:
    - host: foo.bar.com
      http:
        paths:
          - backend:
              serviceName: s1
              servicePort: 80
            path: /foo
    - host: bar.baz.com
      http:
        paths:
          - backend:
              serviceName: s2
              servicePort: 80
            path: /foo
..
After the update:
$ kubectl get ing
NAME      RULE          BACKEND   ADDRESS
test      -                       178.91.123.132
          foo.bar.com
          /foo          s1:80
          bar.baz.com
          /foo          s2:80
Alternatively, update it with kubectl replace -f new-ingress.yaml, where new-ingress.yaml is the modified Ingress YAML.
New-style manifest (networking.k8s.io/v1)
Example yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-monitoring-service
  namespace: monitorin
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: prometheus.msinikube.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: prom-prometheus-operator-prometheus
                port:
                  number: 9090
    - host: alertmanager.csminikube.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: prom-prometheus-operator-alertmanager
                port:
                  number: 9093
    - host: grafana.csminikube.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: prom-grafana
                port:
                  number: 80
kubectl create secret tls zs-tls --key SSL.key --cert FullSSL.crt
kubectl create secret tls zs-tls --key SSL.key --cert FullSSL.crt -n default
TLS example:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-demo
  namespace: dev
spec:
  rules:
    - host: web-dev.mooc.com
      http:
        paths:
          - backend:
              serviceName: web-demo
              servicePort: 80
            path: /
  tls:
    - hosts:
        - web-dev.mooc.com
      secretName: mooc-tls
Installing ingress-nginx
Installation docs: https://kubernetes.github.io/ingress-nginx/deploy/
ingress-controller 0.30.0 install manifest:

---
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      nodeSelector:
        #kubernetes.io/os: linux
        app: ingress
      containers:
        - name: nginx-ingress-controller
          image: 172.17.166.172/kubenetes/nginx-ingress-controller:0.30.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
            - --default-ssl-certificate=default/zs-tls
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
---
apiVersion: v1
kind: LimitRange
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  limits:
    - min:
        memory: 90Mi
        cpu: 100m
      type: Container
Notes:
For Kubernetes 1.20 and later, the API versions in the manifest need to be updated; you can batch-replace them with a vim/sed substitution of the form s/<old>/<new>/g.
Change the controller image address: pull the image, push it to your own registry, and point the manifest at that address.
Adjust replicas to the number of controller instances you need for high availability.
Change the controller's network mode to hostNetwork (the default install exposes it through a NodePort Service instead) and pin scheduling to the chosen nodes; a sketch of the NodePort alternative follows this list.
Label the designated nodes and deploy the controller onto them:
kubectl label node nodename app=ingress
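For reference, a hedged sketch of the default NodePort-style exposure mentioned above (the Service name ingress-nginx matches the --publish-service flag in the manifest; the nodePort values are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080     # assumed value in the default NodePort range
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443     # assumed value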
Digging into ingress-nginx
- 1. Deployment
- 2. Layer-4 proxying
- 3. Custom configuration
- 4. HTTPS
- 5. Access control
1. Changing the Deployment to a DaemonSet
Export the Deployment YAML:
kubectl get deploy -n ingress-nginx nginx-ingress-controller -o yaml > nginx-ingress-controller.yaml
Edit the file:
nginx-ingress-controller.yaml, with kind changed to DaemonSet. Server-generated export fields (status, metadata.managedFields, the kubectl.kubernetes.io/last-applied-configuration annotation, creationTimestamp, resourceVersion, uid) and Deployment-only fields (replicas, strategy, progressDeadlineSeconds) are stripped, which is what the note below refers to:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      containers:
        - args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
            - --default-ssl-certificate=default/zs-tls
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
          image: 172.17.166.172/kubenetes/nginx-ingress-controller:0.30.0
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          name: nginx-ingress-controller
          ports:
            - containerPort: 80
              hostPort: 80
              name: http
              protocol: TCP
            - containerPort: 443
              hostPort: 443
              name: https
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          resources: {}
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              add:
                - NET_BIND_SERVICE
              drop:
                - ALL
            runAsUser: 101
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      hostNetwork: true
      nodeSelector:
        app: ingress
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: nginx-ingress-serviceaccount
      serviceAccountName: nginx-ingress-serviceaccount
      terminationGracePeriodSeconds: 300
# Fields from the exported Deployment that a DaemonSet does not support (plus the server-generated export fields) must be deleted before applying.
Check that the installation succeeded:
kubectl describe ingress --all-namespaces
kubectl get daemonsets.apps -n ingress-nginx nginx-ingress-controller
kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx
To scale out, just label additional nodes; the DaemonSet schedules a controller pod onto them automatically.
kubectl label node node-2 app=ingress
# to remove the controller from a node, remove the label:
kubectl label node <nodename> app-
2. Layer-4 (TCP) proxying
List the ConfigMaps in the ingress-nginx namespace:
kubectl get cm -n ingress-nginx
Export the tcp-services ConfigMap:
kubectl get cm -n ingress-nginx tcp-services -o yaml >tcp-service.yaml
Edit the file:
tcp-service.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services          # keep the name tcp-services in the ingress-nginx namespace,
  namespace: ingress-nginx    # since the controller is started with --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
data:
  "30000": "monitorin/prometheus-operator-prometheus:9090"   # <exposed port>: "<namespace>/<service>:<service port>"; 9090 assumed for Prometheus
## Under data, map an exposed port to a Service in some namespace: the key is the port nginx will listen on, the value is the target service to forward to.
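With the hostNetwork DaemonSet used here, nginx opens the new port directly on each labeled node once the ConfigMap is applied. If you also want the port declared explicitly next to the existing http/https entries, a hedged sketch of the extra fragment for the controller container (port name is an assumption):

# fragment of the nginx-ingress-controller container spec shown earlier
ports:
  - name: http
    containerPort: 80
    protocol: TCP
  - name: https
    containerPort: 443
    protocol: TCP
  - name: prometheus-tcp      # new L4 listener defined in tcp-services
    containerPort: 30000
    hostPort: 30000
    protocol: TCP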
3. Custom configuration
Exec into the controller container and look at the nginx.conf file:
kubectl exec -it -n ingress-nginx nginx-ingress-controller-697b7b8655-4zkj7 -- /bin/bash
## Newer versions use a Lua module so that nginx does not need frequent reloads; the Lua scripts and directives can pass parameters into the configuration dynamically.
- Create a ConfigMap to change the default configuration
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
data:
  proxy-body-size: "64m"
  proxy-read-timeout: "180"
  proxy-send-timeout: "180"
- Add custom headers
apiVersion: v1
kind: ConfigMap
data:
  proxy-set-headers: "ingress-nginx/custom-headers"
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ConfigMap
data:
  X-Different-Name: "true"
  X-Request-Start: t=${msec}
  X-Using-Nginx-Controller: "true"
metadata:
  name: custom-headers
  namespace: ingress-nginx
# nginx-configuration references ingress-nginx/custom-headers; add any new headers under that ConfigMap.
Add a header only for a specific host
cust-header-spec-ingress.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "Request-Id: $req_id";
  name: web-demo
  namespace: dev
spec:
  rules:
    - host: web-dev.mooc.com
      http:
        paths:
          - backend:
              serviceName: web-demo
              servicePort: 80
            path: /
- Provide a custom template file
# Mount the template file into the controller container (see the sketch at the end of this subsection).
- Create the ConfigMap
Copy the template file out of the container:
kubectl exec -n ingress-nginx nginx-ingress-controller-697b7b8655-zcpxq -- tar cf - template/nginx.tmpl | tar xf - -C .   # extracts to ./template/nginx.tmpl
# copy the file out
kubectl cp ingress-nginx/nginx-ingress-controller-697b7b8655-4zkj7:template/nginx.tmpl nginx.tmpl
# copy the file back in
kubectl cp nginx.tmpl ingress-nginx/nginx-ingress-controller-697b7b8655-4zkj7:template/
### kubectl cp uses tar under the hood and works with paths relative to the container's working directory.
Create the ConfigMap:
kubectl create cm nginx-template -n ingress-nginx --from-file=nginx.tmpl
Delete the previous one if it already exists:
kubectl delete cm nginx-template -n ingress-nginx
Modify nginx-template:
kubectl edit cm -n ingress-nginx nginx-template
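To make the controller actually consume this ConfigMap, the template directory can be mounted over in the DaemonSet; a hedged sketch of the extra fields (the /etc/nginx/template path follows the upstream custom-template convention; verify it against your controller image):

# added to the nginx-ingress-controller container spec
volumeMounts:
  - name: nginx-template-volume
    mountPath: /etc/nginx/template
    readOnly: true
# added to the pod spec
volumes:
  - name: nginx-template-volume
    configMap:
      name: nginx-template
      items:
        - key: nginx.tmpl
          path: nginx.tmpl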
4. nginx TLS
Create the secret:
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout mooc.key -out mooc.crt -subj "/CN=*.mooc.com/O=*.mooc.com"
kubectl create secret tls mooc-tls --key mooc.key --cert mooc.crt
Edit the controller manifest
# add the namespace and name of the certificate secret (sketch below)
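A hedged sketch of the relevant controller argument, using the --default-ssl-certificate flag already present in the install manifest; the value is <namespace>/<secret name>, assuming the secret above was created in the default namespace:

# fragment of the nginx-ingress-controller args in the DaemonSet
args:
  - /nginx-ingress-controller
  - --configmap=$(POD_NAMESPACE)/nginx-configuration
  - --default-ssl-certificate=default/mooc-tls   # <namespace>/<secret>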
Ingress YAML with TLS enabled:
web-ingress.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-demo
  namespace: dev
spec:
  rules:
    - host: web-dev.mooc.com
      http:
        paths:
          - backend:
              serviceName: web-demo
              servicePort: 80
            path: /
  tls:
    - hosts:
        - web-dev.mooc.com
      secretName: mooc-tls
5. Session affinity
session.yaml (backend rewritten to the networking.k8s.io/v1 format to match the apiVersion):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie             # cookie-based session affinity
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1    # hash algorithm
    nginx.ingress.kubernetes.io/session-cookie-name: route   # cookie name
  name: springboot-ssl
  namespace: default
spec:
  rules:
    - host: csk8s.mingcloud.net
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: springboot-ssl
                port:
                  number: 80
6. Traffic control (canary)
Both Ingresses must point at the same host name; ingress-nginx then splits that host's traffic between the two Services (a sketch of the stable, non-canary Ingress follows the priority list at the end of this section).
Architecture diagram: (image omitted)
- Weight
ingress-weight.yaml (backend rewritten to the networking.k8s.io/v1 format to match the apiVersion):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-canary-b
  namespace: canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "90"
spec:
  rules:
    - host: canary.mooc.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-canary-b
                port:
                  number: 80
- Cookie-based traffic steering
ingress-cookie.yaml:

#ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-canary-b
  namespace: canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-cookie: "web-canary"
spec:
  rules:
    - host: canary.mooc.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web-canary-b
              servicePort: 80
Access with the cookie set: requests carrying the cookie web-canary=always are routed to the canary, while web-canary=never forces the stable backend.
- Header-based traffic steering
ingress-header.yaml:

#ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-canary-b
  namespace: canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "web-canary"
spec:
  rules:
    - host: canary.mooc.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web-canary-b
              servicePort: 80
Access with the custom header: sending the header web-canary: always routes the request to the canary, web-canary: never to the stable backend.
- Combined
ingress-compose.yaml:

#ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-canary-b
  namespace: canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "web-canary"
    nginx.ingress.kubernetes.io/canary-by-cookie: "web-canary"
    nginx.ingress.kubernetes.io/canary-weight: "90"
spec:
  rules:
    - host: canary.mooc.com
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: web-canary-b
                port:
                  number: 80
Evaluation priority: canary-by-header is checked first, then canary-by-cookie, and canary-weight applies last.
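For the canary annotations above to take effect, a primary (non-canary) Ingress for the same host must already exist; a hedged sketch, with the service name web-canary-a assumed to mirror web-canary-b:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-canary-a        # the stable Ingress the canary rules attach to
  namespace: canary
spec:
  rules:
    - host: canary.mooc.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-canary-a   # hypothetical stable service
                port:
                  number: 80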