
k8s Layer-7 Access Entry: Ingress

2019.03.12

In the early days of k8s, Service was the only option, acting as a Layer-4 load balancer; a dedicated Layer-7 load balancer, Ingress, was introduced later. The first Ingress Controller implementations were traefik and Nginx Ingress. After Envoy appeared, more and more Ingress Controller implementations emerged, for example:

  1. https://github.com/heptio/contour
  2. https://github.com/datawire/ambassador
  3. https://github.com/istio/istio/

But the principle behind all of these implementations boils down to the same thing: watch the Ingress and backend Service objects, then generate the corresponding access rules.
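A minimal sketch of that watch loop using client-go informers (the kubeconfig path and the log-only event handlers are placeholders; a real controller would regenerate and reload its proxy configuration instead of printing):

package main

import (
	"fmt"
	"time"

	extv1beta1 "k8s.io/api/extensions/v1beta1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a local kubeconfig (inside a cluster, rest.InClusterConfig() would be used).
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Shared informer factory with a 30s resync period.
	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	ingressInformer := factory.Extensions().V1beta1().Ingresses().Informer()

	// On every Ingress change a real controller would rebuild its rules
	// (nginx.conf, traefik routes, Envoy config, ...) and reload the proxy.
	ingressInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			ing := obj.(*extv1beta1.Ingress)
			fmt.Printf("Ingress added: %s/%s -> rebuild rules\n", ing.Namespace, ing.Name)
		},
		UpdateFunc: func(oldObj, newObj interface{}) {
			ing := newObj.(*extv1beta1.Ingress)
			fmt.Printf("Ingress updated: %s/%s -> rebuild rules\n", ing.Namespace, ing.Name)
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // block forever
}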

Install

Take Nginx Ingress as an example. It is written in three languages: Go, Lua, and C, with the Go part responsible for talking to the k8s API. The installation goes roughly as follows.

# First install the controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
# Then create the Service
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml

The above is the bare-metal configuration; if you are on a public cloud you can switch to a LoadBalancer Service instead. If you look closely at what the first yaml deploys, there is something special, shown below: a leader-election annotation which, much like the APIServer, allows multiple replicas for high availability.

I0411 09:11:06.783582 7 leaderelection.go:227] successfully acquired lease ingress-nginx/ingress-controller-leader-nginx

kubectl -n ingress-nginx get cm ingress-controller-leader-nginx -oyaml
apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"nginx-ingress-controller-85f9f75759-4wb8k","leaseDurationSeconds":30,"acquireTime":"2019-04-11T09:11:06Z","renewTime":"2019-04-11T13:12:56Z","leaderTransitions":0}'
  creationTimestamp: "2019-04-11T09:11:06Z"
  name: ingress-controller-leader-nginx
  namespace: ingress-nginx
  resourceVersion: "28998"
  selfLink: /api/v1/namespaces/ingress-nginx/configmaps/ingress-controller-leader-nginx
  uid: bd25d320-5c39-11e9-af65-00163e132347
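
To make use of that leader election you can simply run more than one controller replica, for example (the Deployment name below is the one created by mandatory.yaml):

kubectl -n ingress-nginx scale deployment nginx-ingress-controller --replicas=2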

Because Nginx natively supports Layer-4 load balancing, Nginx Ingress supports Layer 4 as well.
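
For example, a plain TCP service can be exposed through the tcp-services ConfigMap (a sketch; it assumes the controller runs with the --tcp-services-configmap flag and that a mysql Service exists in the default namespace):

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # expose port 3306 on the ingress and forward it to default/mysql:3306
  "3306": "default/mysql:3306"

The same port also has to be added to the ingress-nginx Service so that traffic actually reaches the controller.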

Beyond that, k8s supports multiple Ingress controllers, e.g. installing traefik and Nginx side by side; Nginx Ingress itself can also be deployed multiple times, e.g. one instance for the intranet and one for the internet, distinguished by ingress class, as sketched below. (https://yq.aliyun.com/articles/645856)
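A sketch of how a second controller Deployment could be told to handle only its own class of Ingresses (the class name nginx-internal is arbitrary):

args:
  - /nginx-ingress-controller
  - --configmap=$(POD_NAMESPACE)/nginx-configuration
  - --ingress-class=nginx-internal

An Ingress then selects its controller with the kubernetes.io/ingress.class annotation, as in the first example below.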

Configuration

From k8s's point of view, whenever you need an access entry point you declare an Ingress resource, as shown below.

The simplest form

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    # only needed when multiple ingress controllers are installed
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: ing.vsxen.me
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
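
With the NodePort Service from the installation step, this rule can be tested by sending the matching Host header to any node (node IP and port are placeholders):

curl -H "Host: ing.vsxen.me" http://<node-ip>:<http-node-port>/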

Multiple hosts

cat <<EOF | kubectl create -f - 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: host
spec:
  rules:
  - host: echo.vsxen.me
    http:
      paths:
      - path: /
        backend:
          serviceName: echo
          servicePort: 8080
  - host: nginx.vsxen.me
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80    
EOF

HTTPS

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=foo.bar.com/O=foo.bar.com"

kubectl  create secret tls tlskey --key=tls.key --cert=tls.crt

cat <<EOF | kubectl create -f - 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tls-fanout
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: tlskey
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
EOF
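
A self-signed certificate like this can be tested with curl by mapping the hostname to a node so that SNI matches (placeholders again; -k skips verification because the certificate is self-signed):

curl -k --resolve foo.bar.com:<https-node-port>:<node-ip> https://foo.bar.com:<https-node-port>/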

Frontend/backend split

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: nginx
  name: demo
spec:
  rules:
  - host: vsxen.me
    http:
      paths:
      - backend:
          serviceName: backend
          servicePort: 8080
        path: /api
      - backend:
          serviceName: front
          servicePort: 80
        path: /
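
Requests for the same host are then routed by path prefix (assuming DNS or /etc/hosts points vsxen.me at the ingress):

curl http://vsxen.me/api/hello   # handled by the backend Service on port 8080
curl http://vsxen.me/            # handled by the front Service on port 80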

Advanced

As the entry point for the whole cluster, the Nginx controller supports a large number of annotations and plugins, for example the nginx.ingress.kubernetes.io/canary annotation for canary releases (sketched after the snippet below) and an OpenTracing plugin with Jaeger support (configured through the ConfigMap). It also allows a fully custom nginx.conf, which has to be written as a Go template; the default template lives at https://github.com/kubernetes/ingress-nginx/blob/master/rootfs/etc/nginx/template/nginx.tmpl and a custom one is mounted into the controller Pod like this:

        volumeMounts:
          - mountPath: /etc/nginx/template
            name: nginx-template-volume
            readOnly: true
      volumes:
        - name: nginx-template-volume
          configMap:
            name: nginx-template
            items:
            - key: nginx.tmpl
              path: nginx.tmpl
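
The template itself comes from a ConfigMap created from the local file, e.g.:

kubectl -n ingress-nginx create configmap nginx-template --from-file=nginx.tmpl

The canary annotation mentioned above works by adding a second Ingress for the same host; a sketch (the nginx-canary Service and the 10% weight are made-up values):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  rules:
  - host: ing.vsxen.me
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-canary
          servicePort: 80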

All annotations are listed here: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/

All the global ConfigMap options are listed here: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/

For each Ingress object the controller generates a corresponding server block. Below is one of the generated Nginx server sections; anyone familiar with Nginx should find it easy to read.

        ## start server xxx.com
        server {
                server_name ssl.vsxen.me ;
                listen 80;
                set $proxy_upstream_name "-";
                location / {
                        set $namespace      "default";
                        set $ingress_name   "nginx";
                        set $service_name   "nginx";
                        set $service_port   "80";
                        set $location_path  "/";
                        rewrite_by_lua_block {
                                balancer.rewrite()
                        }
                        header_filter_by_lua_block {
                        }
                        body_filter_by_lua_block {
                        }
                        log_by_lua_block {
                                balancer.log()
                                monitor.call()
                        }
                        port_in_redirect off;
                        set $proxy_upstream_name    "quanke-test-gateway-serv-8092";
                        set $proxy_host             $proxy_upstream_name;
                        client_max_body_size                    20m;
                        proxy_set_header Host                   $best_http_host;
                        # Pass the extracted client certificate to the backend
                        # Allow websocket connections
                        proxy_set_header                        Upgrade           $http_upgrade;
                        proxy_set_header                        Connection        $connection_upgrade;
                        proxy_set_header X-Request-ID           $req_id;
                        proxy_set_header X-Real-IP              $the_real_ip;
                        proxy_set_header X-Forwarded-For        $the_real_ip;
                        proxy_set_header X-Forwarded-Host       $best_http_host;
                        proxy_set_header X-Forwarded-Port       $pass_port;
                        proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
                        proxy_set_header X-Original-URI         $request_uri;
                        proxy_set_header X-Scheme               $pass_access_scheme;
                        # Pass the original X-Forwarded-For
                        proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
                        # mitigate HTTPoxy Vulnerability
                        # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
                        proxy_set_header Proxy                  "";
                        # Custom headers to proxied server
                        proxy_connect_timeout                   5s;
                        proxy_send_timeout                      60s;
                        proxy_read_timeout                      60s;
                        proxy_buffering                         off;
                        proxy_buffer_size                       4k;
                        proxy_buffers                           4 4k;
                        proxy_request_buffering                 on;
                        proxy_http_version                      1.1;
                        proxy_cookie_domain                     off;
                        proxy_cookie_path                       off;
                        # In case of errors try the next upstream server before returning an error
                        proxy_next_upstream                     error timeout;
                        proxy_next_upstream_tries               3;
                        proxy_pass http://upstream_balancer;
                        proxy_redirect                          off;
                }
        }
        ## end server xxx.com

Access Log

The default log output is defined as follows:

log_format upstreaminfo
    '{{ if $cfg.useProxyProtocol }}$proxy_protocol_addr{{ else }}$remote_addr{{ end }} - '
    '[$the_real_ip] - $remote_user [$time_local] "$request" '
    '$status $body_bytes_sent "$http_referer" "$http_user_agent" '
    '$request_length $request_time [$proxy_upstream_name] $upstream_addr '
    '$upstream_response_length $upstream_response_time $upstream_status $req_id';

The format can be customized through the log-format-upstream key in the controller ConfigMap, for example to emit JSON:

  log-format-upstream: '{"proxy_protocol_addr": "$proxy_protocol_addr","remote_addr":
    "$remote_addr", "proxy_add_x_forwarded_for": "$proxy_add_x_forwarded_for", "remote_user":
    "$remote_user", "time_local": "$time_local", "request" : "$request", "status":
    "$status", "body_bytes_sent": "$body_bytes_sent", "http_referer":  "$http_referer",
    "http_user_agent": "$http_user_agent", "request_length" : "$request_length", "request_time"
    : "$request_time", "proxy_upstream_name": "$proxy_upstream_name", "upstream_addr":
    "$upstream_addr",  "upstream_response_length": "$upstream_response_length", "upstream_response_time":
    "$upstream_response_time", "upstream_status": "$upstream_status", "req_id":"$req_id"}'

A sample line in the default format looks like this:

203.110.179.243 - [203.110.179.243] - - [11/Jan/2019:14:44:55 +0800] "GET /favicon.ico HTTP/1.1" 404 196 "http://nginx.cf86fc3209f4e483493e504197bfdd22f.cn-hangzhou.alicontainer.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36" 458 0.001 [default-nginx-80] 172.16.1.173:80 555 0.000 404 7b02134be96609d93aa4318a826a40b3

To get the client's real IP: the Service spec has an externalTrafficPolicy field, and its effect on the logged address is shown below.

  externalTrafficPolicy: Cluster
10.244.0.1 - [10.244.0.1] - - [11/Apr/2019:13:54:51 +0000] "GET / HTTP/2.0" 200 392 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36" 431 0.001 [default-nginx-80] 10.244.0.144:80 612 0.000 200 42ce047ab4b6909266aeca53af3e6b08
  externalTrafficPolicy: Local
23.10.19.23 - [23.10.19.23] - - [11/Jan/2019:14:44:55 +0800] "GET /favicon.ico HTTP/1.1" 404 196 "http://nginx.cf86fc3209f4e483493e504197bfdd22f.cn-hangzhou.alicontainer.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36" 458 0.001 [default-nginx-80] 172.16.1.173:80 555 0.000 404 7b02134be96609d93aa4318a826a40b3    
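
To preserve the source IP you would therefore set the field to Local on the ingress-nginx Service, e.g. (assuming the NodePort Service created during installation):

kubectl -n ingress-nginx patch svc ingress-nginx -p '{"spec":{"externalTrafficPolicy":"Local"}}'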
These access logs can then be collected and shipped to Elasticsearch, for example with a Fluentd configuration along these lines:

    <source>
      @id fluentd-containers.log
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos
      tag raw.kubernetes.*
      read_from_head true
      <parse>
        @type multi_format
        <pattern>
          format json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </pattern>
        <pattern>
          format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </pattern>
      </parse>
    </source>

    # Detect exceptions in the log output and forward them as one log entry.
    <match raw.kubernetes.**>
      @id raw.kubernetes
      @type detect_exceptions
      remove_tag_prefix raw
      message log
      stream stream
      multiline_flush_interval 5
      max_bytes 500000
      max_lines 1000
    </match>

    # Concatenate multi-line logs
    <filter **>
      @id filter_concat
      @type concat
      key message
      multiline_end_regexp /\n$/
      separator ""
    </filter>

    # Enriches records with Kubernetes metadata
    <filter kubernetes.**>
      @id filter_kubernetes_metadata
      @type kubernetes_metadata
    </filter>

    # Fixes json fields in Elasticsearch
    <filter kubernetes.**>
      @id filter_parser
      @type parser
      key_name log
      reserve_data true
      remove_key_name_field true
      <parse>
        @type multi_format
        <pattern>
          format json
        </pattern>
        <pattern>
          format none
        </pattern>
      </parse>
    </filter>

    <match **>
      @id elasticsearch
      @type elasticsearch
      @log_level info
      type_name _doc
      include_tag_key true
      host 1.1.1.1
      port 9200
      logstash_format true
      logstash_prefix audit
      <buffer>
        @type file
        path /var/log/fluentd-buffers/kubernetes.system.buffer
        flush_mode interval
        retry_type exponential_backoff
        flush_thread_count 2
        flush_interval 5s
        retry_forever
        retry_max_interval 30
        chunk_limit_size 2M
        queue_limit_length 8
        overflow_action block
      </buffer>
    </match>

Ref

A comparison spreadsheet from kubedex:

https://docs.google.com/spreadsheets/d/16bxRgpO1H_Bn-5xVZ1WrR_I-0A-GOI6egmhvqqLMOmg/edit#gid=1612037324
