[k8s] nginx-ingress: layer 4/7 configuration and testing


Basic principles

The default backend provides two functions:

1. a 404 error page at /
2. a 200 response at /healthz

# Any image is permissable as long as:
# 1. It serves a 404 page at /
# 2. It serves 200 on a /healthz endpoint

Create the Service: external port 80 is mapped to container port 8080.

Deployment + Service:
kubectl create -f default-backend.yaml

http://192.168.x.x/
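A quick sanity check against the Service, assuming it answers on the address above (192.168.x.x is a placeholder for the Service/node IP):

# / should return the 404 page, /healthz should return 200
curl -i http://192.168.x.x/
curl -i http://192.168.x.x/healthz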

Deploying nginx-ingress

Reference: https://github.com/kubernetes/ingress-nginx/blob/master/deploy/README.md

curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/namespace.yaml \
    | kubectl apply -f -

curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/default-backend.yaml \
    | kubectl apply -f -

curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/configmap.yaml \
    | kubectl apply -f -

curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/tcp-services-configmap.yaml \
    | kubectl apply -f -

curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/udp-services-configmap.yaml \
    | kubectl apply -f -

curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/without-rbac.yaml \
    | kubectl apply -f -

Images used by default

quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0
gcr.io/google_containers/defaultbackend:1.4

docker pull lanny/gcr.io_google_containers_defaultbackend_1.4:v1.4 
docker tag lanny/gcr.io_google_containers_defaultbackend_1.4:v1.4  gcr.io/google_containers/defaultbackend:1.4

The Ingress-related YAML manifests are pasted below.

Main changes from the upstream manifests (see the check commands after this list):

  1. The status page is served on port 18080 (determined by the ingress controller's nginx.conf):
    http://192.168.x.x:18080/nginx_status

  2. The ingress controller runs with hostNetwork: true,
    so that port 80 of the Ingress and the other ports declared in the controller's nginx.conf are exposed directly on the host.
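A rough way to verify both points from the node running the controller (192.168.x.x is a placeholder; ss and curl are assumed to be available on the node):

# ports opened directly on the host because of hostNetwork: true
ss -lntp | grep -E ':(80|443|18080) '
# controller status page on 18080
curl http://192.168.x.x:18080/nginx_status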

namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx

default-backend.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: ingress-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissable as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---

apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend

without-rbac.yaml

The upstream version of this YAML has since been updated; it now additionally runs

sysctl -w net.core.somaxconn=32768; sysctl -w net.ipv4.ip_local_port_range="1024 65535"

at pod startup. My copy below predates that change.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx 
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true' 
    spec:
      hostNetwork: true
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --annotations-prefix=nginx.ingress.kubernetes.io
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
          - name: http
            containerPort: 80
          - name: https
            containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1


At its core, the ingress controller is just the /nginx-ingress-controller binary plus its startup arguments:

          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --annotations-prefix=nginx.ingress.kubernetes.io
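To see this in practice, you can list the controller pod and dump the nginx.conf it renders from the Ingress and ConfigMap objects (the pod name below is a placeholder):

kubectl -n ingress-nginx get pod -l app=ingress-nginx
kubectl -n ingress-nginx exec <controller-pod> -- cat /etc/nginx/nginx.conf | head -n 40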

tcp-services-configmap.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx

udp-services-configmap.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx

Start an nginx Deployment to test layer-7 (HTTP) load balancing through the Ingress

kubectl run --image=nginx nginx --replicas=2
kubectl expose deployment nginx --port=80  ## this is the Service port; by default the target port equals the container port

nginx-ingress.conf

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-nginx-ingress
  namespace: default
spec:
  rules:
  - host: mynginx.maotai.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80

Note: although the Ingress rule points at a Service, traffic is not forwarded client -> nginx -> Service -> Pod. The ingress controller watches the Service's endpoints and writes the Pod IPs directly into nginx.conf, so the actual path is client -> nginx -> Pod.
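Both the rule and the note above can be checked directly; the commands below assume 192.168.x.x is the node where the hostNetwork controller runs:

# the Pod IPs that the controller copies into nginx.conf
kubectl get endpoints nginx
# request through the ingress, supplying the host defined in the rule
curl -H 'Host: mynginx.maotai.com' http://192.168.x.x/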

Testing layer-4 load balancing

Reference: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/exposing-tcp-udp-services.md

udp-services-configmap.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  53: "kube-system/kube-dns:53"

After editing, just apply it again; nginx-ingress hot-reloads the configuration.

kubectl apply -f udp-services-configmap.yaml
$ host -t A nginx.default.svc.cluster.local 192.168.14.132
Using domain server:
Name: 192.168.x.x
Address: 192.168.x.x#53
Aliases: 

nginx.default.svc.cluster.local has address 10.254.160.155

Another example, this time for TCP ports (connection tests follow the manifest):

kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  2200: "default/gitlab:22"
  3306: "kube-public/mysql:3306"
  2202: "kube-public/centos:22"
  2203: "kube-public/mongodb:27017"

Below is the Dockerfile of the nginx-ingress image (you can see this from inside the container):

Dockerfile

FROM quay.io/kubernetes-ingress-controller/nginx-amd64:0.30


RUN clean-install \
  diffutils \
  dumb-init

# Create symlinks to redirect nginx logs to stdout and stderr docker log collector
# This only works if nginx is started with CMD or ENTRYPOINT
RUN mkdir -p /var/log/nginx \
  && ln -sf /dev/stdout /var/log/nginx/access.log \
  && ln -sf /dev/stderr /var/log/nginx/error.log

COPY . /

ENTRYPOINT ["/usr/bin/dumb-init"]

CMD ["/nginx-ingress-controller"]

The default nginx.conf after nginx-ingress starts:

root@n1:/etc/nginx# cat nginx.conf

daemon off;

worker_processes 4;
pid /run/nginx.pid;

worker_rlimit_nofile 15360;

worker_shutdown_timeout 10s ;

events {
    multi_accept        on;
    worker_connections  16384;
    use                 epoll;
}

http {

    real_ip_header      X-Forwarded-For;

    real_ip_recursive   on;

    set_real_ip_from    0.0.0.0/0;

    geoip_country       /etc/nginx/GeoIP.dat;
    geoip_city          /etc/nginx/GeoLiteCity.dat;
    geoip_proxy_recursive on;

    vhost_traffic_status_zone shared:vhost_traffic_status:10m;
    vhost_traffic_status_filter_by_set_key $geoip_country_code country::*;

    sendfile            on;

    aio                 threads;
    aio_write           on;

    tcp_nopush          on;
    tcp_nodelay         on;

    log_subrequest      on;

    reset_timedout_connection on;

    keepalive_timeout  75s;
    keepalive_requests 100;

    client_header_buffer_size       1k;
    client_header_timeout           60s;
    large_client_header_buffers     4 8k;
    client_body_buffer_size         8k;
    client_body_timeout             60s;

    http2_max_field_size            4k;
    http2_max_header_size           16k;

    types_hash_max_size             2048;
    server_names_hash_max_size      1024;
    server_names_hash_bucket_size   32;
    map_hash_bucket_size            64;

    proxy_headers_hash_max_size     512;
    proxy_headers_hash_bucket_size  64;

    variables_hash_bucket_size      128;
    variables_hash_max_size         2048;

    underscores_in_headers          off;
    ignore_invalid_headers          on;

    include /etc/nginx/mime.types;
    default_type text/html;

    brotli on;
    brotli_comp_level 4;
    brotli_types application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component;

    gzip on;
    gzip_comp_level 5;
    gzip_http_version 1.1;
    gzip_min_length 256;
    gzip_types application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component;
    gzip_proxied any;
    gzip_vary on;

    # Custom headers for response

    server_tokens on;

    # disable warnings
    uninitialized_variable_warn off;

    # Additional available variables:
    # $namespace
    # $ingress_name
    # $service_name
    log_format upstreaminfo '$the_real_ip - [$the_real_ip] - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status';

    map $request_uri $loggable {

        default 1;
    }

    access_log /var/log/nginx/access.log upstreaminfo if=$loggable;

    error_log  /var/log/nginx/error.log notice;

    resolver 192.168.14.2 valid=30s;

    # Retain the default nginx handling of requests without a "Connection" header
    map $http_upgrade $connection_upgrade {
        default          upgrade;
        ''               close;
    }

    map $http_x_forwarded_for $the_real_ip {

        default          $remote_addr;

    }

    # trust http_x_forwarded_proto headers correctly indicate ssl offloading
    map $http_x_forwarded_proto $pass_access_scheme {
        default          $http_x_forwarded_proto;
        ''               $scheme;
    }

    map $http_x_forwarded_port $pass_server_port {
        default           $http_x_forwarded_port;
        ''                $server_port;
    }

    map $http_x_forwarded_host $best_http_host {
        default          $http_x_forwarded_host;
        ''               $this_host;
    }

    map $pass_server_port $pass_port {
        443              443;
        default          $pass_server_port;
    }

    # Obtain best http host
    map $http_host $this_host {
        default          $http_host;
        ''               $host;
    }

    server_name_in_redirect off;
    port_in_redirect        off;

    ssl_protocols TLSv1.2;

    # turn on session caching to drastically improve performance

    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_session_timeout 10m;

    # allow configuring ssl session tickets
    ssl_session_tickets on;

    # slightly reduce the time-to-first-byte
    ssl_buffer_size 4k;

    # allow configuring custom ssl ciphers
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
    ssl_prefer_server_ciphers on;

    ssl_ecdh_curve auto;

    proxy_ssl_session_reuse on;

    upstream upstream-default-backend {
        # Load balance algorithm; empty for round robin, which is the default

        least_conn;

        keepalive 32;

        server 10.2.98.3:8080 max_fails=0 fail_timeout=0;

    }

    ## start server _
    server {
        server_name _ ;

        listen 80 default_server reuseport backlog=511;

        listen [::]:80 default_server reuseport backlog=511;

        set $proxy_upstream_name "-";

        listen 443  default_server reuseport backlog=511 ssl http2;

        listen [::]:443  default_server reuseport backlog=511 ssl http2;

        # PEM sha: 479f4653ff7d901e313895dbaafbbe64b0805346
        ssl_certificate                         /ingress-controller/ssl/default-fake-certificate.pem;
        ssl_certificate_key                     /ingress-controller/ssl/default-fake-certificate.pem;

        more_set_headers                        "Strict-Transport-Security: max-age=15724800; includeSubDomains;";

        location / {

            set $proxy_upstream_name "upstream-default-backend";

            set $namespace      "";
            set $ingress_name   "";
            set $service_name   "";

            port_in_redirect off;

            client_max_body_size                    "1m";

            proxy_set_header Host                   $best_http_host;

            # Pass the extracted client certificate to the backend

            proxy_set_header ssl-client-cert        "";
            proxy_set_header ssl-client-verify      "";
            proxy_set_header ssl-client-dn          "";

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;
            proxy_set_header                        Connection        $connection_upgrade;

            proxy_set_header X-Real-IP              $the_real_ip;

            proxy_set_header X-Forwarded-For        $the_real_ip;

            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            proxy_set_header X-Original-URI         $request_uri;
            proxy_set_header X-Scheme               $pass_access_scheme;

            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";

            # Custom headers to proxied server

            proxy_connect_timeout                   5s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;

            proxy_redirect                          off;

            proxy_buffering                         off;
            proxy_buffer_size                       "4k";
            proxy_buffers                           4 "4k";
            proxy_request_buffering                 "on";

            proxy_http_version                      1.1;

            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout invalid_header http_502 http_503 http_504;

            proxy_pass http://upstream-default-backend;

        }

        # health checks in cloud providers require the use of port 80
        location /healthz {
            access_log off;
            return 200;
        }

        # this is required to avoid error if nginx is being monitored
        # with an external software (like sysdig)
        location /nginx_status {
            allow 127.0.0.1;
            allow ::1;
            deny all;

            access_log off;
            stub_status on;
        }

    }
    ## end server _

    # default server, used for NGINX healthcheck and access to nginx stats
    server {
        # Use the port 18080 (random value just to avoid known ports) as default port for nginx.
        # Changing this value requires a change in:
        # https://github.com/kubernetes/ingress-nginx/blob/master/controllers/nginx/pkg/cmd/controller/nginx.go
        listen 18080 default_server reuseport backlog=511;
        listen [::]:18080 default_server reuseport backlog=511;
        set $proxy_upstream_name "-";

        location /healthz {
            access_log off;
            return 200;
        }

        location /nginx_status {
            set $proxy_upstream_name "internal";

            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;

        }

        location / {

            set $proxy_upstream_name "upstream-default-backend";
            proxy_pass          http://upstream-default-backend;
        }

    }
}

stream {
    log_format log_stream [$time_local] $protocol $status $bytes_sent $bytes_received $session_time;

    access_log /var/log/nginx/access.log log_stream;

    error_log  /var/log/nginx/error.log;

    # TCP services

    # UDP services

}

Multi-path Ingress

Reference: https://github.com/kubernetes-helm/monocular/blob/master/deployment/monocular/templates/ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: stultified-snail-monocular
  annotations:
    ingress.kubernetes.io/rewrite-target: "/"
    kubernetes.io/ingress.class: "nginx"
  labels:
    app: stultified-snail-monocular
    chart: "monocular-0.5.0"
    release: "stultified-snail"
    heritage: "Tiller"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: stultified-snail-monocular-ui
          servicePort: 80
        path: /
      - backend:
          serviceName: stultified-snail-monocular-api
          servicePort: 80
        path: /api/
    host:
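Assuming the two monocular Services exist, the path routing can be checked against the node running the controller (no host is set in this Ingress, so the rules land in the catch-all server and no Host header is needed):

curl -i http://192.168.x.x/        # routed to stultified-snail-monocular-ui
curl -i http://192.168.x.x/api/    # routed to stultified-snail-monocular-api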

hostNetwork DNS problem caused by the default DNS policy

Reference: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/

A gotcha I ran into: nginx-ingress runs with hostNetwork, and in that mode the pod's /etc/resolv.conf is inherited from the host (equivalent to docker run --net=host, which shares the host's network stack and therefore its DNS configuration), so in-cluster names such as the API Service could not be resolved.

By default, DNS policy for a pod is ‘ClusterFirst’. So pods running with hostNetwork cannot resolve DNS names. To have DNS options set along with hostNetwork, you should specify DNS policy explicitly to ‘ClusterFirstWithHostNet’. Update the busybox.yaml as following:

How I run nginx-ingress in production (since I did not set up certificates, I have to point the ingress controller at the HTTP API server manually):

args:
- /nginx-ingress-controller
- --apiserver-host=http://kube-api-http.kube-public
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
- --tcp-services-configmap=$(POD_NAMESPACE)/nginx-tcp-ingress-configmap

busybox.yaml, updated with the DNS policy fix described above:

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
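With dnsPolicy: ClusterFirstWithHostNet, cluster DNS works again even though the pod shares the host network; a quick check from the busybox pod:

kubectl exec busybox -- cat /etc/resolv.conf                              # should now point at the cluster DNS
kubectl exec busybox -- nslookup kubernetes.default.svc.cluster.local     # should resolve to the API Service IP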

Testing nginx layer-4 load balancing with MySQL

Create the MySQL Deployment and the layer-4 port mapping for the ingress:

[root@m1 yaml]# cat mysql/mysql-deploy.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456"

[root@m1 yaml]# cat ingress/tcp-services-configmap.yaml 
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  3306: "default/mysql:3306"

Notes:

1. After apply, the nginx.conf inside the ingress controller did not pick up the change at first (I had restarted the controller earlier); recreating the Ingress objects made it work again. See the connection test below.
2. The ports created this way are listened on only on the node(s) where the ingress controller runs.
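Given note 2, the connection test has to target the node where the controller pod is scheduled (node IP is a placeholder; the password comes from the Deployment above):

mysql -h 192.168.x.x -P 3306 -u root -p123456 -e 'select version();'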

nginx configuration reference


Other nginx features worth looking at

Error log configuration
JSON log format
stub_status with authentication enabled
Custom 404 error page and redirect
Denying access to files with certain extensions (default.conf)
Splitting the configuration with include (for readability)

nginx JSON log format

log-format-upstream: '{ "time": "$time_iso8601", "remote_addr": "$proxy_protocol_addr",
    "x-forward-for": "$proxy_add_x_forwarded_for", "request_id": "$request_id", "remote_user":
    "$remote_user", "bytes_sent": $bytes_sent, "request_time": $request_time, "status":
    $status, "vhost": "$host", "request_proto": "$server_protocol", "path": "$uri",
    "request_query": "$args", "request_length": $request_length, "duration": $request_time,
    "method": "$request_method", "http_referrer": "$http_referer", "http_user_agent":
    "$http_user_agent" }'

Field reference for the nginx JSON log format

Field                                  Description
$remote_addr, $http_x_forwarded_for    Client IP address
$remote_user                           Client user name
$request                               Request URL and HTTP protocol
$status                                Response status code
$body_bytes_sent                       Bytes sent to the client, excluding response headers; compatible with the "%B" parameter of Apache's mod_log_config module
$bytes_sent                            Total bytes sent to the client
$connection                            Connection serial number
$connection_requests                   Number of requests served over the current connection
$msec                                  Time the log entry was written, in seconds with millisecond precision
$pipe                                  "p" if the request was sent via HTTP pipelining, otherwise "."
$http_referer                          Referring page
$http_user_agent                       Client browser / user agent
$request_length                        Request length (request line, headers and body)
$request_time                          Request processing time in seconds with millisecond precision, from the first byte read from the client until the log entry is written after the last byte is sent
$time_iso8601                          Local time in ISO 8601 format
$time_local                            Local time in the Common Log Format

Quick nginx test

mkdir -p /data/nginx-html
echo "maotai" > /data/nginx-html/index.html


docker run  -d \
    --net=host \
    --restart=always \
    -v /etc/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
    -v /etc/localtime:/etc/localtime:ro \
    -v /data/nginx-html:/usr/share/nginx/html \
    --name nginx \
nginx
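A quick check that the container serves the test page (with --net=host it listens on the host's port 80; this assumes the mounted nginx.conf keeps the default server serving /usr/share/nginx/html):

curl http://127.0.0.1/     # should print "maotai"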

nginx layer-4 and layer-7 load-balancing examples

Single port mapping, one-to-many (one listener, several upstream servers):

cat > nginx.conf <<EOF
error_log stderr notice;

worker_processes auto;
events {
  multi_accept on;
  use epoll;
  worker_connections 1024;
}

stream {
        upstream kube_apiserver {
            least_conn;
            server 192.168.8.161:6443;
            server 192.168.8.162:6443;
            server 192.168.8.163:6443;
                    }

        server {
            listen        127.0.0.1:6443;
            proxy_pass    kube_apiserver;
            proxy_timeout 10m;
            proxy_connect_timeout 1s;

        }
}
EOF
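A rough way to bring this proxy up and confirm it forwards to the apiservers; it assumes an nginx build with the stream module and that the three 6443 backends are reachable (an unauthenticated /healthz may return 401, which still proves the proxy path works):

nginx -t -c $PWD/nginx.conf                  # syntax check
nginx -c $PWD/nginx.conf                     # start the layer-4 proxy
ss -lntp | grep 127.0.0.1:6443               # listener created by the stream block
curl -k https://127.0.0.1:6443/healthz       # proxied to one of the kube-apiservers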

Single port mapping, one-to-one:

cat > nginx.conf <<'EOF'  # quote the delimiter so the $variables in log_format below are not expanded by the shell
error_log stderr notice;

worker_processes auto;
events {
  multi_accept on;
  use epoll;
  worker_connections 1024;
}
stream {
    log_format log_stream [$time_local] $protocol $status $bytes_sent $bytes_received $session_time;
    access_log /var/log/nginx/access.log log_stream;
    error_log  /var/log/nginx/error.log;
    # TCP services
    # UDP services
    upstream udp-53-kube-system-kube-dns-53 {
        server                  10.2.54.9:53;
    }
    server {
        listen                  53 udp;
        listen                  [::]:53 udp;
        proxy_responses         1;
        proxy_timeout           600s;
        proxy_pass              udp-53-kube-system-kube-dns-53;
    }
}
EOF
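The UDP mapping can be checked the same way as the in-cluster test earlier, by querying port 53 on the host running this nginx (the 10.2.54.9 backend comes from the config above):

host -t A kubernetes.default.svc.cluster.local 127.0.0.1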

One TCP and one UDP port mapping:

stream {
    log_format log_stream [$time_local] $protocol $status $bytes_sent $bytes_received $session_time;

    access_log /var/log/nginx/access.log log_stream;

    error_log  /var/log/nginx/error.log;

    # TCP services
    upstream tcp-3306-default-mysql-3306 {
        server                  10.2.30.2:3306;
    }
    server {
        listen                  3306;
        listen                  [::]:3306;
        proxy_timeout           600s;
        proxy_pass              tcp-3306-default-mysql-3306;
    }

    # UDP services
    upstream udp-53-kube-system-kube-dns-53 {
        server                  10.2.54.9:53;
    }

    server {
        listen                  53 udp;
        listen                  [::]:53 udp;
        proxy_responses         1;
        proxy_timeout           600s;
        proxy_pass              udp-53-kube-system-kube-dns-53;
    }
}

Layer-7 load balancing with nginx proxy_pass

user nginx nginx;
worker_processes auto;
worker_rlimit_nofile 65535;
error_log /usr/local/nginxlogs/error.log;

# pid logs/nginx.pid;

events {
    use epoll;
    worker_connections  51200;
}

http {
    include mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr $remote_user [$time_local] "$request" $http_host '
    '$status $upstream_status $body_bytes_sent "$http_referer" '
    '"$http_user_agent" $ssl_protocol $ssl_cipher $upstream_addr '
    '$request_time $upstream_response_time';
    server_info off;
    server_tag off;
    server_name_in_redirect off;
    access_log /usr/local/tengine-2.1.2/logs/access.log main;
    client_max_body_size 80m;
    client_header_buffer_size 16k;
    large_client_header_buffers 4 16k;
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 65;
    server_tokens on;
    gzip on;
    gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_proxied any;
    gzip_http_version 1.1;
    gzip_comp_level 3;
    gzip_types text/plain application/x-javascript text/css application/xml;
    gzip_vary on;

    upstream kube-api-vip {
        least_conn;
        server 192.168.x.20:8080;
        server 192.168.x.21:8080;
        server 192.168.x.22:8080;
    }
    server {
        listen 80;
        server_name kube-api-vip.maotai.net;
        proxy_connect_timeout 1s;
        # proxy_read_timeout 600;
        # proxy_send_timeout 600;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
        location / {
            proxy_next_upstream error timeout invalid_header http_500 http_503 http_404 http_502 http_504;
            proxy_pass http://kube-api-vip;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
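Assuming DNS or an /etc/hosts entry points kube-api-vip.maotai.net at the host running this nginx (192.168.x.x is a placeholder for that host, and the 8080 backends are the insecure apiserver ports, so no client credentials are needed), the layer-7 path can be checked with:

curl -H 'Host: kube-api-vip.maotai.net' http://192.168.x.x/version
curl http://kube-api-vip.maotai.net/version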

Mixed: layer-4 port mappings and layer-7 virtual hosts together (this one was generated by the ingress controller):

root@n2:/# cat /etc/nginx/nginx.conf                                                                                                                                                        
daemon off;
worker_processes 4;
pid /run/nginx.pid;
worker_rlimit_nofile 15360;
worker_shutdown_timeout 10s ;

events {
    multi_accept        on;
    worker_connections  16384;
    use                 epoll;
}

http {

    real_ip_header      X-Forwarded-For;

    real_ip_recursive   on;

    set_real_ip_from    0.0.0.0/0;

    geoip_country       /etc/nginx/GeoIP.dat;
    geoip_city          /etc/nginx/GeoLiteCity.dat;
    geoip_proxy_recursive on;

    vhost_traffic_status_zone shared:vhost_traffic_status:10m;
    vhost_traffic_status_filter_by_set_key $geoip_country_code country::*;

    sendfile            on;

    aio                 threads;
    aio_write           on;

    tcp_nopush          on;
    tcp_nodelay         on;

    log_subrequest      on;

    reset_timedout_connection on;

    keepalive_timeout  75s;
    keepalive_requests 100;

    client_header_buffer_size       1k;
    client_header_timeout           60s;
    large_client_header_buffers     4 8k;
    client_body_buffer_size         8k;
    client_body_timeout             60s;

    http2_max_field_size            4k;
    http2_max_header_size           16k;

    types_hash_max_size             2048;
    server_names_hash_max_size      1024;
    server_names_hash_bucket_size   64;
    map_hash_bucket_size            64;

    proxy_headers_hash_max_size     512;
    proxy_headers_hash_bucket_size  64;

    variables_hash_bucket_size      128;
    variables_hash_max_size         2048;

    underscores_in_headers          off;
    ignore_invalid_headers          on;

    include /etc/nginx/mime.types;
    default_type text/html;

    brotli on;
    brotli_comp_level 4;
    brotli_types application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component;

    gzip on;
    gzip_comp_level 5;
    gzip_http_version 1.1;
    gzip_min_length 256;
    gzip_types application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component;
    gzip_proxied any;
    gzip_vary on;

    # Custom headers for response

    server_tokens on;

    # disable warnings
    uninitialized_variable_warn off;

    # Additional available variables:
    # $namespace
    # $ingress_name
    # $service_name
    log_format upstreaminfo '$the_real_ip - [$the_real_ip] - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status';

    map $request_uri $loggable {

        default 1;
    }

    access_log /var/log/nginx/access.log upstreaminfo if=$loggable;

    error_log  /var/log/nginx/error.log notice;

    resolver 192.168.14.2 valid=30s;

    # Retain the default nginx handling of requests without a "Connection" header
    map $http_upgrade $connection_upgrade {
        default          upgrade;
        ''               close;
    }

    map $http_x_forwarded_for $the_real_ip {

        default          $remote_addr;

    }

    # trust http_x_forwarded_proto headers correctly indicate ssl offloading
    map $http_x_forwarded_proto $pass_access_scheme {
        default          $http_x_forwarded_proto;
        ''               $scheme;
    }

    map $http_x_forwarded_port $pass_server_port {
        default           $http_x_forwarded_port;
        ''                $server_port;
    }

    map $http_x_forwarded_host $best_http_host {
        default          $http_x_forwarded_host;
        ''               $this_host;
    }

    map $pass_server_port $pass_port {
        443              443;
        default          $pass_server_port;
    }

    # Obtain best http host
    map $http_host $this_host {
        default          $http_host;
        ''               $host;
    }

    server_name_in_redirect off;
    port_in_redirect        off;

    ssl_protocols TLSv1.2;

    # turn on session caching to drastically improve performance

    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_session_timeout 10m;

    # allow configuring ssl session tickets
    ssl_session_tickets on;

    # slightly reduce the time-to-first-byte
    ssl_buffer_size 4k;

    # allow configuring custom ssl ciphers
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
    ssl_prefer_server_ciphers on;

    ssl_ecdh_curve auto;

    proxy_ssl_session_reuse on;

    upstream kube-public-spring-web {
        # Load balance algorithm; empty for round robin, which is the default

        least_conn;

        keepalive 32;

        server 10.2.54.4:8080 max_fails=0 fail_timeout=0;

    }

    upstream upstream-default-backend {
        # Load balance algorithm; empty for round robin, which is the default

        least_conn;

        keepalive 32;

        server 10.2.30.3:8080 max_fails=0 fail_timeout=0;

    }

    ## start server _
    server {
        server_name _ ;

        listen 80 default_server reuseport backlog=511;

        listen [::]:80 default_server reuseport backlog=511;

        set $proxy_upstream_name "-";

        listen 443  default_server reuseport backlog=511 ssl http2;

        listen [::]:443  default_server reuseport backlog=511 ssl http2;

        # PEM sha: 20103470e60aa51135afee9244c7c831559f04b8
        ssl_certificate                         /ingress-controller/ssl/default-fake-certificate.pem;
        ssl_certificate_key                     /ingress-controller/ssl/default-fake-certificate.pem;

        more_set_headers                        "Strict-Transport-Security: max-age=15724800; includeSubDomains;";

        location / {

            set $proxy_upstream_name "upstream-default-backend";

            set $namespace      "";
            set $ingress_name   "";
            set $service_name   "";

            port_in_redirect off;

            client_max_body_size                    "1m";

            proxy_set_header Host                   $best_http_host;

            # Pass the extracted client certificate to the backend

            proxy_set_header ssl-client-cert        "";
            proxy_set_header ssl-client-verify      "";
            proxy_set_header ssl-client-dn          "";

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;
            proxy_set_header                        Connection        $connection_upgrade;

            proxy_set_header X-Real-IP              $the_real_ip;

            proxy_set_header X-Forwarded-For        $the_real_ip;

            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            proxy_set_header X-Original-URI         $request_uri;
            proxy_set_header X-Scheme               $pass_access_scheme;

            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";

            # Custom headers to proxied server

            proxy_connect_timeout                   5s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;

            proxy_redirect                          off;

            proxy_buffering                         off;
            proxy_buffer_size                       "4k";
            proxy_buffers                           4 "4k";
            proxy_request_buffering                 "on";

            proxy_http_version                      1.1;

            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout invalid_header http_502 http_503 http_504;

            proxy_pass http://upstream-default-backend;

        }

        # health checks in cloud providers require the use of port 80
        location /healthz {
            access_log off;
            return 200;
        }

        # this is required to avoid error if nginx is being monitored
        # with an external software (like sysdig)
        location /nginx_status {
            allow 127.0.0.1;
            allow ::1;
            deny all;

            access_log off;
            stub_status on;
        }

    }
    ## end server _

    ## start server spring.maotai.com
    server {
        server_name spring.maotai.com ;

        listen 80;

        listen [::]:80;

        set $proxy_upstream_name "-";

        location / {

            set $proxy_upstream_name "kube-public-spring-web";

            set $namespace      "kube-public";
            set $ingress_name   "spring";
            set $service_name   "spring";

            port_in_redirect off;

            client_max_body_size                    "1m";

            proxy_set_header Host                   $best_http_host;

            # Pass the extracted client certificate to the backend

            proxy_set_header ssl-client-cert        "";
            proxy_set_header ssl-client-verify      "";
            proxy_set_header ssl-client-dn          "";

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;
            proxy_set_header                        Connection        $connection_upgrade;

            proxy_set_header X-Real-IP              $the_real_ip;

            proxy_set_header X-Forwarded-For        $the_real_ip;

            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            proxy_set_header X-Original-URI         $request_uri;
            proxy_set_header X-Scheme               $pass_access_scheme;

            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";

            # Custom headers to proxied server

            proxy_connect_timeout                   5s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;

            proxy_redirect                          off;

            proxy_buffering                         off;
            proxy_buffer_size                       "4k";
            proxy_buffers                           4 "4k";
            proxy_request_buffering                 "on";

            proxy_http_version                      1.1;

            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout invalid_header http_502 http_503 http_504;

            proxy_pass http://kube-public-spring-web;

        }

    }
    ## end server spring.maotai.com

    # default server, used for NGINX healthcheck and access to nginx stats
    server {
        # Use the port 18080 (random value just to avoid known ports) as default port for nginx.
        # Changing this value requires a change in:
        # https://github.com/kubernetes/ingress-nginx/blob/master/controllers/nginx/pkg/cmd/controller/nginx.go
        listen 18080 default_server reuseport backlog=511;
        listen [::]:18080 default_server reuseport backlog=511;
        set $proxy_upstream_name "-";

        location /healthz {
            access_log off;
            return 200;
        }

        location /nginx_status {
            set $proxy_upstream_name "internal";
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;

        }
        location / {

            set $proxy_upstream_name "upstream-default-backend";
            proxy_pass          http://upstream-default-backend;
        }
    }
}

stream {
    log_format log_stream [$time_local] $protocol $status $bytes_sent $bytes_received $session_time;
    access_log /var/log/nginx/access.log log_stream;
    error_log  /var/log/nginx/error.log;

    # TCP services
    upstream tcp-3306-default-mysql-3306 {
        server                  10.2.30.2:3306;
    }
    server {
        listen                  3306;
        listen                  [::]:3306;
        proxy_timeout           600s;
        proxy_pass              tcp-3306-default-mysql-3306;
    }

    # UDP services
    upstream udp-53-kube-system-kube-dns-53 {
        server                  10.2.54.9:53;
    }
    server {
        listen                  53 udp;
        listen                  [::]:53 udp;
        proxy_responses         1;
        proxy_timeout           600s;
        proxy_pass              udp-53-kube-system-kube-dns-53;
    }
}
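A combined check of the three mappings above against the controller node (192.168.x.x is a placeholder):

curl -H 'Host: spring.maotai.com' http://192.168.x.x/              # L7 vhost -> kube-public/spring
mysql -h 192.168.x.x -P 3306 -u root -p                            # L4 TCP   -> default/mysql
host -t A kubernetes.default.svc.cluster.local 192.168.x.x         # L4 UDP   -> kube-system/kube-dns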

Backends show up in the nginx-ingress VTS page, but the server zones do not

It turned out to be caused by a proxy configured in the local IE browser's connection settings.

