Envoy Part 5: Envoy Dynamic Configuration


I. Introduction to Envoy dynamic configuration

Dynamic resources are the configurations that Envoy discovers through the xDS protocol. The configuration data lives on a host known as the management server and is exposed through the xDS APIs. Below is a basic skeleton for a purely dynamic configuration.

{
"lds_config": "{...}",
"cds_config": "{...}",
"ads_config": "{...}"
}

The xDS APIs provide Envoy's dynamic configuration mechanism; they are also known as the Data Plane API.

Envoy supports three mechanisms for dynamically discovering configuration; the discovery services and their corresponding APIs are collectively called the xDS APIs.

1) Filesystem-based discovery: specify a filesystem path to watch.
2) Discovery by querying one or more management servers: requests are sent as DiscoveryRequest protocol messages, and the server answers with DiscoveryResponse protocol messages (a schematic exchange is sketched after this list):
(1) gRPC service: start a gRPC stream;
(2) REST service: poll a REST-JSON URL.
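
Regardless of transport, the exchange uses the same pair of messages. Below is a schematic sketch of the two message bodies in YAML form; the field names follow the v3 xDS protos, while the concrete values (node id, version, nonce) are made-up placeholders for illustration.

# DiscoveryRequest (Envoy -> management server)
version_info: ""      # empty on the first request; echoes the last accepted version afterwards
node:
  id: envoy_front_proxy
  cluster: webcluster
resource_names: []    # an empty list means "all resources of this type"
type_url: type.googleapis.com/envoy.config.cluster.v3.Cluster

# DiscoveryResponse (management server -> Envoy)
version_info: "1"
resources:
- "@type": type.googleapis.com/envoy.config.cluster.v3.Cluster
  name: webcluster
type_url: type.googleapis.com/envoy.config.cluster.v3.Cluster
nonce: "abc123"       # echoed back by Envoy to ACK/NACK this response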

v3 xDS supports the following resource types:

envoy.config.listener.v3.Listener
envoy.config.route.v3.RouteConfiguration
envoy.config.route.v3.ScopedRouteConfiguration
envoy.config.route.v3.VirtualHost
envoy.config.cluster.v3.Cluster
envoy.config.endpoint.v3.ClusterLoadAssignment
envoy.extensions.transport_sockets.tls.v3.Secret
envoy.service.runtime.v3.Runtime

II. The xDS APIs

Envoy's xDS APIs are served by a backend management server and cover LDS, CDS, RDS, SRDS (Scoped Route), VHDS (Virtual Host), EDS, SDS, RTDS (Runtime), and so on.

1) All of these APIs offer eventual consistency and do not interact with one another;
2) Some higher-level operations (for example an A/B deployment of a service) require sequencing to prevent traffic from being dropped, so when one management server serves multiple API types, the Aggregated Discovery Service (ADS) API is needed as well.
   ADS allows all the other APIs to be marshalled over a single gRPC bidirectional stream from a single management server, enabling deterministic ordering of operations.

The xDS APIs, including ADS, also support an incremental (delta) transfer mechanism, as sketched below.
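
Selecting the incremental variant is a matter of the api_type in the ConfigSource. As a minimal sketch (assuming a management server reachable through a cluster named xds_cluster, as in the examples later in this article), only the api_type changes compared with the full-state gRPC transport:

lds_config:
  resource_api_version: V3
  api_config_source:
    api_type: DELTA_GRPC # incremental variant of the gRPC transport (GRPC would be full-state)
    transport_api_version: V3
    grpc_services:
    - envoy_grpc:
        cluster_name: xds_cluster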

 

III. The Bootstrap node

A single management server instance may need to answer resource discovery requests from many different Envoy instances at the same time.

1) The configuration on the management server must be adapted to each individual Envoy instance;
2) When an Envoy instance requests configuration to be discovered, it reports information about itself in the request:
(1) for example its id, cluster, metadata, and locality;
(2) this information is defined in the bootstrap configuration file.

It is configured in the dedicated top-level node {…} section:

node:
  id: ... # An opaque node identifier for the Envoy node.
  cluster: ... # Defines the local service cluster name where Envoy is running.
  metadata: {...} # Opaque metadata extending the node identifier. Envoy passes this directly to the management server.
  locality: # Locality specifying where the Envoy instance is running.
    region: ...
    zone: ...
    sub_zone: ...
  user_agent_name: ... # Free-form string that identifies the entity requesting config, e.g. "envoy" or "grpc".
  user_agent_version: ... # Free-form string that identifies the version of the entity requesting config, e.g. "1.12.2", "abcd1234", or "SpecialEnvoyBuild".
  user_agent_build_version: # Structured version of the entity requesting config.
    version: ...
    metadata: {...}
  extensions: [] # List of extensions and their versions supported by the node.
  client_features: []
  listening_addresses: [] # Known listening ports on the node, as a generic hint to the management server for filtering which listeners to return.
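
For concreteness, a minimal filled-in node block might look as follows; the id and cluster values match the labs later in this article, while the metadata and locality values are arbitrary placeholders:

node:
  id: envoy_front_proxy
  cluster: webcluster
  metadata:
    stage: testing # opaque to Envoy; forwarded to the management server as-is
  locality:
    region: region-a
    zone: zone-1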

IV. API flow

1. For a typical HTTP routing scenario, the core resource types that the xDS management server must configure for its clients (Envoy instances) are Listener, RouteConfiguration, Cluster, and ClusterLoadAssignment. Each Listener resource may point to a RouteConfiguration resource, which may point to one or more Cluster resources, and each Cluster resource may point to a ClusterLoadAssignment resource.

2. An Envoy instance requests all Listener and Cluster resources at startup and then fetches the RouteConfiguration and ClusterLoadAssignment resources that those Listeners and Clusters depend on. In this scenario the Listener and Cluster resources are the "roots" of the client's configuration tree and can therefore be loaded in parallel.

3. Non-proxy clients such as gRPC may instead request only the Listener resources they are interested in at startup, then load the RouteConfiguration resources required by those Listeners, followed by the Cluster resources pointed to by those RouteConfigurations and the ClusterLoadAssignment resources those Clusters depend on. In this scenario the Listener resources are the root of the client's entire configuration tree, as sketched below.
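
Either way, the resource dependencies can be sketched as follows; in the proxy case both rows are loaded in parallel as roots, while non-proxy clients walk the whole chain starting from Listener alone:

Listener (LDS) --> RouteConfiguration (RDS) --> references Clusters by name
Cluster (CDS)  --> ClusterLoadAssignment (EDS)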

V. Envoy configuration modes

Envoy's architecture supports very flexible configuration: simple deployments can use purely static configuration, while more complex deployments can progressively add the dynamic configuration mechanisms they need.

1. Purely static configuration: the user supplies the listeners, filter chains, clusters, and HTTP routes (for HTTP proxy scenarios); upstream endpoints can only be discovered through DNS, and configuration reloads must go through the built-in hot restart;

2. EDS only: endpoint discovery via EDS works around the limitations of DNS (such as the maximum number of records in a response);

3. EDS and CDS: CDS lets Envoy add, update, and remove upstream clusters gracefully, so Envoy need not know all upstream clusters at initial configuration time;

4. EDS, CDS, and RDS: dynamically discovered route configuration; used together with EDS and CDS, RDS gives users the ability to build complex routing topologies (traffic shifting, blue/green deployments, and so on);

5. EDS, CDS, RDS, and LDS: dynamically discovered listener configuration, including the embedded filter chains; with these four discovery services enabled, Envoy almost never needs a hot restart except for rare configuration changes, certificate rotation, or binary upgrades;

6. EDS, CDS, RDS, LDS, and SDS: dynamically discovered listener certificates, private keys, and TLS session tickets, as well as certificate-validation configuration (trusted root certificates, revocation mechanisms, and so on).

VI. Envoy resource configuration sources (ConfigSource)

1. A configuration source (ConfigSource) specifies where the configuration data for a resource comes from; it supplies configuration for resources such as Listener, Cluster, Route, Endpoint, Secret, and VirtualHost.

2. Currently, a resource's ConfigSource in Envoy must be exactly one of path, api_config_source, or ads (the three forms are sketched after this list).

3. With api_config_source or ads, the data comes from an xDS API server, that is, a management server.
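
The three mutually exclusive forms side by side, as a sketch; the file path and the xds_cluster name are assumptions matching the examples later in this article:

# 1) Filesystem subscription
lds_config:
  path: /etc/envoy/conf.d/lds.yaml

# 2) Management server via api_config_source (gRPC shown; REST is analogous)
lds_config:
  resource_api_version: V3
  api_config_source:
    api_type: GRPC
    transport_api_version: V3
    grpc_services:
    - envoy_grpc:
        cluster_name: xds_cluster

# 3) Aggregated Discovery Service
lds_config:
  resource_api_version: V3
  ads: {}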

 

VII. Filesystem-based subscription

The simplest way to provide dynamic configuration to Envoy is to place it at a file path explicitly specified in a ConfigSource.

1) Envoy uses inotify (kqueue on macOS) to watch the file for changes and, on update, parses the DiscoveryResponse message in the file.
2) Binary protobuf, JSON, YAML, and proto text are all supported DiscoveryResponse formats.

Notes:
1) Apart from statistics counters and logs, there is no ACK/NACK mechanism for filesystem-subscription updates;
2) If a configuration update is rejected, the last valid xDS configuration remains in effect (see the file-update sketch below).
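
Because the watch is inode-based, a content update alone may not be noticed reliably; the labs later in this section therefore finish every edit with an atomic rename. A sketch of the update step (file names match the EDS lab below):

# overwrite the watched file with the new DiscoveryResponse ...
cat eds.yaml.v2 > eds.yaml
# ... then rename it away and back to force a fresh inode event
mv eds.yaml temp && mv temp eds.yaml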

1. A mostly dynamic Envoy configuration based on EDS

Taking EDS as an example, the Cluster itself is defined statically while its endpoints are discovered dynamically through EDS.

Cluster definition format

# Endpoint configuration format within a Cluster

clusters:
- name:
  ...
  eds_cluster_config:
    service_name:
    eds_config:
      path: ... # ConfigSource; one of path, api_config_source, or ads

Cluster configuration

# A purely static Cluster definition looks like this
clusters:
- name: webcluster
  connect_timeout: 0.25s
  type: STATIC # static type
  lb_policy: ROUND_ROBIN
  load_assignment:
    cluster_name: webcluster
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: 172.31.11.11
              port_value: 8080

# The same Cluster using EDS
clusters:
- name: targetCluster
  connect_timeout: 0.25s
  lb_policy: ROUND_ROBIN
  type: EDS # EDS type
  eds_cluster_config:
    service_name: webcluster
    eds_config:
      path: '/etc/envoy/eds.yaml' # path of the file to subscribe to
# Note: a .conf suffix means the resources must be defined in JSON, while a .yaml suffix means YAML; also, in dynamic configurations each Envoy instance needs a unique id.

EDS configuration

1) The file /etc/envoy/eds.yaml provides the response in DiscoveryResponse format; for example, the configuration below corresponds to a single upstream server at 172.31.11.11 providing the service.

2) Since the file carries a .yaml suffix, the response body is given in YAML format.

resources:
- "@type": type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment
  cluster_name: webcluster
  endpoints:
  - lb_endpoints:
    - endpoint:
        address:
          socket_address:
            address: 172.31.11.11
            port_value: 8080

隨后,修改該文件,將172.31.11.12也添加進后端端點列表中,模擬配置變動

    - endpoint:
        address:
          socket_address:
            address: 172.31.11.12
            port_value: 8080

2. A mostly dynamic Envoy configuration based on LDS and CDS

1) Each Listener definition is kept in one file in standard DiscoveryResponse format.

2) Each Cluster definition is likewise kept in another file, also in standard DiscoveryResponse format.

The Envoy bootstrap file then references both files, as in the example below:

node:
  id: envoy_front_proxy 
  cluster: MageEdu_Cluster

admin:
  profile_path: /tmp/envoy.prof 
  access_log_path: /tmp/admin_access.log
  address: 
    socket_address: 
      address: 0.0.0.0
      port_value: 9901

dynamic_resources:
  lds_config:
    path: /etc/envoy/conf.d/lds.yaml 
  cds_config: 
    path: /etc/envoy/conf.d/cds.yaml 

An example LDS configuration in DiscoveryResponse format:

resources:
- "@type": type.googleapis.com/envoy.config.listener.v3.Listener
  name: listener_http
  address:
    socket_address: { address: 0.0.0.0, port_value: 80 }
  filter_chains:
  - filters:
    - name: envoy.filters.network.http_connection_manager
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
        stat_prefix: ingress_http
        route_config:
          name: local_route
          virtual_hosts:
          - name: local_service
            domains: ["*"]
            routes:
            - match:
                prefix: "/"
              route:
                cluster: webcluster
        http_filters:
        - name: envoy.filters.http.router

An example CDS configuration in DiscoveryResponse format:

resources:
- "@type": type.googleapis.com/envoy.config.cluster.v3.Cluster
  name: webcluster
  connect_timeout: 1s
  type: STRICT_DNS
  load_assignment:
    cluster_name: webcluster
    endpoints:
    - lb_endpoints:
      - endpoint:
          address: 
            socket_address: 
              address: webserver01
              port_value: 8080
      - endpoint:
          address: 
            socket_address: 
              address: webserver02
              port_value: 8080
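
Once both files are in place, the dynamically loaded resources can be checked through the admin interface; the commands below mirror the verification steps used in the labs later in this article (the 9901 admin port comes from the bootstrap above, and the address assumes you query from the Envoy host itself):

curl http://127.0.0.1:9901/listeners      # should list listener_http
curl http://127.0.0.1:9901/clusters       # should list webcluster and its endpoints
curl -s http://127.0.0.1:9901/config_dump # full dump of static and dynamic resources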

VIII. gRPC-based dynamic configuration

1. Introduction to the gRPC transport

1) Envoy supports specifying an independent gRPC ApiConfigSource for each xDS API, pointing at an upstream cluster that corresponds to a management server.

(1) This opens an independent bidirectional gRPC stream for each xDS resource type, possibly served by different management servers.
(2) Each stream maintains its own resource versions; there is no version sharing across resource types.
(3) Without ADS, each resource type may therefore be at a different version, since the Envoy API allows pointing at different EDS/RDS resource configurations with different ConfigSources.

 

2) API delivery uses an eventual-consistency mechanism.

2. gRPC dynamic configuration format

Taking LDS as an example, it configures Listeners to be discovered and loaded dynamically; the routes inside can be provided directly by the discovered Listener, or discovered separately via RDS.

The LDS configuration format is shown below; CDS and the other services are configured similarly:

dynamic_resources:
  lds_config:
    resource_api_version: ... # API version of the xDS resources; use V3 for Envoy 1.19 and later.
    api_config_source:
      api_type: ... # How the API is reached: REST, GRPC, or DELTA_GRPC.
      transport_api_version: ... # API version of the xDS transport protocol; use V3 for Envoy 1.19 and later.
      rate_limit_settings: {...} # Rate limiting.
      grpc_services: # One or more gRPC services providing the resources.
      - envoy_grpc: # Envoy's built-in gRPC client; envoy_grpc and google_grpc are mutually exclusive.
          cluster_name: ... # Name of the cluster hosting the gRPC management server.
        # google_grpc: {...} # Google's C++ gRPC client (the alternative to envoy_grpc).
        timeout: ... # gRPC timeout.
     
Note:
A management server providing the gRPC API (the control plane) must itself be defined as a cluster on the Envoy side, and the Envoy instance then requests it via the xDS API:
(1) these management server clusters are usually provided as static resources;
(2) this is analogous to DHCP, where the server address must be configured statically and cannot itself be obtained through DHCP.

3. Subscribing to a gRPC management server

gRPC-based subscription requests configuration information from a dedicated management server.

The example configuration below uses lds_config and cds_config to dynamically fetch the Listener and Cluster configuration, respectively.

node:
  id: envoy_front_proxy 
  cluster: webcluster
  
admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9901

dynamic_resources:
  lds_config: # dynamic Listener discovery
    resource_api_version: V3
    api_config_source:
      api_type: GRPC # use the gRPC transport
      transport_api_version: V3
      grpc_services:
      - envoy_grpc:
          cluster_name: xds_cluster # the cluster hosting the gRPC management server

  cds_config: # dynamic Cluster discovery
    resource_api_version: V3
    api_config_source:
      api_type: GRPC # use the gRPC transport
      transport_api_version: V3
      grpc_services:
      - envoy_grpc:
          cluster_name: xds_cluster # the cluster hosting the gRPC management server

# xds_cluster must be configured statically
static_resources:
  clusters:
    - name: xds_cluster
      connect_timeout: 0.25s
      type: STRICT_DNS
      # Used to provide extension-specific protocol options for upstream connections. 
      typed_extension_protocol_options: 
        envoy.extensions.upstreams.http.v3.HttpProtocolOptions: 
          "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions 
          explicit_http_config:
            http2_protocol_options: {}
      lb_policy: ROUND_ROBIN
      load_assignment:
        cluster_name: xds_cluster
        endpoints:
        - lb_endpoints:
          - endpoint:
              address:
                socket_address:
                  address: xdsserver-IP
                  port_value: 18000

4. ADS

Guaranteeing that no traffic is dropped while a management server distributes resources through the interaction sequence above is challenging; ADS allows a single management server to deliver all API updates over a single gRPC stream.

1) Combined with a carefully planned update ordering, ADS can avoid traffic loss during updates;
2) With ADS, multiple independent DiscoveryRequest/DiscoveryResponse sequences are multiplexed over a single stream, distinguished by type URL.

 

The configuration below uses ADS to dynamically fetch both the Listener and the Cluster configuration:

node:
  id: envoy_front_proxy 
  cluster: webcluster
  
admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9901

dynamic_resources:
  ads_config:
    api_type: GRPC
    transport_api_version: V3
    grpc_services:
    - envoy_grpc:
        cluster_name: xds_cluster
    set_node_on_first_message_only: true

  cds_config:
    resource_api_version: V3
    ads: {}
  lds_config:
    resource_api_version: V3
    ads: {}
 


# xds_cluster must be configured statically
static_resources:
  clusters:
    - name: xds_cluster
      connect_timeout: 0.25s
      type: STRICT_DNS
      # Used to provide extension-specific protocol options for upstream connections. 
      typed_extension_protocol_options: 
        envoy.extensions.upstreams.http.v3.HttpProtocolOptions: 
          "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions 
          explicit_http_config:
            http2_protocol_options: {}
      lb_policy: ROUND_ROBIN
      load_assignment:
        cluster_name: xds_cluster
        endpoints:
        - lb_endpoints:
          - endpoint:
              address:
                socket_address:
                  address: xdsserver-IP
                  port_value: 18000

IX. REST-JSON polling subscription

1. Introduction to REST-JSON

1) Synchronous (long) polling via REST endpoints can also be used for the singleton xDS APIs.

2) The message ordering is similar, except that no persistent stream to the management server is maintained.

3) At most one outstanding request is expected at any point in time, so the response nonce is optional in REST-JSON. The proto3 canonical JSON mapping is used to encode the DiscoveryRequest and DiscoveryResponse messages (a manual polling sketch follows this list).

4) ADS is not available with REST-JSON polling.

5) When the polling period is set to a small value to achieve long polling, the management server should also avoid sending a DiscoveryResponse unless the underlying resources have actually changed.
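
Under the hood, each poll is a plain HTTP POST of a JSON-encoded DiscoveryRequest to a type-specific path. A hedged sketch of polling CDS by hand, assuming the v3 REST path convention and a management server at xdsserver:18000 as in the labs below:

curl -s -X POST http://xdsserver:18000/v3/discovery:clusters \
  -H 'Content-Type: application/json' \
  -d '{"node": {"id": "envoy_front_proxy", "cluster": "webcluster"}}'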

2. Taking LDS, which discovers and configures Listeners, as an example

LDS configuration format for a REST subscription; CDS and the others are similar:

dynamic_resources:
  lds_config:
    resource_api_version: ... # API version of the xDS resources; only V3 is supported from v1.19 on.
    api_config_source:
      transport_api_version: ... # API version of the xDS transport protocol; only V3 is supported from v1.19 on.
      api_type: ... # How the API is reached: REST, GRPC, or DELTA_GRPC.
      cluster_names: ... # Names of the clusters providing the service; usable only with the REST api_type. Multiple clusters provide redundancy and are cycled through on failure.
      refresh_delay: ... # Polling interval for the REST API.
      request_timeout: ... # REST API request timeout; defaults to 1s.

Note: a management server providing the REST API must likewise be defined as a cluster on the Envoy side and is then queried by LDS and the other discovery services; these management server clusters must be provided as static configuration.

3. Subscribing to a REST management server

The configuration below uses REST-based cds_config and lds_config to dynamically fetch the Cluster and Listener configuration, respectively:

node:
  id: envoy_front_proxy
  cluster: webcluster

admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9901

dynamic_resources:
  cds_config:
    resource_api_version: V3
    api_config_source:
      api_type: REST
      transport_api_version: V3
      refresh_delay: {nanos: 500000000} # 1/2s
      cluster_names:
      - xds_cluster

  lds_config:
    resource_api_version: V3
    api_config_source:
      api_type: REST
      transport_api_version: V3
      refresh_delay: {nanos: 500000000} # 1/2s
      cluster_names:
      - xds_cluster
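
As with gRPC, the xds_cluster referenced above must itself be defined as a static resource. A minimal sketch follows; unlike the gRPC case, no HTTP/2 protocol options are required for REST, and the server address here is an assumption:

static_resources:
  clusters:
  - name: xds_cluster
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: xds_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: xdsserver
                port_value: 18000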

X. Lab examples

1. cluster-static-dns-discovery

Lab environment

Three services:

envoy: Front Proxy; address assigned dynamically by docker-compose
webserver01: the first backend service; address assigned dynamically by docker-compose, with the name webserver01 resolving to it
webserver02: the second backend service; address assigned dynamically by docker-compose, with the name webserver02 resolving to it

envoy.yaml

admin:
  access_log_path: "/dev/null"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9901

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: web_service_1
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: local_cluster }
          http_filters:
          - name: envoy.filters.http.router
          
  clusters:
  - name: local_cluster
    connect_timeout: 0.25s
    type: STRICT_DNS
    dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: local_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: webserver01, port_value: 8080 }
        - endpoint:
            address:
              socket_address: { address: webserver02, port_value: 8080 }

docker-compose.yaml

version: '3.3'
  
services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.20.0
    environment:
      - ENVOY_UID=0
    volumes:
    - ./envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        aliases:
        - front-proxy
    depends_on:
    - webserver01
    - webserver02

  webserver01:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
    hostname: webserver01
    networks:
      envoymesh:
        aliases:
        - webserver01

  webserver02:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
    hostname: webserver02
    networks:
      envoymesh:
        aliases:
        - webserver02

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.10.0/24

Verification

docker-compose up

In a second terminal window, verify:

root@test:~# front_proxy_ip=$(docker container inspect --format '{{ $network := index .NetworkSettings.Networks "cluster-static-dns-discovery_envoymesh" }}{{ $network.IPAddress}}' cluster-static-dns-discovery_envoy_1)
root@test:~# echo $front_proxy_ip 
172.31.10.4
root@test:~# curl $front_proxy_ip
iKubernetes demoapp v1.0 !! ClientIP: 172.31.10.4, ServerName: webserver01, ServerIP: 172.31.10.2!
root@test:~# curl $front_proxy_ip
iKubernetes demoapp v1.0 !! ClientIP: 172.31.10.4, ServerName: webserver02, ServerIP: 172.31.10.3!

# The admin interface exposes the cluster state, in particular the details of each discovered endpoint
root@test:~# curl http://${front_proxy_ip}:9901/clusters
local_cluster::observability_name::local_cluster
local_cluster::default_priority::max_connections::1024
local_cluster::default_priority::max_pending_requests::1024
local_cluster::default_priority::max_requests::1024
local_cluster::default_priority::max_retries::3
local_cluster::high_priority::max_connections::1024
local_cluster::high_priority::max_pending_requests::1024
local_cluster::high_priority::max_requests::1024
local_cluster::high_priority::max_retries::3
local_cluster::added_via_api::false
local_cluster::172.31.10.2:8080::cx_active::0
local_cluster::172.31.10.2:8080::cx_connect_fail::0
local_cluster::172.31.10.2:8080::cx_total::2
local_cluster::172.31.10.2:8080::rq_active::0
local_cluster::172.31.10.2:8080::rq_error::0
local_cluster::172.31.10.2:8080::rq_success::2
local_cluster::172.31.10.2:8080::rq_timeout::0
local_cluster::172.31.10.2:8080::rq_total::2
local_cluster::172.31.10.2:8080::hostname::webserver01
local_cluster::172.31.10.2:8080::health_flags::healthy
local_cluster::172.31.10.2:8080::weight::1
local_cluster::172.31.10.2:8080::region::
local_cluster::172.31.10.2:8080::zone::
local_cluster::172.31.10.2:8080::sub_zone::
local_cluster::172.31.10.2:8080::canary::false
local_cluster::172.31.10.2:8080::priority::0
local_cluster::172.31.10.2:8080::success_rate::-1.0
local_cluster::172.31.10.2:8080::local_origin_success_rate::-1.0
local_cluster::172.31.10.3:8080::cx_active::0
local_cluster::172.31.10.3:8080::cx_connect_fail::0
local_cluster::172.31.10.3:8080::cx_total::3
local_cluster::172.31.10.3:8080::rq_active::0
local_cluster::172.31.10.3:8080::rq_error::0
local_cluster::172.31.10.3:8080::rq_success::3
local_cluster::172.31.10.3:8080::rq_timeout::0
local_cluster::172.31.10.3:8080::rq_total::3
local_cluster::172.31.10.3:8080::hostname::webserver02
local_cluster::172.31.10.3:8080::health_flags::healthy
local_cluster::172.31.10.3:8080::weight::1
local_cluster::172.31.10.3:8080::region::
local_cluster::172.31.10.3:8080::zone::
local_cluster::172.31.10.3:8080::sub_zone::
local_cluster::172.31.10.3:8080::canary::false
local_cluster::172.31.10.3:8080::priority::0
local_cluster::172.31.10.3:8080::success_rate::-1.0
local_cluster::172.31.10.3:8080::local_origin_success_rate::-1.0

2. eds-filesystem

Lab environment

Five services:

envoy: Front Proxy, at 172.31.11.2
webserver01: the first backend service
webserver01-sidecar: the sidecar proxy for the first backend service, at 172.31.11.11
webserver02: the second backend service
webserver02-sidecar: the sidecar proxy for the second backend service, at 172.31.11.12

front-envoy.yaml

node:
  id: envoy_front_proxy
  cluster: MageEdu_Cluster

admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
       address: 0.0.0.0
       port_value: 9901

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: web_service_01
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: webcluster }
          http_filters:
          - name: envoy.filters.http.router

  clusters:
  - name: webcluster
    connect_timeout: 0.25s
    type: EDS
    lb_policy: ROUND_ROBIN
    eds_cluster_config:
      service_name: webcluster
      eds_config:
        path: '/etc/envoy/eds.conf.d/eds.yaml' 

envoy-sidecar-proxy.yaml

admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
       address: 0.0.0.0
       port_value: 9901

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: local_cluster }
          http_filters:
          - name: envoy.filters.http.router

  clusters:
  - name: local_cluster
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: local_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 8080 }

Files in the eds.conf.d directory

eds.yaml

resources:
- "@type": type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment
  cluster_name: webcluster
  endpoints:
  - lb_endpoints:
    - endpoint:
        address:
          socket_address:
            address: 172.31.11.11
            port_value: 8080

eds.yaml.v1

resources:
- "@type": type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment
  cluster_name: webcluster
  endpoints:
  - lb_endpoints:
    - endpoint:
        address:
          socket_address:
            address: 172.31.11.11
            port_value: 8080

eds.yaml.v2

version_info: '2'
resources:
- "@type": type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment
  cluster_name: webcluster
  endpoints:
  - lb_endpoints:
    - endpoint:
        address:
          socket_address:
            address: 172.31.11.11
            port_value: 8080
    - endpoint:
        address:
          socket_address:
            address: 172.31.11.12
            port_value: 8080

docker-compose.yaml

version: '3.3'
  
services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.20.0
    environment:
      - ENVOY_UID=0
    volumes:
    - ./front-envoy.yaml:/etc/envoy/envoy.yaml
    - ./eds.conf.d/:/etc/envoy/eds.conf.d/
    networks:
      envoymesh:
        ipv4_address: 172.31.11.2
        aliases:
        - front-proxy
    depends_on:
    - webserver01-sidecar
    - webserver02-sidecar

  webserver01-sidecar:
    image: envoyproxy/envoy-alpine:v1.20.0
    environment:
      - ENVOY_UID=0
    volumes:
    - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    hostname: webserver01
    networks:
      envoymesh:
        ipv4_address: 172.31.11.11
        aliases:
        - webserver01-sidecar

  webserver01:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
      - HOST=127.0.0.1
    network_mode: "service:webserver01-sidecar"
    depends_on:
    - webserver01-sidecar

  webserver02-sidecar:
    image: envoyproxy/envoy-alpine:v1.20.0
    environment:
      - ENVOY_UID=0
    volumes:
    - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    hostname: webserver02
    networks:
      envoymesh:
        ipv4_address: 172.31.11.12
        aliases:
        - webserver02-sidecar
        
  webserver02:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
      - HOST=127.0.0.1
    network_mode: "service:webserver02-sidecar"
    depends_on:
    - webserver02-sidecar

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.11.0/24

Verification

docker-compose up

In a second terminal window:

# Inspect the endpoints in the cluster
root@test:~# curl 172.31.11.2:9901/clusters
webcluster::observability_name::webcluster
webcluster::default_priority::max_connections::1024
webcluster::default_priority::max_pending_requests::1024
webcluster::default_priority::max_requests::1024
webcluster::default_priority::max_retries::3
webcluster::high_priority::max_connections::1024
webcluster::high_priority::max_pending_requests::1024
webcluster::high_priority::max_requests::1024
webcluster::high_priority::max_retries::3
webcluster::added_via_api::false
webcluster::172.31.11.11:8080::cx_active::0
webcluster::172.31.11.11:8080::cx_connect_fail::0
webcluster::172.31.11.11:8080::cx_total::0
webcluster::172.31.11.11:8080::rq_active::0
webcluster::172.31.11.11:8080::rq_error::0
webcluster::172.31.11.11:8080::rq_success::0
webcluster::172.31.11.11:8080::rq_timeout::0
webcluster::172.31.11.11:8080::rq_total::0
webcluster::172.31.11.11:8080::hostname::
webcluster::172.31.11.11:8080::health_flags::healthy
webcluster::172.31.11.11:8080::weight::1
webcluster::172.31.11.11:8080::region::
webcluster::172.31.11.11:8080::zone::
webcluster::172.31.11.11:8080::sub_zone::
webcluster::172.31.11.11:8080::canary::false
webcluster::172.31.11.11:8080::priority::0
webcluster::172.31.11.11:8080::success_rate::-1.0
webcluster::172.31.11.11:8080::local_origin_success_rate::-1.0



# Enter the front proxy envoy container interactively and update eds.yaml, adding the other endpoint to the file;
root@test:~# docker exec -it eds-filesystem_envoy_1 /bin/sh
/ # cd /etc/envoy/eds.conf.d/
/etc/envoy/eds.conf.d # cat eds.yaml.v2 > eds.yaml
# Run the following command to force the change to register, so that the inode-based watch is triggered
/etc/envoy/eds.conf.d # mv eds.yaml temp && mv temp eds.yaml

# Inspect the cluster endpoints again
root@test:~# curl 172.31.11.2:9901/clusters
webcluster::observability_name::webcluster
webcluster::default_priority::max_connections::1024
webcluster::default_priority::max_pending_requests::1024
webcluster::default_priority::max_requests::1024
webcluster::default_priority::max_retries::3
webcluster::high_priority::max_connections::1024
webcluster::high_priority::max_pending_requests::1024
webcluster::high_priority::max_requests::1024
webcluster::high_priority::max_retries::3
webcluster::added_via_api::false
webcluster::172.31.11.11:8080::cx_active::0
webcluster::172.31.11.11:8080::cx_connect_fail::0
webcluster::172.31.11.11:8080::cx_total::0
webcluster::172.31.11.11:8080::rq_active::0
webcluster::172.31.11.11:8080::rq_error::0
webcluster::172.31.11.11:8080::rq_success::0
webcluster::172.31.11.11:8080::rq_timeout::0
webcluster::172.31.11.11:8080::rq_total::0
webcluster::172.31.11.11:8080::hostname::
webcluster::172.31.11.11:8080::health_flags::healthy
webcluster::172.31.11.11:8080::weight::1
webcluster::172.31.11.11:8080::region::
webcluster::172.31.11.11:8080::zone::
webcluster::172.31.11.11:8080::sub_zone::
webcluster::172.31.11.11:8080::canary::false
webcluster::172.31.11.11:8080::priority::0
webcluster::172.31.11.11:8080::success_rate::-1.0
webcluster::172.31.11.11:8080::local_origin_success_rate::-1.0
webcluster::172.31.11.12:8080::cx_active::0
webcluster::172.31.11.12:8080::cx_connect_fail::0
webcluster::172.31.11.12:8080::cx_total::0
webcluster::172.31.11.12:8080::rq_active::0
webcluster::172.31.11.12:8080::rq_error::0
webcluster::172.31.11.12:8080::rq_success::0
webcluster::172.31.11.12:8080::rq_timeout::0
webcluster::172.31.11.12:8080::rq_total::0
webcluster::172.31.11.12:8080::hostname::
webcluster::172.31.11.12:8080::health_flags::healthy
webcluster::172.31.11.12:8080::weight::1
webcluster::172.31.11.12:8080::region::
webcluster::172.31.11.12:8080::zone::
webcluster::172.31.11.12:8080::sub_zone::
webcluster::172.31.11.12:8080::canary::false
webcluster::172.31.11.12:8080::priority::0
webcluster::172.31.11.12:8080::success_rate::-1.0
webcluster::172.31.11.12:8080::local_origin_success_rate::-1.0
# The new endpoint 172.31.11.12 has been added

3. lds-cds-filesystem

Lab environment

Five services:

envoy: Front Proxy, at 172.31.12.2
webserver01: the first backend service
webserver01-sidecar: the sidecar proxy for the first backend service, at 172.31.12.11
webserver02: the second backend service
webserver02-sidecar: the sidecar proxy for the second backend service, at 172.31.12.12

front-envoy.yaml

node:
  id: envoy_front_proxy
  cluster: MageEdu_Cluster

admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
       address: 0.0.0.0
       port_value: 9901

dynamic_resources:
  lds_config:
    path: /etc/envoy/conf.d/lds.yaml
  cds_config:
    path: /etc/envoy/conf.d/cds.yaml

envoy-sidecar-proxy.yaml

admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
       address: 0.0.0.0
       port_value: 9901

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: local_cluster }
          http_filters:
          - name: envoy.filters.http.router

  clusters:
  - name: local_cluster
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: local_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 8080 }

docker-compose.yaml

version: '3.3'
  
services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.20.0
    environment:
      - ENVOY_UID=0
    volumes:
    - ./front-envoy.yaml:/etc/envoy/envoy.yaml
    - ./conf.d/:/etc/envoy/conf.d/
    networks:
      envoymesh:
        ipv4_address: 172.31.12.2
        aliases:
        - front-proxy
    depends_on:
    - webserver01
    - webserver01-app
    - webserver02
    - webserver02-app

  webserver01:
    #image: envoyproxy/envoy-alpine:v1.18-latest
    image: envoyproxy/envoy-alpine:v1.20.0
    environment:
      - ENVOY_UID=0
    volumes:
    - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    hostname: webserver01
    networks:
      envoymesh:
        ipv4_address: 172.31.12.11
        aliases:
        - webserver01-sidecar

  webserver01-app:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
      - HOST=127.0.0.1
    network_mode: "service:webserver01"
    depends_on:
    - webserver01

  webserver02:
    #image: envoyproxy/envoy-alpine:v1.18-latest
    image: envoyproxy/envoy-alpine:v1.20.0
    environment:
      - ENVOY_UID=0
    volumes:
    - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    hostname: webserver02
    networks:
      envoymesh:
        ipv4_address: 172.31.12.12
        aliases:
        - webserver02-sidecar

  webserver02-app:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
      - HOST=127.0.0.1
    network_mode: "service:webserver02"
    depends_on:
    - webserver02

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.12.0/24

Files in the conf.d directory

cds.yaml

resources:
- "@type": type.googleapis.com/envoy.config.cluster.v3.Cluster
  name: webcluster
  connect_timeout: 1s
  type: STRICT_DNS
  load_assignment:
    cluster_name: webcluster
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: webserver01
              port_value: 8080
      #- endpoint:
      #    address:
      #      socket_address:
      #        address: webserver02
      #        port_value: 8080

lds.yaml

resources:
- "@type": type.googleapis.com/envoy.config.listener.v3.Listener
  name: listener_http
  address:
    socket_address: { address: 0.0.0.0, port_value: 80 }
  filter_chains:
  - filters:
    - name: envoy.filters.network.http_connection_manager
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
        stat_prefix: ingress_http
        route_config:
          name: local_route
          virtual_hosts:
          - name: local_service
            domains: ["*"]
            routes:
            - match:
                prefix: "/"
              route:
                cluster: webcluster
        http_filters:
        - name: envoy.filters.http.router

Verification

docker-compose up

In a second terminal window:

# Inspect the cluster information
root@test:~# curl 172.31.12.2:9901/clusters
webcluster::observability_name::webcluster
webcluster::default_priority::max_connections::1024
webcluster::default_priority::max_pending_requests::1024
webcluster::default_priority::max_requests::1024
webcluster::default_priority::max_retries::3
webcluster::high_priority::max_connections::1024
webcluster::high_priority::max_pending_requests::1024
webcluster::high_priority::max_requests::1024
webcluster::high_priority::max_retries::3
webcluster::added_via_api::true
webcluster::172.31.12.11:8080::cx_active::0
webcluster::172.31.12.11:8080::cx_connect_fail::0
webcluster::172.31.12.11:8080::cx_total::0
webcluster::172.31.12.11:8080::rq_active::0
webcluster::172.31.12.11:8080::rq_error::0
webcluster::172.31.12.11:8080::rq_success::0
webcluster::172.31.12.11:8080::rq_timeout::0
webcluster::172.31.12.11:8080::rq_total::0
webcluster::172.31.12.11:8080::hostname::webserver01
webcluster::172.31.12.11:8080::health_flags::healthy
webcluster::172.31.12.11:8080::weight::1
webcluster::172.31.12.11:8080::region::
webcluster::172.31.12.11:8080::zone::
webcluster::172.31.12.11:8080::sub_zone::
webcluster::172.31.12.11:8080::canary::false
webcluster::172.31.12.11:8080::priority::0
webcluster::172.31.12.11:8080::success_rate::-1.0
webcluster::172.31.12.11:8080::local_origin_success_rate::-1.0

# Inspect the listener information
root@test:~# curl 172.31.12.2:9901/listeners
listener_http::0.0.0.0:80

# Enter the front proxy envoy container interactively
root@test:~# docker exec -it lds-cds-filesystem_envoy_1 /bin/sh
/ # cd /etc/envoy/conf.d/

# Edit cds.yaml to add another endpoint
/etc/envoy/conf.d # cat cds.yaml 
resources:
- "@type": type.googleapis.com/envoy.config.cluster.v3.Cluster
  name: webcluster
  connect_timeout: 1s
  type: STRICT_DNS
  load_assignment:
    cluster_name: webcluster
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: webserver01
              port_value: 8080
      - endpoint:
          address:
            socket_address:
              address: webserver02
              port_value: 8080

# Run a command like the following to force the change to register, so that the inode-based watch is triggered
/etc/envoy/conf.d # mv cds.yaml temp && mv temp cds.yaml

## Verify the configuration again
root@test:~# curl 172.31.12.2:9901/clusters
webcluster::observability_name::webcluster
webcluster::default_priority::max_connections::1024
webcluster::default_priority::max_pending_requests::1024
webcluster::default_priority::max_requests::1024
webcluster::default_priority::max_retries::3
webcluster::high_priority::max_connections::1024
webcluster::high_priority::max_pending_requests::1024
webcluster::high_priority::max_requests::1024
webcluster::high_priority::max_retries::3
webcluster::added_via_api::true
webcluster::172.31.12.11:8080::cx_active::0
webcluster::172.31.12.11:8080::cx_connect_fail::0
webcluster::172.31.12.11:8080::cx_total::0
webcluster::172.31.12.11:8080::rq_active::0
webcluster::172.31.12.11:8080::rq_error::0
webcluster::172.31.12.11:8080::rq_success::0
webcluster::172.31.12.11:8080::rq_timeout::0
webcluster::172.31.12.11:8080::rq_total::0
webcluster::172.31.12.11:8080::hostname::webserver01
webcluster::172.31.12.11:8080::health_flags::healthy
webcluster::172.31.12.11:8080::weight::1
webcluster::172.31.12.11:8080::region::
webcluster::172.31.12.11:8080::zone::
webcluster::172.31.12.11:8080::sub_zone::
webcluster::172.31.12.11:8080::canary::false
webcluster::172.31.12.11:8080::priority::0
webcluster::172.31.12.11:8080::success_rate::-1.0
webcluster::172.31.12.11:8080::local_origin_success_rate::-1.0
webcluster::172.31.12.12:8080::cx_active::0
webcluster::172.31.12.12:8080::cx_connect_fail::0
webcluster::172.31.12.12:8080::cx_total::0
webcluster::172.31.12.12:8080::rq_active::0
webcluster::172.31.12.12:8080::rq_error::0
webcluster::172.31.12.12:8080::rq_success::0
webcluster::172.31.12.12:8080::rq_timeout::0
webcluster::172.31.12.12:8080::rq_total::0
webcluster::172.31.12.12:8080::hostname::webserver02
webcluster::172.31.12.12:8080::health_flags::healthy
webcluster::172.31.12.12:8080::weight::1
webcluster::172.31.12.12:8080::region::
webcluster::172.31.12.12:8080::zone::
webcluster::172.31.12.12:8080::sub_zone::
webcluster::172.31.12.12:8080::canary::false
webcluster::172.31.12.12:8080::priority::0
webcluster::172.31.12.12:8080::success_rate::-1.0
webcluster::172.31.12.12:8080::local_origin_success_rate::-1.0
# The new endpoint has been added

4. ads-grpc

Lab environment

Six services:

envoy: Front Proxy, at 172.31.16.2
webserver01: the first backend service
webserver01-sidecar: the sidecar proxy for the first backend service, at 172.31.16.11
webserver02: the second backend service
webserver02-sidecar: the sidecar proxy for the second backend service, at 172.31.16.12
xdsserver: the xDS management server, at 172.31.16.5

front-envoy.yaml

node:
  id: envoy_front_proxy
  cluster: webcluster

admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
       address: 0.0.0.0
       port_value: 9901

dynamic_resources:
  ads_config:
    api_type: GRPC
    transport_api_version: V3
    grpc_services:
    - envoy_grpc:
        cluster_name: xds_cluster
    set_node_on_first_message_only: true
  cds_config:
    resource_api_version: V3
    ads: {}
  lds_config:
    resource_api_version: V3
    ads: {}
static_resources:
  clusters:
  - name: xds_cluster
    connect_timeout: 0.25s
    type: STRICT_DNS
    # The typed_extension_protocol_options field is used to provide extension-specific protocol options for upstream connections.
    typed_extension_protocol_options:
      envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
        "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
        explicit_http_config:
          http2_protocol_options: {}
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: xds_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: xdsserver
                port_value: 18000

envoy-sidecar-proxy.yaml

admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
       address: 0.0.0.0
       port_value: 9901

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: local_cluster }
          http_filters:
          - name: envoy.filters.http.router
  clusters:
    - name: local_cluster
      connect_timeout: 0.25s
      type: STATIC
      lb_policy: ROUND_ROBIN
      load_assignment:
        cluster_name: local_cluster
        endpoints:
        - lb_endpoints:
          - endpoint:
              address:
                socket_address: { address: 127.0.0.1, port_value: 8080 }

docker-compose.yaml

version: '3.3'
  
services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.18-latest
    volumes:
    - ./front-envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.16.2
        aliases:
        - front-proxy
    depends_on:
    - webserver01
    - webserver02
    - xdsserver

  webserver01:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
      - HOST=127.0.0.1
    hostname: webserver01
    networks:
      envoymesh:
        ipv4_address: 172.31.16.11

  webserver01-sidecar:
    image: envoyproxy/envoy-alpine:v1.18-latest
    volumes:
    - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    network_mode: "service:webserver01"
    depends_on:
    - webserver01

  webserver02:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
      - HOST=127.0.0.1
    hostname: webserver02
    networks:
      envoymesh:
        ipv4_address: 172.31.16.12

  webserver02-sidecar:
    image: envoyproxy/envoy-alpine:v1.18-latest
    volumes:
    - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    network_mode: "service:webserver02"
    depends_on:
    - webserver02

  xdsserver:
    image: ikubernetes/envoy-xds-server:v0.1
    environment:
      - SERVER_PORT=18000
      - NODE_ID=envoy_front_proxy
      - RESOURCES_FILE=/etc/envoy-xds-server/config/config.yaml
    volumes:
    - ./resources:/etc/envoy-xds-server/config/
    networks:
      envoymesh:
        ipv4_address: 172.31.16.5
        aliases:
        - xdsserver
        - xds-service
    expose:
    - "18000"
    
networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.16.0/24

Files in the resources directory

config.yaml

name: myconfig
spec:
  listeners:
  - name: listener_http
    address: 0.0.0.0
    port: 80
    routes:
    - name: local_route
      prefix: /
      clusters:
      - webcluster
  clusters:
  - name: webcluster
    endpoints:
    - address: 172.31.16.11
      port: 8080

config.yaml-v2

name: myconfig
spec:
  listeners:
  - name: listener_http
    address: 0.0.0.0
    port: 80
    routes:
    - name: local_route
      prefix: /
      clusters:
      - webcluster
  clusters:
  - name: webcluster
    endpoints:
    - address: 172.31.16.11
      port: 8080
    - address: 172.31.16.12
      port: 8080

Verification

docker-compose up

In a second terminal window:

# Inspect cluster and endpoint information
root@test:~# curl 172.31.16.2:9901/clusters
webcluster::observability_name::webcluster
webcluster::default_priority::max_connections::1024
webcluster::default_priority::max_pending_requests::1024
webcluster::default_priority::max_requests::1024
webcluster::default_priority::max_retries::3
webcluster::high_priority::max_connections::1024
webcluster::high_priority::max_pending_requests::1024
webcluster::high_priority::max_requests::1024
webcluster::high_priority::max_retries::3
webcluster::added_via_api::true
webcluster::172.31.16.11:8080::cx_active::0
webcluster::172.31.16.11:8080::cx_connect_fail::0
webcluster::172.31.16.11:8080::cx_total::0
webcluster::172.31.16.11:8080::rq_active::0
webcluster::172.31.16.11:8080::rq_error::0
webcluster::172.31.16.11:8080::rq_success::0
webcluster::172.31.16.11:8080::rq_timeout::0
webcluster::172.31.16.11:8080::rq_total::0
webcluster::172.31.16.11:8080::hostname::
webcluster::172.31.16.11:8080::health_flags::healthy
webcluster::172.31.16.11:8080::weight::1
webcluster::172.31.16.11:8080::region::
webcluster::172.31.16.11:8080::zone::
webcluster::172.31.16.11:8080::sub_zone::
webcluster::172.31.16.11:8080::canary::false
webcluster::172.31.16.11:8080::priority::0
webcluster::172.31.16.11:8080::success_rate::-1.0
webcluster::172.31.16.11:8080::local_origin_success_rate::-1.0
xds_cluster::observability_name::xds_cluster
xds_cluster::default_priority::max_connections::1024
xds_cluster::default_priority::max_pending_requests::1024
xds_cluster::default_priority::max_requests::1024
xds_cluster::default_priority::max_retries::3
xds_cluster::high_priority::max_connections::1024
xds_cluster::high_priority::max_pending_requests::1024
xds_cluster::high_priority::max_requests::1024
xds_cluster::high_priority::max_retries::3
xds_cluster::added_via_api::false
xds_cluster::172.31.16.5:18000::cx_active::1
xds_cluster::172.31.16.5:18000::cx_connect_fail::0
xds_cluster::172.31.16.5:18000::cx_total::1
xds_cluster::172.31.16.5:18000::rq_active::3
xds_cluster::172.31.16.5:18000::rq_error::0
xds_cluster::172.31.16.5:18000::rq_success::0
xds_cluster::172.31.16.5:18000::rq_timeout::0
xds_cluster::172.31.16.5:18000::rq_total::3
xds_cluster::172.31.16.5:18000::hostname::xdsserver
xds_cluster::172.31.16.5:18000::health_flags::healthy
xds_cluster::172.31.16.5:18000::weight::1
xds_cluster::172.31.16.5:18000::region::
xds_cluster::172.31.16.5:18000::zone::
xds_cluster::172.31.16.5:18000::sub_zone::
xds_cluster::172.31.16.5:18000::canary::false
xds_cluster::172.31.16.5:18000::priority::0
xds_cluster::172.31.16.5:18000::success_rate::-1.0
xds_cluster::172.31.16.5:18000::local_origin_success_rate::-1.0

# Inspect the dynamic cluster information
root@test:~# curl -s 172.31.16.2:9901/config_dump | jq '.configs[1].dynamic_active_clusters'
[
  {
    "version_info": "411",
    "cluster": {
      "@type": "type.googleapis.com/envoy.config.cluster.v3.Cluster",
      "name": "webcluster",
      "type": "EDS",
      "eds_cluster_config": {
        "eds_config": {
          "api_config_source": {
            "api_type": "GRPC",
            "grpc_services": [
              {
                "envoy_grpc": {
                  "cluster_name": "xds_cluster"
                }
              }
            ],
            "set_node_on_first_message_only": true,
            "transport_api_version": "V3"
          },
          "resource_api_version": "V3"
        }
      },
      "connect_timeout": "5s",
      "dns_lookup_family": "V4_ONLY"
    },
    "last_updated": "2021-12-02T07:47:28.765Z"
  }
]

# List the listeners
root@test:~# curl 172.31.16.2:9901/listeners
listener_http::0.0.0.0:80

# Inspect the dynamic listener information
root@test:~# curl -s 172.31.16.2:9901/config_dump?resource=dynamic_listeners | jq '.configs[0].active_state.listener.address'
{
  "socket_address": {
    "address": "0.0.0.0",
    "port_value": 80
  }
}

# Enter the xdsserver container interactively and update config.yaml, adding the other endpoint to the file (or making other changes);
root@test:/apps/servicemesh_in_practise-develop/Dynamic-Configuration/ads-grpc# docker-compose exec xdsserver sh
/ # cd /etc/envoy-xds-server/config/
/etc/envoy-xds-server/config # cat config.yaml-v2 > config.yaml
# Tip: the same edit can also be made directly in the volume directory on the host.

# Inspect the cluster endpoints again
root@test:/apps/servicemesh_in_practise-develop/Dynamic-Configuration/ads-grpc# curl 172.31.16.2:9901/clusters
webcluster::observability_name::webcluster
webcluster::default_priority::max_connections::1024
webcluster::default_priority::max_pending_requests::1024
webcluster::default_priority::max_requests::1024
webcluster::default_priority::max_retries::3
webcluster::high_priority::max_connections::1024
webcluster::high_priority::max_pending_requests::1024
webcluster::high_priority::max_requests::1024
webcluster::high_priority::max_retries::3
webcluster::added_via_api::true
webcluster::172.31.16.11:8080::cx_active::0
webcluster::172.31.16.11:8080::cx_connect_fail::0
webcluster::172.31.16.11:8080::cx_total::0
webcluster::172.31.16.11:8080::rq_active::0
webcluster::172.31.16.11:8080::rq_error::0
webcluster::172.31.16.11:8080::rq_success::0
webcluster::172.31.16.11:8080::rq_timeout::0
webcluster::172.31.16.11:8080::rq_total::0
webcluster::172.31.16.11:8080::hostname::
webcluster::172.31.16.11:8080::health_flags::healthy
webcluster::172.31.16.11:8080::weight::1
webcluster::172.31.16.11:8080::region::
webcluster::172.31.16.11:8080::zone::
webcluster::172.31.16.11:8080::sub_zone::
webcluster::172.31.16.11:8080::canary::false
webcluster::172.31.16.11:8080::priority::0
webcluster::172.31.16.11:8080::success_rate::-1.0
webcluster::172.31.16.11:8080::local_origin_success_rate::-1.0
webcluster::172.31.16.12:8080::cx_active::0
webcluster::172.31.16.12:8080::cx_connect_fail::0
webcluster::172.31.16.12:8080::cx_total::0
webcluster::172.31.16.12:8080::rq_active::0
webcluster::172.31.16.12:8080::rq_error::0
webcluster::172.31.16.12:8080::rq_success::0
webcluster::172.31.16.12:8080::rq_timeout::0
webcluster::172.31.16.12:8080::rq_total::0
webcluster::172.31.16.12:8080::hostname::
webcluster::172.31.16.12:8080::health_flags::healthy
webcluster::172.31.16.12:8080::weight::1
webcluster::172.31.16.12:8080::region::
webcluster::172.31.16.12:8080::zone::
webcluster::172.31.16.12:8080::sub_zone::
webcluster::172.31.16.12:8080::canary::false
webcluster::172.31.16.12:8080::priority::0
webcluster::172.31.16.12:8080::success_rate::-1.0
webcluster::172.31.16.12:8080::local_origin_success_rate::-1.0
xds_cluster::observability_name::xds_cluster
xds_cluster::default_priority::max_connections::1024
xds_cluster::default_priority::max_pending_requests::1024
xds_cluster::default_priority::max_requests::1024
xds_cluster::default_priority::max_retries::3
xds_cluster::high_priority::max_connections::1024
xds_cluster::high_priority::max_pending_requests::1024
xds_cluster::high_priority::max_requests::1024
xds_cluster::high_priority::max_retries::3
xds_cluster::added_via_api::false
xds_cluster::172.31.16.5:18000::cx_active::1
xds_cluster::172.31.16.5:18000::cx_connect_fail::0
xds_cluster::172.31.16.5:18000::cx_total::1
xds_cluster::172.31.16.5:18000::rq_active::3
xds_cluster::172.31.16.5:18000::rq_error::0
xds_cluster::172.31.16.5:18000::rq_success::0
xds_cluster::172.31.16.5:18000::rq_timeout::0
xds_cluster::172.31.16.5:18000::rq_total::3
xds_cluster::172.31.16.5:18000::hostname::xdsserver
xds_cluster::172.31.16.5:18000::health_flags::healthy
xds_cluster::172.31.16.5:18000::weight::1
xds_cluster::172.31.16.5:18000::region::
xds_cluster::172.31.16.5:18000::zone::
xds_cluster::172.31.16.5:18000::sub_zone::
xds_cluster::172.31.16.5:18000::canary::false
xds_cluster::172.31.16.5:18000::priority::0
xds_cluster::172.31.16.5:18000::success_rate::-1.0
xds_cluster::172.31.16.5:18000::local_origin_success_rate::-1.0
# The new endpoint has been added

5. lds-cds-grpc

Lab environment

Six services:

envoy: Front Proxy, at 172.31.15.2
webserver01: the first backend service
webserver01-sidecar: the sidecar proxy for the first backend service, at 172.31.15.11
webserver02: the second backend service
webserver02-sidecar: the sidecar proxy for the second backend service, at 172.31.15.12
xdsserver: the xDS management server, at 172.31.15.5

front-envoy.yaml

node:
  id: envoy_front_proxy
  cluster: webcluster

admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
       address: 0.0.0.0
       port_value: 9901

dynamic_resources:
  lds_config:
    resource_api_version: V3
    api_config_source:
      api_type: GRPC
      transport_api_version: V3
      grpc_services:
      - envoy_grpc:
          cluster_name: xds_cluster

  cds_config:
    resource_api_version: V3
    api_config_source:
      api_type: GRPC
      transport_api_version: V3
      grpc_services:
      - envoy_grpc:
          cluster_name: xds_cluster

static_resources:
  clusters:
  - name: xds_cluster
    connect_timeout: 0.25s
    type: STRICT_DNS
    # The typed_extension_protocol_options field is used to provide extension-specific protocol options for upstream connections.
    typed_extension_protocol_options:
      envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
        "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
        explicit_http_config:
          http2_protocol_options: {}
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: xds_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: xdsserver
                port_value: 18000

envoy-sidecar-proxy.yaml

admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
       address: 0.0.0.0
       port_value: 9901

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service 
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: local_cluster }
          http_filters:
          - name: envoy.filters.http.router

  clusters:
  - name: local_cluster
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: local_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 8080 }

docker-compose.yaml

version: '3.3'

services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.20.0
    environment:
      - ENVOY_UID=0
    volumes:
    - ./front-envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.15.2
        aliases:
        - front-proxy
    depends_on:
    - webserver01
    - webserver02
    - xdsserver

  webserver01:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
      - HOST=127.0.0.1
    hostname: webserver01
    networks:
      envoymesh:
        ipv4_address: 172.31.15.11

  webserver01-sidecar:
    image: envoyproxy/envoy-alpine:v1.20.0
    environment:
      - ENVOY_UID=0
    volumes:
    - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    network_mode: "service:webserver01"
    depends_on:
    - webserver01

  webserver02:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
      - HOST=127.0.0.1
    hostname: webserver02
    networks:
      envoymesh:
        ipv4_address: 172.31.15.12

  webserver02-sidecar:
    image: envoyproxy/envoy-alpine:v1.20.0
    environment:
      - ENVOY_UID=0
    volumes:
    - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    network_mode: "service:webserver02"
    depends_on:
    - webserver02

  xdsserver:
    image: ikubernetes/envoy-xds-server:v0.1
    environment:
      - SERVER_PORT=18000
      - NODE_ID=envoy_front_proxy
      - RESOURCES_FILE=/etc/envoy-xds-server/config/config.yaml
    volumes:
    - ./resources:/etc/envoy-xds-server/config/
    networks:
      envoymesh:
        ipv4_address: 172.31.15.5
        aliases:
        - xdsserver
        - xds-service
    expose:
    - "18000"

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.15.0/24

Files in the resources directory

config.yaml

name: myconfig
spec:
  listeners:
  - name: listener_http
    address: 0.0.0.0
    port: 80
    routes:
    - name: local_route
      prefix: /
      clusters:
      - webcluster
  clusters:
  - name: webcluster
    endpoints:
    - address: 172.31.15.11
      port: 8080

config.yaml-v2

name: myconfig
spec:
  listeners:
  - name: listener_http
    address: 0.0.0.0
    port: 80
    routes:
    - name: local_route
      prefix: /
      clusters:
      - webcluster
  clusters:
  - name: webcluster
    endpoints:
    - address: 172.31.15.11
      port: 8080
    - address: 172.31.15.12
      port: 8080

Verification

docker-compose up

In a second terminal window:

# Inspect cluster and endpoint information
root@test:/apps/servicemesh_in_practise-develop/Dynamic-Configuration/ads-grpc# curl 172.31.15.2:9901/clusters
xds_cluster::observability_name::xds_cluster
xds_cluster::default_priority::max_connections::1024
xds_cluster::default_priority::max_pending_requests::1024
xds_cluster::default_priority::max_requests::1024
xds_cluster::default_priority::max_retries::3
xds_cluster::high_priority::max_connections::1024
xds_cluster::high_priority::max_pending_requests::1024
xds_cluster::high_priority::max_requests::1024
xds_cluster::high_priority::max_retries::3
xds_cluster::added_via_api::false
xds_cluster::172.31.15.5:18000::cx_active::1
xds_cluster::172.31.15.5:18000::cx_connect_fail::0
xds_cluster::172.31.15.5:18000::cx_total::1
xds_cluster::172.31.15.5:18000::rq_active::4
xds_cluster::172.31.15.5:18000::rq_error::0
xds_cluster::172.31.15.5:18000::rq_success::0
xds_cluster::172.31.15.5:18000::rq_timeout::0
xds_cluster::172.31.15.5:18000::rq_total::4
xds_cluster::172.31.15.5:18000::hostname::xdsserver
xds_cluster::172.31.15.5:18000::health_flags::healthy
xds_cluster::172.31.15.5:18000::weight::1
xds_cluster::172.31.15.5:18000::region::
xds_cluster::172.31.15.5:18000::zone::
xds_cluster::172.31.15.5:18000::sub_zone::
xds_cluster::172.31.15.5:18000::canary::false
xds_cluster::172.31.15.5:18000::priority::0
xds_cluster::172.31.15.5:18000::success_rate::-1.0
xds_cluster::172.31.15.5:18000::local_origin_success_rate::-1.0
webcluster::observability_name::webcluster
webcluster::default_priority::max_connections::1024
webcluster::default_priority::max_pending_requests::1024
webcluster::default_priority::max_requests::1024
webcluster::default_priority::max_retries::3
webcluster::high_priority::max_connections::1024
webcluster::high_priority::max_pending_requests::1024
webcluster::high_priority::max_requests::1024
webcluster::high_priority::max_retries::3
webcluster::added_via_api::true
webcluster::172.31.15.11:8080::cx_active::0
webcluster::172.31.15.11:8080::cx_connect_fail::0
webcluster::172.31.15.11:8080::cx_total::0
webcluster::172.31.15.11:8080::rq_active::0
webcluster::172.31.15.11:8080::rq_error::0
webcluster::172.31.15.11:8080::rq_success::0
webcluster::172.31.15.11:8080::rq_timeout::0
webcluster::172.31.15.11:8080::rq_total::0
webcluster::172.31.15.11:8080::hostname::
webcluster::172.31.15.11:8080::health_flags::healthy
webcluster::172.31.15.11:8080::weight::1
webcluster::172.31.15.11:8080::region::
webcluster::172.31.15.11:8080::zone::
webcluster::172.31.15.11:8080::sub_zone::
webcluster::172.31.15.11:8080::canary::false
webcluster::172.31.15.11:8080::priority::0
webcluster::172.31.15.11:8080::success_rate::-1.0
webcluster::172.31.15.11:8080::local_origin_success_rate::-1.0

# Or inspect the dynamic cluster information
root@test:/apps/servicemesh_in_practise-develop/Dynamic-Configuration/ads-grpc# curl -s 172.31.15.2:9901/config_dump | jq '.configs[1].dynamic_active_clusters'
[
  {
    "version_info": "411",
    "cluster": {
      "@type": "type.googleapis.com/envoy.config.cluster.v3.Cluster",
      "name": "webcluster",
      "type": "EDS",
      "eds_cluster_config": {
        "eds_config": {
          "api_config_source": {
            "api_type": "GRPC",
            "grpc_services": [
              {
                "envoy_grpc": {
                  "cluster_name": "xds_cluster"
                }
              }
            ],
            "set_node_on_first_message_only": true,
            "transport_api_version": "V3"
          },
          "resource_api_version": "V3"
        }
      },
      "connect_timeout": "5s",
      "dns_lookup_family": "V4_ONLY"
    },
    "last_updated": "2021-12-02T08:05:20.650Z"
  }
]

# List the listeners
root@test:/apps/servicemesh_in_practise-develop/Dynamic-Configuration/ads-grpc# curl 172.31.15.2:9901/listeners
listener_http::0.0.0.0:80

# Or inspect the dynamic listener information
root@test:/apps/servicemesh_in_practise-develop/Dynamic-Configuration/ads-grpc# curl -s 172.31.15.2:9901/config_dump?resource=dynamic_listeners | jq '.configs[0].active_state.listener.address'
{
  "socket_address": {
    "address": "0.0.0.0",
    "port_value": 80
  }
}

# Enter the xdsserver container interactively and update config.yaml, adding the other endpoint to the file (or making other changes);
root@test:/apps/servicemesh_in_practise-develop/Dynamic-Configuration/ads-grpc# docker exec -it lds-cds-grpc_xdsserver_1 sh

/ # cd /etc/envoy-xds-server/config
/etc/envoy-xds-server/config # cat config.yaml-v2 > config.yaml
# Tip: the same edit can also be made directly in the volume directory on the host.

# Inspect the cluster endpoints again
root@test:/apps/servicemesh_in_practise-develop/Dynamic-Configuration/ads-grpc# curl 172.31.15.2:9901/clusters
xds_cluster::observability_name::xds_cluster
xds_cluster::default_priority::max_connections::1024
xds_cluster::default_priority::max_pending_requests::1024
xds_cluster::default_priority::max_requests::1024
xds_cluster::default_priority::max_retries::3
xds_cluster::high_priority::max_connections::1024
xds_cluster::high_priority::max_pending_requests::1024
xds_cluster::high_priority::max_requests::1024
xds_cluster::high_priority::max_retries::3
xds_cluster::added_via_api::false
xds_cluster::172.31.15.5:18000::cx_active::1
xds_cluster::172.31.15.5:18000::cx_connect_fail::0
xds_cluster::172.31.15.5:18000::cx_total::1
xds_cluster::172.31.15.5:18000::rq_active::4
xds_cluster::172.31.15.5:18000::rq_error::0
xds_cluster::172.31.15.5:18000::rq_success::0
xds_cluster::172.31.15.5:18000::rq_timeout::0
xds_cluster::172.31.15.5:18000::rq_total::4
xds_cluster::172.31.15.5:18000::hostname::xdsserver
xds_cluster::172.31.15.5:18000::health_flags::healthy
xds_cluster::172.31.15.5:18000::weight::1
xds_cluster::172.31.15.5:18000::region::
xds_cluster::172.31.15.5:18000::zone::
xds_cluster::172.31.15.5:18000::sub_zone::
xds_cluster::172.31.15.5:18000::canary::false
xds_cluster::172.31.15.5:18000::priority::0
xds_cluster::172.31.15.5:18000::success_rate::-1.0
xds_cluster::172.31.15.5:18000::local_origin_success_rate::-1.0
webcluster::observability_name::webcluster
webcluster::default_priority::max_connections::1024
webcluster::default_priority::max_pending_requests::1024
webcluster::default_priority::max_requests::1024
webcluster::default_priority::max_retries::3
webcluster::high_priority::max_connections::1024
webcluster::high_priority::max_pending_requests::1024
webcluster::high_priority::max_requests::1024
webcluster::high_priority::max_retries::3
webcluster::added_via_api::true
webcluster::172.31.15.11:8080::cx_active::0
webcluster::172.31.15.11:8080::cx_connect_fail::0
webcluster::172.31.15.11:8080::cx_total::0
webcluster::172.31.15.11:8080::rq_active::0
webcluster::172.31.15.11:8080::rq_error::0
webcluster::172.31.15.11:8080::rq_success::0
webcluster::172.31.15.11:8080::rq_timeout::0
webcluster::172.31.15.11:8080::rq_total::0
webcluster::172.31.15.11:8080::hostname::
webcluster::172.31.15.11:8080::health_flags::healthy
webcluster::172.31.15.11:8080::weight::1
webcluster::172.31.15.11:8080::region::
webcluster::172.31.15.11:8080::zone::
webcluster::172.31.15.11:8080::sub_zone::
webcluster::172.31.15.11:8080::canary::false
webcluster::172.31.15.11:8080::priority::0
webcluster::172.31.15.11:8080::success_rate::-1.0
webcluster::172.31.15.11:8080::local_origin_success_rate::-1.0
webcluster::172.31.15.12:8080::cx_active::0
webcluster::172.31.15.12:8080::cx_connect_fail::0
webcluster::172.31.15.12:8080::cx_total::0
webcluster::172.31.15.12:8080::rq_active::0
webcluster::172.31.15.12:8080::rq_error::0
webcluster::172.31.15.12:8080::rq_success::0
webcluster::172.31.15.12:8080::rq_timeout::0
webcluster::172.31.15.12:8080::rq_total::0
webcluster::172.31.15.12:8080::hostname::
webcluster::172.31.15.12:8080::health_flags::healthy
webcluster::172.31.15.12:8080::weight::1
webcluster::172.31.15.12:8080::region::
webcluster::172.31.15.12:8080::zone::
webcluster::172.31.15.12:8080::sub_zone::
webcluster::172.31.15.12:8080::canary::false
webcluster::172.31.15.12:8080::priority::0
webcluster::172.31.15.12:8080::success_rate::-1.0
webcluster::172.31.15.12:8080::local_origin_success_rate::-1.0
# The new endpoint has been added

 

