Envoy Part 7: Fundamentals of Envoy HTTP Traffic Management


(Figure: HTTP processing flow diagram)

I. The Role of Routing

1. Route matching (match)

(1) Basic matching: prefix, path, and safe_regex

(2) Advanced matching: headers and query_parameters

 

2. Routing actions

(1) route: forward the request to the corresponding cluster

(2) redirect: redirect the request to another domain or host

(3) direct_response: return a response to the request directly

II. HTTP Connection Management

1. Envoy's built-in L4 filter, the HTTP connection manager, translates raw bytes into HTTP application-layer messages and events (such as received headers and bodies), and handles functionality common to all HTTP connections and requests, including access logging, request ID generation and tracing, request/response header processing, route table management, and statistics.

1) Supports HTTP/1.1, WebSockets, and HTTP/2, but not SPDY;
2) The associated route table can be configured statically or generated dynamically via RDS in the xDS API;
3) Built-in retry plugins for configuring retry behavior:
  (1) Host Predicates
  (2) Priority Predicates
4) Built-in support for 302 redirects: it can capture a 302 redirect response, synthesize a new request, send it to the upstream endpoints chosen by the new route match, and return the received response to the client as the response to the original request;
5) Supports multiple configurable timeout mechanisms for HTTP connections and their constituent streams:
  (1) Connection level: idle timeout and drain timeout (GOAWAY);
  (2) Stream level: idle timeout, per-route upstream timeout, and per-route gRPC maximum timeout;
6) Dynamic forward proxy based on custom clusters;
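As a sketch of where the timeouts in item 5 live, the fragment below uses fields from the v3 HttpConnectionManager API; the values are illustrative only:

```yaml
# excerpt of an HTTP connection manager configuration (illustrative values)
typed_config:
  "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
  stat_prefix: ingress_http
  common_http_protocol_options:
    idle_timeout: 300s        # connection-level idle timeout
  drain_timeout: 30s          # grace period when draining (HTTP/2 GOAWAY)
  stream_idle_timeout: 60s    # stream-level idle timeout
```

Per-route upstream timeouts (route.timeout, max_grpc_timeout) are configured on the route itself, as shown in section 8.4.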

2. HTTP protocol features are implemented by individual HTTP filters, which fall roughly into three categories: encoders, decoders, and encoder/decoders.

router (envoy.router, now envoy.filters.http.router) is one of the most commonly used filters. Based on the route table, it forwards or redirects requests, handles retries, and generates statistics.

III. Advanced HTTP Routing Features

Using the HTTP router filter and the route table, Envoy implements a variety of advanced routing mechanisms, including:

(1) Mapping domains to virtual hosts;
(2) Path prefix, exact, or regular-expression matching;
(3) Virtual-host-level TLS redirection;
(4) Path-level path/host redirection;
(5) Responses generated directly by Envoy;
(6) Explicit host rewrite;
(7) Prefix rewrite;
(8) Request retries and request timeouts based on HTTP headers or route configuration;
(9) Runtime-based traffic shifting;
(10) Weight- or percentage-based traffic splitting across clusters;
(11) Routing rules based on arbitrary header matches;
(12) Priority-based routing;
(13) Hash-policy-based routing;
...

IV. HTTP Routing and the Configuration Framework

1. The top-level element in the route configuration is the virtual host.

1) Each virtual host has a logical name (name) and a set of domains (domains); the Host header of the request is matched against these domains for routing;
2) Once a virtual host is selected by domain, the configured routing mechanisms (routes) complete request routing or redirection.

Configuration framework

---
listeners:
- name:
  address: {...}
  filter_chains: []
  - filters:
    - name: envoy.filters.network.http_connection_manager
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
        stat_prefix: ingress_http
        codec_type: AUTO
        route_config: 
          name: ...
          virtual_hosts: []
          - name: ...
            domains: [] # domains of the virtual host; during route matching, the request's Host header value is checked against the items in this list
            routes: [] # route entries; for requests matched to this virtual host, the path is checked against the conditions defined by each route's match
            - name: ...
              match: {...} # common embedded fields: prefix|path|safe_regex|connect_matcher; exactly one of path prefix, exact path, regular expression, or CONNECT matcher defines the match condition
              route: {...} # common embedded fields: cluster|cluster_header|weighted_clusters; the routing target is a cluster, a cluster named by a request header, or weighted clusters (traffic splitting)
              redirect: {...} # redirect the request; cannot be used together with route or direct_response
              direct_response: {...} # respond to the request directly; cannot be used together with route or redirect
            virtual_clusters: [] # list of virtual clusters defined for this virtual host for collecting statistics
            ...
          ...

2. The virtual host is the top-level element of the route configuration. It can be configured statically via the virtual_hosts field, or discovered dynamically via VHDS.

3. VirtualHost configuration

{
"name": "...",
"virtual_hosts": [],  #虛擬主機的具體配置如下
"vhds": "{...}",
"internal_only_headers": [],
"response_headers_to_add": [],
"response_headers_to_remove": [],
"request_headers_to_add": [],
"request_headers_to_remove": [],
"most_specific_header_mutations_wins": "...",
"validate_clusters": "{...}",
"max_direct_response_body_size_bytes": "{...}"
}

virtual_hosts

{ 
"name": "...",
"domains": [],
"routes": [],
"require_tls": "...", 
"virtual_clusters": [],
"rate_limits": [],
"request_headers_to_add": [],
"request_headers_to_remove": [],
"response_headers_to_add": [],
"response_headers_to_remove": [],
"cors": "{...}",
"typed_per_filter_config": "{...}",
"include_request_attempt_count": "...",
"include_attempt_count_in_response": "...",
"retry_policy": "{...}",
"hedge_policy": "{...}",
"per_request_buffer_limit_bytes": "{...}"
}

 

The virtual-host-level routing policy provides defaults for the related route attributes; users may also customize these attributes on individual routes, for example rate limiting, CORS, and retries.

4. Envoy matches a route using the following procedure:

(1) Check the request's Host or :authority header and match it against the virtual hosts defined in the route configuration;
(2) Within the matched virtual host, check each route entry's match condition in order, stopping at the first match (short-circuit);
(3) If virtual clusters are defined, check each virtual cluster in the virtual host in order, stopping at the first match.

V. Route Configuration Examples

The following configuration illustrates the basic match mechanisms and the different routing actions.

virtual_hosts: 
- name: vh_001
  domains: ["ilinux.io", "*.ilinux.io", "ilinux.*"]
  routes:
  # exact match on "/service/blue": route to the blue cluster
  - match:
      path: "/service/blue"
    route:
      cluster: blue
  # regex match on "^/service/.*blue$": redirect to /service/blue, which is then forwarded to the blue cluster
  - match:
      safe_regex:
        google_re2: {}
        regex: "^/service/.*blue$"
    redirect:
      path_redirect: "/service/blue"
  # prefix match on "/service/yellow": respond directly with "This page will be provided soon later.\n"
  - match:
      prefix: "/service/yellow"
    direct_response:
      status: 200
      body:
        inline_string: "This page will be provided soon later.\n"
  # default: forward to the red cluster
  - match:
      prefix: "/"
    route:
      cluster: red

- name: vh_002
  domains: ["*"]
  routes:
  # match "/": forward to the gray cluster
  - match:
      prefix: "/"
    route:
      cluster: gray

The following configuration focuses on header- and query-parameter-based matching.

virtual_hosts:
- name: vh_001
  domains: ["*"]
  routes:
  # exact match on the X-Canary header: forward to the demoappv12 cluster
  - match:
      prefix: "/"
      headers:
      - name: X-Canary
        exact_match: "true"
    route:
      cluster: demoappv12
  # query parameter "username" with prefix "vip_": forward to the demoappv11 cluster
  - match:
      prefix: "/"
      query_parameters:
      - name: "username"
        string_match:
          prefix: "vip_"
    route:
      cluster: demoappv11
  # everything else: forward to demoappv10
  - match:
      prefix: "/"
    route:
      cluster: demoappv10

VI. Mapping Domains to Virtual Hosts

1. Domain search order

1) The Host header value of the request is compared against each VirtualHost's domains in the route table, and the search stops at the first match.
2) Domain search order:
  (1) Exact domain names: www.ilinux.io.
  (2) Suffix domain wildcards: *.ilinux.io or *-envoy.ilinux.io.
  (3) Prefix domain wildcards: ilinux.* or ilinux-*.
  (4) The special wildcard *, matching any domain.
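A minimal sketch of the search order, assuming the blue, red, and gray clusters defined later in this article:

```yaml
virtual_hosts:
- name: exact
  domains: ["www.ilinux.io"]   # checked first: exact domain
  routes:
  - match: {prefix: "/"}
    route: {cluster: blue}
- name: suffix_wildcard
  domains: ["*.ilinux.io"]     # checked next: suffix wildcard
  routes:
  - match: {prefix: "/"}
    route: {cluster: red}
- name: catch_all
  domains: ["*"]               # checked last: matches any domain
  routes:
  - match: {prefix: "/"}
    route: {cluster: gray}
```

A request with "Host: www.ilinux.io" lands on the first virtual host even though it also matches the other two.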

 

VII. Route Configuration Basics

1. match

(1) URL matching is based on exactly one of prefix, path, safe_regex, or connect_matcher.
   Note: regex in earlier versions has been replaced by safe_regex.
(2) Requests can additionally be matched on headers and query_parameters.
(3) A matched request can be handled by one of three routing actions:
   (1) redirect
   (2) direct_response
   (3) route

2. route

(1) The traffic routing target is defined by one of cluster, weighted_clusters, or cluster_header;
(2) During forwarding, the URL can be rewritten via prefix_rewrite and host_rewrite;
(3) Additional traffic management mechanisms can be configured, for example:
   Resilience: timeout, retry_policy
   Testing: request_mirror_policies
   Flow control: rate_limits
   Access control: cors

VIII. Envoy HTTP Route Configuration

1. Route configuration framework

1) A request that satisfies the match condition is handled in one of three ways:

(1) route: route it to the specified destination;

(2) redirect: redirect it to the specified location;

(3) direct_response: respond directly with the given content.

2) A route can also add or remove request and response headers as needed.

{
"name": "...",
"match": "{...}",  # match condition
"route": "{...}",  # traffic routing target; mutually exclusive with redirect and direct_response
"redirect": "{...}",  # redirect the request; mutually exclusive with route and direct_response
"direct_response": "{...}",  # respond to the request directly with the given content; mutually exclusive with route and redirect
"metadata": "{...}",  # extra metadata for the routing subsystem, commonly used for configuration, stats, and logging; usually requires the related filter to be defined first
"decorator": "{...}",
"typed_per_filter_config": "{...}",
"request_headers_to_add": [],
"request_headers_to_remove": [],
"response_headers_to_add": [],
"response_headers_to_remove": [],
"tracing": "{...}",
"per_request_buffer_limit_bytes": "{...}"
}

8.1 Route matching (route.RouteMatch)

1. A match condition defines the checks used to select the requests that satisfy it, so they can be handled as needed, e.g., routed, redirected, or answered directly. Exactly one of the match conditions prefix, path, safe_regex, or connect_matcher must be defined.

2. In addition to exactly one of the above, the following extra constraints can be applied:

(1) Case sensitivity (case_sensitive);
(2) Traffic shifting by the fraction indicated by the named runtime key (runtime_fraction);
   gradually adjusting the runtime key value migrates traffic.
(3) Header-based routing: match a specified set of headers (headers);
(4) Parameter-based routing: match a specified set of URL query parameters (query_parameters);
(5) Match only gRPC traffic (grpc);
{
"prefix": "...",  # URL path prefix match condition
"path": "...",  # exact path match condition
"safe_regex": "{...}",  # the entire path (excluding the query string) must match the regular expression given via google_re2/regex
"connect_matcher": "{...}",
"case_sensitive": "{...}",
"runtime_fraction": "{...}",
"headers": [],
"query_parameters": [],
"grpc": "{...}",
"tls_context": "{...}",
"dynamic_metadata": []
}
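As a sketch of runtime-based traffic shifting (item (2) above), a route can apply to only a configurable fraction of matching requests; the runtime key name below is illustrative:

```yaml
routes:
# roughly 20% of requests matching the prefix take this route; the ratio can be
# changed at runtime by updating the (illustrative) key routing.demo_shift
- match:
    prefix: "/"
    runtime_fraction:
      default_value:
        numerator: 20
        denominator: HUNDRED
      runtime_key: routing.demo_shift
  route:
    cluster: demoappv11
# the remaining requests fall through to this route
- match:
    prefix: "/"
  route:
    cluster: demoappv10
```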

8.2 Header-based route matching (route.HeaderMatcher)

An additional set of headers the route must match.

1) The router checks the request's headers against all headers specified in the route configuration:
  (1) If all headers specified in the route are present in the request with the same values, the route matches;
  (2) If no value is specified for a header, matching is based on its presence alone.
2) Only one of exact_match, safe_regex_match, range_match, present_match, prefix_match, suffix_match, contains_match, or string_match may be used for the value check above;
routes.match.headers

{
"name": "...",
"exact_match": "...",  # exact value match
"safe_regex_match": "{...}",  # regular-expression match
"range_match": "{...}",  # range match; checks whether the header value falls within the specified range
"present_match": "...",  # presence match; checks whether the header exists
"prefix_match": "...",  # value prefix match
"suffix_match": "...",  # value suffix match
"contains_match": "...",  # checks whether the header value contains the specified string
"string_match": "{...}",  # checks whether the header value matches the specified string
"invert_match": "..."  # whether to invert the match result, i.e., a failed check counts as "true"; defaults to false
}

8.3 Query-parameter-based route matching (route.QueryParameterMatcher)

A set of URL query parameters the route must additionally match.

The router checks the query string of the request path against all query parameters specified in the route configuration:

(1) Query parameter matching treats the query string of the request URL as a list of "key" or "key=value" elements separated by "&";
(2) If query parameters are specified, all of them must match the query string in the URL;
(3) The match condition is specified as one of value, regex, string_match, or present_match.
routes.match.query_parameters

query_parameters:
- name: "..."
  string_match: "{...}"  # string match on the parameter value; exactly one of the following five checks is used
    exact: "..."
    prefix: "..."
    suffix: "..."
    contains: "..."
    safe_regex: "{...}"
    ignore_case: ""
  present_match: "..."

8.4 Routing target 1: route to a specified cluster (route)

Matched traffic can be routed to one of three targets:

(1) cluster: route to the specified upstream cluster;
(2) cluster_header: route to the upstream cluster named by the value of the request header given by cluster_header;
(3) weighted_clusters: route to multiple upstream clusters by weight, splitting the traffic;

 

Note: the traffic weights across all listed clusters must sum to 100%.
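A hedged sketch of weighted_clusters, reusing the demoappv10/demoappv11 cluster names from the second lab below; the weights are illustrative and total 100:

```yaml
routes:
- match:
    prefix: "/"
  route:
    weighted_clusters:
      clusters:
      - name: demoappv10
        weight: 90   # 90% of matched traffic
      - name: demoappv11
        weight: 10   # 10% of matched traffic
```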

{
"cluster": "...",  # route to the specified target cluster
"cluster_header": "...",
"weighted_clusters": "{...}",  # split traffic across multiple upstream clusters by weight
"cluster_not_found_response_code": "...",
"metadata_match": "{...}",
"prefix_rewrite": "...",
"regex_rewrite": "{...}",
"host_rewrite_literal": "...",
"auto_host_rewrite": "{...}",
"host_rewrite_header": "...",
"host_rewrite_path_regex": "{...}",
"timeout": "{...}",
"idle_timeout": "{...}",
"retry_policy": "{...}",
"request_mirror_policies": [],
"priority": "...",
"rate_limits": [],
"include_vh_rate_limits": "{...}",
"hash_policy": [],
"cors": "{...}",
"max_grpc_timeout": "{...}",
"grpc_timeout_offset": "{...}",
"upgrade_configs": [],
"internal_redirect_policy": "{...}",
"internal_redirect_action": "...",
"max_internal_redirects": "{...}",
"hedge_policy": "{...}",
"max_stream_duration": "{...}"
}
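Several of the fields above combine naturally on one route. A hedged sketch of per-route resilience settings and prefix rewriting (values illustrative):

```yaml
routes:
- match:
    prefix: "/api/"
  route:
    cluster: demoappv10
    prefix_rewrite: "/"      # /api/hostname is forwarded upstream as /hostname
    timeout: 2s              # overall upstream request timeout
    retry_policy:
      retry_on: "5xx,reset"  # retry on 5xx responses and connection resets
      num_retries: 3
      per_try_timeout: 0.5s
```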

8.5 Routing target 2: redirect

1) Responds to the request with a 301, permanently redirecting it from one URL to another.

2) Envoy supports the following redirect behaviors:

(1) Scheme redirect: https_redirect or scheme_redirect (only one of the two may be used);
(2) Host redirect: host_redirect;
(3) Port redirect: port_redirect;
(4) Path redirect: path_redirect;
(5) Path prefix redirect: prefix_rewrite;
(6) Regular-expression-based redirect: regex_rewrite;
(7) Response code override: response_code, defaults to 301;
(8) strip_query: whether to strip the query string from the URL during the redirect; defaults to false.
{
"https_redirect": "...",
"scheme_redirect": "...",
"host_redirect": "...",
"port_redirect": "...",
"path_redirect": "...",
"prefix_rewrite": "...",
"regex_rewrite": "{...}",
"response_code": "...",
"strip_query": "..."
} 
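A hedged sketch combining several of these fields to redirect plain-HTTP requests to HTTPS on another host (the host name is illustrative):

```yaml
routes:
- match:
    prefix: "/"
  redirect:
    scheme_redirect: https            # rewrite the scheme
    host_redirect: www.ilinux.io      # rewrite the host
    response_code: MOVED_PERMANENTLY  # 301, the default
```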

8.6 Routing target 3: direct response (direct_response)

Envoy can also respond to requests directly.

{
"status": "...",
"body": "{...}"
}

2) status: the HTTP status code of the response;

3) body: the response body; optional, defaults to empty. When needed, the body's data source is given in one of three ways:

{
"filename": "...",
"inline_bytes": "...",
"inline_string": "..."
}


IX. Summary

1. Basic route configuration

1) Specify the match condition in match simply via prefix, path, or safe_regex;

2) Redirect matched requests, respond to them directly, or route them to the specified target cluster.

2. Advanced routing policies

1) Specify the match condition in match via prefix, path, or safe_regex, combined with advanced matching mechanisms:

(1) Combine with runtime_fraction to split traffic proportionally;

(2) Combine with headers to route by specified headers, e.g., grouping requests by cookie value and routing each group to a different target;

(3) Combine with query_parameters to route by specified parameters, e.g., grouping by a "group" parameter and routing each group to a different target;

(4) Tip: multiple conditions can be combined flexibly to build complex matching logic.

2) Complex routing targets

(1) Directed routing based on the value of the cluster_header request header;

(2) weighted_clusters: split the request traffic across target clusters by weight;

(3) Advanced route attributes, such as retries, timeouts, CORS, and rate limiting.
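A hedged sketch of (1), where the client names the target cluster in a request header; the header name X-Target-Cluster is illustrative, not part of the labs below:

```yaml
routes:
- match:
    prefix: "/"
  route:
    # e.g. curl -H "X-Target-Cluster: demoappv12" http://...
    cluster_header: X-Target-Cluster
```

If the named cluster does not exist, the response code is controlled by cluster_not_found_response_code (see section 8.4).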

X. Lab Cases

1. httproute-simple-match

Lab environment:

envoy: Front Proxy, address 172.31.50.10
Seven backend services:
light_blue and dark_blue: mapped to the blue cluster in Envoy
light_red and dark_red: mapped to the red cluster in Envoy
light_green and dark_green: mapped to the green cluster in Envoy
gray: mapped to the gray cluster in Envoy

front-envoy.yaml

admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
       address: 0.0.0.0
       port_value: 9901

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: vh_001
              domains: ["ilinux.io", "*.ilinux.io", "ilinux.*"]
              routes:
              - match:
                  path: "/service/blue"
                route:
                  cluster: blue
              - match:
                  safe_regex: 
                    google_re2: {}
                    regex: "^/service/.*blue$"
                redirect:
                  path_redirect: "/service/blue"
              - match:
                  prefix: "/service/yellow"
                direct_response:
                  status: 200
                  body:
                    inline_string: "This page will be provided soon later.\n"
              - match:
                  prefix: "/"
                route:
                  cluster: red
            - name: vh_002
              domains: ["*"]
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: gray
          http_filters:
          - name: envoy.filters.http.router

  clusters:
  - name: blue
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
    load_assignment:
      cluster_name: blue
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: blue
                port_value: 80

  - name: red
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
    load_assignment:
      cluster_name: red
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: red
                port_value: 80

  - name: green
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
    load_assignment:
      cluster_name: green
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: green
                port_value: 80

  - name: gray
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
    load_assignment:
      cluster_name: gray
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: gray
                port_value: 80
                

docker-compose.yaml

version: '3'

services:
  front-envoy:
    image: envoyproxy/envoy-alpine:v1.20.0
    environment:
      - ENVOY_UID=0
    volumes:
      - ./front-envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.50.10
    expose:
      # Expose ports 80 (for general traffic) and 9901 (for the admin server)
      - "80"
      - "9901"

  light_blue:
    image: ikubernetes/servicemesh-app:latest
    networks:
      envoymesh:
        aliases:
          - light_blue
          - blue
    environment:
      - SERVICE_NAME=light_blue
    expose:
      - "80"

  dark_blue:
    image: ikubernetes/servicemesh-app:latest
    networks:
      envoymesh:
        aliases:
          - dark_blue
          - blue
    environment:
      - SERVICE_NAME=dark_blue
    expose:
      - "80"

  light_green:
    image: ikubernetes/servicemesh-app:latest
    networks:
      envoymesh:
        aliases:
          - light_green
          - green
    environment:
      - SERVICE_NAME=light_green
    expose:
      - "80"

  dark_green:
    image: ikubernetes/servicemesh-app:latest
    networks:
      envoymesh:
        aliases:
          - dark_green
          - green
    environment:
      - SERVICE_NAME=dark_green
    expose:
      - "80"

  light_red:
    image: ikubernetes/servicemesh-app:latest
    networks:
      envoymesh:
        aliases:
          - light_red
          - red
    environment:
      - SERVICE_NAME=light_red
    expose:
      - "80"

  dark_red:
    image: ikubernetes/servicemesh-app:latest
    networks:
      envoymesh:
        aliases:
          - dark_red
          - red
    environment:
      - SERVICE_NAME=dark_red
    expose:
      - "80"

  gray:
    image: ikubernetes/servicemesh-app:latest
    networks:
      envoymesh:
        aliases:
          - gray
          - grey
    environment:
      - SERVICE_NAME=gray
    expose:
      - "80"

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.50.0/24

Route explanation

            virtual_hosts:
            - name: vh_001
              domains: ["ilinux.io", "*.ilinux.io", "ilinux.*"]
              routes:
              - match:
                  path: "/service/blue"
                route:
                  cluster: blue
              - match:
                  safe_regex: 
                    google_re2: {}
                    regex: "^/service/.*blue$"
                redirect:
                  path_redirect: "/service/blue"
              - match:
                  prefix: "/service/yellow"
                direct_response:
                  status: 200
                  body:
                    inline_string: "This page will be provided soon later.\n"
              - match:
                  prefix: "/"
                route:
                  cluster: red
            - name: vh_002
              domains: ["*"]
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: gray

Lab verification

docker-compose up

Test from a second (cloned) terminal window.

Testing the domain matching mechanism

# First, access a domain that cannot match vh_001
root@test:~# curl -H "Host: www.magedu.com" http://172.31.50.10/service/a
Hello from App behind Envoy (service gray)! hostname: e5f05afa0a68 resolved hostname: 172.31.50.3

root@test:~# curl -v -H "Host: www.magedu.com" http://172.31.50.10/service/a
*   Trying 172.31.50.10:80...
* TCP_NODELAY set
* Connected to 172.31.50.10 (172.31.50.10) port 80 (#0)
> GET /service/a HTTP/1.1
> Host: www.magedu.com
> User-Agent: curl/7.68.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: text/html; charset=utf-8
< content-length: 98
< server: envoy
< date: Fri, 03 Dec 2021 07:04:02 GMT
< x-envoy-upstream-service-time: 3
< 
Hello from App behind Envoy (service gray)! hostname: e5f05afa0a68 resolved hostname: 172.31.50.3
* Connection #0 to host 172.31.50.10 left intact
# vh_001 was not matched; vh_002 was matched instead

# Next, access a domain that matches vh_001
root@test:~# curl -H "Host: www.ilinux.io" http://172.31.50.10/service/a
Hello from App behind Envoy (service light_red)! hostname: 235f09398734 resolved hostname: 172.31.50.8
root@test:~# curl -H "Host: www.ilinux.io" http://172.31.50.10/service/a
Hello from App behind Envoy (service dark_red)! hostname: 171330891e5c resolved hostname: 172.31.50.2
# The result comes directly from vh_001's backend cluster

Testing the route matching mechanism

# First, access "/service/blue"
# the cluster's response is returned directly
root@test:~# curl -H "Host: www.ilinux.io" http://172.31.50.10/service/blue
Hello from App behind Envoy (service dark_blue)! hostname: cf263713476d resolved hostname: 172.31.50.5
root@test:~# curl -H "Host: www.ilinux.io" http://172.31.50.10/service/blue
Hello from App behind Envoy (service light_blue)! hostname: ce070602a111 resolved hostname: 172.31.50.6
root@test:~# curl -H "Host: www.ilinux.io" http://172.31.50.10/service/blue
# matched; the backend cluster's response is returned directly

# Next, access "/service/dark_blue"
# a redirect is returned
root@test:~# curl -I -H "Host: www.ilinux.io" http://172.31.50.10/service/dark_blue
HTTP/1.1 301 Moved Permanently
location: http://www.ilinux.io/service/blue
date: Fri, 03 Dec 2021 07:08:35 GMT
server: envoy
transfer-encoding: chunked


# Then access "/service/yellow"
# the response is returned directly, without going through a backend cluster
root@test:~# curl -H "Host: www.ilinux.io" http://172.31.50.10/service/yellow
This page will be provided soon later.

2. httproute-headers-match

Lab environment:

envoy: Front Proxy, address 172.31.52.10
Five backend services:
demoapp-v1.0-1 and demoapp-v1.0-2: mapped to the demoappv10 cluster in Envoy
demoapp-v1.1-1 and demoapp-v1.1-2: mapped to the demoappv11 cluster in Envoy
demoapp-v1.2-1: mapped to the demoappv12 cluster in Envoy

front-envoy.yaml

admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
       address: 0.0.0.0
       port_value: 9901

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: vh_001
              domains: ["*"]
              routes:
              - match:
                  prefix: "/"
                  headers:
                  - name: X-Canary
                    exact_match: "true"
                route:
                  cluster: demoappv12
              - match:
                  prefix: "/"
                  query_parameters:
                  - name: "username"
                    string_match:
                      prefix: "vip_"
                route:
                  cluster: demoappv11
              - match:
                  prefix: "/"
                route:
                  cluster: demoappv10
          http_filters:
          - name: envoy.filters.http.router

  clusters:
  - name: demoappv10
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: demoappv10
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: demoappv10
                port_value: 80

  - name: demoappv11
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: demoappv11
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: demoappv11
                port_value: 80

  - name: demoappv12
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: demoappv12
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: demoappv12
                port_value: 80
                

docker-compose.yaml

version: '3'

services:
  front-envoy:
    image: envoyproxy/envoy-alpine:v1.20.0
    environment:
      - ENVOY_UID=0
    volumes:
      - ./front-envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.52.10
    expose:
      # Expose ports 80 (for general traffic) and 9901 (for the admin server)
      - "80"
      - "9901"

  demoapp-v1.0-1:
    image: ikubernetes/demoapp:v1.0
    hostname: demoapp-v1.0-1
    networks:
      envoymesh:
        aliases:
          - demoappv10
    expose:
      - "80"      
      
  demoapp-v1.0-2:
    image: ikubernetes/demoapp:v1.0
    hostname: demoapp-v1.0-2
    networks:
      envoymesh:
        aliases:
          - demoappv10
    expose:
      - "80"  

  demoapp-v1.1-1:
    image: ikubernetes/demoapp:v1.1
    hostname: demoapp-v1.1-1
    networks:
      envoymesh:
        aliases:
          - demoappv11
    expose:
      - "80"      
      
  demoapp-v1.1-2:
    image: ikubernetes/demoapp:v1.1
    hostname: demoapp-v1.1-2
    networks:
      envoymesh:
        aliases:
          - demoappv11
    expose:
      - "80"  
      
  demoapp-v1.2-1:
    image: ikubernetes/demoapp:v1.2
    hostname: demoapp-v1.2-1
    networks:
      envoymesh:
        aliases:
          - demoappv12
    expose:
      - "80"     

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.52.0/24

The routes in use

            virtual_hosts:
            - name: vh_001
              domains: ["*"]
              routes:
              - match:
                  prefix: "/"
                  headers:
                  - name: X-Canary
                    exact_match: "true"
                route:
                  cluster: demoappv12
              - match:
                  prefix: "/"
                  query_parameters:
                  - name: "username"
                    string_match:
                      prefix: "vip_"
                route:
                  cluster: demoappv11
              - match:
                  prefix: "/"
                route:
                  cluster: demoappv10

Lab verification

docker-compose up

Test from a second (cloned) terminal window.

Send requests with no special attributes

# Without any distinguishing conditions, the default demoappv10 results are returned
root@test:~# curl 172.31.52.10/hostname
ServerName: demoapp-v1.0-2
root@test:~# curl 172.31.52.10/hostname
ServerName: demoapp-v1.0-1
root@test:~# curl 172.31.52.10/hostname
ServerName: demoapp-v1.0-2
root@test:~# curl 172.31.52.10/hostname
ServerName: demoapp-v1.0-1

Test requests carrying the "X-Canary: true" header

# With the specific header, demoappv12's result is returned
root@test:~# curl -H "X-Canary: true" 172.31.52.10/hostname
ServerName: demoapp-v1.2-1

Test with a specific query parameter

# With the specific query parameter, demoappv11's results are returned
root@test:~# curl 172.31.52.10/hostname?username=vip_mageedu
ServerName: demoapp-v1.1-1
root@test:~# curl 172.31.52.10/hostname?username=vip_mageedu
ServerName: demoapp-v1.1-2

root@test:~# curl 172.31.52.10/hostname?username=vip_ilinux     
ServerName: demoapp-v1.1-1
root@test:~# curl 172.31.52.10/hostname?username=vip_ilinux     
ServerName: demoapp-v1.1-2

 

