The purely static resource configuration approach explicitly defines listeners, clusters, and secrets directly in the configuration file through the static_resources parameter; the data types of these parameters are shown in the configuration below.
◼ Here, listeners configures the list of purely static listeners, clusters defines the available clusters and the endpoints of each cluster, and the optional secrets defines configuration such as the digital certificates used for TLS communication.
◼ In practice, the admin and static_resources parameters together already provide a minimal resource configuration, and even admin may be omitted.
{
  "listeners": [],
  "clusters": [],
  "secrets": []
}
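As a minimal sketch of such a bootstrap file (the echo listener simply anticipates the example used later in this document, and the admin port 9901 is an assumption), the admin plus static_resources combination could look like this in YAML:

admin:
  access_log_path: /dev/null
  address:
    socket_address: { address: 127.0.0.1, port_value: 9901 }
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 15001 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.echo   # echo back whatever the client sends
  clusters: []                              # no upstream clusters needed for the echo filter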
1. When starting an instance from the prebuilt Envoy Docker image, you must supply an additional custom configuration file, either by baking it into a new image or by providing it to the container through a volume; the test below provides the file via a volume mount:
docker pull envoyproxy/envoy-alpine:v1.20.0
docker run --name envoy-test -p 80:80 -v /envoy.yaml:/etc/envoy/envoy.yaml envoyproxy/envoy-alpine:v1.20.0

1. Create a dedicated working directory for the Envoy container, e.g. /applications/envoy/
2. Save the earlier listener example into the echo-demo/ subdirectory of that directory as envoy.yaml
   Note: the full file path is /applications/envoy/echo-demo/envoy.yaml
3. Validate the configuration file syntax
   # cd /applications/envoy/
   # docker run --name echo-demo --rm -v $(pwd)/echo-demo/envoy.yaml:/etc/envoy/envoy.yaml envoyproxy/envoy-alpine:v1.20.0 --mode validate -c /etc/envoy/envoy.yaml
4. If the syntax validation passes, start the Envoy instance directly
   # docker run --name echo-demo --rm -v $(pwd)/echo-demo/envoy.yaml:/etc/envoy/envoy.yaml envoyproxy/envoy-alpine:v1.20.0 -c /etc/envoy/envoy.yaml
5. Test with nc; anything you type will be echoed back by the envoy.echo filter (the first command below is used to find the echo-demo container's IP address)
   # containerIP=$(docker container inspect --format="{{.NetworkSettings.IPAddress}}" echo-demo)
   # nc $containerIP 15001
2. When running an Envoy instance directly from the binary, simply specify the configuration file to use:
1. Create a dedicated directory for the Envoy configuration file, e.g. /etc/envoy/
2. Save the earlier listener example into that directory as envoy-echo-demo.yaml
3. Validate the configuration file syntax
   # envoy --mode validate -c /etc/envoy/envoy-echo-demo.yaml
4. If the syntax validation passes, start the Envoy instance directly
   # envoy -c /etc/envoy/envoy-echo-demo.yaml
5. Test with nc; anything you type will be echoed back by the envoy.echo filter
   # nc 127.0.0.1 15001
2. Simple static Listener configuration
A listener defines the socket on which Envoy listens for requests from downstream clients, the filter chains invoked to process those requests, and other related configuration attributes.
Listener configuration format
static_resources:
  listeners:
  - name:              # name of the listener
    address:
      socket_address:
        address:       # IP address to listen on
        port_value:    # port to listen on
    filter_chains:
    - filters:
      - name:          # the filter to use in this filter chain
        config:
◼ Below is the simplest possible static listener configuration example
static_resources:
  listeners:                               # configure the listeners
  - name: listener_0                       # name of the listener
    address:                               # address information of the listener
      socket_address:
        address: 0.0.0.0                   # IP address to listen on
        port_value: 8080                   # port to listen on
    filter_chains:                         # the listener's filter chains
    - filters:
      - name: envoy.filters.network.echo   # which filter to use; here, the echo filter
A cluster usually represents a group of upstream servers (endpoints) that provide the same service; it can be configured statically by the user or discovered dynamically through CDS.
A cluster only becomes available after its "warming" phase completes, which means the cluster manager must first finish initializing its endpoints through DNS resolution or the EDS service, and health checks must succeed, before the cluster can be used.
Cluster configuration format
clusters:
- name: ...                 # unique name of the cluster; also used in statistics when alt_stat_name is not provided
  alt_stat_name: ...        # alternative cluster name to use in statistics
  type: ...                 # service discovery type used to resolve the cluster (i.e. to generate its endpoints); available values include STATIC, STRICT_DNS, LOGICAL_DNS, ORIGINAL_DST and EDS
  lb_policy:                # load-balancing algorithm; supports ROUND_ROBIN, LEAST_REQUEST, RING_HASH, RANDOM, MAGLEV and CLUSTER_PROVIDED
  load_assignment:          # specifies how members are obtained for STATIC, STRICT_DNS or LOGICAL_DNS clusters; EDS clusters use the eds_cluster_config field instead
    cluster_name: ...       # cluster name
    endpoints:              # list of endpoints
    - locality: {}          # identifies the location of the upstream hosts, usually by region, zone, etc.
      lb_endpoints:         # list of endpoints belonging to this locality
      - endpoint_name: ...  # name of the endpoint
        endpoint:           # endpoint definition
          socket_address:   # endpoint address
            address: ...    # endpoint IP address
            port_value: ... # endpoint port
            protocol: ...   # protocol type
The endpoints of a static cluster can be given directly in the configuration, or they can be discovered dynamically with the help of a DNS service (see the STRICT_DNS sketch after the example below).
clusters:
- name: test_cluster
  connect_timeout: 0.25s
  type: STATIC
  lb_policy: ROUND_ROBIN
  load_assignment:
    cluster_name: test_cluster
    endpoints:
    - lb_endpoints:          # the backend servers to forward to
      - endpoint:
          address:
            socket_address: { address: 172.17.0.3, port_value: 80 }
      - endpoint:
          address:
            socket_address: { address: 172.17.0.4, port_value: 80 }
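As a hedged sketch of the DNS-based variant mentioned above, a STRICT_DNS cluster resolves a hostname and uses every returned address as an endpoint; the hostname webservice.example.local below is purely illustrative:

clusters:
- name: dns_cluster
  connect_timeout: 0.25s
  type: STRICT_DNS           # endpoints are discovered by periodically resolving the hostname
  lb_policy: ROUND_ROBIN
  load_assignment:
    cluster_name: dns_cluster
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address: { address: webservice.example.local, port_value: 80 }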
The TCP proxy filter performs 1:1 network connection proxying between a downstream client and an upstream cluster.
It can be used on its own as a tunnel replacement, or combined with other filters such as the MongoDB filter or the rate-limit filter.
The TCP proxy filter strictly enforces the connection limits set by the global resource manager for each upstream cluster:
it checks with the upstream cluster's resource manager whether a new connection can be created without exceeding that cluster's maximum connection count.
The TCP proxy filter can route requests directly to a specified cluster, or distribute them across multiple target clusters based on weights (a weighted_clusters sketch follows the example below).
Configuration syntax:
{
  "stat_prefix": "...",            # prefix used when emitting statistics
  "cluster": "...",                # identifier of the target cluster to route to
  "weighted_clusters": "{...}",
  "metadata_match": "{...}",
  "idle_timeout": "{...}",         # idle timeout between downstream and upstream, i.e. how long the connection may go without sending or receiving data
  "access_log": [],                # access logs
  "max_connect_attempts": "{...}"  # maximum number of connection attempts
}
The following example uses the TCP proxy to forward requests from downstream clients (the local host) to two backend web servers.
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: tcp              # statistics prefix
          cluster: local_cluster        # route to the local_cluster defined below
  clusters:                             # define local_cluster
  - name: local_cluster
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: local_cluster
      endpoints:                        # endpoints backing local_cluster
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 172.31.1.11, port_value: 8080 }
        - endpoint:
            address:
              socket_address: { address: 172.31.1.12, port_value: 8080 }
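To illustrate the weight-based scheduling across multiple target clusters mentioned earlier, the following hedged sketch replaces the single cluster field in the filter chain with weighted_clusters; the cluster names blue_cluster and green_cluster and the 80/20 split are assumptions chosen for illustration only:

filter_chains:
- filters:
  - name: envoy.tcp_proxy
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
      stat_prefix: tcp_weighted
      weighted_clusters:           # mutually exclusive with the cluster field
        clusters:
        - name: blue_cluster       # roughly 80% of new connections
          weight: 80
        - name: green_cluster      # roughly 20% of new connections
          weight: 20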
The http_connection_manager manipulates the HTTP protocol by introducing an L7 filter chain, in which the router filter is used to configure route forwarding.
Configuration format:
listeners:
- name:
  address:
    socket_address: { address: ..., port_value: ..., protocol: ... }
  filter_chains:
  - filters:
    - name: envoy.filters.network.http_connection_manager
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager # enable http_connection_manager
        stat_prefix: ...       # human-readable prefix used in statistics
        route_config:          # static route configuration; dynamic configuration should use the rds field instead
          name: ...            # name of the route configuration
          virtual_hosts:       # list of virtual hosts that make up the route table
          - name: ...          # logical name of the virtual host, used only for statistics, not for routing
            domains: []        # list of domains this virtual host matches; the "*" wildcard is supported; the search order is exact match, prefix wildcard, suffix wildcard, and finally the catch-all wildcard
            routes: []         # list of routes under the given domains, searched in order; the first matching route entry is the one used
        http_filters:          # define the HTTP filter chain
        - name: envoy.filters.http.router # invoke the L7 router filter
Note:
◼ A request is first matched against the domains of the configured virtual hosts; then the match definitions of the route entries in that virtual host's routes list are searched in order, and the routing action attached to the first matching entry (route, redirect or direct_response) takes effect.
6. Basic HTTP L7 routing configuration
The routes configured under route_config.virtual_hosts.routes are used to route downstream client requests to a server in the appropriate upstream cluster.
◼ Routing works by matching the request URL against the definition in the match field.
◆ The match field expresses the matching pattern through exactly one of prefix (URL prefix), path or safe_regex (regular expression).
◼ A matched request is then handled by exactly one of three fields: route (routing rule), redirect (redirection rule) or direct_response (direct response).
◼ The routing target defined by route must be exactly one of cluster (the name of an upstream cluster), cluster_header (the target cluster is taken from the value of the request header named by cluster_header) or weighted_clusters (multiple target clusters, each with a weight).
Configuration format:
routes:
- name: ...      # name of this route entry
  match:
    prefix: ...  # prefix of the request URL
  route:         # route action
    cluster:     # target upstream cluster
match:
1) URL matching is based on exactly one of prefix, path or regex (note: regex is being replaced by safe_regex)
2) Matching can additionally be refined with headers and query_parameters
3) A matched request can be handled by one of three routing mechanisms:
   redirect — redirection
   direct_response — direct response
   route — routing to a cluster
route:
1) The routing target is defined by exactly one of cluster, weighted_clusters or cluster_header
2) During forwarding, the URL can be rewritten with prefix_rewrite and host_rewrite
3) Additional traffic-management mechanisms can be configured, such as timeout, retry_policy, cors, request_mirror_policy and rate_limits; a sketch combining several of these options follows below
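The following hedged sketch combines several of the match and route options listed above in a single virtual host; the cluster names, domain, header name and timeout values are assumptions chosen for illustration:

virtual_hosts:
- name: demo_vh
  domains: ["www.example.com"]
  routes:
  - name: api_route
    match:
      prefix: "/api"
      headers:                     # additionally require a request header
      - name: "x-canary"
        exact_match: "true"
    route:
      prefix_rewrite: "/"          # strip the /api prefix before forwarding
      timeout: 5s
      retry_policy:
        retry_on: "5xx"
        num_retries: 2
      weighted_clusters:           # split traffic across two clusters by weight
        clusters:
        - name: service_v1
          weight: 90
        - name: service_v2
          weight: 10
  - name: default_route
    match: { prefix: "/" }
    route: { cluster: service_v1 }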

1. Introduction to the admin interface
Envoy has a built-in administration server that supports queries as well as modifications, and it may even expose private data (such as statistics, cluster names and certificate information), so it is essential to carefully design its access controls to prevent unauthorized access.
Configuration format:
admin:
  access_log: []        # access-log related configuration, usually specifying a log filter and log settings
  access_log_path: ...  # path of the admin interface's access log file; use /dev/null if no access log is needed
  profile_path: ...     # output path of the CPU profiler, /var/log/envoy/envoy.prof by default
  address:              # socket to listen on
    socket_address:
      protocol: ...
      address: ...
      port_value: ...
A simple admin configuration example
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }
# Note: listening on all addresses here is purely for testing convenience; for safety it should be 127.0.0.1.
To add the admin interface to the "L7 Front Proxy", simply add the relevant configuration to the envoy.yaml file it uses.
A simple configuration for this test is given below.
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          ……
  clusters:
  - name: local_cluster
    connect_timeout: 0.25s
    type: STATIC
    ……
The admin interface exposes a number of built-in paths; different paths accept different GET or POST requests.
admin commands are:
/: Admin home page # GET
/ready: Outputs a string and error code reflecting the state of the server. # GET, returns the current state of the Envoy server
/certs: print certs on machine # GET, lists all loaded TLS certificates and related information
/clusters: upstream cluster status # GET, additionally supports "GET /clusters?format=json"
/config_dump: dump current Envoy configs # GET, prints the various configurations loaded by Envoy; supports the include_eds, mask and resource query parameters
/contention: dump current Envoy mutex contention stats (if enabled) # GET, mutex tracing
/cpuprofiler: enable/disable the CPU profiler # POST, enables or disables the cpuprofiler
/healthcheck/fail: cause the server to fail health checks # POST, forces the HTTP health check to fail
/healthcheck/ok: cause the server to pass health checks # POST, forces the HTTP health check to pass
/heapprofiler: enable/disable the heap profiler # POST, enables or disables the heapprofiler
/help: print out list of admin commands
/hot_restart_version: print the hot restart compatibility version # GET, prints hot-restart related information
/listeners: print listener addresses # GET, lists all listeners; supports "GET /listeners?format=json"
/drain_listeners: Drains all listeners. # POST, drains all listeners; supports the inboundonly (inbound listeners only) and graceful (graceful shutdown) query parameters
/logging: query/change logging levels # POST, enables or disables different logging levels on different subcomponents
/memory: print current allocation/heap usage # prints current memory allocation information, in bytes
/quitquitquit: exit the server # POST, cleanly shuts down the server
/reset_counters: reset all counters to zero # POST, resets all counters to zero
/tap: This endpoint is used for configuring an active tap session. # POST, configures an active tap session
/reopen_logs: Triggers reopen of all access logs. Behavior is similar to SIGUSR1 handling. # POST, reopens all access logs, similar to the SIGUSR1 signal
/runtime: print runtime values # GET, outputs all runtime-related values in JSON format
/runtime_modify: modify runtime values # POST /runtime_modify?key1=value1&key2=value2, adds or modifies the runtime values passed as query parameters
/server_info: print server version/status information # GET, prints information about the current Envoy server
/stats: print server stats # outputs statistics on demand, e.g. GET /stats?filter=regex; also supports the json and prometheus output formats
/stats/prometheus: print server stats in prometheus format # outputs statistics in Prometheus format
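As a small hedged usage sketch (the address 172.31.5.2:9901 matches the admin-interface example later in this document; any admin address would do), a few of the endpoints above could be exercised like this:

# read-only queries
curl 172.31.5.2:9901/server_info
curl '172.31.5.2:9901/stats?filter=http.ingress_http'
# state-changing endpoints require POST
curl -X POST '172.31.5.2:9901/logging?level=debug'
curl -X POST 172.31.5.2:9901/reset_counters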
Sample output
1) GET /clusters: lists all configured clusters, including all upstream hosts discovered in each cluster and per-host statistics; JSON output is supported.
   Cluster-manager information: "version_info string"; without CDS this shows "version_info::static".
   Cluster-related information: circuit breakers, outlier detection, and the "added_via_api" flag indicating whether the cluster was added through CDS.
   Per-host statistics: total connections, active connections, total requests, host health status, and so on; an unhealthy host is usually flagged for one of three reasons:
   (1) failed_active_hc: failed active health checking;
   (2) failed_eds_health: marked unhealthy by EDS;
   (3) failed_outlier_check: failed the outlier-detection check.
2) GET /listeners: lists all configured listeners, including each listener's name and the address it listens on; JSON output is supported.
   ◼ POST /reset_counters: resets all counters to zero; note that this only affects the server's local output and has no effect on statistics already shipped to external storage systems.
3) GET /config_dump: prints, in JSON format, the configuration currently loaded by Envoy's various components.
4) GET /ready: reports whether the server is ready; returns 200 when the state is LIVE, otherwise 503.
1) Compared with static resource configuration, the dynamic xDS API mechanism makes Envoy's configuration system highly flexible.
   (1) Sometimes, however, a change only touches an individual feature flag, and going through the xDS interfaces for that would be overkill; Runtime is the configuration interface designed for exactly this scenario.
   (2) Runtime is essentially a virtual file system tree that can be defined and configured through one or more local file system directories, static resources, RTDS dynamic discovery and the Admin Interface;
       each such configuration is called a Layer, hence the name "Layered Runtime"; the layers are overlaid to produce the effective values.
2) In other words, Runtime is an out-of-band, real-time configuration system deployed alongside Envoy that allows settings to be changed without restarting Envoy or modifying its main configuration.
   (1) Runtime parameters are also called "feature flags" or "deciders".
   (2) Configuration changes made through runtime parameters take effect in real time.
3) An implementation of runtime configuration is also called a runtime configuration provider.
   (1) The implementation currently supported by Envoy is a virtual file system composed of multiple layers;
       Envoy watches for symlink swaps in the configured directories and reloads the file tree when a swap occurs.
   (2) Envoy falls back to default runtime values and a "null" provider to keep the program working correctly, so the runtime configuration system is not strictly required.
4) Enabling Envoy's runtime configuration mechanism requires enabling and configuring it in the bootstrap file:
   (1) it is defined under the top-level layered_runtime field of the bootstrap configuration file;
   (2) once layered_runtime is present in the bootstrap, at least one layer must be defined.
5) The runtime configuration specifies a virtual file system tree containing reloadable configuration elements.
   (1) This virtual file system can be realized through overlays derived from the static bootstrap configuration, the local file system, the admin console and RTDS.
   (2) The runtime can therefore be thought of as a virtual file system made up of multiple layers;
       the layers are specified in the layered-runtime bootstrap configuration, and runtime settings in later layers override those in earlier layers.
Configuration format:
layered_runtime:         # configures the runtime configuration provider; if unspecified, the null provider is used, i.e. all parameters take their default values
  layers:                # list of runtime layers; later layers override the configuration of earlier layers
  - name: ...            # name of the runtime layer, used only in the output of "GET /runtime"
    static_layer: {...}  # static runtime layer, following the runtime protobuf JSON encoding; unlike static xDS resources, a static runtime layer can still be overridden by later layers
                         # this layer type and the following three are mutually exclusive, so each list item may define only one layer type
    disk_layer: {...}    # runtime layer based on the local disk
      symlink_root: ...  # file system tree accessed through a symbolic link
      subdirectory: ...  # subdirectory to load under the root
      append_service_cluster: ... # whether to append the service cluster to the sub-path under the symlink root
    admin_layer: {...}   # admin-console runtime layer, i.e. values viewed through the /runtime admin endpoint and modified through the /runtime_modify admin endpoint
    rtds_layer: {...}    # runtime discovery service (RTDS) layer, i.e. layer configuration discovered dynamically through the RTDS API of the xDS APIs
      name: ...          # resource to subscribe to on rtds_config for the RTDS layer
      rtds_config: ...   # the ConfigSource for RTDS
A typical configuration example that defines four layers:
layers:
- name: static_layer_0    # static bootstrap layer: runtime parameters and their values given directly in the configuration
  static_layer:
    health_check:
      min_interval: 5
- name: disk_layer_0      # local disk file system
  disk_layer: { symlink_root: /srv/runtime/current, subdirectory: envoy }
- name: disk_layer_1      # local disk file system, override subdirectory
  disk_layer: { symlink_root: /srv/runtime/current, subdirectory: envoy_override, append_service_cluster: true }
- name: admin_layer_0     # admin-console layer
  admin_layer: {}
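For the disk layers above, Envoy reads runtime values from files under the symlink root: nested directory names form the key and the file content is the value. Below is a hedged sketch of how such a tree might be laid out and swapped (the /srv/runtime/v1 path and the value 10 are assumptions for illustration):

# each runtime key is a file; its content is the value
mkdir -p /srv/runtime/v1/envoy/health_check
echo 10 > /srv/runtime/v1/envoy/health_check/min_interval
# point the symlink root at the new tree; Envoy reloads when the symlink is swapped
ln -sfn /srv/runtime/v1 /srv/runtime/current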
In an Envoy mesh, the Envoy acting as the front proxy normally runs as a standalone process; it proxies client requests to the services in the mesh, and each application instance of those services sits behind its own Envoy instance running in the sidecar proxy pattern.

The TLS modes in an Envoy mesh roughly fall into the following common scenarios.
1. The front proxy serves HTTPS to downstream clients, but the front proxy and the services inside the mesh still talk to each other over HTTP
   https → http
2. The front proxy serves HTTPS to downstream clients, and the front proxy and the services inside the mesh also talk to each other over HTTPS
   https → https
   Communication between the internal services then has two variants:
   (1) only the client verifies the server's certificate
   (2) the client and the server verify each other's certificates (mTLS)
   Note: in a dynamic, containerized environment, certificate provisioning and management become a significant challenge (a certificate-generation sketch follows below)
3. The front proxy acts as a plain TCP proxy and transparently passes the TLS traffic between the downstream client and the upstream server
   https-passthrough
   east-west traffic inside the cluster likewise runs over HTTPS
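The self-signed certificates used in the following examples can be generated with openssl; this sketch simply expands the command shown in the comment of the front-proxy configuration below, and the webserver pair is an assumed analogue for the sidecar example:

# front-proxy certificate (matches the comment in the example below)
openssl req -x509 -newkey rsa:2048 -keyout front-proxy.key -out front-proxy.crt -days 3650 -nodes -subj '/CN=www.magedu.com'
# assumed equivalent pair for the sidecar proxies
openssl req -x509 -newkey rsa:2048 -keyout webserver.key -out webserver.crt -days 3650 -nodes -subj '/CN=webserver'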
Only the listener needs to be configured to provide TLS towards downstream clients; below is a configuration example for the front-proxy Envoy.
static_resources:
  listeners:
  - name: listener_http
    address:
      socket_address: { address: 0.0.0.0, port_value: 8443 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: web_service_01
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: web_cluster_01 }
          http_filters:
          - name: envoy.filters.http.router
      transport_socket:
        name: envoy.transport_sockets.tls
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
          common_tls_context:
            tls_certificates:
            # The following self-signed certificate pair is generated using:
            # $ openssl req -x509 -newkey rsa:2048 -keyout front-proxy.key -out front-proxy.crt -days 3650 -nodes -subj '/CN=www.magedu.com'
            - certificate_chain:
                filename: "/etc/envoy/certs/front-proxy.crt"
              private_key:
                filename: "/etc/envoy/certs/front-proxy.key"
Besides the listener providing TLS towards downstream clients, the front proxy must also establish TLS connections to each service in the Envoy mesh.
Below is a configuration example for the sidecar proxy Envoy of one of the services in the Envoy mesh.
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 443 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          ……
          http_filters:
          - name: envoy.filters.http.router
      transport_socket:
        name: envoy.transport_sockets.tls
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
          common_tls_context:
            tls_certificates:
            - certificate_chain:
                filename: "/etc/envoy/certs/webserver.crt"
              private_key:
                filename: "/etc/envoy/certs/webserver.key"
Below is the cluster configuration in the front-proxy Envoy for its upstream-facing TLS communication.
clusters:
- name: web_cluster_01
  connect_timeout: 0.25s
  type: STATIC
  lb_policy: ROUND_ROBIN
  load_assignment:
    cluster_name: web_cluster_01
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address: { address: 172.31.8.11, port_value: 443 }
      - endpoint:
          address:
            socket_address: { address: 172.31.8.12, port_value: 443 }
  transport_socket:
    name: envoy.transport_sockets.tls
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
      ......
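The UpstreamTlsContext body is elided ("......") in the example above; for the scenario where only the client verifies the server's certificate, a hedged sketch of what could go there is shown below, assuming a CA bundle at /etc/envoy/certs/ca.crt (both the path and the SNI value are illustrative):

transport_socket:
  name: envoy.transport_sockets.tls
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
    common_tls_context:
      validation_context:          # verify the upstream server's certificate against this CA
        trusted_ca:
          filename: "/etc/envoy/certs/ca.crt"
    sni: webserver                 # SNI to present to the upstream server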
In TLS passthrough mode, the front proxy needs a TCP-proxy type listener, and the cluster configuration no longer needs any transport_socket settings.
The services in the Envoy mesh, however, must serve TLS themselves.
static_resources:
  listeners:
  - name: listener_http
    address:
      socket_address: { address: 0.0.0.0, port_value: 8443 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          cluster: web_cluster_01
          stat_prefix: https_passthrough
  clusters:
  - name: web_cluster_01
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: web_cluster_01
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 172.31.9.11, port_value: 443 }
        - endpoint:
            address:
              socket_address: { address: 172.31.9.12, port_value: 443 }
11. Envoy static configuration examples
https://github.com/ikubernetes/servicemesh_in_practice.git
https://gitee.com/mageedu/servicemesh_in_practise
1. envoy-echo
telnet to the listener's IP and port; whatever you type is echoed back.
envoy.yaml
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 8080
    filter_chains:
    - filters:
      - name: envoy.filters.network.echo
Dockerfile
FROM envoyproxy/envoy-alpine:v1.20.0
ADD envoy.yaml /etc/envoy/
docker-compose.yaml
version: '3.3'

services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.20.0
    volumes:
    - ./envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.4.2
        aliases:
        - envoy-echo

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
      - subnet: 172.31.4.0/24
root@test:/apps/servicemesh_in_practise/Envoy-Basics/envoy-echo# docker-compose up
telnet to the Envoy IP and port
root@test:~# telnet 172.31.4.2 8080
Trying 172.31.4.2...
Connected to 172.31.4.2.
Escape character is '^]'.
# whatever you type is echoed back
root@test:~# telnet 172.31.4.2 8080
Trying 172.31.4.2...
Connected to 172.31.4.2.
Escape character is '^]'.
abc
abc
ni hao
ni hao
Modify the Envoy configuration file and test inside the Envoy container
root@test:/apps/servicemesh_in_practise/Envoy-Basics/envoy-echo# docker-compose down
Edit envoy-v2.yaml
admin:
  access_log_path: /dev/null
  address:
    socket_address:
      address: 127.0.0.1
      port_value: 0
static_resources:
  clusters:
  - name: cluster_0
    connect_timeout: 0.25s
    load_assignment:
      cluster_name: cluster_0
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 127.0.0.1
                port_value: 0
  listeners:
  - name: listener_0
    address:
      socket_address:
        address: 127.0.0.1
        port_value: 8080
    filter_chains:
    - filters:
      - name: envoy.filters.network.echo
docker-compose.yaml
version: '3.3'

services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.20.0
    volumes:
    - ./envoy-v2.yaml:/etc/envoy/envoy.yaml   # use envoy-v2.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.4.2
        aliases:
        - envoy-echo

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
      - subnet: 172.31.4.0/24
Run it again
root@test:/apps/servicemesh_in_practise/Envoy-Basics/envoy-echo# docker-compose up
Enter the container
root@test:/apps/servicemesh_in_practise/Envoy-Basics/envoy-echo# docker-compose exec envoy sh
/ # nc 127.0.0.1 8080
abv
abv
ni hao world
ni hao world
# whatever you type is echoed back
2. http-ingress

Experiment environment
Two services:
envoy: sidecar proxy
webserver01: the first backend service, at address 127.0.0.1
envoy.yaml
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: web_service_1
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: local_cluster }
          http_filters:
          - name: envoy.filters.http.router
  clusters:
  - name: local_cluster
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: local_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 8080 }
docker-compose.yaml
version: '3'

services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.20.0
    environment:
    - ENVOY_UID=0   # required; otherwise docker-compose up fails with: error initializing configuration '/etc/envoy/envoy.yaml': cannot bind '0.0.0.0:80': Permission denied
    volumes:
    - ./envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.3.2
        aliases:
        - ingress

  webserver01:
    image: ikubernetes/demoapp:v1.0
    environment:
    - PORT=8080
    - HOST=127.0.0.1
    network_mode: "service:envoy"
    depends_on:
    - envoy

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
      - subnet: 172.31.3.0/24
Requests to 172.31.3.2:80 are forwarded by Envoy to the backend webserver01.
root@test:/apps/servicemesh_in_practise-develop/Envoy-Basics/http-ingress# docker-compose up
# open another terminal window and issue a few requests
root@test:/apps/servicemesh_in_practise/Envoy-Basics/http-ingress# curl 172.31.3.2
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: d4eda0b2b84c, ServerIP: 172.31.3.2!
root@test:/apps/servicemesh_in_practise/Envoy-Basics/http-ingress# curl 172.31.3.2
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: dc4bd7a1316f, ServerIP: 172.31.3.2!
root@test:/apps/servicemesh_in_practise/Envoy-Basics/http-ingress# curl 172.31.3.2
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: dc4bd7a1316f, ServerIP: 172.31.3.2!
root@test:/apps/servicemesh_in_practise/Envoy-Basics/http-ingress# curl 172.31.3.2
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: dc4bd7a1316f, ServerIP: 172.31.3.2!
# log output of the services running in the foreground
......
webserver01_1  | * Running on http://127.0.0.1:8080/ (Press CTRL+C to quit)
webserver01_1  | 127.0.0.1 - - [01/Dec/2021 08:36:37] "GET / HTTP/1.1" 200 -
webserver01_1  | 127.0.0.1 - - [01/Dec/2021 08:38:49] "GET / HTTP/1.1" 200 -
webserver01_1  | 127.0.0.1 - - [01/Dec/2021 08:38:50] "GET / HTTP/1.1" 200 -
3. http-egress

Experiment environment
Three services:
envoy: Front Proxy, at address 172.31.4.2
webserver01: the first external service, at address 172.31.4.11
webserver02: the second external service, at address 172.31.4.12
envoy.yaml
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 127.0.0.1, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: web_service_1
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: web_cluster }
          http_filters:
          - name: envoy.filters.http.router
  clusters:
  - name: web_cluster
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: web_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 172.31.4.11, port_value: 80 }
        - endpoint:
            address:
              socket_address: { address: 172.31.4.12, port_value: 80 }
docker-compose.yaml
version: '3.3'

services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.20.0
    environment:
    - ENVOY_UID=0
    volumes:
    - ./envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.4.2
        aliases:
        - front-proxy
    depends_on:
    - webserver01
    - webserver02

  client:
    image: ikubernetes/admin-toolbox:v1.0
    network_mode: "service:envoy"
    depends_on:
    - envoy

  webserver01:
    image: ikubernetes/demoapp:v1.0
    hostname: webserver01
    networks:
      envoymesh:
        ipv4_address: 172.31.4.11
        aliases:
        - webserver01

  webserver02:
    image: ikubernetes/demoapp:v1.0
    hostname: webserver02
    networks:
      envoymesh:
        ipv4_address: 172.31.4.12
        aliases:
        - webserver02

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
      - subnet: 172.31.4.0/24
Experiment verification
root@test:/apps/servicemesh_in_practise-develop/Envoy-Basics/http-egress# docker-compose up
Open another terminal window and enter the client container
root@test:/apps/servicemesh_in_practise-develop/Envoy-Basics/http-egress# docker-compose exec client sh
[root@b8f9b62f2771 /]#
[root@b8f9b62f2771 /]# curl 127.0.0.1
iKubernetes demoapp v1.0 !! ClientIP: 172.31.4.2, ServerName: webserver01, ServerIP: 172.31.4.11!
[root@b8f9b62f2771 /]# curl 127.0.0.1
iKubernetes demoapp v1.0 !! ClientIP: 172.31.4.2, ServerName: webserver02, ServerIP: 172.31.4.12!
[root@b8f9b62f2771 /]# curl 127.0.0.1
iKubernetes demoapp v1.0 !! ClientIP: 172.31.4.2, ServerName: webserver01, ServerIP: 172.31.4.11!
[root@b8f9b62f2771 /]# curl 127.0.0.1
iKubernetes demoapp v1.0 !! ClientIP: 172.31.4.2, ServerName: webserver02, ServerIP: 172.31.4.12!
# when 127.0.0.1 is accessed from inside the container, Envoy forwards the requests to webserver01 and webserver02 in round-robin fashion
4. http-front-proxy

Experiment environment
Three services:
envoy: Front Proxy, at address 172.31.2.2
webserver01: the first backend service, at address 172.31.2.11
webserver02: the second backend service, at address 172.31.2.12
# the domains www.ik8s.io and www.magedu.com are mapped to 172.31.2.2
# requests for www.ik8s.io are forwarded to webserver01 and webserver02 in round-robin fashion
# requests for www.magedu.com are redirected to www.ik8s.io and then forwarded to webserver01 and webserver02 in round-robin fashion
envoy.yaml
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: web_service_1
              domains: ["*.ik8s.io", "ik8s.io"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: local_cluster }
            - name: web_service_2
              domains: ["*.magedu.com", "magedu.com"]
              routes:
              - match: { prefix: "/" }
                redirect:
                  host_redirect: "www.ik8s.io"
          http_filters:
          - name: envoy.filters.http.router
  clusters:
  - name: local_cluster
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: local_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 172.31.2.11, port_value: 8080 }
        - endpoint:
            address:
              socket_address: { address: 172.31.2.12, port_value: 8080 }
docker-compose.yaml
version: '3.3'

services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.20.0
    environment:
    - ENVOY_UID=0
    volumes:
    - ./envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.2.2
        aliases:
        - front-proxy
    depends_on:
    - webserver01
    - webserver02

  webserver01:
    image: ikubernetes/demoapp:v1.0
    environment:
    - PORT=8080
    hostname: webserver01
    networks:
      envoymesh:
        ipv4_address: 172.31.2.11
        aliases:
        - webserver01

  webserver02:
    image: ikubernetes/demoapp:v1.0
    environment:
    - PORT=8080
    hostname: webserver02
    networks:
      envoymesh:
        ipv4_address: 172.31.2.12
        aliases:
        - webserver02

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
      - subnet: 172.31.2.0/24
Experiment verification
root@test:/apps/servicemesh_in_practise-develop/Envoy-Basics/http-front-proxy# docker-compose up
Open another terminal window
# requests for www.ik8s.io are forwarded by Envoy to webserver01 and webserver02 in round-robin fashion
root@test:/apps/servicemesh_in_practise-develop/Envoy-Basics/http-front-proxy# curl -H "host: www.ik8s.io" 172.31.2.2
iKubernetes demoapp v1.0 !! ClientIP: 172.31.2.2, ServerName: webserver01, ServerIP: 172.31.2.11!
root@test:/apps/servicemesh_in_practise-develop/Envoy-Basics/http-front-proxy# curl -H "host: www.ik8s.io" 172.31.2.2
iKubernetes demoapp v1.0 !! ClientIP: 172.31.2.2, ServerName: webserver02, ServerIP: 172.31.2.12!
# requests for www.magedu.com are redirected to www.ik8s.io
root@test:/apps/servicemesh_in_practise-develop/Envoy-Basics/http-front-proxy# curl -I -H "host: www.magedu.com" 172.31.2.2
HTTP/1.1 301 Moved Permanently
location: http://www.ik8s.io/   # redirected to www.ik8s.io
date: Wed, 01 Dec 2021 14:07:55 GMT
server: envoy
transfer-encoding: chunked
5. tcp-front-proxy

Experiment environment
Three services:
envoy: Front Proxy, at address 172.31.1.2
webserver01: the first backend service, at address 172.31.1.11
webserver02: the second backend service, at address 172.31.1.12
# requests to the Envoy IP 172.31.1.2 are forwarded to webserver01 and webserver02 in round-robin fashion
envoy.yaml

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: tcp
          cluster: local_cluster
  clusters:
  - name: local_cluster
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: local_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 172.31.1.11, port_value: 8080 }
        - endpoint:
            address:
              socket_address: { address: 172.31.1.12, port_value: 8080 }
docker-compose.yaml
version: '3.3'

services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.20.0
    environment:
    - ENVOY_UID=0
    volumes:
    - ./envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.1.2
        aliases:
        - front-proxy
    depends_on:
    - webserver01
    - webserver02

  webserver01:
    image: ikubernetes/demoapp:v1.0
    environment:
    - PORT=8080
    hostname: webserver01
    networks:
      envoymesh:
        ipv4_address: 172.31.1.11
        aliases:
        - webserver01

  webserver02:
    image: ikubernetes/demoapp:v1.0
    environment:
    - PORT=8080
    hostname: webserver02
    networks:
      envoymesh:
        ipv4_address: 172.31.1.12
        aliases:
        - webserver02

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
      - subnet: 172.31.1.0/24
Experiment verification
root@test:/apps/servicemesh_in_practise-develop/Envoy-Basics/tcp-front-proxy# docker-compose up
root@test:/apps/servicemesh_in_practise-develop/Envoy-Basics/tcp-front-proxy# curl 172.31.1.2
iKubernetes demoapp v1.0 !! ClientIP: 172.31.1.2, ServerName: webserver01, ServerIP: 172.31.1.11!
root@test:/apps/servicemesh_in_practise-develop/Envoy-Basics/tcp-front-proxy# curl 172.31.1.2
iKubernetes demoapp v1.0 !! ClientIP: 172.31.1.2, ServerName: webserver02, ServerIP: 172.31.1.12!
6. admin-interface

Experiment environment
Three services:
envoy: Front Proxy, at address 172.31.5.2
webserver01: the first backend service, at address 172.31.5.11
webserver02: the second backend service, at address 172.31.5.12
# the corresponding admin information can be retrieved through port 9901 of the Envoy instance
envoy.yaml
admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
      address: 0.0.0.0   # in production configure 127.0.0.1 instead; listening on all addresses is unsafe
      port_value: 9901
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: web_service_1
              domains: ["*.ik8s.io", "ik8s.io"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: local_cluster }
            - name: web_service_2
              domains: ["*.magedu.com", "magedu.com"]
              routes:
              - match: { prefix: "/" }
                redirect:
                  host_redirect: "www.ik8s.io"
          http_filters:
          - name: envoy.filters.http.router
  clusters:
  - name: local_cluster
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: local_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 172.31.5.11, port_value: 8080 }
        - endpoint:
            address:
              socket_address: { address: 172.31.5.12, port_value: 8080 }
docker-compose.yaml
services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.20.0
    environment:
    - ENVOY_UID=0
    volumes:
    - ./envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.5.2
        aliases:
        - front-proxy
    depends_on:
    - webserver01
    - webserver02

  webserver01:
    image: ikubernetes/demoapp:v1.0
    environment:
    - PORT=8080
    hostname: webserver01
    networks:
      envoymesh:
        ipv4_address: 172.31.5.11
        aliases:
        - webserver01

  webserver02:
    image: ikubernetes/demoapp:v1.0
    environment:
    - PORT=8080
    hostname: webserver02
    networks:
      envoymesh:
        ipv4_address: 172.31.5.12
        aliases:
        - webserver02

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
      - subnet: 172.31.5.0/24
Experiment verification
root@test:/apps/servicemesh_in_practise-develop/Envoy-Basics/admin-interface# docker-compose up
Open another terminal window and access 172.31.5.2:9901
# show the help information
root@test:/apps/servicemesh_in_practise-develop/Envoy-Basics/admin-interface# curl 172.31.5.2:9901/help
admin commands are:
  /: Admin home page
  /certs: print certs on machine
  /clusters: upstream cluster status
  /config_dump: dump current Envoy configs (experimental)
  /contention: dump current Envoy mutex contention stats (if enabled)
  /cpuprofiler: enable/disable the CPU profiler
  /drain_listeners: drain listeners
  /healthcheck/fail: cause the server to fail health checks
  /healthcheck/ok: cause the server to pass health checks
  /heapprofiler: enable/disable the heap profiler
  /help: print out list of admin commands
  /hot_restart_version: print the hot restart compatibility version
  /init_dump: dump current Envoy init manager information (experimental)
  /listeners: print listener info
  /logging: query/change logging levels
  /memory: print current allocation/heap usage
  /quitquitquit: exit the server
  /ready: print server state, return 200 if LIVE, otherwise return 503
  /reopen_logs: reopen access logs
  /reset_counters: reset all counters to zero
  /runtime: print runtime values
  /runtime_modify: modify runtime values
  /server_info: print server version/status information
  /stats: print server stats
  /stats/prometheus: print server stats in prometheus format
  /stats/recentlookups: Show recent stat-name lookups
  /stats/recentlookups/clear: clear list of stat-name lookups and counter
  /stats/recentlookups/disable: disable recording of reset stat-name lookup names
  /stats/recentlookups/enable: enable recording of reset stat-name lookup names
# view the complete configuration
root@test:/apps/servicemesh_in_practise-develop/Envoy-Basics/admin-interface# curl 172.31.5.2:9901/config_dump
......
          }
        ]
      },
      {
        "name": "web_service_2",
        "domains": [
          "*.magedu.com",
          "magedu.com"
        ],
        "routes": [
          {
            "match": {
              "prefix": "/"
            },
            "redirect": {
              "host_redirect": "www.ik8s.io"
            }
          }
        ]
      }
    ]
  },
  "last_updated": "2021-12-01T14:26:32.586Z"
......
# list the listeners
root@test:/apps/servicemesh_in_practise-develop/Envoy-Basics/admin-interface# curl 172.31.5.2:9901/listeners
listener_0::0.0.0.0:80
# list the clusters
root@test:/apps/servicemesh_in_practise-develop/Envoy-Basics/admin-interface# curl 172.31.5.2:9901/clusters
local_cluster::observability_name::local_cluster
local_cluster::default_priority::max_connections::1024
local_cluster::default_priority::max_pending_requests::1024
local_cluster::default_priority::max_requests::1024
local_cluster::default_priority::max_retries::3
local_cluster::high_priority::max_connections::1024
local_cluster::high_priority::max_pending_requests::1024
local_cluster::high_priority::max_requests::1024
local_cluster::high_priority::max_retries::3
local_cluster::added_via_api::false
local_cluster::172.31.5.11:8080::cx_active::0
local_cluster::172.31.5.11:8080::cx_connect_fail::0
local_cluster::172.31.5.11:8080::cx_total::0
local_cluster::172.31.5.11:8080::rq_active::0
local_cluster::172.31.5.11:8080::rq_error::0
local_cluster::172.31.5.11:8080::rq_success::0
local_cluster::172.31.5.11:8080::rq_timeout::0
local_cluster::172.31.5.11:8080::rq_total::0
local_cluster::172.31.5.11:8080::hostname::
local_cluster::172.31.5.11:8080::health_flags::healthy
local_cluster::172.31.5.11:8080::weight::1
local_cluster::172.31.5.11:8080::region::
local_cluster::172.31.5.11:8080::zone::
local_cluster::172.31.5.11:8080::sub_zone::
local_cluster::172.31.5.11:8080::canary::false
local_cluster::172.31.5.11:8080::priority::0
local_cluster::172.31.5.11:8080::success_rate::-1.0
local_cluster::172.31.5.11:8080::local_origin_success_rate::-1.0
local_cluster::172.31.5.12:8080::cx_active::0
local_cluster::172.31.5.12:8080::cx_connect_fail::0
local_cluster::172.31.5.12:8080::cx_total::0
local_cluster::172.31.5.12:8080::rq_active::0
local_cluster::172.31.5.12:8080::rq_error::0
local_cluster::172.31.5.12:8080::rq_success::0
local_cluster::172.31.5.12:8080::rq_timeout::0
local_cluster::172.31.5.12:8080::rq_total::0
local_cluster::172.31.5.12:8080::hostname::
local_cluster::172.31.5.12:8080::health_flags::healthy
local_cluster::172.31.5.12:8080::weight::1
local_cluster::172.31.5.12:8080::region::
local_cluster::172.31.5.12:8080::zone::
local_cluster::172.31.5.12:8080::sub_zone::
local_cluster::172.31.5.12:8080::canary::false
local_cluster::172.31.5.12:8080::priority::0
local_cluster::172.31.5.12:8080::success_rate::-1.0
local_cluster::172.31.5.12:8080::local_origin_success_rate::-1.0
7. layered-runtime
Three services:
envoy: Front Proxy, at address 172.31.14.2
webserver01: the first backend service, at address 172.31.14.11
webserver02: the second backend service, at address 172.31.14.12
envoy.yaml
admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9901
layered_runtime:
  layers:
  - name: static_layer_0
    static_layer:
      health_check:
        min_interval: 5
  - name: admin_layer_0
    admin_layer: {}
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: web_service_1
              domains: ["*.ik8s.io", "ik8s.io"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: local_cluster }
            - name: web_service_2
              domains: ["*.magedu.com", "magedu.com"]
              routes:
              - match: { prefix: "/" }
                redirect:
                  host_redirect: "www.ik8s.io"
          http_filters:
          - name: envoy.filters.http.router
  clusters:
  - name: local_cluster
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: local_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 172.31.14.11, port_value: 8080 }
        - endpoint:
            address:
              socket_address: { address: 172.31.14.12, port_value: 8080 }
docker-compose.yaml
version: '3.3'

services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.20.0
    environment:
    - ENVOY_UID=0
    volumes:
    - ./envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.14.2
        aliases:
        - front-proxy
    depends_on:
    - webserver01
    - webserver02

  webserver01:
    image: ikubernetes/demoapp:v1.0
    environment:
    - PORT=8080
    hostname: webserver01
    networks:
      envoymesh:
        ipv4_address: 172.31.14.11
        aliases:
        - webserver01

  webserver02:
    image: ikubernetes/demoapp:v1.0
    environment:
    - PORT=8080
    hostname: webserver02
    networks:
      envoymesh:
        ipv4_address: 172.31.14.12
        aliases:
        - webserver02

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
      - subnet: 172.31.14.0/24
Experiment verification
root@test:/apps/servicemesh_in_practise-develop/Envoy-Basics/layered-runtime# docker-compose up
Open another terminal window
root@test:/apps/servicemesh_in_practise-develop/Envoy-Basics/layered-runtime# curl 172.31.14.2:9901/runtime
{
  "entries": {
    "health_check.min_interval": {
      "final_value": "5",
      "layer_values": [
        "5",
        ""
      ]
    }
  },
  "layers": [
    "static_layer_0",
    "admin_layer_0"
  ]
}
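Since the bootstrap defines an admin_layer after the static layer, a value can be overridden at runtime through the /runtime_modify admin endpoint and verified with /runtime; a hedged sketch follows, where the value 10 is an arbitrary illustration:

# override the static value through the admin layer
curl -X POST '172.31.14.2:9901/runtime_modify?health_check.min_interval=10'
# query again; the admin layer should now supply the final value,
# roughly: "final_value": "10", "layer_values": [ "5", "10" ]
curl 172.31.14.2:9901/runtime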
Reference: MageEdu (馬哥教育)

