Flume Official Documentation Translation: Flume 1.7.0 User Guide (unreleased version) (Part 2)


     Flume Official Documentation Translation: Flume 1.7.0 User Guide (unreleased version) (Part 1)

Logging raw data

Logging the raw stream of data flowing through the ingest pipeline is not desired behaviour in many production environments because this may result in leaking sensitive data or security related configurations, such as secret keys, to Flume log files. By default, Flume will not log such information. On the other hand, if the data pipeline is broken, Flume will attempt to provide clues for debugging the problem.

One way to debug problems with event pipelines is to set up an additional Memory Channel connected to a Logger Sink, which will output all event data to the Flume logs. In some situations, however, this approach is insufficient.

In order to enable logging of event- and configuration-related data, some Java system properties must be set in addition to log4j properties.

To enable configuration-related logging, set the Java system property -Dorg.apache.flume.log.printconfig=true. This can either be passed on the command line or by setting this in the JAVA_OPTS variable in flume-env.sh.

To enable data logging, set the Java system property -Dorg.apache.flume.log.rawdata=true in the same way described above. For most components, the log4j logging level must also be set to DEBUG or TRACE to make event-specific logging appear in the Flume logs.
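If you would rather not override the root logger on the command line, the log level can also be raised in Flume's log4j configuration; a minimal sketch, assuming the stock conf/log4j.properties file (log4j 1.x syntax):

# conf/log4j.properties (sketch): raise Flume's own loggers to DEBUG
log4j.logger.org.apache.flume = DEBUG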

Here is an example of enabling both configuration logging and raw data logging while also setting the Log4j log level to DEBUG for console output:


$ bin/flume-ng agent --conf conf --conf-file example.conf --name a1 -Dflume.root.logger=DEBUG,console \
  -Dorg.apache.flume.log.printconfig=true -Dorg.apache.flume.log.rawdata=true

Zookeeper based Configuration

Flume supports Agent configurations via Zookeeper. This is an experimental feature. The configuration file needs to be uploaded to Zookeeper, under a configurable prefix, and is stored in the Zookeeper node data. Following is how the Zookeeper node tree would look for agents a1 and a2:


- /flume

 |- /a1 [Agent config file]

 |- /a2 [Agent config file]

 

Once the configuration file is uploaded, start the agent with the following options:


$ bin/flume-ng agent --conf conf -z zkhost:2181,zkhost1:2181 -p /flume --name a1 -Dflume.root.logger=INFO,console

Argument Name   Default   Description
z               (none)    Zookeeper connection string. Comma separated list of hostname:port
p               /flume    Base Path in Zookeeper to store Agent configurations
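The guide does not show the upload step itself. One hedged way is the stock ZooKeeper CLI, reusing the hosts and file name from the examples above; a config with tricky quoting or very long content may be easier to upload from a small ZooKeeper client program:

$ zkCli.sh -server zkhost:2181 create /flume ''
$ zkCli.sh -server zkhost:2181 create /flume/a1 "$(cat example.conf)"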

Installing third-party plugins

Flume has a fully plugin-based architecture. While Flume ships with many out-of-the-box sources, channels, sinks, serializers, and the like, many implementations exist which ship separately from Flume.

While it has always been possible to include custom Flume components by adding their jars to the FLUME_CLASSPATH variable in the flume-env.sh file, Flume now supports a special directory called plugins.d which automatically picks up plugins that are packaged in a specific format. This allows for easier management of plugin packaging issues as well as simpler debugging and troubleshooting of several classes of issues, especially library dependency conflicts.
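For comparison, the FLUME_CLASSPATH route is a single line in flume-env.sh; a minimal sketch, with a hypothetical jar path:

# flume-env.sh (sketch; the path is hypothetical)
FLUME_CLASSPATH="/opt/flume-extras/my-custom-source.jar"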


The plugins.d directory

The plugins.d directory is located at $FLUME_HOME/plugins.d. At startup time, the flume-ng start script looks in the plugins.d directory for plugins that conform to the below format and includes them in proper paths when starting up java.


Directory layout for plugins

Each plugin (subdirectory) within plugins.d can have up to three sub-directories:

    1. lib - the plugin’s jar(s)
    2. libext - the plugin’s dependency jar(s)
    3. native - any required native libraries, such as .so files

Example of two plugins within the plugins.d directory:


plugins.d/

plugins.d/custom-source-1/

plugins.d/custom-source-1/lib/my-source.jar

plugins.d/custom-source-1/libext/spring-core-2.5.6.jar

plugins.d/custom-source-2/

plugins.d/custom-source-2/lib/custom.jar

plugins.d/custom-source-2/native/gettext.so

 

Data ingestion

Flume supports a number of mechanisms to ingest data from external sources.


RPC

An Avro client included in the Flume distribution can send a given file to a Flume Avro source using the avro RPC mechanism:


$ bin/flume-ng avro-client -H localhost -p 41414 -F /usr/logs/log.10

 

The above command will send the contents of /usr/logs/log.10 to the Flume source listening on that port.


Executing commands

There's an exec source that executes a given command and consumes the output one 'line' at a time, i.e. text followed by a carriage return ('\r'), a line feed ('\n'), or both together.


Note: Flume does not support tail as a source. One can wrap the tail command in an exec source to stream the file, as in the sketch below.
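A minimal sketch of that wrapping, with hypothetical agent, channel, and log-file names:

# exec source wrapping tail (sketch)
a1.sources = tail-src
a1.channels = mem-ch
a1.sources.tail-src.type = exec
a1.sources.tail-src.command = tail -F /var/log/app.log
a1.sources.tail-src.channels = mem-ch
a1.channels.mem-ch.type = memory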


Network streams

Flume supports the following mechanisms to read data from popular log stream types, such as:


    1. Avro
    2. Thrift
    3. Syslog
    4. Netcat

Setting multi-agent flow

 

In order to flow the data across multiple agents or hops, the sink of the previous agent and source of the current hop need to be avro type with the sink pointing to the hostname (or IP address) and port of the source.


Consolidation

A very common scenario in log collection is a large number of log producing clients sending data to a few consumer agents that are attached to the storage subsystem. For example, logs collected from hundreds of web servers sent to a dozen agents that write to an HDFS cluster.


 

This can be achieved in Flume by configuring a number of first tier agents with an avro sink, all pointing to an avro source of a single agent (again, you could use the thrift sources/sinks/clients in such a scenario). The source on the second tier agent consolidates the received events into a single channel, which is consumed by a sink to its final destination.


Multiplexing the flow

Flume supports multiplexing the event flow to one or more destinations. This is achieved by defining a flow multiplexer that can replicate or selectively route an event to one or more channels.


 

The above example shows a source from agent “foo” fanning out the flow to three different channels. This fan out can be replicating or multiplexing. In case of replicating flow, each event is sent to all three channels. For the multiplexing case, an event is delivered to a subset of available channels when an event’s attribute matches a preconfigured value. For example, if an event attribute called “txnType” is set to “customer”, then it should go to channel1 and channel3, if it’s “vendor” then it should go to channel2, otherwise channel3. The mapping can be set in the agent’s configuration file.
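A sketch of that "txnType" routing in configuration form, using the selector syntax covered in the Configuration section below (agent and channel names are illustrative):

agent_foo.sources.appsrv-source.selector.type = multiplexing
agent_foo.sources.appsrv-source.selector.header = txnType
agent_foo.sources.appsrv-source.selector.mapping.customer = channel1 channel3
agent_foo.sources.appsrv-source.selector.mapping.vendor = channel2
agent_foo.sources.appsrv-source.selector.default = channel3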


Configuration

As mentioned in the earlier section, Flume agent configuration is read from a file that resembles a Java property file format with hierarchical property settings.


Defining the flow

To define the flow within a single agent, you need to link the sources and sinks via a channel. You need to list the sources, sinks and channels for the given agent, and then point the source and sink to a channel. A source instance can specify multiple channels, but a sink instance can only specify one channel. The format is as follows:


# list the sources, sinks and channels for the agent

<Agent>.sources = <Source>

<Agent>.sinks = <Sink>

<Agent>.channels = <Channel1> <Channel2>

 

# set channel for source

<Agent>.sources.<Source>.channels = <Channel1> <Channel2> ...

 

# set channel for sink

<Agent>.sinks.<Sink>.channel = <Channel1>

For example, an agent named agent_foo is reading data from an external avro client and sending it to HDFS via a memory channel. The config file weblog.config could look like:


# list the sources, sinks and channels for the agent

agent_foo.sources = avro-appserver-src-1

agent_foo.sinks = hdfs-sink-1

agent_foo.channels = mem-channel-1

 

# set channel for source

agent_foo.sources.avro-appserver-src-1.channels = mem-channel-1

 

# set channel for sink

agent_foo.sinks.hdfs-sink-1.channel = mem-channel-1

 

This will make the events flow from avro-appserver-src-1 to hdfs-sink-1 through the memory channel mem-channel-1. When the agent is started with weblog.config as its config file, it will instantiate that flow.
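Assuming the layout from the earlier command-line examples, starting that agent could look like:

$ bin/flume-ng agent --conf conf --conf-file weblog.config --name agent_foo -Dflume.root.logger=INFO,console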


Configuring individual components

After defining the flow, you need to set properties of each source, sink and channel. This is done in the same hierarchical namespace fashion where you set the component type and other values for the properties specific to each component:


# properties for sources

<Agent>.sources.<Source>.<someProperty> = <someValue>

 

# properties for channels

<Agent>.channels.<Channel>.<someProperty> = <someValue>

 

# properties for sinks

<Agent>.sinks.<Sink>.<someProperty> = <someValue>

 

The property “type” needs to be set for each component for Flume to understand what kind of object it needs to be. Each source, sink and channel type has its own set of properties required for it to function as intended. All those need to be set as needed. In the previous example, we have a flow from avro-AppSrv-source to hdfs-Cluster1-sink through the memory channel mem-channel-1. Here’s an example that shows configuration of each of those components:


agent_foo.sources = avro-AppSrv-source

agent_foo.sinks = hdfs-Cluster1-sink

agent_foo.channels = mem-channel-1

 

# set channel for sources, sinks

 

# properties of avro-AppSrv-source

agent_foo.sources.avro-AppSrv-source.type = avro

agent_foo.sources.avro-AppSrv-source.bind = localhost

agent_foo.sources.avro-AppSrv-source.port = 10000

 

# properties of mem-channel-1

agent_foo.channels.mem-channel-1.type = memory

agent_foo.channels.mem-channel-1.capacity = 1000

agent_foo.channels.mem-channel-1.transactionCapacity = 100

 

# properties of hdfs-Cluster1-sink

agent_foo.sinks.hdfs-Cluster1-sink.type = hdfs

agent_foo.sinks.hdfs-Cluster1-sink.hdfs.path = hdfs://namenode/flume/webdata

 

#...

 

Adding multiple flows in an agent

A single Flume agent can contain several independent flows. You can list multiple sources, sinks and channels in a config. These components can be linked to form multiple flows:


# list the sources, sinks and channels for the agent

<Agent>.sources = <Source1> <Source2>

<Agent>.sinks = <Sink1> <Sink2>

<Agent>.channels = <Channel1> <Channel2>

 

Then you can link the sources and sinks to their corresponding channels (for sources) or channel (for sinks) to set up two different flows. For example, if you need to set up two flows in an agent, one going from an external avro client to external HDFS and another from the output of a tail to an avro sink, then here's a config to do that:


# list the sources, sinks and channels in the agent

agent_foo.sources = avro-AppSrv-source1 exec-tail-source2

agent_foo.sinks = hdfs-Cluster1-sink1 avro-forward-sink2

agent_foo.channels = mem-channel-1 file-channel-2

 

# flow #1 configuration

agent_foo.sources.avro-AppSrv-source1.channels = mem-channel-1

agent_foo.sinks.hdfs-Cluster1-sink1.channel = mem-channel-1

 

# flow #2 configuration

agent_foo.sources.exec-tail-source2.channels = file-channel-2

agent_foo.sinks.avro-forward-sink2.channel = file-channel-2

 

Configuring a multi agent flow

To set up a multi-tier flow, you need to have an avro/thrift sink of the first hop pointing to the avro/thrift source of the next hop. This will result in the first Flume agent forwarding events to the next Flume agent. For example, if you are periodically sending files (1 file per event) using an avro client to a local Flume agent, then this local agent can forward the events to another agent that has the storage mounted.


Weblog agent config:

# list sources, sinks and channels in the agent

agent_foo.sources = avro-AppSrv-source

agent_foo.sinks = avro-forward-sink

agent_foo.channels = file-channel

 

# define the flow

agent_foo.sources.avro-AppSrv-source.channels = file-channel

agent_foo.sinks.avro-forward-sink.channel = file-channel

 

# avro sink properties

agent_foo.sinks.avro-forward-sink.type = avro

agent_foo.sinks.avro-forward-sink.hostname = 10.1.1.100

agent_foo.sinks.avro-forward-sink.port = 10000

 

# configure other pieces

#...

HDFS agent config:

# list sources, sinks and channels in the agent

agent_foo.sources = avro-collection-source

agent_foo.sinks = hdfs-sink

agent_foo.channels = mem-channel

 

# define the flow

agent_foo.sources.avro-collection-source.channels = mem-channel

agent_foo.sinks.hdfs-sink.channel = mem-channel

 

# avro source properties

agent_foo.sources.avro-collection-source.type = avro

agent_foo.sources.avro-collection-source.bind = 10.1.1.100

agent_foo.sources.avro-collection-source.port = 10000

 

# configure other pieces

#...

 

Here we link the avro-forward-sink from the weblog agent to the avro-collection-source of the hdfs agent. This will result in the events coming from the external appserver source eventually getting stored in HDFS.


Fan out flow

As discussed in the previous section, Flume supports fanning out the flow from one source to multiple channels. There are two modes of fan out, replicating and multiplexing. In the replicating flow, the event is sent to all the configured channels. In the case of multiplexing, the event is sent to only a subset of qualifying channels. To fan out the flow, one needs to specify a list of channels for a source and the policy for fanning it out. This is done by adding a channel "selector" that can be replicating or multiplexing, and then further specifying the selection rules if it's a multiplexer. If you don't specify a selector, then by default it's replicating:


# List the sources, sinks and channels for the agent

<Agent>.sources = <Source1>

<Agent>.sinks = <Sink1> <Sink2>

<Agent>.channels = <Channel1> <Channel2>

 

# set list of channels for source (separated by space)

<Agent>.sources.<Source1>.channels = <Channel1> <Channel2>

 

# set channel for sinks

<Agent>.sinks.<Sink1>.channel = <Channel1>

<Agent>.sinks.<Sink2>.channel = <Channel2>

 

<Agent>.sources.<Source1>.selector.type = replicating

 

The multiplexing selector has a further set of properties to bifurcate the flow. This requires specifying a mapping of an event attribute to a set of channels. The selector checks for each configured attribute in the event header. If it matches the specified value, then that event is sent to all the channels mapped to that value. If there's no match, then the event is sent to the set of channels configured as default:


# Mapping for multiplexing selector

<Agent>.sources.<Source1>.selector.type = multiplexing

<Agent>.sources.<Source1>.selector.header = <someHeader>

<Agent>.sources.<Source1>.selector.mapping.<Value1> = <Channel1>

<Agent>.sources.<Source1>.selector.mapping.<Value2> = <Channel1> <Channel2>

<Agent>.sources.<Source1>.selector.mapping.<Value3> = <Channel2>

#...

 

<Agent>.sources.<Source1>.selector.default = <Channel2>

 

The mapping allows overlapping the channels for each value.


The following example has a single flow that is multiplexed to two paths. The agent named agent_foo has a single avro source and two channels linked to two sinks:


# list the sources, sinks and channels in the agent

agent_foo.sources = avro-AppSrv-source1

agent_foo.sinks = hdfs-Cluster1-sink1 avro-forward-sink2

agent_foo.channels = mem-channel-1 file-channel-2

 

# set channels for source

agent_foo.sources.avro-AppSrv-source1.channels = mem-channel-1 file-channel-2

 

# set channel for sinks

agent_foo.sinks.hdfs-Cluster1-sink1.channel = mem-channel-1

agent_foo.sinks.avro-forward-sink2.channel = file-channel-2

 

# channel selector configuration

agent_foo.sources.avro-AppSrv-source1.selector.type = multiplexing

agent_foo.sources.avro-AppSrv-source1.selector.header = State

agent_foo.sources.avro-AppSrv-source1.selector.mapping.CA = mem-channel-1

agent_foo.sources.avro-AppSrv-source1.selector.mapping.AZ = file-channel-2

agent_foo.sources.avro-AppSrv-source1.selector.mapping.NY = mem-channel-1 file-channel-2

agent_foo.sources.avro-AppSrv-source1.selector.default = mem-channel-1

 

The selector checks for a header called "State". If the value is "CA" then it's sent to mem-channel-1, if it's "AZ" then it goes to file-channel-2, or if it's "NY" then both. If the "State" header is not set or doesn't match any of the three, then it goes to mem-channel-1, which is designated as 'default'.


The selector also supports optional channels. To specify optional channels for a header, the config parameter ‘optional’ is used in the following way:


# channel selector configuration

agent_foo.sources.avro-AppSrv-source1.selector.type = multiplexing

agent_foo.sources.avro-AppSrv-source1.selector.header = State

agent_foo.sources.avro-AppSrv-source1.selector.mapping.CA = mem-channel-1

agent_foo.sources.avro-AppSrv-source1.selector.mapping.AZ = file-channel-2

agent_foo.sources.avro-AppSrv-source1.selector.mapping.NY = mem-channel-1 file-channel-2

agent_foo.sources.avro-AppSrv-source1.selector.optional.CA = mem-channel-1 file-channel-2

agent_foo.sources.avro-AppSrv-source1.selector.default = mem-channel-1

 

The selector will attempt to write to the required channels first and will fail the transaction if even one of these channels fails to consume the events. The transaction is reattempted on all of the channels. Once all required channels have consumed the events, then the selector will attempt to write to the optional channels. A failure by any of the optional channels to consume the event is simply ignored and not retried.

If there is an overlap between the optional channels and required channels for a specific header, the channel is considered to be required, and a failure in the channel will cause the entire set of required channels to be retried. For instance, in the above example, for the header “CA” mem-channel-1 is considered to be a required channel even though it is marked both as required and optional, and a failure to write to this channel will cause that event to be retried on all channels configured for the selector.

Note that if a header does not have any required channels, then the event will be written to the default channels, and an attempt will be made to write it to the optional channels for that header. Specifying optional channels will still cause the event to be written to the default channels if no required channels are specified. If no channels are designated as default and there are no required channels, the selector will attempt to write the events to the optional channels. Any failures are simply ignored in that case.
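For instance, a header value mapped only to an optional channel follows the default-plus-optional rule above; a sketch, where the "TX" value is hypothetical:

# events with State=TX go to the default channel and, best effort, to file-channel-2
agent_foo.sources.avro-AppSrv-source1.selector.optional.TX = file-channel-2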


