Preface
I originally assumed that the resync in the Kubernetes controller pattern's informer was the controller periodically syncing with the api-server to keep the data consistent. It turns out that is not the case. Let's walk through it below. All quotations in this post come from the book Programming Kubernetes.
The controller does not need to sync periodically with the api-server to guarantee data consistency
Consider this passage:
Programming Kubernetes, Chapter 3 (client-go), "Informers and Caching":
The resync is purely in-memory and does not trigger a call to the server. This used to be different but was eventually changed because the error behavior of the watch mechanism had been improved enough to make relists unnecessary.
Informers also have advanced error behavior: when the long-running watch connection breaks down, they recover from it by trying another watch request, picking up the event stream without losing any events. If the outage is long, and the API server lost events because etcd purged them from its database before the new watch request was successful, the informer will relist all objects.
Next to relists, there is a configurable resync period for reconciliation between the in-memory cache and the business logic: the registered event handlers will be called for all objects each time this period has passed. Common values are in minutes (e.g., 10 or 30 minutes).
I also remember reading a blog post saying that the earliest Kubernetes controllers did periodically sync with the api-server to keep data consistent, but the watch mechanism has since been improved enough that this periodic sync is no longer necessary.
As the passage above says, resync exists to reconcile the business logic with the in-memory cache (the result of the most recent relist).
If resync reconciles the business logic with the in-memory cache, why do some controllers compare ResourceVersion?
The resync interval of 30 seconds in this example leads to a complete set of events being sent to the registered UpdateFunc such that the controller logic is able to reconcile its state with that of the API server. By comparing the ObjectMeta.resourceVersion field, it is possible to distinguish a real update from a resync.
My thinking was: if the goal is to reconcile the business logic with the in-memory cache, then every event should be put on the workqueue. In practice, though, some controllers compare ResourceVersion first. See the issue I filed: in the Kubernetes source, NewDeploymentController at line 101 of pkg/controller/deployment/deployment_controller.go takes a DeploymentInformer and a ReplicaSetInformer. Of the two UpdateFuncs registered via AddEventHandler, dc.updateDeployment and dc.updateReplicaSet, why does updateReplicaSet compare ResourceVersion while updateDeployment does not?
I still don't fully understand how resync reconciles the business logic. I'll dig into it further and update this post once I've figured it out.
Update, 2020-12-08
I asked Brendan Burns; his email reply is below:
Hello,
I'm not super familiar with that code since I wasn't involved in writing it, but my guess from looking at it is that it is an oversight. In both cases it is possible that there will be an "update" where the resource version doesn't change, due to a re-list of the resources. Such an "update" is in fact a no-op and should be ignored. So I think it would be reasonable to add the same check to the updateDeployment code.
Hope that helps.
--brendan
His answer still didn't resolve my confusion, so I also asked the question on discuss.kubernetes.io; no one has replied yet.
Update, 2021-04-08
I opened a PR against Kubernetes, Add compare ResourceVersion process. A group member suggested that filing a PR would be a good way to learn how the maintainers of this code see the question, and that was also my motivation. By now I have essentially got my answer: Programming Kubernetes appears to be wrong on this point. Resync does not reconcile the business logic with the in-memory cache (the result of the most recent relist); it handles the case where the two most recent relist results differ. Without the compare-ResourceVersion step, the subsequent reconcile simply does a bit more work; with it, the reconcile does a bit less. Either way the final result is unaffected, which is why some places in the Kubernetes source have this step and others don't.