Create a Redis container with its own configuration file, following the official image docs
https://hub.docker.com/_/redis
1. Pull the Redis image
docker pull redis
docker images
2. Get a redis.conf configuration file from GitHub
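The stock configuration file can be fetched straight from the Redis source repository; a minimal sketch (the 6.0 branch is an assumption, pick the branch matching the image you pulled):
mkdir -p /usr/local/redis
wget -O /usr/local/redis/redis.conf https://raw.githubusercontent.com/redis/redis/6.0/redis.conf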
3. Create a Dockerfile (named dockerFile here, matching the build command below)
FROM redis
COPY redis.conf /usr/local/etc/redis/redis.conf
CMD [ "redis-server", "/usr/local/etc/redis/redis.conf" ]
4. Build the image and start the container
docker build -f /usr/local/redis/dockerFile -t redis:1.0 . #build the image
#Copy redis.conf to /usr/etc/redis/conf on the host first; otherwise Docker auto-creates a directory named redis.conf at mount time and the mount fails with an error. The container must also be started with the config file /usr/local/etc/redis/redis.conf, or changes made in the config file will not take effect
docker run -d -p 6379:6379 -v /usr/etc/redis/conf/redis.conf:/usr/local/etc/redis/redis.conf -v /usr/local/redis/data:/data --name redis redis:1.0 redis-server /usr/local/etc/redis/redis.conf --appendonly yes #--appendonly yes enables AOF persistence (can also be set in redis.conf); RDB snapshotting stays enabled by default
docker exec -it a8e9fef63f252ce452847519df6b37484d45d777ddd26793e33c08f3332dd75f /bin/bash
5. Verify the mount works
#exit the container, then restart it
docker restart redis
The key set earlier is still there; a quick check is sketched below.
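A minimal end-to-end check, using the container name redis from the run command above (k1 is a throwaway key):
docker exec -it redis redis-cli
> set k1 v1
OK
> exit
docker restart redis
docker exec -it redis redis-cli get k1
"v1"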
6. Configure a password
Edit the mounted config file on the host (a password set on the command line is temporary and is lost after a restart).
Uncomment the requirepass line and set your own password.
Common Redis configuration parameters
# Parameter notes
# The redis.conf directives are described below:
# Redis does not run as a daemon by default; change this directive and use yes to enable daemon mode
daemonize no
# When running as a daemon, Redis writes the pid to /var/run/redis.pid by default; pidfile overrides the location
pidfile /var/run/redis.pid
# Port Redis listens on; the default is 6379. The author explained in a blog post that 6379 was chosen because it is what MERZ spells on a phone keypad, after the Italian showgirl Alessia Merz
port 6379
# Host address to bind to
bind 127.0.0.1
# Close the connection after a client has been idle for this many seconds; 0 disables the timeout
timeout 300
# Log level; Redis supports four levels: debug, verbose, notice, warning. The default is verbose
loglevel verbose
# Log output, standard output by default; if Redis runs as a daemon while logging is set to standard output, the log is sent to /dev/null
logfile stdout
# Number of databases; the default database is 0, and SELECT <dbid> switches databases on a connection
databases 16
# Sync data to the data file when the given number of update operations happens within the given number of seconds; multiple conditions may be combined
save <seconds> <changes>
The default configuration file ships with three conditions:
save 900 1
save 300 10
save 60 10000
meaning: save after 900 seconds (15 minutes) with at least 1 change, after 300 seconds (5 minutes) with at least 10 changes, and after 60 seconds with at least 10000 changes.
# Whether to compress data when dumping to the local database file; yes by default, using LZF compression. Disabling it saves some CPU time but makes the database file huge
rdbcompression yes
# Local database file name; dump.rdb by default
dbfilename dump.rdb
# Directory where the local database is stored
dir ./
# When this instance is a slave, set the master's IP address and port; on startup Redis automatically syncs data from the master
slaveof <masterip> <masterport>
# Password the slave uses to connect when the master is password protected
masterauth <master-password>
# Connection password; when set, clients must supply it with AUTH <password> when connecting. Off by default
requirepass foobared
# Maximum number of simultaneous client connections; unlimited by default (bounded by the maximum number of file descriptors the Redis process can open). maxclients 0 means no limit; once the limit is reached, Redis closes new connections and returns the error max number of clients reached
maxclients 128
# Maximum memory limit. Redis loads data into memory at startup; once the limit is reached it first tries to evict keys that are expired or about to expire, and if the limit is still exceeded, write operations fail while reads keep working. With Redis's (old) VM mechanism, keys stay in memory while values are swapped out
maxmemory <bytes>
# Whether to log every update operation. Redis writes data to disk asynchronously by default, so leaving this off can lose a window of data on power failure: Redis syncs the data file only according to the save conditions above, so some data may live only in memory for a while. Off by default
appendonly no
# Append-only log file name; appendonly.aof by default
appendfilename appendonly.aof
# Append-log sync policy; three options:
no: let the operating system flush the data cache to disk (fast)
always: call fsync() after every update to force it to disk (slow, safe)
everysec: sync once per second (compromise, the default)
appendfsync everysec
# Whether to enable the virtual memory mechanism; no by default. Briefly, VM stores data in pages and Redis swaps rarely accessed pages (cold data) out to disk, while frequently accessed pages are swapped back from disk into memory (a later article analyzes Redis's VM mechanism in detail)
vm-enabled no
# Virtual memory swap file path, /tmp/redis.swap by default; it must not be shared between Redis instances
vm-swap-file /tmp/redis.swap
# Everything larger than vm-max-memory goes to virtual memory. No matter how small vm-max-memory is, all index data stays in memory (Redis's index data is its keys); in other words, with vm-max-memory set to 0 all values live on disk. The default is 0
vm-max-memory 0
# The swap file is split into many pages; one object can span several pages, but a page cannot be shared by multiple objects. vm-page-size should match the size of the stored data: for many small objects the author suggests 32 or 64 bytes per page; for very large objects use a bigger page; when unsure, keep the default
vm-page-size 32
# Number of pages in the swap file. Since the page table (a bitmap marking pages free or used) is kept in memory, every 8 pages on disk consume 1 byte of memory.
vm-pages 134217728
# Number of threads accessing the swap file; best kept at or below the machine's core count. With 0, all swap-file operations are serialized, which can cause long delays. The default is 4
vm-max-threads 4
# Whether to glue small output packets into a single packet when replying to clients; on by default
glueoutputbuf yes
# Use a special hash encoding when the number of entries, or the largest element, stays below the given thresholds
hash-max-zipmap-entries 64
hash-max-zipmap-value 512
# Whether to enable incremental rehashing; on by default (covered in detail later when Redis's hashing is introduced)
activerehashing yes
# Include other configuration files; handy for sharing one base configuration between several Redis instances on the same host while each instance keeps its own specific file
include /path/to/local.conf
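Most of these directives can also be overridden at startup: redis-server treats arguments after the config file path as extra configuration lines. A quick sketch:
redis-server /usr/local/etc/redis/redis.conf --port 6380 --requirepass 123456 #override port and password at launch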
Redis persistence
Redis provides several levels of persistence:
- RDB persistence takes point-in-time snapshots of your dataset at specified intervals.
- AOF persistence logs every write operation received by the server; when the server restarts, these commands are replayed to rebuild the original dataset. Commands are appended to the file in the Redis protocol format, and Redis can rewrite the AOF in the background so the file does not grow without bound.
- If you only need the data to exist while the server is running, you can disable persistence entirely.
- You can also enable both methods at once; in that case, when Redis restarts it loads the AOF file to rebuild the dataset, since the AOF is usually the more complete of the two.
The most important thing is to understand the trade-offs between RDB and AOF persistence. Let's start with RDB:
Advantages of RDB
RDB is a very compact file holding the dataset at a point in time, which makes it great for backups: for instance you might keep hourly snapshots for the last 24 hours and a daily snapshot for the last 30 days, so that after a problem you can restore whichever version of the dataset you need.
Being a compact single file, RDB is easy to ship to a remote data center or to Amazon S3 (possibly encrypted), which makes it well suited to disaster recovery.
When saving an RDB file, the only work the parent process does is fork a child; the child does everything else, so the parent never performs other I/O for the snapshot, and RDB persistence maximizes Redis performance.
Compared with AOF, RDB allows faster restarts with big datasets.
Disadvantages of RDB
RDB is not a good fit if you need to minimize data loss when Redis stops working unexpectedly (a power outage, say). You can configure multiple save points (for example every 5 minutes with at least 100 writes), but a complete snapshot of the whole dataset is heavy work, so in practice you save every 5 minutes or more; if Redis dies unexpectedly, you can lose the last few minutes of data.
RDB needs to fork a child process frequently to save the dataset to disk. When the dataset is big, forking can be very time consuming and may keep Redis from serving clients for some milliseconds, or even up to a second with a huge dataset and a weak CPU. AOF also needs to fork, but you can tune how often the log is rewritten to trade rewrite frequency against durability.
Advantages of AOF
AOF makes Redis much more durable: you can choose among several fsync policies: no fsync, fsync every second, fsync on every write. With the default policy of fsync every second, Redis performance is still excellent (fsync is handled by a background thread while the main thread keeps serving client requests), and after a failure you lose at most one second of writes.
The AOF is an append-only log, so there are no seeks; even if the last write was left incomplete for some reason (disk full, crash mid-write, and so on), the redis-check-aof tool can repair it.
Redis can automatically rewrite the AOF in the background when it grows too big: the rewritten file contains the minimal set of commands needed to recreate the current dataset. The whole rewrite is completely safe, because while the new AOF is being produced Redis keeps appending commands to the existing one; even if the rewrite is interrupted by a crash, the existing AOF is never lost. Once the new file is ready, Redis switches from the old AOF to the new one and appends to it from then on.
The AOF holds all the write operations performed against the database, in order, in the Redis protocol format, so it is easy for a human to read and easy to parse. Exporting it is simple too: if you accidentally ran FLUSHALL, for example, then as long as the AOF has not been rewritten in the meantime, you can stop the server, remove the trailing FLUSHALL command from the end of the AOF, restart Redis, and the dataset is back to its state before the FLUSHALL.
Disadvantages of AOF
For the same dataset, the AOF file is usually bigger than the equivalent RDB file.
Depending on the fsync policy, AOF can be slower than RDB. In general, fsync every second still performs very well, and with fsync disabled AOF is as fast as RDB even under high load. Still, under a huge write load RDB provides a better guaranteed maximum latency.
Which persistence should I use?
In general, if you want data safety comparable to what PostgreSQL can offer, you should use both persistence methods.
If you care a lot about your data but can tolerate a few minutes of loss, you can use RDB alone.
Many users use AOF alone, but we discourage it: an RDB snapshot taken from time to time is very convenient for database backups, RDB restores a dataset faster than AOF, and keeping RDB around also guards against the AOF bugs mentioned above.
Note: for all these reasons we may unify AOF and RDB into a single persistence model in the future (a long-term plan). The next few sections cover RDB and AOF in more detail.
Snapshotting
By default, Redis saves snapshots of the dataset to a binary file named dump.rdb. You can configure Redis to save the dataset automatically whenever the condition "at least M changes within N seconds" is met, and you can also trigger a save manually with the SAVE or BGSAVE commands.
For example, the following setting makes Redis save the dataset automatically whenever at least 1000 keys change within 60 seconds:
save 60 1000
This persistence strategy is known as snapshotting.
How it works
Whenever Redis needs to save the dump.rdb file, the server performs the following steps:
- Redis calls fork(); we now have a parent process and a child process.
- The child writes the dataset to a temporary RDB file.
- When the child is done writing the new RDB file, Redis replaces the old RDB file with the new one and deletes the old file.
- This way of working lets Redis benefit from the copy-on-write mechanism.
Restoring
Simply replace the dump.rdb file under the directory reported by config get dir with the backed-up dump.rdb.
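A minimal restore sketch for the dockerized instance from earlier (host paths assumed from that docker run; note that if AOF is enabled Redis loads the AOF instead, so an RDB-only restore needs appendonly off):
docker exec -it redis redis-cli config get dir #reports /data inside the container
docker stop redis #stop first, so a clean shutdown does not overwrite the file
cp /backup/dump.rdb /usr/local/redis/data/dump.rdb #/data is mounted from this host directory
docker start redis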
Append-only file (AOF)
Snapshotting is not very durable: if Redis stops working for some reason, the server loses the most recent writes that were not yet saved in a snapshot. Starting with version 1.1, Redis added a fully durable persistence mode: AOF persistence.
You can enable AOF in the configuration file:
appendonly yes
From now on, every time Redis executes a command that changes the dataset (SET, for instance), the command is appended to the end of the AOF file. When Redis restarts, it can rebuild the dataset by re-executing the commands in the AOF.
Log rewriting
Because AOF works by continually appending commands to the end of the file, the AOF grows as write commands keep coming. For example, if you call INCR on a counter 100 times, the AOF needs 100 entries just to hold the counter's current value, while a single SET command would be enough; the other 99 entries are effectively redundant.
To handle this, Redis supports an interesting feature: it can rebuild the AOF without interrupting service to clients. Issue BGREWRITEAOF and Redis produces a new AOF containing the minimal command sequence needed to rebuild the current dataset. On Redis 2.2 you have to run BGREWRITEAOF manually; Redis 2.4 can trigger AOF rewrites automatically, see the 2.4 example configuration file for details.
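A quick illustration in redis-cli (counter is a throwaway key):
> set counter 0
OK
> incr counter
1
#... 99 more INCR calls: 100 AOF entries for a single value
> bgrewriteaof
Background append only file rewriting started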
How durable is the AOF?
You can configure how often Redis fsyncs data to disk. There are three options:
- fsync every time a new command is appended to the AOF: very slow, and very safe.
- fsync once per second: fast enough (about as fast as RDB persistence), and at most 1 second of data is lost on failure.
- never fsync: leave it to the operating system. The faster, less safe choice.
- The recommended (and default) policy is fsync once per second; it balances speed and safety.
What if the AOF gets corrupted?
The server may go down while the program is writing the AOF; if the crash leaves the AOF corrupted, Redis refuses to load the file on restart, so that data consistency is never compromised. When that happens, the broken AOF can be fixed as follows:
- Make a backup copy of the existing AOF file.
- Fix the original AOF file with the redis-check-aof tool that ships with Redis:
$ redis-check-aof --fix
- (Optional) Use diff -u to compare the repaired AOF against the backup and inspect the differences between the two files.
- Restart the Redis server, wait for it to load the repaired AOF, and let it restore the data.
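Put together, a repair session might look like this (the AOF sits in the /data mount from the earlier docker run; paths assumed):
cp appendonly.aof appendonly.aof.bak #1. back up the broken file
redis-check-aof --fix appendonly.aof #2. truncate the incomplete tail
diff -u appendonly.aof.bak appendonly.aof #3. optional: inspect what was removed
docker restart redis #4. restart and reload the repaired AOF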
How it works
AOF rewriting, like RDB snapshot creation, makes clever use of the copy-on-write mechanism:
- Redis calls fork(); we now have a parent process and a child process.
- The child starts writing the contents of the new AOF to a temporary file.
- The parent accumulates all newly executed write commands in an in-memory buffer while also appending them to the existing AOF, so even if the rewrite dies midway, the existing AOF stays safe.
- When the child finishes the rewrite, it signals the parent; on receiving the signal, the parent appends everything in the in-memory buffer to the end of the new AOF.
- Done! Redis now atomically replaces the old file with the new one, and all subsequent commands are appended directly to the new AOF.
How to switch from RDB to AOF
On Redis 2.2 and above, you can switch from RDB to AOF without a restart:
- Make a backup of the latest dump.rdb file.
- Move the backup somewhere safe.
- Run the following two commands:
- redis-cli config set appendonly yes
- redis-cli config set save ""
- Make sure write commands are being correctly appended to the end of the AOF file.
- The first command turns on AOF: Redis blocks until the initial AOF file has been created, then goes back to serving requests and starts appending write commands to the AOF.
- The second command turns off RDB. This step is optional; if you prefer, you can keep both persistence methods enabled.
Important: don't forget to also enable AOF in redis.conf! Otherwise, when the server restarts, the configuration applied via CONFIG SET is forgotten and the server starts with its old configuration.
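One way to double-check that the switch took effect (a sketch; aof_enabled is a field of INFO persistence):
redis-cli config get appendonly #1) "appendonly" 2) "yes"
redis-cli info persistence | grep aof_enabled #aof_enabled:1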
Interactions between AOF and RDB
In Redis 2.4 and above, BGREWRITEAOF cannot run while a BGSAVE is in progress, and conversely BGSAVE cannot run while a BGREWRITEAOF is in progress. This prevents the two Redis background processes from hitting the disk with heavy I/O at the same time.
If a BGSAVE is running and the user explicitly calls BGREWRITEAOF, the server replies with an OK status and tells the user that the BGREWRITEAOF has been scheduled: it starts as soon as the BGSAVE finishes. When Redis starts with both RDB and AOF persistence enabled, it loads the AOF file to restore the dataset, since the data in the AOF is usually the most complete.
Backing up Redis data
Before reading this section, keep this sentence in mind: make sure you have complete backups of your data. Disk failures, node failures, and problems of that kind can make your data disappear; not taking backups is very dangerous.
Redis is very backup friendly, because you can copy RDB files while the server is running: once an RDB file has been created, it is never modified. When the server needs a new RDB file, it first writes the contents into a temporary file, and only once the temporary file is fully written does it atomically replace the old RDB file via rename(2).
This means that copying an RDB file is absolutely safe at any time.
- Create a cron job that backs up an RDB file into one directory every hour and into another directory every day (a sketch follows this list).
- Make sure the snapshot backups carry date and time information, and on every run of the cron script use the find command to delete expired snapshots: for instance, you can keep hourly snapshots for the last 48 hours and daily snapshots for the last month or two.
- At least once a day, ship an RDB backup outside your data center, or at the very least off the physical machine running the Redis server.
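A minimal hourly cron sketch under assumed paths (/usr/local/redis/data is the /data mount from earlier; /backup/hourly is hypothetical):
#!/bin/sh
# /etc/cron.hourly/redis-backup (assumed location)
ts=$(date +%Y%m%d-%H)
cp /usr/local/redis/data/dump.rdb /backup/hourly/dump-$ts.rdb
# keep only the last 48 hourly snapshots (48h = 2880 minutes)
find /backup/hourly -name 'dump-*.rdb' -mmin +2880 -delete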
Disaster recovery - For Redis, disaster recovery is basically backing up the data and shipping those backups to several different external data centers, so that the data stays safe even if something serious happens in the main data center where Redis runs and produces its snapshots.
Many Redis users are startups without piles of money to burn, so the methods below are all practical and cheap:
- Amazon S3, and other services like it, are a good place to build a disaster recovery system. The simplest approach is to encrypt your hourly or daily RDB backups and transfer them to S3. The data can be encrypted with the gpg -c command (symmetric encryption mode). Remember to store the password in several different, safe places (for example, give a copy to the most important people in your organization). Using more than one storage service to keep the data files improves safety.
- Snapshots can be transferred with SCP (part of SSH). A simple and safe approach: buy a VPS far away from your data center, install SSH on it, create a passwordless SSH client key, and add that key to the VPS's authorized_keys file; now you can push snapshot backups to the VPS. For the best data safety, buy at least one VPS from each of two different providers.
- Keep in mind that this kind of disaster recovery setup fails easily if not handled with care. At the very least, after the transfer completes, check that the size of the transferred backup file matches the size of the original snapshot. If you are using a VPS, you can also compare SHA1 checksums to confirm the file arrived intact.
You also need an independent alerting system that notifies you when the transfer responsible for shipping backups fails.
Redis commands
config get * #list all configuration settings
config get requirepass #get the password
config set requirepass 123456 #set the password to 123456
Transactions (Redis supports transactions only partially: an error during execution cannot be rolled back, and the remaining correct commands keep running)
MULTI (marks the start of a transaction block), EXEC (executes all the commands queued after MULTI), DISCARD (discards all the commands queued after MULTI), WATCH (locks keys until MULTI/EXEC or DISCARD is executed) and UNWATCH (cancels the transaction and the WATCH locks on the keys) are the transaction-related Redis commands. A transaction can execute several commands in one go, with two important guarantees:
- A transaction is a single isolated operation: all the commands in the transaction are serialized and executed in order, and the transaction is never interrupted mid-execution by command requests sent from other clients.
- A transaction is atomic: either all of the commands in the transaction are executed, or none are (successful execution is not guaranteed, though).
The EXEC command triggers and executes all the commands in the transaction:
- If a client opens a transaction with MULTI but disconnects before managing to run EXEC, none of the commands in the transaction are executed.
- On the other hand, if the client does run EXEC after opening the transaction, all the commands in the transaction are executed.
When AOF persistence is in use, Redis writes the transaction to disk with a single write(2) call.
However, if the Redis server is killed by an administrator for some reason, or hits some hardware fault, it is possible that only part of the transaction's commands gets written to disk.
If Redis finds such a problem in the AOF when restarting, it exits and reports an error.
The redis-check-aof tool can fix this: it removes the information of the incomplete transaction from the AOF so that the server can start normally again.
Starting with version 2.2, Redis can also implement CAS (check-and-set) operations through optimistic locking; see the second half of this document for details.
Usage
The MULTI command opens a transaction and always replies OK. After MULTI, the client can send the server any number of commands; instead of being executed immediately, they are placed in a queue, and only when the EXEC command is called are all the queued commands executed. Alternatively, calling DISCARD flushes the transaction queue and abandons the transaction.
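A minimal successful transaction, in the same transcript style as the examples below (k1/k2 are throwaway keys):
> multi
OK
> set k1 v1
QUEUED
> set k2 v2
QUEUED
> exec
OK
OK
> get k2
v2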
Errors inside a transaction
Two kinds of errors can occur when using transactions:
- A command can fail to enter the queue before EXEC is called. For example, the command may be syntactically wrong (wrong number of arguments, wrong command name, and so on), or there may be some more serious condition such as running out of memory (if the server was given a maximum memory limit with maxmemory).
- A command can fail after EXEC is called. For example, a command in the transaction may operate on a key of the wrong type, such as running a list command against a string key.
For errors happening before EXEC, clients used to check the return value of the queued command: QUEUED means the command was queued successfully; anything else means queueing failed. If a command failed to queue, most clients would stop and discard the transaction.
However, starting with Redis 2.6.5, the server records any failure to queue a command, and when the client calls EXEC, it refuses to execute the transaction and discards it automatically.
> multi
OK
> abc #the abc command is invalid, queueing fails and the whole transaction is discarded
QUEUED
> set key13 v13
QUEUED
> exec
ReplyError: EXECABORT Transaction discarded because of previous errors.
> get key13
null
Errors occurring after EXEC, on the other hand, get no special treatment: even if some command in the transaction fails during execution, the other commands in the transaction still run.
> multi
OK
> incr k #k = a, so INCR fails at execution time; the other commands keep running
QUEUED
> set key0 v0
QUEUED
> set key11 v11
QUEUED
> exec
OK
OK
Why Redis does not support rollbacks (roll back)
If you have experience with relational databases, the fact that "Redis does not roll back when a command in a transaction fails, but keeps executing the remaining commands" may strike you as a bit strange.
This approach has its advantages:
Redis commands can only fail because of a syntax error (one that could not be detected at queueing time), or because a command was used against a key of the wrong type; practically speaking, failed commands are the result of programming errors, the kind that should be caught during development rather than appear in production.
Because rollbacks never need to be supported, Redis internals can stay simple and fast.
Implementing optimistic locking with check-and-set
The WATCH command provides check-and-set (CAS) behavior for Redis transactions.
Keys passed to WATCH are monitored so that modifications to them are detected. If at least one watched key is modified before EXEC runs, the whole transaction is aborted and EXEC returns a nil reply to signal that the transaction failed.
> get k
10
> watch k
OK
> multi
OK
> set k 20
QUEUED
> exec #another client ran set k 30 before this EXEC, so the transaction was discarded
null
> get k
30
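When EXEC returns null like this, the usual pattern is simply to retry the whole WATCH/MULTI/EXEC sequence; a second attempt with no concurrent writer goes through (a sketch):
> watch k
OK
> multi
OK
> set k 20
QUEUED
> exec #no other client touched k this time, so the transaction runs
OK
> get k
20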
Redis master-slave replication and read/write splitting (Docker based)
sentinel.conf
sentinel monitor mymaster 172.19.0.11 6379 1
sentinel auth-pass mymaster 123456
requirepass 123456
daemonize no
redis.conf
Master, writable
# bind 127.0.0.1
protected-mode yes #protected mode; with requirepass set below, clients on other hosts can still authenticate and connect
logfile "redis.log" #custom log file name
requirepass 123456 #password
appendonly yes #enable AOF persistence
redis.conf
Slave, read-only
# bind 127.0.0.1
protected-mode yes
logfile "redis.log"
requirepass 123456
appendonly yes
replicaof 172.19.0.11 6379 #master address and port to replicate from; this address is assigned when the master container is started
masterauth 123456
Write a Dockerfile to build the sentinel image
FROM redis
COPY sentinel.conf /usr/local/etc/redis/sentinel.conf
CMD [ "redis-sentinel", "/usr/local/etc/redis/sentinel.conf" ]
docker build -f /usr/local/redis/sentinelDockerFile -t redis-sentinel:1.0 . #build the image
Start one master and two slave containers
#First create the mount file /usr/etc/redis/conf/redis-master/redis.conf on the host (and the slave equivalents); anything Docker auto-creates at mount time is a directory
#Create a user-defined network (or use host mode); otherwise containers on the same machine cannot talk to each other, the slaves cannot connect to the master, and replication fails with Error condition on socket for SYNC: Operation now in progress
docker network create --driver bridge --subnet 172.19.0.0/16 redis_network
#master Redis container
docker run -d -p 6379:6379 --network redis_network --ip 172.19.0.11 -v /usr/etc/redis/conf/redis-master/redis.conf:/usr/local/etc/redis/redis.conf -v /usr/local/redis/redis-master/data:/data --name redis-master redis:1.0
#slave Redis container
docker run -d -p 6380:6379 --network redis_network --ip 172.19.0.12 -v /usr/etc/redis/conf/redis-salve01/redis.conf:/usr/local/etc/redis/redis.conf -v /usr/local/redis/redis-salve01/data:/data --name redis-salve01 redis:1.0
#slave Redis container
docker run -d -p 6381:6379 --network redis_network --ip 172.19.0.13 -v /usr/etc/redis/conf/redis-salve02/redis.conf:/usr/local/etc/redis/redis.conf -v /usr/local/redis/redis-salve02/data:/data --name redis-salve02 redis:1.0
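To confirm replication is up, INFO replication on the master should report two connected slaves (a sketch; -a passes the password configured above):
docker exec -it redis-master redis-cli -a 123456 info replication
# role:master
# connected_slaves:2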
(Screenshots omitted: the master Redis address, a quick replication test, and the second slave started the same way.)
Start the sentinel containers
#Note: the mounted sentinel.conf (-rwxrwxrwx 1 root root 10925 Sep 29 22:17 sentinel.conf) must be writable by other users (chmod 777 sentinel.conf), otherwise the container will not come up
docker run -d -p 6383:26379 --network redis_network --ip 172.19.0.14 -v /usr/etc/redis/conf/redis-sentinel01/sentinel.conf:/usr/local/etc/redis/sentinel.conf -v /usr/local/redis/redis-sentinel01/data:/data --name redis-sentinel01 redis-sentinel:1.0
docker run -d -p 6384:26379 --network redis_network --ip 172.19.0.15 -v /usr/etc/redis/conf/redis-sentinel02/sentinel.conf:/usr/local/etc/redis/sentinel.conf -v /usr/local/redis/redis-sentinel02/data:/data --name redis-sentinel02 redis-sentinel:1.0
Inspect sentinel.conf
Stop the redis-master container: redis-salve02 is promoted to master, and the slave addresses recorded in sentinel.conf are updated.
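Sentinel state can also be queried directly on its port (a sketch; 26379 is the sentinel port inside the container):
docker exec -it redis-sentinel01 redis-cli -p 26379 sentinel get-master-addr-by-name mymaster
# returns the current master's ip and port; after redis-master is stopped it
# switches to 172.19.0.13 (redis-salve02)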
Complete Redis configuration file
# Redis configuration file example.
#
# In order to read the configuration file, Redis must be started with the file path as its first argument:
#
# ./redis-server /path/to/redis.conf
# When a memory size needs to be specified, it can be given in the following forms
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# Units are case insensitive
################################## INCLUDES ###################################
# Include one or more other config files here. This is useful if you have a standard template that goes to all Redis servers but also need to customize a few settings per server. Include files can include other files, so use this wisely.
# Note that the "include" option is not rewritten by the "CONFIG REWRITE" command from admin or Redis Sentinel.
# If you are interested in using include to override configuration options, it is best to use include as the last line.
#
# include /path/to/local.conf
# include /path/to/other.conf
################################## MODULES #####################################
################################## NETWORK #####################################
# By default, if no "bind" configuration directive is specified, Redis listens for connections from all available network interfaces on the host.
# It is possible to listen to just one or more selected interfaces using the "bind" configuration directive, followed by one or more IP addresses.
# Note: when other machines need to access Redis, this line must be commented out (or bind the right interface); the default is bind 127.0.0.1
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
# Binding to all interfaces is dangerous if the machine running Redis is directly exposed to the internet: it exposes the instance to everybody on the internet.
# So by default we uncomment the following bind directive, which forces Redis to listen only on the IPv4 loopback interface address (meaning Redis can only accept client connections from the same host it is running on).
bind 127.0.0.1
# Protected mode is a layer of security protection, in order to avoid that
# Redis instances left open on the internet are accessed and exploited.
# When protected mode is on, if:
# 1) The server is not binding explicitly to a set of addresses using the "bind" directive.
# 2) No password is configured.
# the server only accepts connections from clients connecting from the IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain sockets. Protected mode is enabled by default.
# You should disable it only if you are sure you want clients from other hosts to connect to Redis, even if no authentication is configured and no specific set of interfaces is explicitly listed using the "bind" directive.
# Note: when other host machines need to access Redis, protected mode must be turned off (or a password/bind configured); the default is protected-mode yes
protected-mode yes
# Accept connections on the specified port, default is 6379
port 6379
#In high requests-per-second environments you need a high backlog in order to avoid slow clients connection issues. Note that the Linux kernel
#will silently truncate it to the value of /proc/sys/net/core/somaxconn, so make sure to raise both the value of somaxconn and tcp_max_syn_backlog in order to get the desired effect.
tcp-backlog 511
# Unix socket.
#
# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 700
#Close the connection after a client is idle for N seconds (0 to disable)
timeout 0
# TCP keepalive.
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in the absence of communication. This is useful for two reasons:
# 1) Detect dead peers.
# 2) Force intermediate network equipment to consider the connection alive.
# On Linux, the specified value (in seconds) is the period used to send the ACKs.
# Note that to close the connection the double of the time is needed. On other kernels the period depends on the kernel configuration.
# A reasonable value for this option is 300 seconds, which is the new Redis default starting with Redis 3.2.1.
tcp-keepalive 300
################################# TLS/SSL #####################################
# By default, TLS/SSL is disabled. To enable it, the "tls-port" configuration
# directive can be used to define TLS-listening ports. To enable TLS on the
# default port, use:
#
# port 0
# tls-port 6379
# Configure a X.509 certificate and private key to use for authenticating the
# server to connected clients, masters or cluster peers. These files should be
# PEM formatted.
#
# tls-cert-file redis.crt
# tls-key-file redis.key
# Configure a DH parameters file to enable Diffie-Hellman (DH) key exchange:
#
# tls-dh-params-file redis.dh
# Configure a CA certificate(s) bundle or directory to authenticate TLS/SSL
# clients and peers. Redis requires an explicit configuration of at least one
# of these, and will not implicitly use the system wide configuration.
#
# tls-ca-cert-file ca.crt
# tls-ca-cert-dir /etc/ssl/certs
# By default, clients (including replica servers) on a TLS port are required
# to authenticate using valid client side certificates.
#
# If "no" is specified, client certificates are not required and not accepted.
# If "optional" is specified, client certificates are accepted and must be
# valid if provided, but are not required.
#
# tls-auth-clients no
# tls-auth-clients optional
# By default, a Redis replica does not attempt to establish a TLS connection
# with its master.
#
# Use the following directive to enable TLS on replication links.
#
# tls-replication yes
# By default, the Redis Cluster bus uses a plain TCP connection. To enable
# TLS for the bus protocol, use the following directive:
#
# tls-cluster yes
# Explicitly specify TLS versions to support. Allowed values are case insensitive
# and include "TLSv1", "TLSv1.1", "TLSv1.2", "TLSv1.3" (OpenSSL >= 1.1.1) or
# any combination. To enable only TLSv1.2 and TLSv1.3, use:
#
# tls-protocols "TLSv1.2 TLSv1.3"
# Configure allowed ciphers. See the ciphers(1ssl) manpage for more information
# about the syntax of this string.
#
# Note: this configuration applies only to <= TLSv1.2.
#
# tls-ciphers DEFAULT:!MEDIUM
# Configure allowed TLSv1.3 ciphersuites. See the ciphers(1ssl) manpage for more
# information about the syntax of this string, and specifically for TLSv1.3
# ciphersuites.
#
# tls-ciphersuites TLS_CHACHA20_POLY1305_SHA256
# When choosing a cipher, use the server's preference instead of the client
# preference. By default, the server follows the client's preference.
#
# tls-prefer-server-ciphers yes
# By default, TLS session caching is enabled to allow faster and less expensive
# reconnections by clients that support it. Use the following directive to disable
# caching.
#
# tls-session-caching no
# Change the default number of TLS sessions cached. A zero value sets the cache
# to unlimited size. The default size is 20480.
#
# tls-session-cache-size 5000
# Change the default timeout of cached TLS sessions. The default timeout is 300
# seconds.
#
# tls-session-cache-timeout 60
################################# GENERAL #####################################
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize no
# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options:
# supervised no - no supervision interaction
# supervised upstart - signal upstart by putting Redis into SIGSTOP mode
# requires "expect stop" in your upstart job config
# supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
# supervised auto - detect upstart or systemd method based on
# UPSTART_JOB or NOTIFY_SOCKET environment variables
# Note: these supervision methods only signal "process is ready."
# They do not enable continuous pings back to your supervisor.
supervised no
# If a pid file is specified, Redis writes it where specified at startup
# and removes it at exit.
#
# When the server runs non daemonized, no pid file is created if none is
# specified in the configuration. When the server is daemonized, the pid file
# is used even if not specified, defaulting to "/var/run/redis.pid".
#
# Creating a pid file is best effort: if Redis is not able to create it
# nothing bad happens, the server will start and run normally.
pidfile /var/run/redis_6379.pid
# Specify the server log (verbosity) level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful messages, but not as noisy as the debug level)
# notice (moderately verbose, probably what you want in production)
# warning (only very important / critical messages are logged)
loglevel notice
# Specify the log file name. An empty string can also be used to force Redis to log to standard output. Note that if you use standard output for logging but daemonize, logs are sent to /dev/null
logfile ""
# To enable logging to the system logger, just set 'syslog-enabled' to yes and optionally update the other syslog parameters to suit your needs.
# syslog-enabled no
# Specify the syslog identity
# syslog-ident redis
# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# syslog-facility local0
# To disable the built in crash log, which will possibly produce cleaner core
# dumps when they are needed, uncomment the following:
#
# crash-log-enabled no
# To disable the fast memory check that's run as part of the crash log, which
# will possibly let redis terminate sooner, uncomment the following:
#
# crash-memcheck-enabled no
# Set the number of databases. The default database is db0; you can select a different one on a per-connection basis using select <dbid>, where dbid is a number between 0 and 'databases'-1
databases 16
# By default Redis shows an ASCII art logo only when started to log to the
# standard output and if the standard output is a TTY and syslog logging is
# disabled. Basically this means that normally a logo is displayed only in
# interactive sessions.
#
# However it is possible to force the pre-4.0 behavior and always show a
# ASCII art logo in startup logs by setting the following option to yes.
always-show-logo no
################################ SNAPSHOTTING ################################
#
# Save the DB on disk:
#
# save <seconds> <changes>
#
# The DB will be saved if both the given number of seconds and the given number of write operations against the DB occurred. (A dump file can also be produced manually with the save or bgsave commands; the flushall command also produces a dump file, but an empty one.)
#
# In the example below the behavior will be to save:
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
#
# Note: you can disable saving completely by commenting out all the "save" lines.
#
# It is also possible to remove all the previously configured save points by adding a save directive with a single empty string argument
# like in the following example:
#
# save ""
save 900 1
save 300 10
save 60 10000
# By default Redis will stop accepting writes if RDB snapshots are enabled (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting on disk properly; otherwise chances are that no one will notice and some disaster will happen.
#
# If the background saving process starts working again, Redis will automatically allow writes again.
#
# However, if you have set up proper monitoring of the Redis server and of persistence, you may want to disable this feature so that Redis keeps working as usual even if there are problems with the disk, permissions, and so forth.
stop-writes-on-bgsave-error yes
# Compress string objects using LZF when dumping the .rdb database?
# Compression is enabled by default because it is almost always a win.
# If you want to save some CPU in the saving child process, set it to 'no', but the dataset will likely be bigger if you have compressible values or keys
rdbcompression yes
# Since version 5 of RDB, a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption, but there is a performance hit (around 10%) when saving and loading RDB files, so you can disable it for maximum performance.
#
# RDB files created with checksum disabled have a checksum of zero, which tells the loading code to skip the check.
rdbchecksum yes
# The file name for dumping the database
dbfilename dump.rdb
#Delete the RDB files used for replication on instances without persistence enabled.
# By default this option is disabled; however, in some environments, for regulatory or other security reasons, RDB files that a master keeps on disk to feed its replicas,
# or that replicas store on disk in order to load them for the initial sync, should be deleted as soon as possible.
# Note that this option only applies to instances that have both AOF and RDB persistence disabled; otherwise it is completely ignored.
#
# An alternative (and sometimes better) approach is to use diskless replication on both the master and the replica instances. For replicas, however, diskless is not always an option.
rdb-del-sync-files no
# The working directory.
#
# The database will be written inside this directory, with the file name specified above using the 'dbfilename' configuration directive.
#
# The append-only file will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir ./
################################# REPLICATION #################################
# Master-Replica replication. Use replicaof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP about Redis replication.
#
# +------------------+ +---------------+
# | Master | ---> | Replica |
# | (receive writes) | | (exact copy) |
# +------------------+ +---------------+
#
# 1) Redis replication is asynchronous, but you can configure a master to
# stop accepting writes if it appears to be not connected with at least
# a given number of replicas.
# 2) Redis replicas are able to perform a partial resynchronization with the
# master if the replication link is lost for a relatively small amount of
# time. You may want to configure the replication backlog size (see the next
# sections of this file) with a sensible value depending on your needs.
# 3) Replication is automatic and does not need user intervention. After a
# network partition replicas automatically try to reconnect to masters
# and resynchronize with them.
#
# replicaof <masterip> <masterport>
# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the replica to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the replica request.
#
# masterauth <master-password>
#
# However this is not enough if you are using Redis ACLs (for Redis version
# 6 or greater), and the default user is not capable of running the PSYNC
# command and/or other commands needed for replication. In this case it's
# better to configure a special user to use with replication, and specify the
# masteruser configuration as such:
#
# masteruser <username>
#
# When masteruser is specified, the replica will authenticate against its
# master using the new AUTH form: AUTH <username> <password>.
# When a replica loses its connection with the master, or when the replication
# is still in progress, the replica can act in two different ways:
#
# 1) if replica-serve-stale-data is set to 'yes' (the default) the replica will
# still reply to client requests, possibly with out of date data, or the
# data set may just be empty if this is the first synchronization.
#
# 2) If replica-serve-stale-data is set to 'no' the replica will reply with
# an error "SYNC with master in progress" to all commands except:
# INFO, REPLICAOF, AUTH, PING, SHUTDOWN, REPLCONF, ROLE, CONFIG, SUBSCRIBE,
# UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB, COMMAND, POST,
# HOST and LATENCY.
#
replica-serve-stale-data yes
# You can configure a replica instance to accept writes or not. Writing against
# a replica instance may be useful to store some ephemeral data (because data
# written on a replica will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default replicas are read-only.
#
# Note: read only replicas are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only replica exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only replicas using 'rename-command' to shadow all the
# administrative / dangerous commands.
replica-read-only yes
# Replication SYNC strategy: disk or socket.
#
# New replicas and reconnecting replicas that are not able to continue the
# replication process just receiving differences, need to do what is called a
# "full synchronization". An RDB file is transmitted from the master to the
# replicas.
#
# The transmission can happen in two different ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
# file on disk. Later the file is transferred by the parent
# process to the replicas incrementally.
# 2) Diskless: The Redis master creates a new process that directly writes the
# RDB file to replica sockets, without touching the disk at all.
#
# With disk-backed replication, while the RDB file is generated, more replicas
# can be queued and served with the RDB file as soon as the current child
# producing the RDB file finishes its work. With diskless replication instead
# once the transfer starts, new replicas arriving will be queued and a new
# transfer will start when the current one terminates.
#
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple
# replicas will arrive and the transfer can be parallelized.
#
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
repl-diskless-sync no
# When diskless replication is enabled, it is possible to configure the delay
# the server waits in order to spawn the child that transfers the RDB via socket
# to the replicas.
#
# This is important since once the transfer starts, it is not possible to serve
# new replicas arriving, that will be queued for the next RDB transfer, so the
# server waits a delay in order to let more replicas arrive.
#
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
repl-diskless-sync-delay 5
# -----------------------------------------------------------------------------
# WARNING: RDB diskless load is experimental. Since in this setup the replica
# does not immediately store an RDB on disk, it may cause data loss during
# failovers. RDB diskless load + Redis modules not handling I/O reads may also
# cause Redis to abort in case of I/O errors during the initial synchronization
# stage with the master. Use only if you know what you are doing.
# -----------------------------------------------------------------------------
#
# Replica can load the RDB it reads from the replication link directly from the
# socket, or store the RDB to a file and read that file after it was completely
# received from the master.
#
# In many cases the disk is slower than the network, and storing and loading
# the RDB file may increase replication time (and even increase the master's
# Copy on Write memory and replica buffers).
# However, parsing the RDB file directly from the socket may mean that we have
# to flush the contents of the current database before the full rdb was
# received. For this reason we have the following options:
#
# "disabled" - Don't use diskless load (store the rdb file to the disk first)
# "on-empty-db" - Use diskless load only when it is completely safe.
# "swapdb" - Keep a copy of the current db contents in RAM while parsing
# the data directly from the socket. note that this requires
# sufficient memory, if you don't have it, you risk an OOM kill.
repl-diskless-load disabled
# Replicas send PINGs to server in a predefined interval. It's possible to
# change this interval with the repl_ping_replica_period option. The default
# value is 10 seconds.
#
# repl-ping-replica-period 10
# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of replica.
# 2) Master timeout from the point of view of replicas (data, pings).
# 3) Replica timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-replica-period otherwise a timeout will be detected
# every time there is low traffic between the master and the replica. The default
# value is 60 seconds.
#
# repl-timeout 60
# Disable TCP_NODELAY on the replica socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to replicas. But this can add a delay for
# the data to appear on the replica side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the replica side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and replicas are many hops away, turning this to "yes" may
# be a good idea.
repl-disable-tcp-nodelay no
# Set the replication backlog size. The backlog is a buffer that accumulates
# replica data when replicas are disconnected for some time, so that when a
# replica wants to reconnect again, often a full resync is not needed, but a
# partial resync is enough, just passing the portion of data the replica
# missed while disconnected.
#
# The bigger the replication backlog, the longer the replica can endure the
# disconnect and later be able to perform a partial resynchronization.
#
# The backlog is only allocated if there is at least one replica connected.
#
# repl-backlog-size 1mb
# After a master has no connected replicas for some time, the backlog will be
# freed. The following option configures the amount of seconds that need to
# elapse, starting from the time the last replica disconnected, for the backlog
# buffer to be freed.
#
# Note that replicas never free the backlog for timeout, since they may be
# promoted to masters later, and should be able to correctly "partially
# resynchronize" with other replicas: hence they should always accumulate backlog.
#
# A value of 0 means to never release the backlog.
#
# repl-backlog-ttl 3600
# The replica priority is an integer number published by Redis in the INFO
# output. It is used by Redis Sentinel in order to select a replica to promote
# into a master if the master is no longer working correctly.
#
# A replica with a low priority number is considered better for promotion, so
# for instance if there are three replicas with priority 10, 100, 25 Sentinel
# will pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the replica as not able to perform the
# role of master, so a replica with priority of 0 will never be selected by
# Redis Sentinel for promotion.
#
# By default the priority is 100.
replica-priority 100
# It is possible for a master to stop accepting writes if there are less than
# N replicas connected, having a lag less or equal than M seconds.
#
# The N replicas need to be in "online" state.
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the replica, that is usually sent every second.
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough replicas
# are available, to the specified number of seconds.
#
# For example to require at least 3 replicas with a lag <= 10 seconds use:
#
# min-replicas-to-write 3
# min-replicas-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-replicas-to-write is set to 0 (feature disabled) and
# min-replicas-max-lag is set to 10.
# A Redis master is able to list the address and port of the attached
# replicas in different ways. For example the "INFO replication" section
# offers this information, which is used, among other tools, by
# Redis Sentinel in order to discover replica instances.
# Another place where this info is available is in the output of the
# "ROLE" command of a master.
#
# The listed IP address and port normally reported by a replica is
# obtained in the following way:
#
# IP: The address is auto detected by checking the peer address
# of the socket used by the replica to connect with the master.
#
# Port: The port is communicated by the replica during the replication
# handshake, and is normally the port that the replica is using to
# listen for connections.
#
# However when port forwarding or Network Address Translation (NAT) is
# used, the replica may actually be reachable via different IP and port
# pairs. The following two options can be used by a replica in order to
# report to its master a specific set of IP and port, so that both INFO
# and ROLE will report those values.
#
# There is no need to use both the options if you need to override just
# the port or the IP address.
#
# replica-announce-ip 5.5.5.5
# replica-announce-port 1234
############################### KEYS TRACKING #################################
# Redis implements server assisted support for client side caching of values.
# This is implemented using an invalidation table that remembers, using
# 16 millions of slots, what clients may have certain subsets of keys. In turn
# this is used in order to send invalidation messages to clients. Please
# check this page to understand more about the feature:
#
# https://redis.io/topics/client-side-caching
#
# When tracking is enabled for a client, all the read only queries are assumed
# to be cached: this will force Redis to store information in the invalidation
# table. When keys are modified, such information is flushed away, and
# invalidation messages are sent to the clients. However if the workload is
# heavily dominated by reads, Redis could use more and more memory in order
# to track the keys fetched by many clients.
#
# For this reason it is possible to configure a maximum fill value for the
# invalidation table. By default it is set to 1M of keys, and once this limit
# is reached, Redis will start to evict keys in the invalidation table
# even if they were not modified, just to reclaim memory: this will in turn
# force the clients to invalidate the cached values. Basically the table
# maximum size is a trade off between the memory you want to spend server
# side to track information about who cached what, and the ability of clients
# to retain cached objects in memory.
#
# If you set the value to 0, it means there are no limits, and Redis will
# retain as many keys as needed in the invalidation table.
# In the "stats" INFO section, you can find information about the number of
# keys in the invalidation table at every given moment.
#
# Note: when key tracking is used in broadcasting mode, no memory is used
# in the server side so this setting is useless.
#
# tracking-table-max-keys 1000000
################################## SECURITY ###################################
# Warning: since Redis is pretty fast, an outside user can try up to
# 1 million passwords per second against a modern box. This means that you
# should use very strong passwords, otherwise they will be very easy to break.
# Note that because the password is really a shared secret between the client
# and the server, and should not be memorized by any human, the password
# can be easily a long string from /dev/urandom or whatever, so by using a
# long and unguessable password no brute force attack will be possible.
# Redis ACL users are defined in the following format:
#
# user <username> ... acl rules ...
#
# For example:
#
# user worker +@list +@connection ~jobs:* on >ffa9203c493aa99
#
# The special username "default" is used for new connections. If this user
# has the "nopass" rule, then new connections will be immediately authenticated
# as the "default" user without the need of any password provided via the
# AUTH command. Otherwise if the "default" user is not flagged with "nopass"
# the connections will start in not authenticated state, and will require
# AUTH (or the HELLO command AUTH option) in order to be authenticated and
# start to work.
#
# The ACL rules that describe what a user can do are the following:
#
# on Enable the user: it is possible to authenticate as this user.
# off Disable the user: it's no longer possible to authenticate
# with this user, however the already authenticated connections
# will still work.
# +<command> Allow the execution of that command
# -<command> Disallow the execution of that command
# +@<category> Allow the execution of all the commands in such category
# with valid categories are like @admin, @set, @sortedset, ...
# and so forth, see the full list in the server.c file where
# the Redis command table is described and defined.
# The special category @all means all the commands, but currently
# present in the server, and that will be loaded in the future
# via modules.
# +<command>|subcommand Allow a specific subcommand of an otherwise
# disabled command. Note that this form is not
# allowed as negative like -DEBUG|SEGFAULT, but
# only additive starting with "+".
# allcommands Alias for +@all. Note that it implies the ability to execute
# all the future commands loaded via the modules system.
# nocommands Alias for -@all.
# ~<pattern> Add a pattern of keys that can be mentioned as part of
# commands. For instance ~* allows all the keys. The pattern
# is a glob-style pattern like the one of KEYS.
# It is possible to specify multiple patterns.
# allkeys Alias for ~*
# resetkeys Flush the list of allowed keys patterns.
# ><password> Add this password to the list of valid password for the user.
# For example >mypass will add "mypass" to the list.
# This directive clears the "nopass" flag (see later).
# <<password> Remove this password from the list of valid passwords.
# nopass All the set passwords of the user are removed, and the user
# is flagged as requiring no password: it means that every
# password will work against this user. If this directive is
# used for the default user, every new connection will be
# immediately authenticated with the default user without
# any explicit AUTH command required. Note that the "resetpass"
# directive will clear this condition.
# resetpass Flush the list of allowed passwords. Moreover removes the
# "nopass" status. After "resetpass" the user has no associated
# passwords and there is no way to authenticate without adding
# some password (or setting it as "nopass" later).
# reset Performs the following actions: resetpass, resetkeys, off,
# -@all. The user returns to the same state it has immediately
# after its creation.
#
# ACL rules can be specified in any order: for instance you can start with
# passwords, then flags, or key patterns. However note that the additive
# and subtractive rules will CHANGE MEANING depending on the ordering.
# For instance see the following example:
#
# user alice on +@all -DEBUG ~* >somepassword
#
# This will allow "alice" to use all the commands with the exception of the
# DEBUG command, since +@all added all the commands to the set of the commands
# alice can use, and later DEBUG was removed. However if we invert the order
# of two ACL rules the result will be different:
#
# user alice on -DEBUG +@all ~* >somepassword
#
# Now DEBUG was removed when alice had yet no commands in the set of allowed
# commands, later all the commands are added, so the user will be able to
# execute everything.
#
# Basically ACL rules are processed left-to-right.
#
# For more information about ACL configuration please refer to
# the Redis web site at https://redis.io/topics/acl
# ACL LOG
#
# The ACL Log tracks failed commands and authentication events associated
# with ACLs. The ACL Log is useful to troubleshoot failed commands blocked
# by ACLs. The ACL Log is stored in memory. You can reclaim memory with
# ACL LOG RESET. Define the maximum entry length of the ACL Log below.
acllog-max-len 128
# Using an external ACL file
#
# Instead of configuring users here in this file, it is possible to use
# a stand-alone file just listing users. The two methods cannot be mixed:
# if you configure users here and at the same time you activate the external
# ACL file, the server will refuse to start.
#
# The format of the external ACL user file is exactly the same as the
# format that is used inside redis.conf to describe users.
#
# aclfile /etc/redis/users.acl
# IMPORTANT NOTE: starting with Redis 6, "requirepass" is just a compatibility layer on top of the new ACL system. The effect of the option is only to set the password for the default user.
# Clients will still authenticate using AUTH <password>, or more explicitly AUTH default <password> if they follow the new protocol: both will work.
#
# requirepass foobared
# Command renaming (DEPRECATED).
#
# ------------------------------------------------------------------------
# WARNING: avoid using this option if possible. Instead use ACLs to remove
# commands from the default user, and put them only in some admin user you
# create for administrative purposes.
# ------------------------------------------------------------------------
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to replicas may cause problems.
################################### CLIENTS ####################################
# Set the max number of connected clients at the same time. By default this limit is set to 10000 clients;
# however, if the Redis server is not able to configure the process file limit to allow for the specified limit, the max number of allowed clients is set to the current file limit minus 32 (since Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached, Redis will close all new connections, sending the error 'max number of clients reached'.
#
# When Redis Cluster is used, the max number of connections is also shared with the cluster bus: every node in the cluster uses two connections, one incoming and one outgoing. It is important to size the limit accordingly for very large clusters.
#
# maxclients 10000
############################## MEMORY MANAGEMENT ################################
# Set a memory usage limit to the specified amount of bytes. When the limit is reached, keys are removed according to the eviction policy (see maxmemory-policy).
#
# If Redis cannot remove keys according to the policy, or if the policy is set to 'noeviction', Redis starts replying with errors to commands that would use more memory (such as SET, LPUSH, and so on) while continuing to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU or LFU cache, or when setting a hard memory limit for an instance (with the 'noeviction' policy).
#
# WARNING: with replicas attached, the memory needed for the replica output buffers has to be subtracted from the used memory;
# otherwise evicted keys fill the replica output buffers with DELs, which in turn triggers the eviction of more keys, and so on until the database is completely emptied.
#
# In short... if you have replicas attached, it is suggested that you set a lower limit for maxmemory, so that some free RAM is left on the system for the replica output buffers (this is not needed if the policy is 'noeviction').
#
# maxmemory <bytes>
# MAXMEMORY POLICY: how Redis selects what to remove when maxmemory is reached. Choose one of the following behaviors:
#
# volatile-lru -> evict using approximated LRU, only keys with an expire set.
# allkeys-lru -> evict any key using approximated LRU.
# volatile-lfu -> evict using approximated LFU, only keys with an expire set.
# allkeys-lfu -> evict any key using approximated LFU.
# volatile-random -> remove a random key among those with an expire set.
# allkeys-random -> remove a random key, any key.
# volatile-ttl -> remove the key closest to its expire time (smallest TTL)
# noeviction -> don't evict anything, just return an error on write operations.
#
# LRU means Least Recently Used
# LFU means Least Frequently Used
#
# LRU, LFU and volatile-ttl are all implemented with approximated randomized algorithms.
#
# Note: with any of the above policies, Redis returns an error on write operations when there are no suitable keys for eviction.
#
# These are the write commands: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy noeviction
# The LRU, LFU and minimal TTL algorithms are not precise but approximated algorithms (in order to save memory), so you can tune them for speed or accuracy.
# By default Redis checks five keys and picks the one that was used least recently; you can change the sample size with the following configuration directive.
#
# The default of 5 produces good enough results. 10 approximates true LRU very closely but costs more CPU. 3 is faster but not very accurate.
#
# maxmemory-samples 5
# Eviction processing is designed to work well with the default setting. If there is an unusually large amount of write traffic, this value may need to be increased. Decreasing it may reduce latency, at the risk of less effective eviction processing.
# 0 = minimum latency, 10 = default, 100 = process without regard to latency
#
# maxmemory-eviction-tenacity 10
# Starting from Redis 5, by default a replica will ignore its maxmemory setting
# (unless it is promoted to master after a failover or manually). It means
# that the eviction of keys will be just handled by the master, sending the
# DEL commands to the replica as keys evict in the master side.
#
# This behavior ensures that masters and replicas stay consistent, and is usually
# what you want, however if your replica is writable, or you want the replica
# to have a different memory setting, and you are sure all the writes performed
# to the replica are idempotent, then you may change this default (but be sure
# to understand what you are doing).
#
# Note that since the replica by default does not evict, it may end using more
# memory than the one set via maxmemory (there are certain buffers that may
# be larger on the replica, or data structures may sometimes take more memory
# and so forth). So make sure you monitor your replicas and make sure they
# have enough memory to never hit a real out-of-memory condition before the
# master hits the configured maxmemory setting.
#
# replica-ignore-maxmemory yes
# Redis reclaims expired keys in two ways: upon access when those keys are
# found to be expired, and also in background, in what is called the
# "active expire key". The key space is slowly and interactively scanned
# looking for expired keys to reclaim, so that it is possible to free memory
# of keys that are expired and will never be accessed again in a short time.
#
# The default effort of the expire cycle will try to avoid having more than
# ten percent of expired keys still in memory, and will try to avoid consuming
# more than 25% of total memory and to add latency to the system. However
# it is possible to increase the expire "effort" that is normally set to
# "1", to a greater value, up to the value "10". At its maximum value the
# system will use more CPU, longer cycles (and technically may introduce
# more latency), and will tolerate less already expired keys still present
# in the system. It's a tradeoff between memory, CPU and latency.
#
# active-expire-effort 1
############################# LAZY FREEING ####################################
# Redis has two primitives to delete keys. One is called DEL and is a blocking
# deletion of the object. It means that the server stops processing new commands
# in order to reclaim all the memory associated with an object in a synchronous
# way. If the key deleted is associated with a small object, the time needed
# in order to execute the DEL command is very small and comparable to most other
# O(1) or O(log_N) commands in Redis. However if the key is associated with an
# aggregated value containing millions of elements, the server can block for
# a long time (even seconds) in order to complete the operation.
#
# For the above reasons Redis also offers non blocking deletion primitives
# such as UNLINK (non blocking DEL) and the ASYNC option of FLUSHALL and
# FLUSHDB commands, in order to reclaim memory in background. Those commands
# are executed in constant time. Another thread will incrementally free the
# object in the background as fast as possible.
#
# DEL, UNLINK and ASYNC option of FLUSHALL and FLUSHDB are user-controlled.
# It's up to the design of the application to understand when it is a good
# idea to use one or the other. However the Redis server sometimes has to
# delete keys or flush the whole database as a side effect of other operations.
# Specifically Redis deletes objects independently of a user call in the
# following scenarios:
#
# 1) On eviction, because of the maxmemory and maxmemory policy configurations,
# in order to make room for new data, without going over the specified
# memory limit.
# 2) Because of expire: when a key with an associated time to live (see the
# EXPIRE command) must be deleted from memory.
# 3) Because of a side effect of a command that stores data on a key that may
# already exist. For example the RENAME command may delete the old key
# content when it is replaced with another one. Similarly SUNIONSTORE
# or SORT with STORE option may delete existing keys. The SET command
# itself removes any old content of the specified key in order to replace
# it with the specified string.
# 4) During replication, when a replica performs a full resynchronization with
# its master, the content of the whole database is removed in order to
# load the RDB file just transferred.
#
# In all the above cases the default is to delete objects in a blocking way,
# like if DEL was called. However you can configure each case specifically
# in order to instead release memory in a non-blocking way like if UNLINK
# was called, using the following configuration directives.
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
# It is also possible, for the case when to replace the user code DEL calls
# with UNLINK calls is not easy, to modify the default behavior of the DEL
# command to act exactly like UNLINK, using the following configuration
# directive:
lazyfree-lazy-user-del no
################################ THREADED I/O #################################
# Redis is mostly single threaded, however there are certain threaded
# operations such as UNLINK, slow I/O accesses and other things that are
# performed on side threads.
#
# Now it is also possible to handle Redis clients socket reads and writes
# in different I/O threads. Since especially writing is so slow, normally
# Redis users use pipelining in order to speed up the Redis performances per
# core, and spawn multiple instances in order to scale more. Using I/O
# threads it is possible to easily speedup two times Redis without resorting
# to pipelining nor sharding of the instance.
#
# By default threading is disabled, we suggest enabling it only in machines
# that have at least 4 or more cores, leaving at least one spare core.
# Using more than 8 threads is unlikely to help much. We also recommend using
# threaded I/O only if you actually have performance problems, with Redis
# instances being able to use a quite big percentage of CPU time, otherwise
# there is no point in using this feature.
#
# So for instance if you have a four cores boxes, try to use 2 or 3 I/O
# threads, if you have a 8 cores, try to use 6 threads. In order to
# enable I/O threads use the following configuration directive:
#
# io-threads 4
#
# Setting io-threads to 1 will just use the main thread as usual.
# When I/O threads are enabled, we only use threads for writes, that is
# to thread the write(2) syscall and transfer the client buffers to the
# socket. However it is also possible to enable threading of reads and
# protocol parsing using the following configuration directive, by setting
# it to yes:
#
# io-threads-do-reads no
#
# Usually threading reads doesn't help much.
#
# NOTE 1: This configuration directive cannot be changed at runtime via
# CONFIG SET. Aso this feature currently does not work when SSL is
# enabled.
#
# NOTE 2: If you want to test the Redis speedup using redis-benchmark, make
# sure you also run the benchmark itself in threaded mode, using the
# --threads option to match the number of Redis threads, otherwise you'll not
# be able to notice the improvements.
############################ KERNEL OOM CONTROL ##############################
# On Linux, it is possible to hint the kernel OOM killer on what processes
# should be killed first when out of memory.
#
# Enabling this feature makes Redis actively control the oom_score_adj value
# for all its processes, depending on their role. The default scores will
# attempt to have background child processes killed before all others, and
# replicas killed before masters.
oom-score-adj no
# When oom-score-adj is used, this directive controls the specific values used
# for master, replica and background child processes. Values range -1000 to
# 1000 (higher means more likely to be killed).
#
# Unprivileged processes (not root, and without CAP_SYS_RESOURCE capabilities)
# can freely increase their value, but not decrease it below its initial
# settings.
#
# Values are used relative to the initial value of oom_score_adj when the server
# starts. Because typically the initial value is 0, they will often match the
# absolute values.
oom-score-adj-values 0 200 800
############################## APPEND ONLY MODE ###############################
# By default Redis asynchronously dumps the dataset to disk. This mode is good enough in many applications, but a problem with the Redis process or a power outage may result in a few minutes of lost writes (depending on the configured save points).
#
# The append-only file is an alternative persistence mode that provides much better durability.
# For instance, with the default data fsync policy (see later in this config file),
# Redis may lose just one second of writes in a dramatic event like a server power outage, or a single write if something goes wrong with the Redis process itself while the operating system keeps running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems. If AOF is enabled at startup, Redis loads the AOF, i.e. the file with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.
appendonly no
# The name of the append-only file (default: "appendonly.aof")
appendfilename "appendonly.aof"
# The fsync() call tells the operating system to actually write data to disk instead of waiting for more data in the output buffer. Some operating systems really flush data to disk; some others just try to do it as soon as possible.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the operating system flush the data when it wants. Faster.
# always: fsync after every write to the append-only log. Slow, safest.
# everysec: fsync only once per second. A compromise.
#
# The default is "everysec", as it is usually the right compromise between speed and data safety. It is up to you to decide whether you can relax this to "no",
# letting the operating system flush the output buffer when it wants, for better performance (but if you can live with the idea of some data loss, consider the default persistence mode, snapshotting),
# or, on the contrary, use "always", which is very slow but a bit safer than everysec.
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".
# appendfsync always
appendfsync everysec
# appendfsync no
# When the AOF fsync policy is set to always or everysec, and a background saving process (a background save or AOF log background rewriting) is performing a lot of I/O against the disk,
# in some Linux configurations Redis may block too long on the fsync() call.
# Note that there is no fix for this currently, as even performing the fsync in a different thread would block our synchronous write(2) call.
#
# In order to mitigate this problem, the following option can be used to prevent fsync() from being called in the main process while a BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is the same as "appendfsync none". In practical terms this means it is possible to lose up to 30 seconds of log in the worst scenario (with the default Linux settings).
#
# If you have latency problems, set this to "yes". Otherwise leave it as "no", which is the safest pick from the point of view of durability.
no-appendfsync-on-rewrite no
# Automatic rewrite of the append-only file.
# Redis is able to automatically rewrite the log file, implicitly calling
# BGREWRITEAOF when the AOF log grows by the specified percentage.
#
# It works like this: Redis remembers the size of the AOF file after the latest rewrite (or, if no rewrite has happened since the restart, the size of the AOF at startup).
#
# That base size is compared to the current size. If the current size is bigger by the specified percentage, a rewrite is triggered. By default a rewrite triggers when the AOF has doubled the base size and is larger than 64mb.
# You also need to specify a minimal size for the AOF file to be rewritten; this avoids rewriting the AOF even when the percentage increase is reached but the file is still quite small.
#
# Specify a percentage of zero to disable the automatic AOF rewrite feature.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
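# Worked example with the defaults above: if the AOF measured 100mb after
# the last rewrite, the next BGREWRITEAOF fires when it reaches
# 100mb + 100% = 200mb (and 200mb > 64mb, so the minimum size check passes).
# A 10mb base would have to grow to 20mb, but 20mb < 64mb, so no automatic
# rewrite happens in that case.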
# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running crashes, notably
# when an ext4 filesystem is mounted without the data=ordered option
# (however this can't happen when Redis itself crashes or aborts but the
# operating system still works correctly).
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found to
# be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts, emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error and
# refuses to start. When the option is set to no, the user is required to
# fix the AOF file using the "redis-check-aof" utility before restarting
# the server.
#
# Note that if the AOF file is found to be corrupted in the middle, the
# server will still exit with an error. This option only applies when Redis
# will try to read more data from the AOF file but not enough bytes will be
# found.
aof-load-truncated yes
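# For reference, a truncated file can be repaired before restart with the
# bundled utility (the path below is a placeholder for your AOF location):
#
# redis-check-aof --fix /data/appendonly.aof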
# When rewriting the AOF file, Redis is able to use an RDB preamble in the
# AOF file for faster rewrites and recoveries. When this option is turned
# on the rewritten AOF file is composed of two different stanzas:
#
# [RDB file][AOF tail]
#
# When loading, Redis recognizes that the AOF file starts with the "REDIS"
# string, loads the prefixed RDB file, then continues loading the AOF tail.
aof-use-rdb-preamble yes
################################ LUA SCRIPTING ###############################
# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
#
# When a long running script exceeds the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that did not yet call any write commands. The second
# is the only way to shut down the server in the case a write command was
# already issued by the script but the user doesn't want to wait for the natural
# termination of the script.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
lua-time-limit 5000
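# A quick way to see this in action (a sketch using two terminals): run a
# busy-looping script, then kill it from another connection once the limit
# above is exceeded; SCRIPT KILL works here because the script performed no
# writes:
#
# redis-cli EVAL "while true do end" 0
# redis-cli SCRIPT KILL    # from a second terminal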
################################ REDIS CLUSTER ###############################
# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node enable the cluster support uncommenting the following:
#
# cluster-enabled yes
# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
#
# cluster-config-file nodes-6379.conf
# Cluster node timeout is the amount of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are a multiple of the node timeout.
#
# cluster-node-timeout 15000
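# As a minimal sketch, a node intended for a cluster might set (ports and
# file names below are assumptions):
#
# cluster-enabled yes
# cluster-config-file nodes-6379.conf
# cluster-node-timeout 15000
#
# With six such nodes running, a cluster with one replica per master can
# then be created with:
#
# redis-cli --cluster create 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 \
#   127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 --cluster-replicas 1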
# A replica of a failing master will avoid to start a failover if its data
# looks too old.
#
# There is no simple way for a replica to actually have an exact measure of
# its "data age", so the following two checks are performed:
#
# 1) If there are multiple replicas able to failover, they exchange messages
# in order to try to give an advantage to the replica with the best
# replication offset (more data from the master processed).
# Replicas will try to get their rank by offset, and apply to the start
# of the failover a delay proportional to their rank.
#
# 2) Every single replica computes the time of the last interaction with
# its master. This can be the last ping or command received (if the master
# is still in the "connected" state), or the time that elapsed since the
# disconnection with the master (if the replication link is currently down).
# If the last interaction is too old, the replica will not try to failover
# at all.
#
# The point "2" can be tuned by user. Specifically a replica will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
#
# (node-timeout * cluster-replica-validity-factor) + repl-ping-replica-period
#
# So for example if node-timeout is 30 seconds, and the cluster-replica-validity-factor
# is 10, and assuming a default repl-ping-replica-period of 10 seconds, the
# replica will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
#
# A large cluster-replica-validity-factor may allow replicas with too old data to failover
# a master, while a too small value may prevent the cluster from being able to
# elect a replica at all.
#
# For maximum availability, it is possible to set the cluster-replica-validity-factor
# to a value of 0, which means, that replicas will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).
#
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
#
# cluster-replica-validity-factor 10
# Cluster replicas are able to migrate to orphaned masters, that are masters
# that are left without working replicas. This improves the cluster ability
# to resist to failures as otherwise an orphaned master can't be failed over
# in case of failure if it has no working replicas.
#
# Replicas migrate to orphaned masters only if there are still at least a
# given number of other working replicas for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a replica
# will migrate only if there is at least 1 other working replica for its master
# and so forth. It usually reflects the number of replicas you want for every
# master in your cluster.
#
# Default is 1 (replicas migrate only if their masters remain with at least
# one replica). To disable migration just set it to a very large value.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
#
# cluster-migration-barrier 1
# By default Redis Cluster nodes stop accepting queries if they detect there
# is at least a hash slot uncovered (no available node is serving it).
# This way if the cluster is partially down (for example a range of hash slots
# are no longer covered) all the cluster becomes, eventually, unavailable.
# It automatically returns available as soon as all the slots are covered again.
#
# However sometimes you want the subset of the cluster which is working,
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
#
# cluster-require-full-coverage yes
# This option, when set to yes, prevents replicas from trying to failover its
# master during master failures. However the replica can still perform a
# manual failover, if forced to do so.
#
# This is useful in different scenarios, especially in the case of multiple
# data center operations, where we want one side to never be promoted if not
# in the case of a total DC failure.
#
# cluster-replica-no-failover no
# This option, when set to yes, allows nodes to serve read traffic while the
# cluster is in a down state, as long as it believes it owns the slots.
#
# This is useful for two cases. The first case is for when an application
# doesn't require consistency of data during node failures or network partitions.
# One example of this is a cache, where as long as the node has the data it
# should be able to serve it.
#
# The second use case is for configurations that don't meet the recommended
# three shards but want to enable cluster mode and scale later. A
# master outage in a 1 or 2 shard configuration causes a read/write outage to the
# entire cluster without this option set, with it set there is only a write outage.
# Without a quorum of masters, slot ownership will not change automatically.
#
# cluster-allow-reads-when-down no
# In order to setup your cluster make sure to read the documentation
# available at http://redis.io web site.
########################## CLUSTER DOCKER/NAT support ########################
# In certain deployments, Redis Cluster nodes address discovery fails, because
# addresses are NAT-ted or because ports are forwarded (the typical case is
# Docker and other containers).
#
# In order to make Redis Cluster work in such environments, a static
# configuration where each node knows its public address is needed. The
# following three options are used for this scope, and are:
#
# * cluster-announce-ip
# * cluster-announce-port
# * cluster-announce-bus-port
#
# Each instructs the node about its address, client port, and cluster message
# bus port. The information is then published in the header of the bus packets
# so that other nodes will be able to correctly map the address of the node
# publishing the information.
#
# If the above options are not used, the normal Redis Cluster auto-detection
# will be used instead.
#
# Note that when remapped, the bus port may not be at the fixed offset of
# clients port + 10000, so you can specify any port and bus-port depending
# on how they get remapped. If the bus-port is not set, a fixed offset of
# 10000 will be used as usual.
#
# Example:
#
# cluster-announce-ip 10.1.1.5
# cluster-announce-port 6379
# cluster-announce-bus-port 6380
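# As a sketch tying this to the Docker scenario this document builds on
# (host IP, ports, and image tag are assumptions): if the container's 6379
# and 16379 are published on the host as 7000 and 17000, announce the
# host-side values so that other nodes can reach this node:
#
# docker run -d -p 7000:6379 -p 17000:16379 --name redis-node redis:1.0
#
# cluster-announce-ip 10.1.1.5
# cluster-announce-port 7000
# cluster-announce-bus-port 17000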
################################## SLOW LOG ###################################
# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
#
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.
# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
slowlog-log-slower-than 10000
# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128
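# Typical inspection commands (a sketch; the threshold can also be changed
# at runtime):
#
# redis-cli CONFIG SET slowlog-log-slower-than 10000
# redis-cli SLOWLOG GET 10    # fetch the 10 most recent slow entries
# redis-cli SLOWLOG RESET     # clear the log and reclaim its memory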
################################ LATENCY MONITOR ##############################
# The Redis latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a Redis instance.
#
# Via the LATENCY command this information is available to the user that can
# print graphs and obtain reports.
#
# The system only logs operations that were performed in a time equal or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
#
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact, that while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
latency-monitor-threshold 0
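# For example, to enable the monitor at runtime and read back what it has
# collected (the threshold value is an arbitrary assumption):
#
# redis-cli CONFIG SET latency-monitor-threshold 100
# redis-cli LATENCY LATEST
# redis-cli LATENCY DOCTOR    # human-readable analysis and advice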
############################# EVENT NOTIFICATION ##############################
# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/notifications
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
#
# K Keyspace events, published with __keyspace@<db>__ prefix.
# E Keyevent events, published with __keyevent@<db>__ prefix.
# g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
# $ String commands
# l List commands
# s Set commands
# h Hash commands
# z Sorted set commands
# x Expired events (events generated every time a key expires)
# e Evicted events (events generated when a key is evicted for maxmemory)
# t Stream commands
# m Key-miss events (Note: It is not included in the 'A' class)
# A Alias for g$lshzxet, so that the "AKE" string means all the events
# (Except key-miss events which are excluded from 'A' due to their
# unique nature).
#
# The "notify-keyspace-events" takes as argument a string that is composed
# of zero or multiple characters. The empty string means that notifications
# are disabled.
#
# Example: to enable list and generic events, from the point of view of the
# event name, use:
#
# notify-keyspace-events Elg
#
# Example 2: to get the stream of the expired keys subscribing to channel
# name __keyevent@0__:expired use:
#
# notify-keyspace-events Ex
#
# By default all notifications are disabled because most users don't need
# this feature and the feature has some overhead. Note that if you don't
# specify at least one of K or E, no events will be delivered.
notify-keyspace-events ""
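# A quick end-to-end check (a sketch using two terminals and database 0):
#
# redis-cli CONFIG SET notify-keyspace-events Ex
# redis-cli PSUBSCRIBE '__keyevent@0__:expired'   # terminal 1: listen
# redis-cli SET foo bar PX 100                    # terminal 2: expires in 100ms
#
# Terminal 1 should shortly receive a pmessage whose payload is "foo".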
############################### GOPHER SERVER #################################
# Redis contains an implementation of the Gopher protocol, as specified in
# the RFC 1436 (https://www.ietf.org/rfc/rfc1436.txt).
#
# The Gopher protocol was very popular in the late '90s. It is an alternative
# to the web, and the implementation both server and client side is so simple
# that the Redis server has just 100 lines of code in order to implement this
# support.
#
# What do you do with Gopher nowadays? Well Gopher never *really* died, and
# lately there is a movement to resurrect Gopher's hierarchical content,
# composed of just plain text documents. Some want a simpler
# internet, others believe that the mainstream internet became too much
# controlled, and it's cool to create an alternative space for people that
# want a bit of fresh air.
#
# Anyway, for the 10th birthday of Redis, we gave it the Gopher protocol
# as a gift.
#
# --- HOW IT WORKS? ---
#
# The Redis Gopher support uses the inline protocol of Redis, and specifically
# two kinds of inline requests that were anyway illegal: an empty request
# or any request that starts with "/" (there are no Redis commands starting
# with such a slash). Normal RESP2/RESP3 requests are completely out of the
# path of the Gopher protocol implementation and are served as usual as well.
#
# If you open a connection to Redis when Gopher is enabled and send it
# a string like "/foo", if there is a key named "/foo" it is served via the
# Gopher protocol.
#
# In order to create a real Gopher "hole" (the name of a Gopher site in
# Gopher talk), you likely need a script like the following:
#
# https://github.com/antirez/gopher2redis
#
# --- SECURITY WARNING ---
#
# If you plan to put Redis on the internet in a publicly accessible address
# to serve Gopher pages MAKE SURE TO SET A PASSWORD to the instance.
# Once a password is set:
#
# 1. The Gopher server (when enabled, not by default) will still serve
# content via Gopher.
# 2. However other commands cannot be called before the client
# authenticates.
#
# So use the 'requirepass' option to protect your instance.
#
# To enable Gopher support uncomment the following line and set
# the option from no (the default) to yes.
#
# gopher-enabled no
############################### ADVANCED CONFIG ###############################
# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
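# The encoding switch can be observed directly (a sketch; the key name is
# arbitrary): a small hash reports the compact encoding, while exceeding a
# threshold above, e.g. a value longer than 64 bytes, converts it to a
# regular "hashtable":
#
# redis-cli HSET myhash field short
# redis-cli OBJECT ENCODING myhash    # -> "ziplist"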
# Lists are also encoded in a special way to save a lot of space.
# The number of entries allowed per internal list node can be specified
# as a fixed maximum size or a maximum number of elements.
# For a fixed maximum size, use -5 through -1, meaning:
# -5: max size: 64 Kb <-- not recommended for normal workloads
# -4: max size: 32 Kb <-- not recommended
# -3: max size: 16 Kb <-- probably not recommended
# -2: max size: 8 Kb <-- good
# -1: max size: 4 Kb <-- good
# Positive numbers mean store up to _exactly_ that number of elements
# per list node.
# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),
# but if your use case is unique, adjust the settings as necessary.
list-max-ziplist-size -2
# Lists may also be compressed.
# Compress depth is the number of quicklist ziplist nodes from *each* side of
# the list to *exclude* from compression. The head and tail of the list
# are always uncompressed for fast push/pop operations. Settings are:
# 0: disable all list compression
# 1: depth 1 means "don't start compressing until after 1 node into the list,
# going from either the head or tail"
# So: [head]->node->node->...->node->[tail]
# [head], [tail] will always be uncompressed; inner nodes will compress.
# 2: [head]->[next]->node->node->...->node->[prev]->[tail]
# 2 here means: don't compress head or head->next or tail->prev or tail,
# but compress all nodes between them.
# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
# etc.
list-compress-depth 0
# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512
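# A sketch of the intset special case (key name arbitrary): all-integer
# members keep the compact encoding, while adding a non-integer member
# converts the set to a regular hashtable:
#
# redis-cli SADD myset 1 2 3
# redis-cli OBJECT ENCODING myset    # -> "intset"
# redis-cli SADD myset notanumber
# redis-cli OBJECT ENCODING myset    # -> "hashtable"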
# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When an HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
hll-sparse-max-bytes 3000
# Streams macro node max size / items. The stream data structure is a radix
# tree of big nodes that encode multiple items inside. Using this configuration
# it is possible to configure how big a single node can be in bytes, and the
# maximum number of items it may contain before switching to a new node when
# appending new stream entries. If any of the following settings are set to
# zero, the limit is ignored, so for instance it is possible to set just a
# max entries limit by setting max-bytes to 0 and max-entries to the desired
# value.
stream-node-max-bytes 4096
stream-node-max-entries 100
# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operations you run into a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with 2 milliseconds delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
activerehashing yes
# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients including MONITOR clients
# replica -> replica clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reach 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can be read.
#
# Instead there is a default limit for pubsub and replica clients, since
# subscribers and replicas receive data in a push fashion.
#
# Both the hard or the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
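# These limits can also be adjusted at runtime by passing the whole class
# specification as one quoted argument (a sketch; the values below simply
# restate the pubsub default):
#
# redis-cli CONFIG SET client-output-buffer-limit "pubsub 32mb 8mb 60"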
# Client query buffers accumulate new commands. They are limited to a fixed
# amount by default in order to avoid that a protocol desynchronization (for
# instance due to a bug in the client) will lead to unbound memory usage in
# the query buffer. However you can configure it here if you have very special
# needs, such as huge multi/exec requests or alike.
#
# client-query-buffer-limit 1gb
# In the Redis protocol, bulk requests, that is, elements representing single
# strings, are normally limited to 512 mb. However you can change this limit
# here, but it must be 1mb or greater.
#
# proto-max-bulk-len 512mb
# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10
# Normally it is useful to have an HZ value which is proportional to the
# number of clients connected. This is useful in order, for instance, to
# avoid processing too many clients for each background task invocation,
# which would cause latency spikes.
#
# Since the default HZ value by default is conservatively set to 10, Redis
# offers, and enables by default, the ability to use an adaptive HZ value
# which will temporarily raise when there are many connected clients.
#
# When dynamic HZ is enabled, the actual configured HZ will be used
# as a baseline, but multiples of the configured HZ value will be actually
# used as needed once more clients are connected. In this way an idle
# instance will use very little CPU time while a busy instance will be
# more responsive.
dynamic-hz yes
# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
aof-rewrite-incremental-fsync yes
# When redis saves RDB file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
rdb-save-incremental-fsync yes
# Redis LFU eviction (see maxmemory setting) can be tuned. However it is a good
# idea to start with the default settings and only change them after investigating
# how to improve the performances and how the keys LFU change over time, which
# is possible to inspect via the OBJECT FREQ command.
#
# There are two tunable parameters in the Redis LFU implementation: the
# counter logarithm factor and the counter decay time. It is important to
# understand what the two parameters mean before changing them.
#
# The LFU counter is just 8 bits per key, its maximum value is 255, so Redis
# uses a probabilistic increment with logarithmic behavior. Given the value
# of the old counter, when a key is accessed, the counter is incremented in
# this way:
#
# 1. A random number R between 0 and 1 is extracted.
# 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).
# 3. The counter is incremented only if R < P.
#
# The default lfu-log-factor is 10. This is a table of how the frequency
# counter changes with a different number of accesses with different
# logarithmic factors:
#
# +--------+------------+------------+------------+------------+------------+
# | factor | 100 hits | 1000 hits | 100K hits | 1M hits | 10M hits |
# +--------+------------+------------+------------+------------+------------+
# | 0 | 104 | 255 | 255 | 255 | 255 |
# +--------+------------+------------+------------+------------+------------+
# | 1 | 18 | 49 | 255 | 255 | 255 |
# +--------+------------+------------+------------+------------+------------+
# | 10 | 10 | 18 | 142 | 255 | 255 |
# +--------+------------+------------+------------+------------+------------+
# | 100 | 8 | 11 | 49 | 143 | 255 |
# +--------+------------+------------+------------+------------+------------+
#
# NOTE: The above table was obtained by running the following commands:
#
# redis-benchmark -n 1000000 incr foo
# redis-cli object freq foo
#
# NOTE 2: The counter initial value is 5 in order to give new objects a chance
# to accumulate hits.
#
# The counter decay time is the time, in minutes, that must elapse in order
# for the key counter to be divided by two (or decremented if it has a value
# <= 10).
#
# The default value for the lfu-decay-time is 1. A special value of 0 means to
# decay the counter every time it happens to be scanned.
#
# lfu-log-factor 10
# lfu-decay-time 1
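# A worked example of the increment probability with the default factor 10:
# for a key whose counter is already 100, P = 1/(100*10+1) ~= 0.001, so on
# average roughly one access in a thousand bumps the counter. To inspect
# counters, an LFU maxmemory policy must be active (a sketch; "foo" is an
# arbitrary key):
#
# redis-cli CONFIG SET maxmemory-policy allkeys-lfu
# redis-cli OBJECT FREQ foo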
########################### ACTIVE DEFRAGMENTATION #######################
#
# What is active defragmentation?
# -------------------------------
#
# Active (online) defragmentation allows a Redis server to compact the
# spaces left between small allocations and deallocations of data in memory,
# thus making it possible to reclaim memory.
#
# Fragmentation is a natural process that happens with every allocator (but
# less so with Jemalloc, fortunately) and certain workloads. Normally a server
# restart is needed in order to lower the fragmentation, or at least to flush
# away all the data and create it again. However thanks to this feature
# implemented by Oran Agra for Redis 4.0 this process can happen at runtime
# in a "hot" way, while the server is running.
#
# Basically when the fragmentation is over a certain level (see the
# configuration options below) Redis will start to create new copies of the
# values in contiguous memory regions by exploiting certain specific Jemalloc
# features (in order to understand if an allocation is causing fragmentation
# and to allocate it in a better place), and at the same time, will release the
# old copies of the data. This process, repeated incrementally for all the keys
# will cause the fragmentation to drop back to normal values.
#
# Important things to understand:
#
# 1. This feature is disabled by default, and only works if you compiled Redis
# to use the copy of Jemalloc we ship with the source code of Redis.
# This is the default with Linux builds.
#
# 2. You never need to enable this feature if you don't have fragmentation
# issues.
#
# 3. Once you experience fragmentation, you can enable this feature when
# needed with the command "CONFIG SET activedefrag yes".
#
# The configuration parameters are able to fine tune the behavior of the
# defragmentation process. If you are not sure about what they mean it is
# a good idea to leave the defaults untouched.
# Enable active defragmentation
# activedefrag no
# Minimum amount of fragmentation waste to start active defrag
# active-defrag-ignore-bytes 100mb
# Minimum percentage of fragmentation to start active defrag
# active-defrag-threshold-lower 10
# Maximum percentage of fragmentation at which we use maximum effort
# active-defrag-threshold-upper 100
# Minimal effort for defrag in CPU percentage, to be used when the lower
# threshold is reached
# active-defrag-cycle-min 1
# Maximal effort for defrag in CPU percentage, to be used when the upper
# threshold is reached
# active-defrag-cycle-max 25
# Maximum number of set/hash/zset/list fields that will be processed from
# the main dictionary scan
# active-defrag-max-scan-fields 1000
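# A runtime sketch for when fragmentation does show up (the jemalloc build
# requirement described above still applies):
#
# redis-cli INFO memory | grep mem_fragmentation_ratio   # check first
# redis-cli CONFIG SET activedefrag yes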
# Jemalloc background thread for purging will be enabled by default
jemalloc-bg-thread yes
# It is possible to pin different threads and processes of Redis to specific
# CPUs in your system, in order to maximize the performances of the server.
# This is useful both in order to pin different Redis threads in different
# CPUs, but also in order to make sure that multiple Redis instances running
# in the same host will be pinned to different CPUs.
#
# Normally you can do this using the "taskset" command, however it is also
# possible to do this via Redis configuration directly, both in Linux and FreeBSD.
#
# You can pin the server/IO threads, bio threads, aof rewrite child process, and
# the bgsave child process. The syntax to specify the cpu list is the same as
# the taskset command:
#
# Set redis server/io threads to cpu affinity 0,2,4,6:
# server_cpulist 0-7:2
#
# Set bio threads to cpu affinity 1,3:
# bio_cpulist 1,3
#
# Set aof rewrite child process to cpu affinity 8,9,10,11:
# aof_rewrite_cpulist 8-11
#
# Set bgsave child process to cpu affinity 1,10,11
# bgsave_cpulist 1,10-11