Reposts of this article must credit the original source. Please respect the author's work: http://www.cnblogs.com/lyongerr/p/5040071.html
1 Introduction to mcrouter
mcrouter is a memcached protocol router that Facebook uses to manage traffic to, from, and between thousands of servers across dozens of clusters in its data centers around the world. It operates at massive scale: at peak, mcrouter handles close to 5 billion requests per second.
2 mcrouter features
- Memcached ASCII protocol
- Connection pooling
- Multiple hashing schemes
- Prefix routing
- Replicated pools
- Production traffic shadowing
- Online reconfiguration
- Flexible routing
- Destination health monitoring/automatic failover
- Cold cache warm up
- Broadcast operations
- Reliable delete stream
- Multi-cluster support
- Rich stats and debug commands
- Quality of service
- Large values
- Multi-level caches
- IPv6 support
- SSL support
3 Building mcrouter
3.1 Build environment
IP             | Role                   | Notes
192.168.75.130 | mcrouter build machine | custom OS / VM
3.2 Configure the EPEL repository
yum -y install epel-release
Run yum list to check that the repository works; if it fails, comment out the mirrorlist line and uncomment the baseurl line in the repo file. The EPEL repository is needed because many of the packages installed in section 3.3 depend on it.
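A minimal sketch of that repo-file edit, assuming the stock /etc/yum.repos.d/epel.repo layout where the baseurl lines are commented out and the mirrorlist lines are active; only do this if yum list actually fails:
sed -i 's/^#baseurl=/baseurl=/; s/^mirrorlist=/#mirrorlist=/' /etc/yum.repos.d/epel.repo
yum clean all && yum makecache   # rebuild the metadata cache and retry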
3.3 Install build dependencies
yum -y install bzip2-devel libevent-devel libcap-devel scons \
jemalloc-devel gmp-devel mpfr-devel libmpc-devel wget \
python-devel rpm-build \
m4 cmake libicu-devel chrpath openmpi-devel \
mpich-devel openssl-devel \
glibc-devel.i686 glibc-devel.x86_64 gcc gcc-c++ zlib-devel \
gmp-devel mpfr-devel libmpc-devel \
gflags-devel git bzip2 \
unzip libtool bison flex snappy-devel \
numactl-devel cyrus-sasl-devel
3.4 Build gcc 4.9
mcrouter must be built with gcc 4.8 or later: folly uses C++11 facilities such as chrono, and only gcc 4.8+ fully supports the C++11 features and standard library components involved. Version 4.9 is used here. Building gcc 4.8+ requires gmp, mpfr, and mpc, so those are built first.
Note: over time, the download URLs used with wget in the steps below may start returning "not found"; that is expected. If it happens, fetch the same version from another official mirror. As of this writing, every package in this article can still be downloaded from the links given.
3.4.1 Build gmp
cd /opt && wget https://gmplib.org/download/gmp/gmp-5.1.3.tar.bz2
tar jxf gmp-5.1.3.tar.bz2 && cd gmp-5.1.3/
./configure --prefix=/usr/local/gmp
make && make install
3.4.2 Build mpfr
cd /opt && wget http://www.mpfr.org/mpfr-3.1.2/mpfr-3.1.2.tar.bz2
tar jxf mpfr-3.1.2.tar.bz2 ;cd mpfr-3.1.2/
./configure --prefix=/usr/local/mpfr --with-gmp=/usr/local/gmp
make && make install
3.4.3 Build mpc
cd /opt && wget http://ftp.gnu.org/gnu/mpc/mpc-1.0.1.tar.gz
tar xzf mpc-1.0.1.tar.gz ;cd mpc-1.0.1
./configure --prefix=/usr/local/mpc --with-mpfr=/usr/local/mpfr --with-gmp=/usr/local/gmp
make && make install
3.4.4 Build gcc-4.9.1
cd /opt && wget http://ftp.gnu.org/gnu/gcc/gcc-4.9.1/gcc-4.9.1.tar.bz2
tar jxf gcc-4.9.1.tar.bz2 ;cd gcc-4.9.1
./configure --prefix=/usr/local/gcc --enable-threads=posix --disable-checking --disable-multilib --enable-languages=c,c++ --with-gmp=/usr/local/gmp --with-mpfr=/usr/local/mpfr/ --with-mpc=/usr/local/mpc/
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/mpc/lib:/usr/local/gmp/lib:/usr/local/mpfr/lib/
make && make install
After gcc 4.9 is built, the relevant environment variables and libraries must be set up before it can be used; otherwise the later folly and boost builds will fail.
3.4.4.1 Configure the gcc environment
echo "/usr/local/gcc/lib/" >> /etc/ld.so.conf.d/gcc-4.9.1.conf
echo "/usr/local/mpc/lib/" >> /etc/ld.so.conf.d/gcc-4.9.1.conf
echo "/usr/local/gmp/lib/" >> /etc/ld.so.conf.d/gcc-4.9.1.conf
echo "/usr/local/mpfr/lib/" >> /etc/ld.so.conf.d/gcc-4.9.1.conf
ldconfig
mv /usr/bin/gcc /usr/bin/gcc_old
mv /usr/bin/g++ /usr/bin/g++_old
mv /usr/bin/c++ /usr/bin/c++_old
ln -s -f /usr/local/gcc/bin/gcc /usr/bin/gcc
ln -s -f /usr/local/gcc/bin/g++ /usr/bin/g++
ln -s -f /usr/local/gcc/bin/c++ /usr/bin/c++
cp /usr/local/gcc/lib64/libstdc++.so.6.0.20 /usr/lib64/.
mv /usr/lib64/libstdc++.so.6 /usr/lib64/libstdc++.so.6.bak
ln -s -f /usr/lib64/libstdc++.so.6.0.20 /usr/lib64/libstdc++.so.6
If gcc -v and g++ --version report 4.9.1, the switch succeeded. Do not continue until this is the case, because the folly and boost builds below all depend on this gcc environment.
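A quick verification sketch; the GLIBCXX check assumes the libstdc++.so.6.0.20 copied above (the version shipped with gcc 4.9):
gcc -v 2>&1 | tail -n 1                                  # expect "gcc version 4.9.1"
g++ --version | head -n 1
strings /usr/lib64/libstdc++.so.6 | grep GLIBCXX_3.4.20  # only the new libstdc++ exports this symbol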
3.5 Build cmake
cd /opt && wget http://www.cmake.org/files/v2.8/cmake-2.8.12.2.tar.gz
tar xvf cmake-2.8.12.2.tar.gz && cd cmake-2.8.12.2
./configure && make && make install
3.6 Build autoconf
cd /opt && wget http://ftp.gnu.org/gnu/autoconf/autoconf-2.69.tar.gz
tar xvf autoconf-2.69.tar.gz && cd autoconf-2.69
./configure && make && make install
3.7 Build glog
cd /opt && wget https://google-glog.googlecode.com/files/glog-0.3.3.tar.gz
tar xvf glog-0.3.3.tar.gz && cd glog-0.3.3
./configure && make && make install
3.8 Build ragel
ragel depends on colm and kelbt, so build those first. Building ragel without them will not produce an error, but pieces will be missing.
3.8.1 Build colm
cd /opt && wget http://www.colm.net/files/colm/colm-0.13.0.2.tar.gz
tar xvf colm-0.13.0.2.tar.gz && cd colm-0.13.0.2
./configure && make && make install
cd /opt
3.8.2 Build kelbt
cd /opt && wget http://www.colm.net/files/kelbt/kelbt-0.16.tar.gz
tar xvf kelbt-0.16.tar.gz && cd kelbt-0.16
./configure && make && make install
3.8.3 Build ragel
cd /opt && wget http://www.colm.net/files/ragel/ragel-6.9.tar.gz
tar xvf ragel-6.9.tar.gz && cd ragel-6.9
./configure --prefix=/usr --disable-manual && make && make install
3.9 Build Boost
Boost must be 1.51 or later; boost_1_56_0 is used here. The custom OS ships Python 2.6, while boost 1.56 needs Python 2.7+. Building boost against Python 2.6 will still succeed, but the later folly build will then fail, so run all of the remaining build steps under Python 2.7+.
3.9.1 Install Python 2.7
yum -y install centos-release-SCL
yum -y install python27
scl enable python27 "easy_install pip"
scl enable python27 bash
python --version
3.9.2 Build boost
cd /opt && wget http://downloads.sourceforge.net/boost/boost_1_56_0.tar.bz2
tar jxf boost_1_56_0.tar.bz2 && cd boost_1_56_0
./bootstrap.sh --prefix=/usr && ./b2 stage threading=multi link=shared
./b2 install threading=multi link=shared
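A small sanity check that the intended Boost landed under /usr (a sketch; paths assume the --prefix=/usr used above):
grep "BOOST_LIB_VERSION" /usr/include/boost/version.hpp   # expect "1_56"
ls /usr/lib/libboost_thread*                              # the library folly links against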
3.10 Build folly (Facebook Open-source Library)
Folly is an open-source C++ library developed and used at Facebook. Folly uses double-conversion, so build that first.
3.10.1 Build double-conversion
rpm -Uvh http://sourceforge.net/projects/scons/files/scons/2.3.3/scons-2.3.3-1.noarch.rpm
cd /opt && git clone https://code.google.com/p/double-conversion/
cd double-conversion && scons install
cd /opt/ && git clone https://github.com/genx7up/folly.git
cp folly/folly/SConstruct.double-conversion /opt/double-conversion/
cd double-conversion && scons -f SConstruct.double-conversion
ln -sf src double-conversion
ldconfig
rm -rf /opt/folly
3.10.2 Build folly
cd /opt
git clone https://github.com/facebook/folly
cd /opt/folly/folly/
export LD_LIBRARY_PATH="/opt/folly/folly/lib:$LD_LIBRARY_PATH"
export LD_RUN_PATH="/opt/folly/folly/lib"
export LDFLAGS="-L/opt/folly/folly/lib -L/opt/double-conversion -L/usr/local/lib -ldl"
export CPPFLAGS="-I/opt/folly/folly/include -I/opt/double-conversion"
autoreconf -ivf
./configure --with-boost-libdir=/usr/lib/
make && make install
During make, folly is expected to sit at output like the line below for quite a while before finishing. In an earlier attempt folly compiled very quickly with no errors, but the final mcrouter build then failed.
libtool: compile: g++ -DHAVE_CONFIG_H -I./.. -pthread -I/usr/include -std=gnu++0x -g -O2 -MT futures/Future.lo -MD -MP -MF futures/.deps/Future.Tpo -c futures/Future.cpp -o futures/Future.o >/dev/null 2>&1
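Once make install finishes, a quick check that the library actually landed in the default prefix (a sketch; assumes folly installed under /usr/local):
ls /usr/local/lib/libfolly*          # shared and static folly libraries
ls /usr/local/include/folly | head   # installed headers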
3.10.3 Unpack gtest for the folly tests
cd /opt/folly/folly/test
wget https://googletest.googlecode.com/files/gtest-1.7.0.zip
unzip gtest-1.7.0.zip
3.11 Build mcrouter
3.11.1 Prepare the Thrift library
cd /opt && git clone https://github.com/facebook/fbthrift.git
cd fbthrift/thrift
ln -sf thrifty.h "/opt/fbthrift/thrift/compiler/thrifty.hh"
export LD_LIBRARY_PATH="/opt/fbthrift/thrift/lib:$LD_LIBRARY_PATH"
export LD_RUN_PATH="/opt/fbthrift/thrift/lib"
export LDFLAGS="-L/opt/fbthrift/thrift/lib -L/usr/local/lib"
export CPPFLAGS="-I/opt/fbthrift/thrift/include -I/opt/fbthrift/thrift/include/python2.7 -I/opt/folly -I/opt/double-conversion"
echo "/usr/local/lib/" >> /etc/ld.so.conf.d/gcc-4.9.1.conf && ldconfig
3.11.2 Build mcrouter
Before starting this step, make sure every build step above completed without any errors; otherwise the mcrouter build will fail.
cd /opt && git clone https://github.com/facebook/mcrouter.git
cd mcrouter/mcrouter
export LD_LIBRARY_PATH="/opt/mcrouter/mcrouter/lib:$LD_LIBRARY_PATH"
export LD_RUN_PATH="/opt/folly/folly/test/.libs:/opt/mcrouter/mcrouter/lib"
export LDFLAGS="-L/opt/mcrouter/mcrouter/lib -L/usr/local/lib -L/opt/folly/folly/test/.libs"
export CPPFLAGS="-I/opt/folly/folly/test/gtest-1.7.0/include -I/opt/mcrouter/mcrouter/include -I/opt/folly -I/opt/double-conversion -I/opt/fbthrift -I/opt/boost_1_56_0"
export CXXFLAGS="-fpermissive"
autoreconf --install && ./configure --with-boost-libdir=/usr/lib/
make && make install
mcrouter --help
Note: the mcrouter make is on track only if it produces output like the following, and it will sit at this point for a while.
g++ -DHAVE_CONFIG_H -I.. -I/opt/mcrouter/install/include -DLIBMC_FBTRACE_DISABLE -Wno-missing-field-initializers -Wno-deprecated -W -Wall -Wextra -Wno-unused-parameter -fno-strict-aliasing -g -O2 -std=gnu++1y -MT mcrouter-server.o -MD -MP -MF .deps/mcrouter-server.Tpo -c -o mcrouter-server.o `test -f 'server.cpp' || echo './'`server.cpp
The above are the complete steps for a reasonably smooth build. mcrouter --help verifies that the build succeeded; if it failed, see the common-errors section at the end.
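As an end-to-end smoke test, the sketch below starts mcrouter against a single memcached instance and pushes one key through it. The minimal config and the 127.0.0.1:11211 backend are assumptions; adjust them to whatever instance you have running.
cat > /tmp/smoke.json <<'EOF'
{
  "pools": { "A": { "servers": [ "127.0.0.1:11211" ] } },
  "route": "PoolRoute|A"
}
EOF
mcrouter -p 5000 -f /tmp/smoke.json &
sleep 1
printf 'set smoke 0 0 2\r\nok\r\nget smoke\r\nquit\r\n' | nc 127.0.0.1 5000   # expect STORED and VALUE smoke 0 2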
4 SSL communication between mcrouter and memcached
Installing and operating stunnel itself is out of scope for this article.
IP             | Role                      | Notes
192.168.75.130 | mcrouter / stunnel client | custom OS / VM
192.168.75.131 | stunnel server            |
4.2 mcrouter talking to memcached directly
4.2.1 Start a memcached test instance
On the stunnel server, start one instance listening on port 11211 (a quick connectivity check follows the option list below).
sh memcached_stop
/usr/local/mcc/bin/memcached -d -m 128 -c 4096 -p 11211 -u www -t 10 -l 192.168.75.131
- -l <ip_addr>: address to listen on
- -d: run as a daemon
- -u <username>: run the memcached process as the specified user
- -m <num>: maximum memory for cached data, in MB (default 64 MB)
- -c <num>: maximum number of concurrent connections (default 1024)
- -p <num>: TCP port to listen on (default 11211)
- -U <num>: UDP port to listen on (default 11211; 0 disables UDP)
- -t <threads>: maximum number of threads for handling incoming requests (only effective if memcached was built with thread support)
- -f <num>: growth factor used by the slab allocator when pre-allocating its fixed-size chunks
- -M: return an error when memory is exhausted instead of reclaiming space with the LRU
- -n: minimum slab chunk size, in bytes
- -S: enable SASL authentication
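A quick connectivity check from the mcrouter machine before involving mcrouter at all (a sketch; exact behaviour depends on your netcat variant):
echo stats | nc 192.168.75.131 11211 | grep -E 'STAT (version|curr_connections)'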
4.2.2 Configure mcrouter
On the stunnel client, create mcrouter's configuration file config.json.
cat config.json
{
"pools": {
"A": {
"servers": [
// hosts of replicated pool, e.g.:
"192.168.75.131:11211",
]
}
},
"route": {
"type": "PrefixPolicyRoute",
"operation_policies": {
"delete": "AllSyncRoute|Pool|A",
"add": "AllSyncRoute|Pool|A",
"get": "LatestRoute|Pool|A",
"set": "AllSyncRoute|Pool|A"
}
}
}
192.168.75.131:11211 is the IP and port that the memcached node listens on; mcrouter connects to it once started.
Note: the firewalls on the mcrouter and memcached node machines must whitelist each other.
4.2.3 Start mcrouter
Start mcrouter listening on port 1919.
mcrouter -p 1919 -f config.json &
4.2.4 Start tcpdump on the stunnel server
tcpdump -i eth1 -nn -A -s 0 -w /home/open/stunnel_test.pcap port 11211
- -i: network interface to capture on.
- -nn: show addresses and ports numerically rather than resolving host and service names.
- -w: write the raw packets to a file instead of parsing and printing them.
- -A: print each packet in ASCII and minimize the link-layer headers.
- -s: capture snaplen bytes from each packet instead of the default 68; -s 0 means no limit, i.e. capture entire packets.
4.2.5 Write test data
Write test data through mcrouter:
telnet 127.0.0.1 1919
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
set testkey1 0 0 3
liu
STORED
set testkey2 0 0 4
yong
STORED
set testkey3 0 0 5
43999
STORED
4.2.6 Packet capture analysis on the stunnel server
The info field of the capture shows the payload in clear text (capture screenshot not reproduced here).
This shows that without stunnel encryption, the traffic between mcrouter and memcached is transmitted in plain text.
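The same conclusion can be reached offline by reading the capture back (a sketch; stop tcpdump with Ctrl-C first):
tcpdump -r /home/open/stunnel_test.pcap -A | grep -E 'testkey|STORED'   # the keys and values are visible in clear text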
4.3 mcrouter talking to memcached through stunnel
4.3.1 Stop memcached
Run on both machines:
sh /root/memcached_stop
This avoids a port conflict when the stunnel server is started.
4.3.2 Start the memcached test instance
On the stunnel server:
/usr/local/mcc/bin/memcached -d -m 128 -c 4096 -p 11211 -u www -t 10 -l 127.0.0.1
4.3.3 Configure stunnel
4.3.3.1 server
cat /usr/local/stunnel/etc/stunnel/stunnel.conf
sslVersion = TLSv1
CAfile = /usr/local/stunnel/etc/stunnel/stunnel.pem
verify = 2
cert = /usr/local/stunnel/etc/stunnel/stunnel.pem
pid = /var/run/stunnel/stunnel.pid
socket = l:TCP_NODELAY=1
socket = r:TCP_NODELAY=1
debug = 7
output = /data/logs/stunnel.log
setuid = root
setgid = root
[memcached]
accept = 192.168.75.131:11211
connect = 127.0.0.1:11211
- accept = 192.168.75.131:11211 is the address and port the stunnel server listens on
- connect = 127.0.0.1:11211 is where stunnel forwards the traffic after decrypting it
4.3.3.2 client
cat /usr/local/stunnel/etc/stunnel/stunnel.conf
cert = /usr/local/stunnel/etc/stunnel/stunnel.pem
socket = l:TCP_NODELAY=1
socket = r:TCP_NODELAY=1
verify = 2
CAfile = /usr/local/stunnel/etc/stunnel/stunnel.pem
client = yes
delay = no
sslVersion = TLSv1
output = /data/logs/stunnel.log
[memcached]
accept = 127.0.0.1:11211
connect = 192.168.75.131:11211
4.3.4 Start stunnel
Run on both the server and the client:
/usr/local/stunnel/sbin/stunnel
During startup, the stunnel server logs the following, showing that the certificate and private key were loaded successfully.
2015.11.01 16:35:22 LOG7[21016:140179061770176]: Snagged 64 random bytes from /root/.rnd
2015.11.01 16:35:22 LOG7[21016:140179061770176]: Wrote 1024 new random bytes to /root/.rnd
2015.11.01 16:35:22 LOG7[21016:140179061770176]: RAND_status claims sufficient entropy for the PRNG
2015.11.01 16:35:22 LOG7[21016:140179061770176]: PRNG seeded successfully
2015.11.01 16:35:22 LOG4[21016:140179061770176]: Wrong permissions on /usr/local/stunnel/etc/stunnel/stunnel.pem
2015.11.01 16:35:22 LOG7[21016:140179061770176]: Certificate: /usr/local/stunnel/etc/stunnel/stunnel.pem
2015.11.01 16:35:22 LOG7[21016:140179061770176]: Certificate loaded
2015.11.01 16:35:22 LOG7[21016:140179061770176]: Key file: /usr/local/stunnel/etc/stunnel/stunnel.pem
2015.11.01 16:35:22 LOG7[21016:140179061770176]: Private key loaded
2015.11.01 16:35:22 LOG7[21016:140179061770176]: Loaded verify certificates from /usr/local/stunnel/etc/stunnel/stunnel.pem
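Before wiring mcrouter in, it is worth confirming that both tunnel endpoints are actually listening; a sketch (uses ss; substitute netstat -lnt on older systems):
# on the stunnel server: expect 192.168.75.131:11211 (stunnel) and 127.0.0.1:11211 (memcached)
ss -lnt | grep 11211
# on the stunnel client: expect 127.0.0.1:11211 (the local end of the tunnel)
ss -lnt | grep 11211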
4.3.5 Update the mcrouter configuration file
cat config.json
{
"pools": {
"A": {
"servers": [
// hosts of replicated pool, e.g.:
"127.0.0.1:11211",
]
}
},
"route": {
"type": "PrefixPolicyRoute",
"operation_policies": {
"delete": "AllSyncRoute|Pool|A",
"add": "AllSyncRoute|Pool|A",
"get": "LatestRoute|Pool|A",
"set": "AllSyncRoute|Pool|A"
}
}
}
127.0.0.1:11211 is the address the stunnel client listens on.
4.3.6 Start mcrouter
mcrouter -p 1919 -f config.json &
4.3.7 Start tcpdump on the stunnel server
tcpdump -i eth1 -nn -A -s 0 -w /home/open/mcrouter1.pcap port 11211
-i must name the interface the stunnel server listens on, not the one memcached listens on, because by the time the traffic reaches memcached, stunnel has already decrypted it.
4.3.8 Write test data
Write test data through mcrouter:
telnet 127.0.0.1 1919
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
set testkey4 0 0 4
hell
STORED
set testkey5 0 0 12
hello world!
STORED
set testkey6 0 0 3
liu
STORED
Read the data back on the stunnel server to verify:
telnet 127.0.0.1 11211
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
get testkey4
VALUE testkey4 0 4
hell
END
get testkey5
VALUE testkey5 0 12
hello world!
END
get testkey6
VALUE testkey6 0 3
liu
END
This confirms that the data written via the stunnel client reached memcached.
4.3.9 Packet capture analysis on the stunnel server
The info field no longer shows any clear text; the last four packets were examined (capture screenshot not reproduced here).
So the traffic from mcrouter to the stunnel server is encrypted and cannot be recovered from a packet capture.
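A quick offline confirmation of the same point (a sketch):
tcpdump -r /home/open/mcrouter1.pcap -A | grep -c testkey   # expect 0: no key names survive in the TLS stream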
5 Distributed tests with mcrouter
5.1 Concepts
- Pools: Destination hosts are grouped into "pools". A pool is a basic building block of a routing config. At a minimum, a pool consists of an ordered list of destination hosts and a hash function.
- Key: A memcached key is typically a short (mcrouter limit is 250 characters) ASCII string which does not contain any whitespace or control characters.
- Route handles: Routes are composed of blocks called "route handles". Each route handle encapsulates some piece of routing logic, such as "send a request to a single destination host" or "provide failover."
- Plain sharding: sharding without redundancy, i.e. data is spread across different memcached instances and no two instances hold the same data.
- Highly available sharding: data is spread across different memcached instances, and every instance has a redundant memcached mirroring it, i.e. sharding with redundancy plus high availability.
5.2 Test environment
IP             | Role                                               | Notes
192.168.75.130 | mcrouter test machine, memcached "localhost" pool  | custom OS / VM
192.168.75.131 | memcached "backup" pool                            |
5.3 Plain sharding with mcrouter
5.3.1 Route handles used
5.3.1.1 RandomRoute
- Definition: Routes to one random destination from list of children.
- Properties: children.
5.3.2 mcrouter configuration
cat config.json
{
"pools": {
"backup": { "servers": [
"192.168.75.131:11210",
"192.168.75.131:11211",
"192.168.75.131:11212",
] },
"localhost": { "servers": [
"127.0.0.1:11210",
"127.0.0.1:11211",
"127.0.0.1:11212",
] }
},
"route": {
"type": "RandomRoute",
"children" : [ "PoolRoute|localhost", "PoolRoute|backup" ]
}
}
The configuration above defines two memcached pools, backup and localhost, each with three memcached instances. The RandomRoute keyword selects the routing scheme, i.e. the route handle. RandomRoute matches the plain-sharding requirement, so it is used here.
5.3.3 Start memcached test instances
On 192.168.75.130:
sh memcached_stop
/usr/local/mcc/bin/memcached -d -m 128 -c 4096 -p 11210 -u www -t 10 -l 127.0.0.1 -vv >> /tmp/memcached_11210.log 2>&1
/usr/local/mcc/bin/memcached -d -m 2048 -c 4096 -p 11211 -u www -t 10 -l 127.0.0.1 -vv >> /tmp/memcached_11211.log 2>&1
/usr/local/mcc/bin/memcached -d -m 64 -c 4096 -p 11212 -u www -t 10 -l 127.0.0.1 -vv >> /tmp/memcached_11212.log 2>&1
On 192.168.75.131:
sh memcached_stop
/usr/local/mcc/bin/memcached -d -m 128 -c 4096 -p 11210 -u www -t 10 -l 192.168.75.131 -vv >> /tmp/memcached_11210.log 2>&1
/usr/local/mcc/bin/memcached -d -m 2048 -c 4096 -p 11211 -u www -t 10 -l 192.168.75.131 -vv >> /tmp/memcached_11211.log 2>&1
/usr/local/mcc/bin/memcached -d -m 64 -c 4096 -p 11212 -u www -t 10 -l 192.168.75.131 -vv >> /tmp/memcached_11212.log 2>&1
- -l <ip_addr>: address to listen on
- -d: run as a daemon
- -u <username>: run the memcached process as the specified user
- -m <num>: maximum memory for cached data, in MB (default 64 MB)
- -c <num>: maximum number of concurrent connections (default 1024)
- -p <num>: TCP port to listen on (default 11211)
- -t <threads>: maximum number of threads for handling incoming requests (only effective if memcached was built with thread support)
- -v: print ordinary errors and warnings
- -vv: more verbose than -v; also logs client commands and server responses
- -vvv: the most verbose level, including internal state output
-vv is used here simply to make the test output easy to inspect.
5.3.4 Start mcrouter
On 192.168.75.130:
mcrouter -p 1919 -f /data/backup/config.json
No trailing & here, so the debug output stays in the foreground; "mcrouter output" below refers to this debug output.
5.3.5 Test procedure
5.3.5.1 A small write tool
cat setkey.sh
#!/bin/bash
sum=0
num=$1
for i in `seq 1 $num`
do
echo -e "set key${i} 0 0 4\ntest" | nc 127.0.0.1 1919
sum=$((sum+1))
done
echo
echo "total writes: ${sum}"
This script writes a batch of test entries into mcrouter in one go.
5.3.5.2 A small read tool
cat getkey.sh
#!/bin/bash
key=$1
for port in 11210 11211 11212
do
echo "Port $port values:"
echo "get $1" | nc 192.168.75.131 $port
echo
done
This script makes it easy to read a key back from each instance.
5.3.5.3 Write test data
Write 100,000 entries (routed randomly), executed on 192.168.75.130:
sh setkey.sh 100000
While the data is being written, mcrouter prints output like the following:
I1113 11:47:54.126411 117652 ProxyDestination.cpp:359] server 192.168.75.131:11212:TCP:ascii-1000 up (1 of 6)
I1113 11:47:54.130604 117652 ProxyDestination.cpp:359] server 192.168.75.131:11210:TCP:ascii-1000 up (2 of 6)
I1113 11:47:54.134222 117652 ProxyDestination.cpp:359] server 127.0.0.1:11211:TCP:ascii-1000 up (3 of 6)
I1113 11:47:54.137917 117652 ProxyDestination.cpp:359] server 127.0.0.1:11212:TCP:ascii-1000 up (4 of 6)
I1113 11:47:54.146790 117652 ProxyDestination.cpp:359] server 127.0.0.1:11210:TCP:ascii-1000 up (5 of 6)
I1113 11:47:54.151669 117652 ProxyDestination.cpp:359] server 192.168.75.131:11211:TCP:ascii-1000 up (6 of 6)
I1113 11:51:49.416658 117652 ProxyDestination.cpp:359] server 127.0.0.1:11212:TCP:ascii-1000 closed (5 of 6)
I1113 11:51:49.416856 117652 ProxyDestination.cpp:359] server 192.168.75.131:11210:TCP:ascii-1000 closed (4 of 6)
I1113 11:51:49.416931 117652 ProxyDestination.cpp:359] server 127.0.0.1:11211:TCP:ascii-1000 closed (3 of 6)
I1113 11:51:49.417023 117652 ProxyDestination.cpp:359] server 192.168.75.131:11212:TCP:ascii-1000 closed (2 of 6)
I1113 11:51:49.417177 117652 ProxyDestination.cpp:359] server 192.168.75.131:11211:TCP:ascii-1000 closed (1 of 6)
I1113 11:51:49.417248 117652 ProxyDestination.cpp:359] server 127.0.0.1:11210:TCP:ascii-1000 closed (0 of 6)
The output shows which instances mcrouter connected to and how many memcached instances it is connected to at that moment. For example, the line "server 192.168.75.131:11211 ... up (6 of 6)" means that connection brought mcrouter up to six connected memcached instances, and the final line "server 127.0.0.1:11210 ... closed (0 of 6)" means the last connection was torn down, leaving mcrouter connected to none.
5.3.6 Data analysis
memcached instance   | cmd_set count | bytes_written
127.0.0.1:11210      | 16881         | 135048
127.0.0.1:11211      | 16569         | 132552
127.0.0.1:11212      | 16561         | 132488
192.168.75.131:11210 | 16824         | 134592
192.168.75.131:11211 | 16604         | 132832
192.168.75.131:11212 | 16561         | 132488
The cmd_set counts total 100,000, as expected. The 100,000 entries written through mcrouter are spread almost evenly across the memcached instances, which matches plain sharding. For a closer look, inspect the log of the corresponding memcached instance.
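For reference, numbers like the ones in the table can be pulled straight from each instance; a sketch for the localhost pool (run the same loop against 192.168.75.131 for the backup pool):
for port in 11210 11211 11212; do
  echo "== 127.0.0.1:$port =="
  echo stats | nc 127.0.0.1 $port | grep -E 'STAT (cmd_set|bytes_written) '
done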
5.4 Highly available sharding with mcrouter
5.4.1 Route handles used
5.4.1.1 OperationSelectorRoute
- Definition: Sends to different targets based on specified operations.
- Properties: default_policy, operation_policies.
5.4.1.2 WarmUpRoute
- Definition: All sets and deletes go to the target ("cold") route handle. Gets are attempted on the "cold" route handle and, in case of a miss, data is fetched from the "warm" route handle (where the request is likely to result in a cache hit). If "warm" returns a hit, the response is forwarded to the client and an asynchronous request, with the configured expiration time, updates the value in the "cold" route handle.
- Properties: cold, warm, exptime.
5.4.1.3 AllSyncRoute
- Definition: Immediately sends the same request to all child route handles. Collects all replies and responds with the "worst" reply (i.e., the error reply, if any).
- Properties: children.
5.4.2 mcrouter configuration
cat config.json
{
"pools": {
"backup": { "servers": [
"192.168.75.131:11210",
"192.168.75.131:11211",
"192.168.75.131:11212",
] },
"localhost": { "servers": [
"127.0.0.1:11210",
"127.0.0.1:11211",
"127.0.0.1:11212",
] }
},
"route": {
"type": "OperationSelectorRoute",
"operation_policies": {
"get": {
"type": "WarmUpRoute",
"cold": "PoolRoute|localhost",
"warm": "PoolRoute|backup",
"exptime": 0
}
},
"default_policy": {
"type": "AllSyncRoute",
"children": [
"PoolRoute|localhost",
"PoolRoute|backup"
]
}
}
}
This configuration again defines two pools, backup and localhost, each with three memcached instances. The OperationSelectorRoute handle says that a get is first attempted against the localhost pool; on a miss it goes to the backup pool, and if the backup pool returns a hit, the value is also written back into the localhost pool. exptime is the expiration time used for that write-back; 0 means never expire. default_policy covers every other operation (set, add, delete) and routes them through AllSyncRoute to both the localhost and backup pools at once, so the two pools end up mirroring each other. The tests below verify this.
5.4.3 Start memcached test instances
On 192.168.75.130:
sh memcached_stop
/usr/local/mcc/bin/memcached -d -m 128 -c 4096 -p 11210 -u www -t 10 -l 127.0.0.1 -vv >> /tmp/memcached_11210.log 2>&1
/usr/local/mcc/bin/memcached -d -m 2048 -c 4096 -p 11211 -u www -t 10 -l 127.0.0.1 -vv >> /tmp/memcached_11211.log 2>&1
/usr/local/mcc/bin/memcached -d -m 64 -c 4096 -p 11212 -u www -t 10 -l 127.0.0.1 -vv >> /tmp/memcached_11212.log 2>&1
On 192.168.75.131:
sh memcached_stop
/usr/local/mcc/bin/memcached -d -m 128 -c 4096 -p 11210 -u www -t 10 -l 192.168.75.131 -vv >> /tmp/memcached_11210.log 2>&1
/usr/local/mcc/bin/memcached -d -m 2048 -c 4096 -p 11211 -u www -t 10 -l 192.168.75.131 -vv >> /tmp/memcached_11211.log 2>&1
/usr/local/mcc/bin/memcached -d -m 64 -c 4096 -p 11212 -u www -t 10 -l 192.168.75.131 -vv >> /tmp/memcached_11212.log 2>&1
This clears out the data left over from the previous test.
5.4.4 Start mcrouter
mcrouter -p 1919 -f /data/backup/config.json
Again no trailing &, so the debug output stays visible.
5.4.5 Test procedure
Write 100,000 entries, executed on 192.168.75.130:
sh setkey.sh 100000
5.4.6 Data analysis
memcached instance   | cmd_set count | bytes_written
127.0.0.1:11210      | 33705         | 269640
127.0.0.1:11211      | 33173         | 265384
127.0.0.1:11212      | 33122         | 264976
192.168.75.131:11210 | 33705         | 269640
192.168.75.131:11211 | 33173         | 265384
192.168.75.131:11212 | 33122         | 264976
The cmd_set counts total 200,000, as expected. For the 100,000 entries written through mcrouter, 127.0.0.1:11210 and 192.168.75.131:11210 show identical set counts and bytes written, i.e. they mirror each other across the two pools, and the same holds for the other instance pairs. This matches sharding with redundancy.
5.4.7 Simulating a localhost-pool failure
To test high availability, manually stop all instances on 192.168.75.130 to simulate a failure or data loss, then start them again empty:
sh memcached_stop
sh memcached_start
Read key1 and key2 on 192.168.75.130 (the localhost pool); since all of its memcached instances were just restarted, there should be no data.
Read key1:
sh getkey.sh key1
Port 11210 values:
END
Port 11211 values:
END
Port 11212 values:
END
Read key2:
Port 11210 values:
END
Port 11211 values:
END
Port 11212 values:
END
Now read key1 and key2 through mcrouter, and watch the logs of the relevant memcached instances and the mcrouter output.
get key1:
mcrouter output:
I1113 15:44:45.147187 84128 ProxyDestination.cpp:359] server 127.0.0.1:11212:TCP:ascii-1000 up (1 of 6)
I1113 15:44:45.148263 84128 ProxyDestination.cpp:359] server 192.168.75.131:11212:TCP:ascii-1000 up (2 of 6)
Logs of the relevant memcached instances:
127.0.0.1:11212
<58 new auto-negotiating client connection
58: Client using the ascii protocol
<58 get key1
>58 END
<58 add key1 0 0 4
>58 STORED
192.168.75.131:11212
<58 new auto-negotiating client connection
58: Client using the ascii protocol
<58 get key1
>58 sending key key1
>58 END
get key2:
mcrouter output:
I1113 15:48:07.653182 84128 ProxyDestination.cpp:359] server 127.0.0.1:11210:TCP:ascii-1000 up (1 of 6)
I1113 15:48:07.654258 84128 ProxyDestination.cpp:359] server 192.168.75.131:11210:TCP:ascii-1000 up (2 of 6)
Logs of the relevant memcached instances:
127.0.0.1:11210
<58 new auto-negotiating client connection
58: Client using the ascii protocol
<58 get key2
>58 END
<58 add key2 0 0 4
>58 STORED
192.168.75.131:11210
<58 new auto-negotiating client connection
58: Client using the ascii protocol
<58 get key2
>58 sending key key2
>58 END
Now read key1 and key2 again on 192.168.75.130 (the localhost pool).
Read key1:
sh getkey.sh key1
Port 11210 values:
END
Port 11211 values:
END
Port 11212 values:
VALUE key1 0 4
test
END
Read key2:
sh getkey.sh key2
Port 11210 values:
VALUE key2 0 4
test
END
Port 11211 values:
END
Port 11212 values:
END
This shows that mcrouter does provide high availability, but with a caveat: after the localhost pool recovers, the backup pool does not proactively push data back into it. Only when a client request arrives and mcrouter finds that the localhost pool misses does it write the value back into the localhost pool.
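If the key space is known, the cold pool can be re-warmed proactively by replaying gets through mcrouter; a rough sketch for the key1..keyN keys written by setkey.sh (this spawns one nc per key, so it is slow and should be paced on a loaded system):
for i in $(seq 1 100000); do
  echo "get key${i}" | nc 127.0.0.1 1919 > /dev/null
done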
5.4.8 Result analysis
The data analysis in 5.4.6 and the logs in 5.4.7 show that writing 100,000 entries through mcrouter behaves as highly available sharding, i.e. mcrouter itself supports highly available sharding, confirming the description in 5.4.2.
5.5 Config reload
mcrouter's configuration file supports reloading, and by default changes take effect automatically; as the official documentation puts it, "mcrouter supports dynamic reconfiguration so you don't need to restart mcrouter to apply config changes." If you save a config containing an error, mcrouter logs the error and keeps running with the previous valid configuration. This behaviour is optional: start mcrouter with --disable-reload-configs and it will no longer pick up edits to the config file automatically. A short live-reload demo follows.
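A small demo of the error handling described above (a sketch; run it next to the foreground mcrouter from the earlier sections and watch its log output):
cp config.json config.json.bak
echo "this is not valid json" > config.json   # mcrouter should log a configuration error and keep serving with the old config
sleep 2
cat config.json.bak > config.json             # restore the valid config; mcrouter picks it up again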
6 Common errors
6.1 Errors building folly
6.1.1 Error 1
error: Could not link against boost_thread-mt !
or
checking whether the Boost::Context library is available... yes
configure: error: Could not find a version of the library!
Fix: run ./configure --with-boost-libdir=/usr/lib/ so the boost library path is passed in.
6.1.2 Error 2
/usr/bin/ld: /usr/local/gcc-4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3/../../../../lib64/libiberty.a(cp-demangle.o): relocation R_X86_64_32S against `.rodata' can not be used when making a shared object; recompile with -fPIC
/usr/local/gcc-4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3/../../../../lib64/libiberty.a: could not read symbols: Bad value
collect2: error: ld returned 1 exit status
make[2]: *** [libfolly.la] Error 1
make[2]: Leaving directory `/data/src/folly/folly'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/data/src/folly/folly'
make: *** [all] Error 2
Fix: re-check the earlier gcc and boost builds; gcc must be 4.8+ and boost must be 1.51+.
6.2 Errors building mcrouter
g++ -DHAVE_CONFIG_H -I../.. -DLIBMC_FBTRACE_DISABLE -Wno-missing-field-initializers -Wno-deprecated -W -Wall -Wextra -Wno-unused-parameter -fno-strict-aliasing -g -O2 -std=gnu++1y -MT fbi/cpp/libmcrouter_a-LogFailure.o -MD -MP -MF fbi/cpp/.deps/libmcrouter_a-LogFailure.Tpo -c -o fbi/cpp/libmcrouter_a-LogFailure.o `test -f 'fbi/cpp/LogFailure.cpp' || echo './'`fbi/cpp/LogFailure.cpp
fbi/cpp/LogFailure.cpp:24:29: fatal error: folly/Singleton.h: No such file or directory
#include <folly/Singleton.h>
Fix: check whether folly's make produced warning output; if it did, re-examine the folly build. The mcrouter make is on track only if it produces output like the line below and sits there for quite a while; otherwise the folly make probably went wrong and is what ultimately broke the mcrouter build.
libtool: compile: g++ -DHAVE_CONFIG_H -I./.. -pthread -I/usr/include -std=gnu++0x -g -O2 -MT futures/Future.lo -MD -MP -MF futures/.deps/Future.Tpo -c futures/Future.cpp -o futures/Future.o >/dev/null 2>&1
6.3 Errors starting the stunnel server
2015.10.31 01:05:50 LOG7[18392:140063756588992]: Certificate: /usr/local/stunnel/etc/private.pem
2015.10.31 01:05:50 LOG3[18392:140063756588992]: Error reading certificate file: /usr/local/stunnel/etc/private.pem
2015.10.31 01:05:50 LOG3[18392:140063756588992]: error stack: 140DC009 : error:140DC009:SSL routines:SSL_CTX_use_certificate_chain_file:PEM lib
2015.10.31 01:05:50 LOG3[18392:140063756588992]: SSL_CTX_use_certificate_chain_file: 906D06C: error:0906D06C:PEM routines:PEM_read_bio:no start line
Check that the cert and CAfile entries in stunnel.conf point at the right certificate files and that file ownership and permissions are correct.
6.4 Errors starting the stunnel client
2015.11.01 15:31:31 LOG3[71906:140502252201920]: Cannot create pid file /usr/local/stunnel/var/run/stunnel/stunnel.pid
2015.11.01 15:31:31 LOG3[71906:140502252201920]: create: No such file or directory (2)
The message is already clear enough; the main thing is to remember to add the output field to the stunnel client's stunnel.conf so there is a log to troubleshoot with.
In short: when something fails, read the logs and search for the error.
7 References
http://qiita.com/shivaken/items/8742e0ddc3c72f242d03
http://confluence.sharuru07.jp/pages/viewpage.action?pageId=361455
http://www.tiham.com/cache-cluster/mcrouter-install.html
http://dev.classmethod.jp/cloud/aws/elasticache-carried-mcrouter/
https://github.com/genx7up/docker-mcrouter
http://fuweiyi.com/others/2014/05/15/a-Centos-Squid-Stunnel-proxy.html
http://blog.cloudpack.jp/2014/12/16/router-for-scaling-memcached-with-mcrouter-on-docker/
https://github.com/facebook/mcrouter/wiki
http://www.oschina.net/translate/introducing-mcrouter-a-memcached-protocol-router-for-scaling-memcached-deployments