s3fs Installation and Configuration Guide
1. Installation
s3fs can be installed in two ways.
Method 1: install over the network with a package manager.
On RHEL and CentOS 7 or newer, s3fs can be installed from the EPEL repository.
[root@nko51 ~]# yum install epel-release
[root@nko51 ~]# yum install s3fs-fuse
[root@nko51 ~]# rpm -qa | grep s3fs
s3fs-fuse-1.90-1.el7.x86_64
[root@nko51 ~]#
As of March 10, 2022, the s3fs version installed via yum is 1.90-1.
For other systems, see the official GitHub page.
Method 2: build from source.
[root@nko51 ~]# yum install automake fuse-devel gcc-c++ git libcurl-devel libxml2-devel make openssl-devel -y # install build dependencies
[root@nko51 ~]# git clone https://github.com/s3fs-fuse/s3fs-fuse.git # fetch the source; a source tarball can also be downloaded manually from https://github.com/s3fs-fuse/s3fs-fuse/releases
[root@nko51 ~]# cd s3fs-fuse # build and install
[root@nko51 ~]# ./autogen.sh
[root@nko51 ~]# ./configure
[root@nko51 ~]# make && make install
As of March 10, 2022, the s3fs version built from source is 1.91.
2. Mounting a bucket with s3fs
Obtain the AK and SK credentials
The user's access key (AK) and secret key (SK) can be obtained from the 贊存 management page, or on the server side with the radosgw-admin command.
[root@4U4N1 ~]# radosgw-admin user info --uid=nko
2022-03-10 10:23:28.031066 7fc6643f58c0 0 WARNING: can't generate connection for zone 70552a0c-7e6f-11ec-abc9-0cc47a88ffbb id cephmetazone: no endpoints defined
{
    "user_id": "nko",
    "display_name": "nko",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "nko",
            "access_key": "WUWB618CCS0UYCEA4V90",
            "secret_key": "fpAk9GaKTmStW1bNwz3hHRwHSHkkCLxMtjvO3mHA"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw"
}
[root@4U4N1 ~]# radosgw-admin bucket list --uid=nko
2022-03-10 10:24:28.803719 7f8d8e4368c0 0 WARNING: can't generate connection for zone 70552a0c-7e6f-11ec-abc9-0cc47a88ffbb id cephmetazone: no endpoints defined
[
    "b01",
    "bucket-100w1",
    "bucket-10w1",
    "bucket-1w1"
]
[root@4U4N1 ~]#
Create the credential file on the client
[root@nko51 ~]# echo "IGAXQ7NGJ6D1CFU4B94C:lgLw8hOb6AFtdPOTAs8TwdXFxJo1ySACMjhRaFBJ" > /etc/passwd-s3fs
[root@nko51 ~]# chmod 600 /etc/passwd-s3fs
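Besides a single global "AK:SK" line, the s3fs credential file also accepts per-bucket entries of the form "bucket:AK:SK", which is useful when one client mounts several buckets under different keys. A minimal sketch, reusing the bucket name and keys from the examples in this document and writing to a local example file rather than /etc/passwd-s3fs:

```shell
# Per-bucket credential line: "bucket:access_key:secret_key"
# (bucket name and keys reused from the examples above).
cat > ./passwd-s3fs.example <<'EOF'
bucket-1w1:IGAXQ7NGJ6D1CFU4B94C:lgLw8hOb6AFtdPOTAs8TwdXFxJo1ySACMjhRaFBJ
EOF
# s3fs refuses credential files that are readable by group/other.
chmod 600 ./passwd-s3fs.example
stat -c '%a' ./passwd-s3fs.example
```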
Synchronize the client's clock with the server. (S3 request signatures are time-stamped, so a large clock skew causes requests to be rejected.)
[root@nko51 ~]# vim /etc/chrony.conf
server 192.168.40.33
keyfile /etc/chrony.keys
driftfile /var/lib/chrony/chrony.drift
logdir /var/log/chrony
maxupdateskew 100.0
hwclockfile /etc/adjtime
rtcsync
makestep 1 3
[root@nko51 ~]# systemctl restart chronyd
[root@nko51 ~]# systemctl status chronyd
● chronyd.service - NTP client/server
Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2022-03-10 10:31:36 CST; 6s ago
Docs: man:chronyd(8)
man:chrony.conf(5)
Process: 8475 ExecStartPost=/usr/libexec/chrony-helper update-daemon (code=exited, status=0/SUCCESS)
Process: 8472 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 8474 (chronyd)
CGroup: /system.slice/chronyd.service
└─8474 /usr/sbin/chronyd
Mar 10 10:31:36 nko51 systemd[1]: Starting NTP client/server...
Mar 10 10:31:36 nko51 chronyd[8474]: chronyd version 3.2 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SECHASH +SIGND +ASYNCDNS ...6 +DEBUG)
Mar 10 10:31:36 nko51 chronyd[8474]: Frequency 19.736 +/- 4.778 ppm read from /var/lib/chrony/chrony.drift
Mar 10 10:31:36 nko51 systemd[1]: Started NTP client/server.
Hint: Some lines were ellipsized, use -l to show in full.
[root@nko51 ~]# chronyc sources -v
210 Number of sources = 1
  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* 192.168.40.33 3 6 17 65 -7623ns[ +99us] +/- 32ms
The ^* flag indicates that the client's clock has been synchronized with the server.
Mount the bucket
[root@nko51 ~]# mkdir /mnt/1w
[root@nko51 ~]# s3fs bucket-1w1 /mnt/1w -o passwd_file=/etc/passwd-s3fs -o url=http://172.18.0.10 \
-o use_path_request_style \
-o use_cache=/dev/shm \
-o kernel_cache \
-o max_background=1000 \
-o max_stat_cache_size=100000 \
-o multipart_size=64 \
-o parallel_count=30 \
-o multireq_max=30 \
-o dbglevel=warn
[root@nko51 ~]#
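To have the bucket remount automatically at boot, the same mount can be expressed as an /etc/fstab entry. A sketch using a subset of the options above (the `_netdev` option delays the mount until the network is up; the full option list can be carried over as needed):

```
bucket-1w1 /mnt/1w fuse.s3fs _netdev,passwd_file=/etc/passwd-s3fs,url=http://172.18.0.10,use_path_request_style 0 0
```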
[root@nko51 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 56G 8.2G 47G 15% /
devtmpfs 16G 0 16G 0% /dev
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 16G 9.0M 16G 1% /run
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/sda1 1014M 148M 867M 15% /boot
tmpfs 3.2G 0 3.2G 0% /run/user/0
s3fs 16E 0 16E 0% /mnt/1w
The 16E shown for the mount does not mean the storage actually holds 16 EB; it is the maximum capacity the mount can represent. s3fs cannot determine how much data the bucket can actually hold, so it reports this value instead.
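Concretely, 16E is just 2^64 bytes (16 EiB), the largest size a 64-bit file-system size field can report, which df -h renders as "16E". A quick arithmetic check, written to avoid overflowing the shell's signed 64-bit arithmetic:

```shell
# 2^64 bytes divided by 2^60 bytes-per-EiB = 16 EiB, computed as
# (2^62 / 2^60) * 4 to stay inside the signed 64-bit integer range.
echo $(( (1 << 62) / (1 << 60) * 4 ))
```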
3. Read/write testing
After mounting, reads and writes work normally, but listing all files (S3 objects) with ls stalls for a long time.
In testing, a bucket with 10,000 objects took about 15 seconds per ls through the s3fs mount, and a bucket with 100,000 objects took about 35 seconds.
For a bucket with 1,000,000 objects, every attempt to enumerate the file list (ls, rsync, find, and Python's os.listdir() and os.walk()) ended with the process hanging.
A bucket with 200 million objects could still be read and written normally through the s3fs mount, but the mount point contains far too many files to list with ls or similar tools.
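Part of the ls stall is self-inflicted: by default ls sorts every entry before printing, and the common --color alias additionally stats each one, which through s3fs can turn into per-object S3 requests. Listing unsorted with ls -f (or find -maxdepth 1) lets entries stream out as they are read, although it does not remove the underlying per-object cost on huge buckets. A local sketch of the unsorted listing, using a temporary directory as a stand-in for a mount point:

```shell
# Demo directory standing in for an s3fs mount point.
mkdir -p /tmp/lsdemo
touch /tmp/lsdemo/obj1 /tmp/lsdemo/obj2 /tmp/lsdemo/obj3
# -f disables sorting (and implies -a), so entries stream out as read;
# filter out the dot entries it includes. The trailing sort is only to
# make this demo's output deterministic.
ls -f -1 /tmp/lsdemo | grep -v '^\.' | sort
```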