005 Ceph Configuration Files and User Management


1. The Ceph Configuration File

        The Ceph configuration file can configure all daemons in the storage cluster, or all daemons of one particular type. To configure a group of daemons, the settings must be placed under the section that applies to them. By default, on both Ceph servers and clients, the configuration lives in /etc/ceph/ceph.conf.

        If you change any configuration parameter, /etc/ceph/ceph.conf must be kept consistent across all nodes, including clients.

        ceph.conf uses an INI-based file format containing multiple sections with configuration for Ceph daemons and clients. Each section has a name defined by a [name] header, followed by one or more key-value parameters.

[root@ceph2 ceph]# cat ceph.conf

[global]   # Settings shared by all daemons. They apply to every process that reads the configuration file, including clients, and affect all daemons in the cluster.
fsid = 35a91e48-8244-4e96-a7ee-980ab989d20d     # the same ID reported by ceph -s
mon initial members = ceph2,ceph3,ceph4   # the monitor nodes defined when the cluster was first deployed; they must be up when the cluster starts
mon host = 172.25.250.11,172.25.250.12,172.25.250.13
public network = 172.25.250.0/24
cluster network = 172.25.250.0/24
[osd]    # Affects every ceph-osd process in the cluster and overrides the same options set under [global]
osd mkfs type = xfs
osd mkfs options xfs = -f -i size=2048
osd mount options xfs = noatime,largeio,inode64,swalloc
osd journal size = 5120

Note: the configuration file uses # and ; for comments, and parameter names may use spaces, underscores, or hyphens as separators. For example, osd journal size, osd_journal_size, and osd-journal-size are all valid, equivalent parameter names.
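
All three spellings resolve to the same option. To confirm the value a running daemon actually uses, you can query its local admin socket; a quick sketch (run on the node hosting osd.0, which exists in this lab):

# Query one option through the daemon's admin socket:
ceph daemon osd.0 config get osd_journal_size
# Or grep it out of the full dump:
ceph daemon osd.0 config show | grep osd_journal_size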

1.1 OSD Overview

When a raw disk is handed to Ceph, it is formatted as xfs and split into two partitions: a data partition and a journal partition.

[root@ceph2 ceph]# fdisk -l

The mapping between OSD IDs and disks:

[root@ceph2 ceph]# df -hT

Ceph's configuration directory and working directory are /etc/ceph and /var/lib/ceph/, respectively.
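
On a FileStore OSD, the journal partition shows up as a symlink inside the OSD's working directory. A quick sketch for checking the mapping (paths assume the default cluster name ceph and OSD id 0):

# The journal symlink points at the OSD's journal partition:
ls -l /var/lib/ceph/osd/ceph-0/journal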

[root@ceph2 ceph]# ceph osd tree

2. Deleting a Storage Pool

2.1 Modify the Configuration File

First, set the parameter mon_allow_pool_delete to true:

[root@ceph1 ceph-ansible]#  vim /etc/ceph/ceph.conf
[global]
fsid = 35a91e48-8244-4e96-a7ee-980ab989d20d
mon initial members = ceph2,ceph3,ceph4
mon host = 172.25.250.11,172.25.250.12,172.25.250.13
public network = 172.25.250.0/24
cluster network = 172.25.250.0/24
[osd]
osd mkfs type = xfs
osd mkfs options xfs = -f -i size=2048
osd mount options xfs = noatime,largeio,inode64,swalloc
osd journal size = 5120
[mon]    # add this section
mon_allow_pool_delete = true
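
Alternatively, the same option can be flipped at runtime with injectargs (covered in section 3.1 below). A sketch; the runtime change is lost when the monitors restart, so the ceph.conf edit above is still needed for persistence:

# Runtime change on all monitors; does not survive a restart:
ceph tell mon.* injectargs '--mon_allow_pool_delete=true'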

2.2 Sync the File to All Nodes

[root@ceph1 ceph-ansible]# ansible all -m copy -a 'src=/etc/ceph/ceph.conf dest=/etc/ceph/ceph.conf owner=ceph group=ceph mode=0644'

ceph3 | SUCCESS => {
    "changed": true, 
    "checksum": "18ad6b3743d303bdd07b8655be547de35f9b4e55", 
    "dest": "/etc/ceph/ceph.conf", 
    "failed": false, 
    "gid": 1001, 
    "group": "ceph", 
    "md5sum": "8415ae9d959d31fdeb23b06ea7f61b1b", 
    "mode": "0644", 
    "owner": "ceph", 
    "size": 500, 
    "src": "/root/.ansible/tmp/ansible-tmp-1552807199.08-216306208753591/source", 
    "state": "file", 
    "uid": 1001
}
ceph4 | SUCCESS => {
    "changed": true, 
    "checksum": "18ad6b3743d303bdd07b8655be547de35f9b4e55", 
    "dest": "/etc/ceph/ceph.conf", 
    "failed": false, 
    "gid": 1001, 
    "group": "ceph", 
    "md5sum": "8415ae9d959d31fdeb23b06ea7f61b1b", 
    "mode": "0644", 
    "owner": "ceph", 
    "size": 500, 
    "src": "/root/.ansible/tmp/ansible-tmp-1552807199.09-46038387604349/source", 
    "state": "file", 
    "uid": 1001
}
ceph2 | SUCCESS => {
    "changed": true, 
    "checksum": "18ad6b3743d303bdd07b8655be547de35f9b4e55", 
    "dest": "/etc/ceph/ceph.conf", 
    "failed": false, 
    "gid": 1001, 
    "group": "ceph", 
    "md5sum": "8415ae9d959d31fdeb23b06ea7f61b1b", 
    "mode": "0644", 
    "owner": "ceph", 
    "size": 500, 
    "src": "/root/.ansible/tmp/ansible-tmp-1552807199.04-33302205115898/source", 
    "state": "file", 
    "uid": 1001
}
ceph1 | SUCCESS => {
    "changed": false, 
    "checksum": "18ad6b3743d303bdd07b8655be547de35f9b4e55", 
    "failed": false, 
    "gid": 1001, 
    "group": "ceph", 
    "mode": "0644", 
    "owner": "ceph", 
    "path": "/etc/ceph/ceph.conf", 
    "size": 500, 
    "state": "file", 
    "uid": 1001
}

Alternatively, set the option through ceph-ansible's group variables:

[root@ceph1 ceph-ansible]# vim /usr/share/ceph-ansible/group_vars/all.yml
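
The original does not show the exact lines to add; a minimal sketch using ceph-ansible's ceph_conf_overrides variable (check the syntax against your ceph-ansible version):

# In group_vars/all.yml -- ceph-ansible merges these overrides
# into the generated ceph.conf on every node:
ceph_conf_overrides:
  mon:
    mon_allow_pool_delete: true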

Then re-run the playbook:
[root@ceph1 ceph-ansible]# ansible-playbook site.yml

Configuration file changes do not take effect immediately; the relevant daemons, such as all the OSDs or all the monitors, must be restarted.

[root@ceph2 ceph]#  cat /etc/ceph/ceph.conf

The sync succeeded.

2.3 Restart the Service on Each Node

[root@ceph2 ceph]# systemctl restart ceph-mon@ceph2   # the instance name is the monitor's host name

or

[root@ceph2 ceph]# systemctl restart ceph-mon.target

You can also restart them all at once with Ansible:

[root@ceph1 ceph-ansible]#  ansible mons -m shell -a ' systemctl restart ceph-mon.target'  # never do this in production: it restarts every monitor at the same time
ceph2 | SUCCESS | rc=0 >>
ceph4 | SUCCESS | rc=0 >>
ceph3 | SUCCESS | rc=0 >>
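
For production, a safer pattern is to restart the monitors one at a time and confirm quorum before moving on. A sketch against the same inventory (host names from this lab; run the ceph check from a node that has an admin keyring, such as ceph2):

# Rolling restart: one monitor at a time, verifying quorum in between:
for h in ceph2 ceph3 ceph4; do
  ansible $h -m shell -a 'systemctl restart ceph-mon.target'
  sleep 30    # give the monitor time to rejoin
  ceph quorum_status --format json-pretty | grep quorum_names
done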

2.4 Delete the Pool

[root@ceph2 ceph]# ceph osd pool ls
testpool
EC-pool
[root@ceph2 ceph]# ceph osd pool delete EC-pool EC-pool --yes-i-really-really-mean-it
pool 'EC-pool' removed
[root@ceph2 ceph]# ceph osd pool ls
testpool

3. Modifying the Configuration File

3.1 Temporarily Change a Setting

[root@ceph2 ceph]# ceph tell mon.* injectargs '--mon_osd_nearfull_ratio 0.85'     # the output warns that a restart may be required, but the change has already taken effect
mon.ceph2: injectargs:mon_osd_nearfull_ratio = '0.850000' (not observed, change may require restart) 
mon.ceph3: injectargs:mon_osd_nearfull_ratio = '0.850000' (not observed, change may require restart) 
mon.ceph4: injectargs:mon_osd_nearfull_ratio = '0.850000' (not observed, change may require restart) 
[root@ceph2 ceph]#  ceph tell mon.* injectargs '--mon_osd_full_ratio 0.95'
mon.ceph2: injectargs:mon_osd_full_ratio = '0.950000' (not observed, change may require restart) 
mon.ceph3: injectargs:mon_osd_full_ratio = '0.950000' (not observed, change may require restart) 
mon.ceph4: injectargs:mon_osd_full_ratio = '0.950000' (not observed, change may require restart) 
[root@ceph2 ceph]# ceph daemon osd.0 config show|grep nearfull
    "mon_osd_nearfull_ratio": "0.850000",
[root@ceph2 ceph]# ceph daemon mon.ceph2 config show|grep mon_osd_full_ratio
    "mon_osd_full_ratio": "0.950000",

3.2 Metavariables

Metavariables are variables built into Ceph. They can be used to simplify the ceph.conf configuration:

$cluster: the name of the Ceph storage cluster. Defaults to ceph, defined in /etc/sysconfig/ceph. For example, the default value of the log_file parameter is /var/log/ceph/$cluster-$name.log; after expansion it becomes /var/log/ceph/ceph-mon.ceph-node1.log.

$type: the daemon type. Monitors use mon; OSDs use osd; metadata servers use mds; managers use mgr; client applications use client. For example, if pid_file is set to /var/run/$cluster/$type.$id.pid in the [global] section, it expands to /var/run/ceph/osd.0.pid for the OSD with ID 0, and to /var/run/ceph/mon.ceph-node1.pid for the MON daemon running on ceph-node1.

$id: the daemon instance ID. For the MON on ceph-node1 it is ceph-node1; for osd.1 it is 1. For a client application it is the user name.

$name: the daemon name and instance ID; shorthand for $type.$id.

$host: the name of the host on which the daemon runs.
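
A short sketch of how these metavariables expand in ceph.conf, using the two defaults quoted above:

[global]
# For the MON on ceph-node1 this expands to /var/log/ceph/ceph-mon.ceph-node1.log
log_file = /var/log/ceph/$cluster-$name.log
# For osd.0 this expands to /var/run/ceph/osd.0.pid
pid_file = /var/run/$cluster/$type.$id.pid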

Additional note:

To kill every Ceph process on a node:

[root@ceph2 ceph]# ps -ef|grep "ceph-"|grep -v grep|awk '{print $2}'|xargs kill -9
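kill -9 works, but the packaged systemd units offer a cleaner route; assuming systemd-managed daemons as in this lab, the umbrella target stops everything on a node in one step:

# Stop (or restart) every Ceph daemon on this node:
systemctl stop ceph.target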

3.3 Enabling cephx Authentication Between Components

[root@ceph2 ceph]# ceph daemon osd.3 config show|grep grace

[root@ceph2 ceph]# ceph daemon mon.ceph2 config show|grep auth

Edit the configuration file (these settings normally go in the [global] section):

[root@ceph1 ceph-ansible]#  vim /etc/ceph/ceph.conf

auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

[root@ceph1 ceph-ansible]# ansible all -m copy -a 'src=/etc/ceph/ceph.conf dest=/etc/ceph/ceph.conf owner=ceph group=ceph mode=0644'

[root@ceph1 ceph-ansible]# ansible mons -m shell -a ' systemctl restart ceph-mon.target'

[root@ceph1 ceph-ansible]# ansible mons -m shell -a ' systemctl restart ceph-osd.target'   # the OSDs in this lab run on the monitor nodes, so the mons group reaches them

[root@ceph1 ceph-ansible]# ansible mons -m shell -a ' systemctl restart ceph-mgr.target'

[root@ceph2 ceph]# ceph daemon mon.ceph2 config show|grep auth

cephx authentication is confirmed.

4. Ceph User Management

User management is built around a user name and a secret key:

[root@ceph2 ceph]# cat ceph.client.admin.keyring

[client.admin]
    key = AQD7fYxcnG+wCRAARyLuAewyDcGmTPb5wdNRvQ==
    caps mds = "allow *"
    caps mgr = "allow *"
    caps mon = "allow *"
    caps osd = "allow *"

Without a keyring, operations such as ceph -s fail. For example, on ceph1:

[root@ceph1 ceph-ansible]# ceph -s

2019-03-17 16:19:31.824428 7fc87b255700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-03-17 16:19:31.824437 7fc87b255700 -1 monclient: ERROR: missing keyring, cannot use cephx for authentication
2019-03-17 16:19:31.824439 7fc87b255700  0 librados: client.admin initialization error (2) No such file or directory
[errno 2] error connecting to the cluster

4.1 Ceph Authorization

Ceph stores data as objects in storage pools; a Ceph user must be granted access to a pool before it can read or write data.

Ceph describes a user's grants as capabilities (caps), which govern what the user may do with the MONs, OSDs, and MDSs.

Caps can also restrict access to the data in a particular pool or namespace.

A Ceph administrator can grant the appropriate caps when creating or updating a regular user.

Common Ceph capabilities:

r: grants read access. Read access to the monitor is required before any cluster information can be retrieved.

w: grants write access, needed to store or modify data on OSDs.

x: grants permission to call object methods (both reads and writes) and to perform user authentication operations against the monitor.

class-read: a subset of x that allows calling a class's read methods; commonly granted on rbd pools.

class-write: a subset of x that allows calling a class's write methods; commonly granted on rbd pools.

*: grants full access (r, w, and x) to the specified pool, plus the ability to run administrative commands.

profile osd: authorizes a user to connect to other OSDs or to monitors as an OSD; used for OSD heartbeats and status reporting.

profile mds: authorizes a user to connect to other MDSs or to monitors as an MDS.

profile bootstrap-osd: allows a user to bootstrap OSDs. Tools such as ceph-deploy and ceph-disk use the client.bootstrap-osd user, which is authorized to add keys when bootstrapping an OSD.

profile bootstrap-mds: allows a user to bootstrap an MDS. The ceph-deploy tool uses the client.bootstrap-mds user, which is authorized to add keys when bootstrapping an MDS.

4.2 Adding Users

[root@ceph2 ceph]# ceph auth add client.ning mon 'allow r' osd 'allow rw pool=testpool'  # if the user does not exist, create it with these caps; if it exists with unchanged caps, print nothing; auth add cannot change an existing user's caps
added key for client.ning   
[root@ceph2 ceph]# ceph auth add client.ning mon  'allow r' osd 'allow rw'
Error EINVAL: entity client.ning exists but cap osd does not match
[root@ceph2 ceph]# ceph auth get-or-create client.joy mon 'allow r' osd 'allow rw pool=mytestpool'  # if the user does not exist, create it and return the user and key; if it exists with unchanged caps, return the user and key; if different caps are requested, return an error
[client.joy]
    key = AQBiBY5cJ2gBLBAA/ZCGDdp6JWkPuuU0YaLsrw==
[root@ceph2 ceph]#  cat ceph.client.admin.keyring
[client.admin]
    key = AQD7fYxcnG+wCRAARyLuAewyDcGmTPb5wdNRvQ==
    caps mds = "allow *"
    caps mgr = "allow *"
    caps mon = "allow *"
    caps osd = "allow *"
[root@ceph2 ceph]# ceph auth get-or-create client.joy  mon 'allow r' osd 'allow rw'   # requesting different caps for an existing user returns an error; with unchanged caps the user and key are returned (get-or-create-key returns just the key)
Error EINVAL: key for client.joy exists but cap osd does not match

4.3 Deleting Users

[root@ceph2 ceph]#  ceph auth get-or-create client.xxx   # create a user client.xxx (no caps)
[client.xxx]
    key = AQAOB45c4KIoCRAAF/kDd7r4uUjKEdEHOSP8Xw==
[root@ceph2 ceph]# ceph auth del client.xxx         # delete the user
updated
[root@ceph2 ceph]# ceph auth get client.xxx         # the user is gone
Error ENOENT: failed to find client.xxx in keyring
[root@ceph2 ceph]# ceph auth get client.ning
exported keyring for client.ning
[client.ning]
    key = AQAcBY5coY/rLxAAvq99xcSOrwLI1ip0WAw2Sw==
    caps mon = "allow r"
    caps osd = "allow rw pool=testpool"
[root@ceph2 ceph]# ceph auth get-key client.ning   # prints the bare key with no trailing newline, so the next prompt runs on
AQAcBY5coY/rLxAAvq99xcSOrwLI1ip0WAw2Sw==[root@ceph2 ceph]# 

4.4 Exporting Users

[root@ceph2 ceph]# ceph auth get-or-create client.ning -o ./ceph.client.ning.keyring
[root@ceph2 ceph]# ll ./ceph.client.ning.keyring
-rw-r--r-- 1 root root 62 Mar 17 16:39 ./ceph.client.ning.keyring
[root@ceph2 ceph]# cat ./ceph.client.ning.keyring
[client.ning]
    key = AQAcBY5coY/rLxAAvq99xcSOrwLI1ip0WAw2Sw==

Verify from ceph1:

[root@ceph2 ceph]# scp ./ceph.client.ning.keyring 172.25.250.10:/etc/ceph/    # copy the keyring to ceph1
[root@ceph1 ceph-ansible]# ceph -s      # fails: an ID or user name must be specified
2019-03-17 16:47:01.936939 7fbad2aae700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-03-17 16:47:01.936950 7fbad2aae700 -1 monclient: ERROR: missing keyring, cannot use cephx for authentication
2019-03-17 16:47:01.936951 7fbad2aae700  0 librados: client.admin initialization error (2) No such file or directory
[errno 2] error connecting to the cluster
[root@ceph1 ceph-ansible]# ceph -s --name client.ning   # specify the user explicitly
  cluster:
    id:     35a91e48-8244-4e96-a7ee-980ab989d20d
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum ceph2,ceph3,ceph4
    mgr: ceph4(active), standbys: ceph2, ceph3
    osd: 9 osds: 9 up, 9 in
  data:
    pools:   1 pools, 128 pgs
    objects: 3 objects, 21938 bytes
    usage:   972 MB used, 133 GB / 134 GB avail
    pgs:     128 active+clean
[root@ceph1 ceph-ansible]# ceph osd pool ls --name client.ning
testpool
[root@ceph1 ceph-ansible]# ceph osd pool ls --id ning
testpool
[root@ceph1 ceph-ansible]# 
[root@ceph1 ceph-ansible]# rados -p testpool ls --id ning
test2
test
[root@ceph1 ceph-ansible]# rados -p testpool put aaa /etc/ceph/ceph.conf --id ning   # verify upload and download
[root@ceph1 ceph-ansible]# rados -p testpool ls --id ning
test2
aaa
test
[root@ceph1 ceph-ansible]# rados -p testpool get aaa /root/aaa.conf  --name client.ning
[root@ceph1 ceph-ansible]# diff  /root/aaa.conf /etc/ceph/ceph.conf   # no output: the files are identical

4.5 Importing Users

[root@ceph2 ceph]# ceph auth export client.ning -o /etc/ceph/ceph.client.ning-1.keyring 
export auth(auid = 18446744073709551615 key=AQAcBY5coY/rLxAAvq99xcSOrwLI1ip0WAw2Sw== with 2 caps)
[root@ceph2 ceph]# ll
total 20
-rw------- 1 ceph ceph 151 Mar 16 12:39 ceph.client.admin.keyring
-rw-r--r-- 1 root root 121 Mar 17 17:32 ceph.client.ning-1.keyring
-rw-r--r-- 1 root root  62 Mar 17 16:39 ceph.client.ning.keyring
-rw-r--r-- 1 ceph ceph 589 Mar 17 16:12 ceph.conf
drwxr-xr-x 2 ceph ceph  23 Mar 16 12:39 ceph.d
-rw-r--r-- 1 root root  92 Nov 23  2017 rbdmap
[root@ceph2 ceph]# cat ceph.client.ning-1.keyring  # includes the caps
[client.ning]
    key = AQAcBY5coY/rLxAAvq99xcSOrwLI1ip0WAw2Sw==
    caps mon = "allow r"
    caps osd = "allow rw pool=testpool"
[root@ceph2 ceph]# ceph auth get client.ning
exported keyring for client.ning
[client.ning]
    key = AQAcBY5coY/rLxAAvq99xcSOrwLI1ip0WAw2Sw==
    caps mon = "allow r"
    caps osd = "allow rw pool=testpool"

4.6 Restoring a Deleted User

[root@ceph2 ceph]# cat ceph.client.ning.keyring # this keyring carries no caps information
[client.ning]
    key = AQAcBY5coY/rLxAAvq99xcSOrwLI1ip0WAw2Sw==
[root@ceph2 ceph]# ceph auth del client.ning   # delete the user
updated
[root@ceph1 ceph-ansible]# ll /etc/ceph/ceph.client.ning.keyring  # on the client, the keyring file still exists
-rw-r--r-- 1 root root 62 Mar 17 16:40 /etc/ceph/ceph.client.ning.keyring
[root@ceph1 ceph-ansible]# ceph -s --name client.ning   # but its user was deleted, so the keyring no longer works
2019-03-17 17:49:13.896609 7f841eb27700  0 librados: client.ning authentication error (1) Operation not permitted
[errno 1] error connecting to the cluster
[root@ceph2 ceph]# ceph auth import -i ./ceph.client.ning-1.keyring # restore from ning-1.keyring
imported keyring
[root@ceph2 ceph]# ceph auth list |grep ning  # the user is back
installed auth entries:
client.ning
[root@ceph1 ceph-ansible]# ceph osd pool ls --name client.ning   # verify from the client: the keyring works again
testpool
EC-pool

4.7 Modifying User Capabilities

        There are two approaches. One is to delete the user and recreate it with the new caps, but every client connecting as that user would stop working. The other is the method below.

        ceph auth caps modifies a user's caps. If the given user does not exist, it returns an error. If the user exists, the specified caps overwrite the existing ones. So to add a capability, you must restate all existing caps along with the new one; to remove a capability, set it to an empty string.

[root@ceph2 ceph]# ceph auth get client.joy   # view the user's caps
exported keyring for client.joy
[client.joy]
    key = AQBiBY5cJ2gBLBAA/ZCGDdp6JWkPuuU0YaLsrw==
    caps mon = "allow r"
    caps osd = "allow rw pool=mytestpool"
[root@ceph2 ceph]# ceph osd pool ls
testpool
EC-pool
[root@ceph2 ceph]# ceph auth caps client.joy  mon 'allow r' osd 'allow rw pool=mytestpool,allow rw pool=testpool'  # grant client.joy access to the testpool pool as well
updated caps for client.joy
[root@ceph2 ceph]# ceph auth get client.joy   # the cap was added
exported keyring for client.joy
[client.joy]
    key = AQBiBY5cJ2gBLBAA/ZCGDdp6JWkPuuU0YaLsrw==
    caps mon = "allow r"
    caps osd = "allow rw pool=mytestpool,allow rw pool=testpool"
[root@ceph2 ceph]# rados -p testpool put joy /etc/ceph/ceph.client.admin.keyring --id joy    # fails: there is no local keyring for joy yet; export one first
2019-03-17 18:01:36.602310 7ff35e71ee40 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.joy.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-03-17 18:01:36.602337 7ff35e71ee40 -1 monclient: ERROR: missing keyring, cannot use cephx for authentication
2019-03-17 18:01:36.602340 7ff35e71ee40  0 librados: client.joy initialization error (2) No such file or directory
couldn't connect to cluster: (2) No such file or directory
[root@ceph2 ceph]# ceph auth get-or-create client.joy -o /etc/ceph/ceph.client.joy.keyring  # export the keyring
[root@ceph2 ceph]# rados -p testpool put joy /etc/ceph/ceph.client.admin.keyring --id joy    # upload a test object
[root@ceph2 ceph]# rados -p testpool ls --id joy    # success: the cap change works
joy
test2
aaa
test
[root@ceph2 ceph]# ceph auth caps client.joy mon 'allow r' osd 'allow rw pool=testpool'   # drop access to mytestpool
updated caps for client.joy
[root@ceph2 ceph]# ceph auth get client.joy    # change confirmed
exported keyring for client.joy
[client.joy]
    key = AQBiBY5cJ2gBLBAA/ZCGDdp6JWkPuuU0YaLsrw==
    caps mon = "allow r"
    caps osd = "allow rw pool=testpool"
[root@ceph2 ceph]# ceph auth caps client.ning mon '' osd ''    # an empty mon cap string is rejected; read access to mon must be kept
Error EINVAL: moncap parse failed, stopped at end of ''
[root@ceph2 ceph]# ceph auth caps client.ning mon 'allow r' osd ''   # all OSD caps cleared; the mon read cap remains
updated caps for client.ning
[root@ceph2 ceph]# ceph auth get client.ning
exported keyring for client.ning
[client.ning]
    key = AQAcBY5coY/rLxAAvq99xcSOrwLI1ip0WAw2Sw==
    caps mon = "allow r"
    caps osd = ""
[root@ceph2 ceph]# ceph auth get-or-create  joyning    # fails: the entity name must include a type prefix such as client.
Error EINVAL: bad entity name
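
The error concerns the entity name itself: auth names must carry a type prefix. With client. prepended, the same command would succeed even with no caps, as client.xxx in section 4.3 showed. A sketch:

# A valid entity name includes the type prefix:
ceph auth get-or-create client.joyning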

4.8 Pushing Users to Clients

Users are created mainly to authorize clients, so they must be pushed out to the clients. To push several users to the same client, their entries can be written into a single keyring file and that one file pushed.

[root@ceph2 ceph]# ceph-authtool -C /etc/ceph/ceph.keyring     # create an empty keyring file
creating /etc/ceph/ceph.keyring
[root@ceph2 ceph]# ceph-authtool ceph.keyring --import-keyring ceph.client.ning.keyring   # import client.ning into the keyring
importing contents of ceph.client.ning.keyring into ceph.keyring
[root@ceph2 ceph]# cat ceph.keyring    # inspect
[client.ning]
    key = AQAcBY5coY/rLxAAvq99xcSOrwLI1ip0WAw2Sw==
[root@ceph2 ceph]# ceph-authtool ceph.keyring --import-keyring ceph.client.joy.keyring    # import client.joy into the keyring
importing contents of ceph.client.joy.keyring into ceph.keyring
[root@ceph2 ceph]# cat ceph.keyring    # both users are present; pushing this file to a client grants it both identities
[client.joy]
    key = AQBiBY5cJ2gBLBAA/ZCGDdp6JWkPuuU0YaLsrw==
[client.ning]
    key = AQAcBY5coY/rLxAAvq99xcSOrwLI1ip0WAw2Sw==
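
A sketch of pushing the combined file to a client; /etc/ceph/ceph.keyring is on the default keyring search path (visible in the error messages earlier), so both users work immediately on the client (ceph1's address from section 4.4):

# Push the combined keyring to the client node:
scp /etc/ceph/ceph.keyring 172.25.250.10:/etc/ceph/
# On ceph1, either identity can now be used:
ceph -s --name client.ning
ceph osd pool ls --name client.joy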

Author's note: the material in this post comes mainly from Yan Wei of Yutian Education; I verified all of the operations myself. To repost, please obtain permission from Yutian Education (http://www.yutianedu.com/) or from Mr. Yan (https://www.cnblogs.com/breezey/). Thanks!

