Mounting Ceph object storage as a local disk on Windows with Rclone


On Windows you can use Rclone to mount Ceph object storage as a local disk, which makes it very convenient to use.

1. First, download rclone for Windows and its dependency, WinFsp

https://rclone.org/downloads/

http://www.secfs.net/winfsp/rel/

 

2. Install the software

After downloading rclone, extract the archive to a folder.

After downloading WinFsp, run the installer; installation is straightforward, just click Next through the prompts.

3. Add rclone to the PATH environment variable

Adding rclone to PATH makes it easier to use; without it, you have to type rclone's full absolute path every time you run an rclone command in a cmd window.

First open System Properties - Environment Variables.

Under System variables, find PATH and click Edit.

Click New and enter the path of the folder where rclone was extracted.
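For example, assuming rclone.exe was extracted to C:\rclone (the folder path here is only an illustration), the entry to add is simply:

C:\rclone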

Open a cmd window and run the command: rclone version. If it prints the rclone version information, similar to the example below, the environment variable is configured correctly.
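The exact version and build details depend on the release you downloaded, but the output looks roughly like this:

C:\>rclone version
rclone v1.53.4
- os/arch: windows/amd64
- go version: go1.15.6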

4. Configure rclone

C:\Windows\system32>rclone config
2021/01/28 19:18:40 NOTICE: Config file "C:\\Users\\Administrator\\.config\\rclone\\rclone.conf" not found - using defaults
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n                                   ## Enter "n" to create a new remote
name> test_gw                       ## Enter a name for the remote; avoid special characters, non-ASCII characters, and spaces
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / 1Fichier
   \ "fichier"
 2 / Alias for an existing remote
   \ "alias"
 3 / Amazon Drive
   \ "amazon cloud drive"
 4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS, etc)
   \ "s3"
 5 / Backblaze B2
   \ "b2"
 6 / Box
   \ "box"
 7 / Cache a remote
   \ "cache"
 8 / Citrix Sharefile
   \ "sharefile"
 9 / Dropbox
   \ "dropbox"
10 / Encrypt/Decrypt a remote
   \ "crypt"
11 / FTP Connection
   \ "ftp"
12 / Google Cloud Storage (this is not Google Drive)
   \ "google cloud storage"
13 / Google Drive
   \ "drive"
14 / Google Photos
   \ "google photos"
15 / Hubic
   \ "hubic"
16 / In memory object storage system.
   \ "memory"
17 / Jottacloud
   \ "jottacloud"
18 / Koofr
   \ "koofr"
19 / Local Disk
   \ "local"
20 / Mail.ru Cloud
   \ "mailru"
21 / Mega
   \ "mega"
22 / Microsoft Azure Blob Storage
   \ "azureblob"
23 / Microsoft OneDrive
   \ "onedrive"
24 / OpenDrive
   \ "opendrive"
25 / OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
   \ "swift"
26 / Pcloud
   \ "pcloud"
27 / Put.io
   \ "putio"
28 / QingCloud Object Storage
   \ "qingstor"
29 / SSH/SFTP Connection
   \ "sftp"
30 / Sugarsync
   \ "sugarsync"
31 / Tardigrade Decentralized Cloud Storage
   \ "tardigrade"
32 / Transparently chunk/split large files
   \ "chunker"
33 / Union merges the contents of several upstream fs
   \ "union"
34 / Webdav
   \ "webdav"
35 / Yandex Disk
   \ "yandex"
36 / http Connection
   \ "http"
37 / premiumize.me
   \ "premiumizeme"
38 / seafile
   \ "seafile"
Storage> 4                       ## Enter "4", since we are using S3
** See help for s3 backend at: https://rclone.org/s3/ **

Choose your S3 provider.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Amazon Web Services (AWS) S3
   \ "AWS"
 2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
   \ "Alibaba"
 3 / Ceph Object Storage
   \ "Ceph"
 4 / Digital Ocean Spaces
   \ "DigitalOcean"
 5 / Dreamhost DreamObjects
   \ "Dreamhost"
 6 / IBM COS S3
   \ "IBMCOS"
 7 / Minio Object Storage
   \ "Minio"
 8 / Netease Object Storage (NOS)
   \ "Netease"
 9 / Scaleway Object Storage
   \ "Scaleway"
10 / StackPath Object Storage
   \ "StackPath"
11 / Tencent Cloud Object Storage (COS)
   \ "TencentCOS"
12 / Wasabi Object Storage
   \ "Wasabi"
13 / Any other S3 compatible provider
   \ "Other"
provider> xiang       ## Enter "3" to select Ceph (any string is accepted; a custom value was typed here)
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth>                 ## Press Enter
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> 0GA1LO5QXYOAFO4FY1DG                 ## Enter the object user's access key
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> h3VcSH0K1vYtIBbc3vz2gvpVX3fAjFAZWwgBzkbT        ## Enter the object user's secret key
Region to connect to.
Leave blank if you are using an S3 clone and you don't have a region.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Use this if unsure. Will use v4 signatures and an empty region.
   \ ""
 2 / Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
   \ "other-v2-signature"
region>                   ## Press Enter to use v4 signatures
Endpoint for S3 API.
Required when using an S3 clone.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
endpoint> http://192.168.3.11:7480           ## Enter the RGW gateway address
Location constraint - must be set to match the Region.
Leave blank if not sure. Used when creating buckets only.
Enter a string value. Press Enter for the default ("").
location_constraint>                ## Press Enter
Canned ACL used when creating buckets and storing or copying objects.

This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.

For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl

Note that this ACL is applied when server side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ "private"
 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
   \ "public-read"
   / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
 3 | Granting this on a bucket is generally not recommended.
   \ "public-read-write"
 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
   \ "authenticated-read"
   / Object owner gets FULL_CONTROL. Bucket owner gets READ access.
 5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-read"
   / Both the object owner and the bucket owner get FULL_CONTROL over the object.
 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-full-control"
acl>                 ## Press Enter
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n>               ## Press Enter

Remote config
--------------------

[test_gw]
type = s3
provider = xiang
access_key_id = 0GA1LO5QXYOAFO4FY1DG
secret_access_key = h3VcSH0K1vYtIBbc3vz2gvpVX3fAjFAZWwgBzkbT

endpoint = http://192.168.3.11:7480
--------------------

y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d>               ## Press Enter after confirming the configuration above is correct
Current remotes:

Name                 Type
====                 ====
test_gw              s3

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q           ## Enter "q" to finish the configuration

After the configuration is finished, a file named rclone.conf appears under C:\Users\<your username>\.config\rclone. This is rclone's configuration file; to change the rclone configuration later, open and edit this file with Notepad.
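For the remote created above, the file will contain roughly the following (the values are the ones entered during rclone config):

[test_gw]
type = s3
provider = xiang
access_key_id = 0GA1LO5QXYOAFO4FY1DG
secret_access_key = h3VcSH0K1vYtIBbc3vz2gvpVX3fAjFAZWwgBzkbT
endpoint = http://192.168.3.11:7480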

 

5. Mount the Ceph object storage as a local disk

First, list the buckets on the Ceph node:

radosgw-admin bucket list
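The command prints the bucket names as a JSON array, for example (bucket names will differ on your cluster):

[
    "bucket1"
]

You can also check from the Windows side that the remote works: rclone lsd lists the buckets visible through the remote configured above, so the following should show bucket1:

rclone lsd test_gw: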

Open a cmd window on Windows and enter the following command:

rclone mount -vv test_gw:/bucket1 Q: --cache-dir c:\temp --allow-other --attr-timeout 5m --vfs-cache-mode full --vfs-cache-max-age 2h --vfs-cache-max-size 10G --vfs-read-chunk-size-limit 100M --buffer-size 100M --fast-list --checkers 64 --transfers 64
  • rclone mount: the rclone mount command

  • -vv: debug mode; prints all runtime status to the terminal so you can see what the command is doing

  • test_gw:/bucket1: test_gw is the name set in the first step when creating the configuration; bucket1 is the bucket name

  • Q: : the drive letter to mount to; it must not be a drive letter that is already assigned

  • --cache-dir: files are first cached in this directory before being written to the bucket

  • --allow-other: allow users other than the one running rclone to access the mount

  • --attr-timeout 5m: how long file attributes (size, modification time, etc.) are cached. On a low-spec machine, raising this value is recommended to reduce kernel interaction and resource usage.

  • --vfs-cache-mode full: enable the VFS file cache, which reduces rclone's API calls and improves file read/write performance

  • --vfs-cache-max-age 2h: how long files stay in the VFS cache (default is 1 hour). Note that the timer starts when the file has been written to the cache, not when it has finished uploading to the remote. Here it is set to 2 hours; if files rarely change, a longer value is recommended.

  • --vfs-cache-max-size 10G: upper limit on the VFS cache size; keeping it below 50% of the free disk space is recommended. Note that actual usage can exceed this limit: the remaining cache space is only checked when a file upload starts, and if the file being uploaded is larger than the remaining space, rclone does not delete that file's cache entry while it is uploading; only after the file has been uploaded to the bucket does it evict the oldest cached files to bring the total back under the limit.

  • --vfs-read-chunk-size-limit 100M: chunked-read size limit, set to 100M here, which improves read performance; a 1 GB file, for example, is read in roughly 10 chunks, though the number of API requests also increases. When reading a file from the bucket, only this amount plus --buffer-size worth of data is downloaded locally; the data covered by this parameter is stored on disk, while the --buffer-size data is kept in memory.

  • --buffer-size 100M: in-memory buffer; lower this if you have little RAM, raise it if you have plenty

  • --fast-list: add this if you have a large number of files or folders, at the cost of extra memory usage

  • --checkers 64: number of files checked in parallel (default 8)

  • --transfers 64: number of files transferred in parallel (default 4)

  • --daemon: run in the background; supported on Linux, not on Windows

Note that I enabled debug mode here, so there is a lot of output. Normally, once the command prints "The service rclone has been started.", the mount has succeeded. If you don't need debug mode, just drop the -vv flag.

After a successful mount, a Q: drive appears in My Computer; this drive is the bucket named bucket1 on Ceph, and reads and writes work just like on a local disk.

With debug mode enabled, you can watch file upload progress in real time in the cmd window.
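If you would rather see all of the user's buckets under one drive letter, you can also mount the remote root instead of a single bucket; a minimal sketch with most of the tuning flags left out:

rclone mount test_gw: Q: --cache-dir c:\temp --vfs-cache-mode full

Each bucket then appears as a top-level folder under the Q: drive.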

By default rclone mounts the bucket as a local disk. Adding the --fuse-flag --VolumePrefix=\server\share parameters mounts it as a network drive instead (according to the official docs, a network-drive mount performs a little better on Windows, though I haven't verified this myself). If you mount multiple buckets, change share in the prefix to a different name for each to avoid conflicts:

rclone mount test_gw:/bucket1 q: --fuse-flag --VolumePrefix=\server\share --cache-dir D:\media --vfs-cache-mode writes
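For example, a second bucket (bucket2 is just a placeholder name here) could be mounted at the same time with a different drive letter and a different prefix:

rclone mount test_gw:/bucket2 r: --fuse-flag --VolumePrefix=\server\share2 --cache-dir D:\media --vfs-cache-mode writes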

6. Mount automatically at boot

After the rclone mount command is run in a cmd window, that window cannot be closed: as soon as it is closed, the mounted drive letter disappears, which is inconvenient. Instead, you can write a VBScript and put the script (or a shortcut to it) in the startup folder, so the directory is mounted automatically every time the system starts.

First create a new text file, copy the following commands into it, and then change the file's extension to .vbs

dim objShell
' launch rclone via WScript.Shell with a hidden window (window style 0) so no console window stays on screen
set objShell=wscript.createObject("WScript.Shell")
iReturnCode=objShell.Run("rclone mount test_gw:/bucket1 Q: --cache-dir c:\temp --allow-other --attr-timeout 5m --vfs-cache-mode full --vfs-cache-max-age 2h --vfs-cache-max-size 10G --vfs-read-chunk-size-limit 100M --buffer-size 100M --fast-list --checkers 64 --transfers 64",0,TRUE)

In the Start menu search box or in the Run dialog, type the following and press Enter: shell:Common Startup. This opens the common startup folder.

Place the .vbs script under C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp.

Once this is set up, restart the system and the directory will be mounted automatically. Very convenient.

 

