HDFS Shell Commands (Linux Environment)


(0) Start the Hadoop cluster

[ck@hadoop102 hadoop-2.9.0]$ sbin/start-dfs.sh
[ck@hadoop103 hadoop-2.9.0]$ sbin/start-yarn.sh
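
To sanity-check that the daemons came up, run jps on each node; which processes appear (sketched in the comments below) depends on how roles are assigned in your cluster:

[ck@hadoop102 hadoop-2.9.0]$ jps    (expect NameNode and DataNode on the HDFS master node)
[ck@hadoop103 hadoop-2.9.0]$ jps    (expect ResourceManager and NodeManager on the YARN master node)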

(1) -help: print usage information for a command

[ck@hadoop103 hadoop-2.9.0]$ hadoop fs -help rm
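
If you only need the one-line syntax rather than the full description, the related -usage command prints just that:

[ck@hadoop103 hadoop-2.9.0]$ hadoop fs -usage rm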

(2) -ls: list directory contents

[ck@hadoop103 hadoop-2.9.0]$ hadoop fs -ls /      (list the root directory)
[ck@hadoop103 hadoop-2.9.0]$ hadoop fs -ls -R /   (list the root directory recursively; replaces the deprecated -lsr)
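
Each listing line shows permissions, replication factor ("-" for directories), owner, group, size, modification time, and path. Sample output (illustrative; the entries depend on what is in your cluster):

Found 1 items
drwxr-xr-x   - ck supergroup          0 2018-05-20 14:30 /sanguo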

(3) -mkdir: create a directory on HDFS

[ck@hadoop103 hadoop-2.9.0]$ hadoop fs -mkdir -p /sanguo/shuguo
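
The -p flag creates any missing parent directories (here /sanguo) in one step. A quick check that both levels exist:

[ck@hadoop103 hadoop-2.9.0]$ hadoop fs -ls /sanguo    (shows the shuguo subdirectory)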

(4) -moveFromLocal: cut and paste a file from the local file system to HDFS (the local copy is removed)

[ck@hadoop103 hadoop-2.9.0]$ touch kongming.txt
[ck@hadoop103 hadoop-2.9.0]$ vim kongming.txt
[ck@hadoop103 hadoop-2.9.0]$ bin/hadoop fs -moveFromLocal ./kongming.txt /sanguo/shuguo/
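
Because this is a move rather than a copy, the local kongming.txt is gone afterwards. A quick verification sketch:

[ck@hadoop103 hadoop-2.9.0]$ ls kongming.txt                  (now reports "No such file or directory")
[ck@hadoop103 hadoop-2.9.0]$ hadoop fs -ls /sanguo/shuguo     (the file is now in HDFS)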

(5) -appendToFile: append a local file to the end of a file that already exists on HDFS (unlike -moveFromLocal, the local file is kept)

[ck@hadoop102 hadoop-2.9.0]$ touch liubei.txt
[ck@hadoop102 hadoop-2.9.0]$ vim liubei.txt
[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -appendToFile liubei.txt  /sanguo/shuguo/kongming.txt
[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -cat /sanguo/shuguo/kongming.txt

(6) -chgrp, -chmod, -chown: change a file's group, permissions, or owner; usage is the same as in a Linux file system

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -chgrp ck /sanguo/shuguo/kongming.txt
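
The other two commands follow the same pattern; a minimal sketch reusing this walkthrough's user and group (ck:ck):

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -chmod 666 /sanguo/shuguo/kongming.txt
[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -chown ck:ck /sanguo/shuguo/kongming.txt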

(7) -copyFromLocal: copy a file from the local file system to an HDFS path

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -copyFromLocal ./caochao.txt /sanguo/shuguo/

(8) -copyToLocal: copy from HDFS to the local file system

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -copyToLocal /sanguo/shuguo/kongming.txt ./

(9) -cp: copy from one HDFS path to another HDFS path

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -cp /sanguo/shuguo/kongming.txt /sanguo/

(10) -mv: move files within HDFS

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -mv /sanguo/kongming.txt /

(11) -get: equivalent to copyToLocal; download a file from HDFS to the local file system

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -get /kongming.txt ./

(12) -getmerge: download multiple files merged into one, e.g. concatenate all the files under the HDFS directory /sanguo/shuguo into a single local file.

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -getmerge /sanguo/shuguo/* ./zaiyiqi.txt
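
The result is an ordinary local file, so it can be inspected with standard Linux tools:

[ck@hadoop102 hadoop-2.9.0]$ cat ./zaiyiqi.txt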

(13) -put: equivalent to copyFromLocal

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -put ./LICENSE.txt  /sanguo/shuguo/

(14) -tail: display the last kilobyte of a file

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -tail /sanguo/shuguo/LICENSE.txt
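
As with the Linux tail, a -f option is available to keep watching the file and print data as it is appended:

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -tail -f /sanguo/shuguo/LICENSE.txt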

(15) -rm: delete files or directories

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -rm /sanguo/shuguo/LICENSE.txt
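
Deleting a non-empty directory additionally requires the -r (recursive) flag; the path below is a hypothetical example, not part of this walkthrough:

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -rm -r /some/dir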

(16) -rmdir: delete an empty directory

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -mkdir /test
[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -rmdir /test

(17) -du: show the size of files and directories

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -du /
366744329  /hadoop-2.9.0.tar.gz
16         /kongming.txt
49         /sanguo
45         /wc.input
[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -du -h / (human-readable units)
349.8 M  /hadoop-2.9.0.tar.gz
16       /kongming.txt
49       /sanguo
45       /wc.input
[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -du -h -s /  (summary total for the whole directory)
   349.8 M  /

(18) -setrep: set the replication factor of a file in HDFS

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -setrep 2 /kongming.txt

        The replication factor set here is only recorded in the NameNode's metadata; whether that many replicas actually exist depends on the number of DataNodes. Since the cluster currently has only 3 machines, there can be at most 3 replicas; only when the number of nodes grows to 10 can the replica count actually reach 10.
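
You can confirm the factor recorded in the metadata with -stat, whose %r format prints the replication (the output line below is illustrative):

[ck@hadoop102 hadoop-2.9.0]$ hadoop fs -stat %r /kongming.txt
2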


Examples adapted from the atguigu video tutorials.

