Continuing from last time with more HDFS API operations.
copyFromLocalFile: as the name suggests, copies a file from the local file system.
/**
 * Use the Java API to operate on the HDFS file system.
 * Key steps:
 * 1) create a Configuration
 * 2) get a FileSystem
 * 3) ... it's your HDFS API operation.
 */
public class HDFSApp {

    public static final String HDFS_PATH = "hdfs://hadoop000:8020";

    FileSystem fileSystem = null;
    Configuration configuration = null;

    @Before
    public void setUp() throws Exception {
        System.out.println("setUp-----------");
        configuration = new Configuration();
        configuration.set("dfs.replication", "1");

        /*
         * Build a client object for accessing the specified HDFS system.
         * First parameter: the URI of the HDFS system
         * Second parameter: configuration parameters specified by the client
         * Third parameter: the client's identity, which is simply the user name
         */
        fileSystem = FileSystem.get(new URI(HDFS_PATH), configuration, "hadoop");
    }

    /*
     * Copy a local file to the HDFS file system.
     */
    @Test
    public void copyFromLocalFile() throws Exception {
        Path src = new Path("/home/hadoop/t.txt");
        Path dst = new Path("/hdfsapi/test/");
        fileSystem.copyFromLocalFile(src, dst);
    }

    @After
    public void tearDown() {
        configuration = null;
        fileSystem = null;
        System.out.println("----------tearDown------");
    }
}
How do you use the method? Same as always: Ctrl-click whatever you don't understand.
Ctrl-clicking into the copyFromLocalFile source shows that the method takes two parameters, the Path of the local file and the Path of the destination, and returns nothing.
After running the test class, go to the terminal and use -ls to inspect the /hdfsapi/test directory: it now contains the t.txt file we just copied in, so the test succeeded.
[hadoop@hadoop000 ~]$ hadoop fs -ls /hdfsapi/test
Found 3 items
-rw-r--r--   3 hadoop supergroup         14 2019-04-19 16:31 /hdfsapi/test/a.txt
-rw-r--r--   1 hadoop supergroup         28 2019-04-19 16:50 /hdfsapi/test/c.txt
-rw-r--r--   1 hadoop supergroup       2732 2019-04-20 19:51 /hdfsapi/test/t.txt
If we need to copy a large file from the local machine, the larger the file, the longer the wait, and such a long wait with no feedback at all makes for a terrible user experience.
So when uploading a large file we can add an upload progress indicator. FileSystem has a create method that supports progress reporting:
/**
 * Create an FSDataOutputStream at the indicated Path with write-progress
 * reporting.
 * Files are overwritten by default.
 * @param f the file to create
 * @param progress to report progress
 */
public FSDataOutputStream create(Path f, Progressable progress)
    throws IOException {
  return create(f, true,
                getConf().getInt("io.file.buffer.size", 4096),
                getDefaultReplication(f),
                getDefaultBlockSize(f), progress);
}
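The upload test itself is not shown above, so here is a minimal sketch of what it might look like in HDFSApp. The local path /home/hadoop/software/jdk.zip is only illustrative; substitute your own big file. It pairs fileSystem.create(Path, Progressable) with Hadoop's IOUtils.copyBytes to stream the file up while printing a dot on each progress callback:

```java
/*
 * Upload a large local file to HDFS with progress reporting.
 * Assumes the fileSystem field from setUp() and a local file
 * at /home/hadoop/software/jdk.zip (path is illustrative).
 */
@Test
public void copyFromLocalBigFile() throws Exception {
    // Buffer reads from the local file
    InputStream in = new BufferedInputStream(
            new FileInputStream(new File("/home/hadoop/software/jdk.zip")));

    // create() with a Progressable: HDFS invokes progress() periodically
    // as data is written, so we print a dot each time it fires
    FSDataOutputStream out = fileSystem.create(
            new Path("/hdfsapi/test/jdk.zip"),
            new Progressable() {
                @Override
                public void progress() {
                    System.out.print(".");
                }
            });

    // Copy the stream in 4 KB chunks and close both streams when done
    IOUtils.copyBytes(in, out, 4096, true);
}
```

This needs imports for java.io.*, org.apache.hadoop.fs.FSDataOutputStream, org.apache.hadoop.io.IOUtils, and org.apache.hadoop.util.Progressable in addition to those already used by HDFSApp.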
Run the test class and you can see the progress printed. Granted, a wall of dots is rather abstract, but it still beats staring at nothing and wondering whether the process has hung.
setUp-----------
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
................................................................................
(many more dots omitted)
................................................................----------tearDown------

Process finished with exit code 0
........................
Open the terminal and run -ls again: the upload succeeded.
[hadoop@hadoop000 software]$ hadoop fs -ls /hdfsapi/test
Found 4 items
-rw-r--r--   3 hadoop supergroup         14 2019-04-19 16:31 /hdfsapi/test/a.txt
-rw-r--r--   1 hadoop supergroup         28 2019-04-19 16:50 /hdfsapi/test/c.txt
-rw-r--r--   1 hadoop supergroup  181367942 2019-04-20 20:10 /hdfsapi/test/jdk.zip
-rw-r--r--   1 hadoop supergroup       2732 2019-04-20 19:51 /hdfsapi/test/t.txt
Now that we can upload, the natural question is: how do we download? Straight to the code; it is so similar to the above that it needs no further explanation.
/**
 * Copy an HDFS file to the local file system: download.
 * @throws Exception
 */
@Test
public void copyToLocalFile() throws Exception {
    Path src = new Path("/hdfsapi/test/t.txt");
    Path dst = new Path("/home/hadoop/app");
    fileSystem.copyToLocalFile(src, dst);
}
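One caveat worth noting, as an aside not covered in the walkthrough above: on platforms without the native Hadoop libraries (notably Windows), the two-argument copyToLocalFile can fail while writing the local .crc checksum file. FileSystem also provides a four-argument overload that lets you write through the raw local file system and skip the checksum. A sketch, reusing the same paths as the download test:

```java
/*
 * Download from HDFS without writing a local .crc checksum file.
 * Useful when the native Hadoop libraries are unavailable.
 */
@Test
public void copyToLocalFileRaw() throws Exception {
    Path src = new Path("/hdfsapi/test/t.txt");
    Path dst = new Path("/home/hadoop/app");
    // delSrc = false: keep the source file on HDFS
    // useRawLocalFileSystem = true: skip the local checksum file
    fileSystem.copyToLocalFile(false, src, dst, true);
}
```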