1. Download Mahout
I downloaded the latest release at the time: mahout-distribution-0.9.
2. Extract Mahout into a directory of your choice. I put it in /Users/jia/Documents/hadoop-0.20.2, i.e. inside the Hadoop installation directory.
3. Configure the environment for Mahout
Open a terminal and open the directory containing the profile file:
JIAS-MacBook-Pro:~ jia$ open /etc
Copy the profile file to the desktop, edit it there, and append the following environment variables at the end:
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home
export HADOOP_HOME=/Users/jia/Documents/hadoop-0.20.2
export MAHOUT_HOME=/Users/jia/Documents/hadoop-0.20.2/mahout-distribution-0.9
export MAVEN_HOME=/Users/jia/Documents/apache-maven-3.2.2
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$MAVEN_HOME/bin:$MAHOUT_HOME/bin
export HADOOP_CONF_DIR=/Users/jia/Documents/hadoop-0.20.2/conf
export MAHOUT_CONF_DIR=/Users/jia/Documents/hadoop-0.20.2/mahout-distribution-0.9/conf
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$MAHOUT_HOME/lib:$HADOOP_CONF_DIR:$MAHOUT_CONF_DIR
(Note: the paths must be absolute; relative paths like Documents/hadoop-0.20.2 will not resolve correctly in /etc/profile, and the Java classpath variable is conventionally spelled CLASSPATH.)
Then copy the edited profile from the desktop back over /etc/profile; you will be prompted for the administrator password.
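The edit-and-overwrite steps above can be sketched as a short shell session (this reflects my workflow; the desktop path assumes the default macOS layout):

```shell
# Copy the system profile somewhere editable
cp /etc/profile ~/Desktop/profile

# ... edit ~/Desktop/profile and append the export lines above ...

# Overwrite /etc/profile with the edited copy (asks for the admin password)
sudo cp ~/Desktop/profile /etc/profile

# Reload it in the current shell so the new variables take effect
source /etc/profile
```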
Notes:
1. If you are configuring against Hadoop 2.6 installed on Ubuntu, the paths look like this instead:

export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-amd64
export HADOOP_HOME=/home/sendi/hadoop-2.6.0
export MAHOUT_HOME=/home/sendi/mahout-distribution-0.9
export MAVEN_HOME=/home/sendi/apache-maven-3.3.3
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$MAVEN_HOME/bin:$MAHOUT_HOME/bin
export HADOOP_CONF_DIR=/home/sendi/hadoop-2.6.0/etc/hadoop
export MAHOUT_CONF_DIR=/home/sendi/mahout-distribution-0.9/conf
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$MAHOUT_HOME/lib:$HADOOP_CONF_DIR:$MAHOUT_CONF_DIR
2. When configuring MAHOUT_CONF_DIR, some sites tell you to use export MAHOUT_CONF_DIR=.../mahout-distribution-0.9/src/conf.
For version 0.9 the correct setting is export MAHOUT_CONF_DIR=/Users/jia/Documents/hadoop-0.20.2/mahout-distribution-0.9/conf, because if you open the mahout folder you will see there is no src directory.
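A quick way to check which layout your download actually has is simply to list the distribution directory (the paths here match the install location used above):

```shell
# Mahout 0.9 ships a top-level conf/ directory but no src/ directory
ls /Users/jia/Documents/hadoop-0.20.2/mahout-distribution-0.9

# The conf directory is what MAHOUT_CONF_DIR should point at
ls /Users/jia/Documents/hadoop-0.20.2/mahout-distribution-0.9/conf
```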
The Mahout site offers several archives for version 0.9. I tried them myself: the first two small archives do not work.
I went with the fifth one, about 78 MB.
4. Verify that Mahout is configured correctly
4.1 Start Hadoop
JIAS-MacBook-Pro:hadoop-0.20.2 jia$ bin/start-all.sh
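Before checking Mahout itself, it is worth confirming that the Hadoop daemons actually came up; `jps` (shipped with the JDK) lists the running Java processes:

```shell
# In pseudo-distributed mode with hadoop 0.20.x, five daemons should appear:
# NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker
jps
```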
4.2 Check Mahout
JIAS-MacBook-Pro:mahout-distribution-0.9 jia$ bin/mahout
MAHOUT_LOCAL is not set; adding HADOOP_CONF_DIR to classpath.
hadoop binary is not in PATH,HADOOP_HOME/bin,HADOOP_PREFIX/bin, running locally
An example program must be given as the first argument.
Valid program names are:
  arff.vector: : Generate Vectors from an ARFF file or directory
  baumwelch: : Baum-Welch algorithm for unsupervised HMM training
  canopy: : Canopy clustering
  cat: : Print a file or resource as the logistic regression models would see it
  cleansvd: : Cleanup and verification of SVD output
  clusterdump: : Dump cluster output to text
  clusterpp: : Groups Clustering Output In Clusters
  cmdump: : Dump confusion matrix in HTML or text formats
  concatmatrices: : Concatenates 2 matrices of same cardinality into a single matrix
  cvb: : LDA via Collapsed Variation Bayes (0th deriv. approx)
  cvb0_local: : LDA via Collapsed Variation Bayes, in memory locally.
  evaluateFactorization: : compute RMSE and MAE of a rating matrix factorization against probes
  fkmeans: : Fuzzy K-means clustering
  hmmpredict: : Generate random sequence of observations by given HMM
  itemsimilarity: : Compute the item-item-similarities for item-based collaborative filtering
  kmeans: : K-means clustering
  lucene.vector: : Generate Vectors from a Lucene index
  lucene2seq: : Generate Text SequenceFiles from a Lucene index
  matrixdump: : Dump matrix in CSV format
  matrixmult: : Take the product of two matrices
  parallelALS: : ALS-WR factorization of a rating matrix
  qualcluster: : Runs clustering experiments and summarizes results in a CSV
  recommendfactorized: : Compute recommendations using the factorization of a rating matrix
  recommenditembased: : Compute recommendations using item-based collaborative filtering
  regexconverter: : Convert text files on a per line basis based on regular expressions
  resplit: : Splits a set of SequenceFiles into a number of equal splits
  rowid: : Map SequenceFile<Text,VectorWritable> to {SequenceFile<IntWritable,VectorWritable>, SequenceFile<IntWritable,Text>}
  rowsimilarity: : Compute the pairwise similarities of the rows of a matrix
  runAdaptiveLogistic: : Score new production data using a probably trained and validated AdaptivelogisticRegression model
  runlogistic: : Run a logistic regression model against CSV data
  seq2encoded: : Encoded Sparse Vector generation from Text sequence files
  seq2sparse: : Sparse Vector generation from Text sequence files
  seqdirectory: : Generate sequence files (of Text) from a directory
  seqdumper: : Generic Sequence File dumper
  seqmailarchives: : Creates SequenceFile from a directory containing gzipped mail archives
  seqwiki: : Wikipedia xml dump to sequence file
  spectralkmeans: : Spectral k-means clustering
  split: : Split Input data into test and train sets
  splitDataset: : split a rating dataset into training and probe parts
  ssvd: : Stochastic SVD
  streamingkmeans: : Streaming k-means clustering
  svd: : Lanczos Singular Value Decomposition
  testnb: : Test the Vector-based Bayes classifier
  trainAdaptiveLogistic: : Train an AdaptivelogisticRegression model
  trainlogistic: : Train a logistic regression using stochastic gradient descent
  trainnb: : Train the Vector-based Bayes classifier
  transpose: : Take the transpose of a matrix
  validateAdaptiveLogistic: : Validate an AdaptivelogisticRegression model against hold-out data set
  vecdist: : Compute the distances between a set of Vectors (or Cluster or Canopy, they must fit in memory) and a list of Vectors
  vectordump: : Dump vectors from a sequence file to text
  viterbi: : Viterbi decoding of hidden states from given output states sequence
A note on this output: when you see the two lines below, you might assume something went wrong, but it did not. MAHOUT_LOCAL controls whether Mahout runs locally: once it is set, Mahout will not run on Hadoop at all, and the HADOOP_CONF_DIR and HADOOP_HOME settings are silently ignored.
I was stuck on this point for quite a while.
MAHOUT_LOCAL is not set; adding HADOOP_CONF_DIR to classpath.
hadoop binary is not in PATH,HADOOP_HOME/bin,HADOOP_PREFIX/bin, running locally
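In other words, the two modes can be toggled like this (a minimal sketch, using the variables configured earlier):

```shell
# Force local execution: Hadoop is bypassed and
# HADOOP_CONF_DIR / HADOOP_HOME are ignored
export MAHOUT_LOCAL=true
bin/mahout

# Unset it again so jobs are submitted to the Hadoop cluster
unset MAHOUT_LOCAL
```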
5. Run a Mahout algorithm
5.1 Download the test data from the following address:
http://archive.ics.uci.edu/ml/databases/synthetic_control/synthetic_control.data
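The download can also be done from the terminal; a sketch using curl (wget works equally well):

```shell
# Fetch the synthetic control chart dataset from the UCI repository
curl -O http://archive.ics.uci.edu/ml/databases/synthetic_control/synthetic_control.data

# Sanity check: the file contains 600 time series, one per line
wc -l synthetic_control.data
```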
5.2 Create a test directory testdata and import the data into it
JIAS-MacBook-Pro:hadoop-0.20.2 jia$ bin/hadoop fs -mkdir testdata
5.3 Upload the test data to HDFS. Note that the test data must not be saved as a Pages document on the Mac; instead, create a plain file, e.g. with: touch data
JIAS-MacBook-Pro:hadoop-0.20.2 jia$ bin/hadoop fs -put workspace/data testdata/
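To confirm the upload worked before running the job, list the HDFS directory (the file name `data` follows the touch command above):

```shell
# The uploaded file should show up under testdata
bin/hadoop fs -ls testdata

# Optionally peek at the beginning of the data
bin/hadoop fs -cat testdata/data | head -n 2
```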
5.4 Run Mahout's k-means algorithm
JIAS-MacBook-Pro:hadoop-0.20.2 jia$ bin/hadoop jar mahout-distribution-0.9/mahout-examples-0.9-job.jar org.apache.mahout.clustering.syntheticcontrol.kmeans.Job
5.5 View the results
JIAS-MacBook-Pro:~ jia$ cd Documents/hadoop-0.20.2/
JIAS-MacBook-Pro:hadoop-0.20.2 jia$ bin/hadoop fs -ls output/
Found 15 items
-rwxrwxrwx   1 jia staff  194 2014-08-03 14:42 /Users/jia/Documents/hadoop-0.20.2/output/_policy
drwxr-xr-x   - jia staff  136 2014-08-03 14:42 /Users/jia/Documents/hadoop-0.20.2/output/clusteredPoints
drwxr-xr-x   - jia staff  544 2014-08-03 14:41 /Users/jia/Documents/hadoop-0.20.2/output/clusters-0
drwxr-xr-x   - jia staff  204 2014-08-03 14:41 /Users/jia/Documents/hadoop-0.20.2/output/clusters-1
drwxr-xr-x   - jia staff  204 2014-08-03 14:42 /Users/jia/Documents/hadoop-0.20.2/output/clusters-10-final
drwxr-xr-x   - jia staff  204 2014-08-03 14:41 /Users/jia/Documents/hadoop-0.20.2/output/clusters-2
drwxr-xr-x   - jia staff  204 2014-08-03 14:41 /Users/jia/Documents/hadoop-0.20.2/output/clusters-3
drwxr-xr-x   - jia staff  204 2014-08-03 14:41 /Users/jia/Documents/hadoop-0.20.2/output/clusters-4
drwxr-xr-x   - jia staff  204 2014-08-03 14:41 /Users/jia/Documents/hadoop-0.20.2/output/clusters-5
drwxr-xr-x   - jia staff  204 2014-08-03 14:41 /Users/jia/Documents/hadoop-0.20.2/output/clusters-6
drwxr-xr-x   - jia staff  204 2014-08-03 14:41 /Users/jia/Documents/hadoop-0.20.2/output/clusters-7
drwxr-xr-x   - jia staff  204 2014-08-03 14:42 /Users/jia/Documents/hadoop-0.20.2/output/clusters-8
drwxr-xr-x   - jia staff  204 2014-08-03 14:42 /Users/jia/Documents/hadoop-0.20.2/output/clusters-9
drwxr-xr-x   - jia staff  136 2014-08-03 14:41 /Users/jia/Documents/hadoop-0.20.2/output/data
drwxr-xr-x   - jia staff  136 2014-08-03 14:41 /Users/jia/Documents/hadoop-0.20.2/output/random-seeds
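The output directories hold SequenceFiles, not plain text. To read the final clusters, the clusterdump program from the list above can be used; this is a sketch, and the output file name clusteranalyze.txt is just an example:

```shell
# Dump the final centroids plus their assigned points to local text
bin/mahout clusterdump \
  --input output/clusters-10-final \
  --pointsDir output/clusteredPoints \
  --output clusteranalyze.txt   # hypothetical output file name

# Inspect the result locally
head clusteranalyze.txt
```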