Contents:
Original Nutch tutorial text (if this infringes any rights, it will be removed immediately upon notification)
Environment setup
Ubuntu 17.04 + JDK 1.7 + Nutch 1.9 and Solr 4.10.1
See https://www.cs.upc.edu/~CAIM/lab/session4crawling.html for the version notes
See https://wiki.apache.org/nutch/NutchTutorial
Nutch Tutorial
Introduction
Nutch is a well matured, production-ready web crawler. Nutch 1.x enables fine-grained configuration and relies on Apache Hadoop data structures, which are great for batch processing. Being pluggable and modular has its benefits: Nutch provides extensible interfaces such as Parse, Index and ScoringFilter for custom implementations, e.g. Apache Tika for parsing. Additionally, pluggable indexing exists for Apache Solr, Elastic Search, SolrCloud, etc. We can discover page hyperlinks in an automated manner, reducing a lot of maintenance work such as checking for broken links, and create a copy of all the visited pages for searching over. This tutorial explains how to use Nutch together with Apache Solr. Solr is an open-source full-text search framework; with Solr we can search the pages visited by Nutch. Luckily, integration between Nutch and Solr is pretty straightforward. Apache Nutch supports Solr out of the box, which greatly simplifies Nutch-Solr integration. It also removes the legacy dependence on Apache Tomcat for running the old Nutch web application and on Apache Lucene for indexing. Binary releases can be downloaded from http://www.apache.org/dyn/closer.cgi/nutch/.
Learning Outcomes
By the end of this tutorial you will
- Have a local Nutch crawler set up and configured to crawl on one machine
- Have learned how to understand and configure Nutch runtime configuration, including seed URL lists, URL filters, etc.
- Have executed a Nutch crawl cycle and viewed the results in the crawl database
- Have indexed Nutch crawl records into Apache Solr for full-text search
Any issues with this tutorial should be reported to the Nutch user@ mailing list.
Table of Contents
Contents
1. Introduction
2. Learning Outcomes
3. Table of Contents
4. Steps
5. Requirements
6. Install Nutch
7. Verify your Nutch installation
8. Crawl your first website
9. Setup Solr for search
10. Verify Solr installation
11. Integrate Solr with Nutch
12. What's Next
Steps:
This tutorial describes the installation and use of Nutch 1.x. For how to compile and set up Nutch 2.x with HBase, see the Nutch2Tutorial.
Requirements
- Unix environment, or Windows-Cygwin environment
- Java Runtime/Development Environment (1.7)
- (Source build only) Apache Ant
Install Nutch
Option 1: Set up Nutch from a binary distribution
- Download a binary package (apache-nutch-1.X-bin.zip) from http://www.apache.org/dyn/closer.cgi/nutch/
- Unzip your binary Nutch package. There should be an apache-nutch-1.X directory after extraction
- cd apache-nutch-1.x/
From now on, we will use ${NUTCH_RUNTIME_HOME} to refer to the current directory (apache-nutch-1.X/).
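For convenience in the shell, this reference can be turned into an actual environment variable; a small sketch (the path is an assumption, adjust it to wherever you unpacked Nutch):
export NUTCH_RUNTIME_HOME="$HOME/apache-nutch-1.9"   # assumed unpack location
cd "$NUTCH_RUNTIME_HOME"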
Option 2: Set up Nutch from a source distribution
Advanced users may also use the source distribution (a condensed command sketch follows below):
- Download a source package (apache-nutch-1.X-src.zip)
- Unzip
- cd apache-nutch-1.x/
- Run ant in this folder (cf. RunNutchInEclipse, https://wiki.apache.org/nutch/RunNutchInEclipse)
- Now there is a directory runtime/local which contains a ready-to-use Nutch installation
When the source distribution is used, ${NUTCH_RUNTIME_HOME} refers to apache-nutch-1.x/runtime/local/.
Note that
- config files should be modified in apache-nutch-1.x/runtime/local/conf/
- ant clean will remove this directory (keep copies of modified config files)
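For reference, the source-build steps condensed into one hedged sketch (the 1.9 release number and archive URL are assumptions; use the mirror and release that match your setup):
wget https://archive.apache.org/dist/nutch/1.9/apache-nutch-1.9-src.zip   # assumed release/URL
unzip apache-nutch-1.9-src.zip
cd apache-nutch-1.9/
ant                      # builds the runtime under runtime/local
ls runtime/local/bin     # bin/nutch should now be present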
- 運行"/bin/nutch"- 如果你看到和下面相同的輸出,你就可以確定安裝正確。
驗證你的Nutch的安裝
Usage: nutch COMMAND where command is one of:
readdb read / dump crawl db
mergedb merge crawldb-s, with optional filtering
readlinkdb read / dump link db
inject inject new urls into the database
generate generate new segments to fetch from crawl db
freegen generate new segments to fetch from text files
fetch fetch a segment's pages
...
Some troubleshooting tips:
- If you see "Permission denied", run the following command:
chmod +x bin/nutch
- If you see that JAVA_HOME is not set, set JAVA_HOME. On Mac, you can run the following command or add it to ~/.bashrc:
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.7/Home
# note that the actual path may be different on your system
On Debian or Ubuntu, you can run the following command or add it to ~/.bashrc:
export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")
You may also have to update your /etc/hosts file. If so, you can add the following:
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost.localdomain localhost LMC-032857
::1 ip6-localhost ip6-loopback
fe80::1%lo0 ip6-localhost ip6-loopback
Note that the LMC-032857 above should be replaced with your machine name.
Crawl your first website
Nutch requires two configuration changes before a website can be crawled:
1. Customize your crawl properties, where at a minimum you provide a name for your crawler so that external servers can recognize it
2. Set a seed list of URLs to crawl
Customize your crawl properties
- Default crawl properties can be viewed and edited in conf/nutch-default.xml; most of them can be used without modification.
- The file conf/nutch-site.xml is the place to add your own custom crawl properties that override conf/nutch-default.xml. The only required modification for this file is to override the value field of the http.agent.name property, i.e. add your agent name in the value field of http.agent.name in conf/nutch-site.xml, for example:
<property>
<name>http.agent.name</name>
<value>My Nutch Spider</value>
</property>
Create a URL seed list
- A URL seed list is a list of websites, one per line, that Nutch will crawl
- The file conf/regex-urlfilter.txt provides regular expressions that allow Nutch to filter and narrow the types of web resources to crawl and download
Create a URL (Uniform Resource Locator) seed list
- mkdir -p urls
- cd urls
- touch seed.txt to create a text file seed.txt under urls/ with the following content (one URL per line for each site you want Nutch to crawl):
http://nutch.apache.org/
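Equivalently, from ${NUTCH_RUNTIME_HOME} the seed file can be created and populated in one step; a minimal sketch:
mkdir -p urls
echo "http://nutch.apache.org/" > urls/seed.txt   # one seed URL per line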
(Optional) Configure regular expression filters
Edit conf/regex-urlfilter.txt and replace
# accept anything else
+.
with a regular expression matching the domain you wish to crawl. For example, if you wish to limit the crawl to the nutch.apache.org domain, the line should read:
+^http://([a-z0-9]*\.)*nutch.apache.org/
Notes:
The leading + marks this line as an include rule in regex-urlfilter.txt (a leading - would mark an exclude rule); the regular expression itself follows it.
^ anchors the match at the beginning of the URL.
( ) marks the start and end of a group.
[ starts a character class; a-z0-9 inside it matches lowercase letters and digits.
* matches the preceding item zero or more times.
\. is an escaped literal dot.
This will include any URL in the nutch.apache.org domain.
NOTE: Not specifying any domains to include within regex-urlfilter.txt will cause every domain linked from your seed URLs to be crawled as well.
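As an illustration, several include rules can simply be listed on separate lines, one per domain; in this hedged sketch the second domain is made up for the example, and URLs matching no rule are rejected by the filter:
# accept only these domains
+^http://([a-z0-9]*\.)*nutch.apache.org/
+^http://([a-z0-9]*\.)*example.org/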
Using Individual Commands for Whole-Web Crawling
NOTE: If you previously modified the file conf/regex-urlfilter.txt as covered above, you will need to change it back.
Whole-web crawling is designed to handle very large crawls which may take weeks to complete and run on multiple machines. It also permits more control over the crawl process, as well as incremental crawling. It is important to note that whole-web crawling does not necessarily mean crawling the entire World Wide Web. We can limit a whole-web crawl to just a list of the URLs we want to crawl. This is done by using a filter just like the one we used with the crawl command above.
Step-by-Step: Concepts
Nutch data is composed of:
1. The crawl database, or crawldb. This contains information about every URL known to Nutch, including whether it was fetched and, if so, when.
2. The link database, or linkdb. This contains the list of known links to each URL, including both the source URL and the anchor text of each link.
3. A set of segments. Each segment is a set of URLs that are fetched as a unit. Segments are directories with the following subdirectories:
- a crawl_generate names a set of URLs to be fetched
- a crawl_fetch contains the status of fetching each URL
- a content contains the raw content retrieved from each URL
- a parse_text contains the parsed text of each URL
- a parse_data contains outlinks and metadata parsed from each URL
- a crawl_parse contains the outlink URLs, used to update the crawldb
Translator's notes:
crawl_fetch: the status here is the fetch status of each URL (whether the fetch succeeded, failed, was redirected, etc.).
content: the raw content is the response retrieved for each URL (whether this covers the whole document object is still to be confirmed).
parse_text: the parsed text is the textual content extracted from the page, i.e. the text inside tags such as title, h and p (to be verified).
parse_data: outlinks are links to external sites; metadata is data about data.
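Once a crawl has produced these directories, they can be inspected with the standard readdb and readseg commands; a brief sketch (the segment name below is just the example name used later in this tutorial):
bin/nutch readdb crawl/crawldb -stats                              # summary statistics of the crawl database
bin/nutch readseg -list -dir crawl/segments                        # list all segments with basic counts
bin/nutch readseg -dump crawl/segments/20131108063838 segdump      # dump one segment into the segdump directory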
Step-by-Step: Seeding the crawldb with a list of URLs
Option 1: Bootstrapping from the DMOZ database.
Option 2: Bootstrapping from an initial seed list.
This option uses the seed list created as covered above.
bin/nutch inject crawl/crawldb urls
Step-by-Step: Fetching
To fetch, we first generate a fetch list from the database:
bin/nutch generate crawl/crawldb crawl/segments
This generates a fetch list for all of the pages due to be fetched. The fetch list is placed in a newly created segment directory. The segment directory is named by the time it was created. We save the name of this segment in the shell variable s1:
s1=`ls -d crawl/segments/2* | tail -1`
echo $s1
Now we run the fetcher on this segment:
bin/nutch fetch $s1
Then we parse the entries:
bin/nutch parse $s1
When this is complete, we update the database with the results of the fetch:
bin/nutch updatedb crawl/crawldb $s1
Now the database contains updated entries for all the initial pages as well as new entries that correspond to newly discovered pages linked from the initial set.
Now we generate and fetch a new segment containing the top-scoring 1,000 pages:
bin/nutch generate crawl/crawldb crawl/segments -topN 1000
s2=`ls -d crawl/segments/2* | tail -1`
echo $s2
bin/nutch fetch $s2
bin/nutch parse $s2
bin/nutch updatedb crawl/crawldb $s2
Let's do one more round of fetching:
bin/nutch generate crawl/crawldb crawl/segments -topN 1000
s3=`ls -d crawl/segments/2* | tail -1`
echo $s3
bin/nutch fetch $s3
bin/nutch parse $s3
bin/nutch updatedb crawl/crawldb $s3
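The rounds above all repeat the same generate / fetch / parse / updatedb sequence, so they can be wrapped in a small shell loop; a minimal sketch (the round count and -topN value are arbitrary illustrative choices):
for round in 1 2 3; do
  bin/nutch generate crawl/crawldb crawl/segments -topN 1000
  seg=`ls -d crawl/segments/2* | tail -1`     # newest segment created by generate
  bin/nutch fetch $seg
  bin/nutch parse $seg
  bin/nutch updatedb crawl/crawldb $seg
done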
By this point we have fetched a few thousand pages. Let's invert the links and index them!
Step-by-Step: Inverting Links
Before indexing, we first invert all of the links, so that we can index incoming anchor text together with the pages.
bin/nutch invertlinks crawl/linkdb -dir crawl/segments
We are now ready to do the indexing with Apache Solr.
NutchTutorial
Introduction
Nutch is a well matured, production ready Web crawler. Nutch 1.x enables fine grained configuration, relying on Apache Hadoop data structures, which are great for batch processing. Being pluggable and modular of course has its benefits, Nutch provides extensible interfaces such as Parse, Index and ScoringFilter's for custom implementations e.g. Apache Tika for parsing. Additionally, pluggable indexing exists for Apache Solr, Elastic Search, SolrCloud, etc. We can find Web page hyperlinks in an automated manner, reduce lots of maintenance work, for example checking broken links, and create a copy of all the visited pages for searching over. This tutorial explains how to use Nutch with Apache Solr. Solr is an open source full text search framework, with Solr we can search the visited pages from Nutch. Luckily, integration between Nutch and Solr is pretty straightforward. Apache Nutch supports Solr out-the-box, greatly simplifying Nutch-Solr integration. It also removes the legacy dependence upon both Apache Tomcat for running the old Nutch Web Application and upon Apache Lucene for indexing. Just download a binary release from here.
Learning Outcomes
By the end of this tutorial you will
- Have a configured local Nutch crawler setup to crawl on one machine
- Learned how to understand and configure Nutch runtime configuration including seed URL lists, URLFilters, etc.
- Have executed a Nutch crawl cycle and viewed the results of the Crawl Database
- Indexed Nutch crawl records into Apache Solr for full text search
Any issues with this tutorial should be reported to the Nutch user@ list.
Table of Contents
Contents
- Introduction
- Learning Outcomes
- Table of Contents
- Steps
- Requirements
- Install Nutch
- Verify your Nutch installation
- Crawl your first website
- Setup Solr for search
- Verify Solr installation
- Integrate Solr with Nutch
- Whats Next
Steps
This tutorial describes the installation and use of Nutch 1.x (current release is 1.9). How to compile and set up Nutch 2.x with HBase, see Nutch2Tutorial.
Requirements
- Unix environment, or Windows-Cygwin environment
- Java Runtime/Development Environment (1.7)
- (Source build only) Apache Ant: http://ant.apache.org/
Install Nutch
Option 1: Setup Nutch from a binary distribution
- Download a binary package (apache-nutch-1.X-bin.zip) from here.
- Unzip your binary Nutch package. There should be a folder apache-nutch-1.X.
- cd apache-nutch-1.X/
From now on, we are going to use ${NUTCH_RUNTIME_HOME} to refer to the current directory (apache-nutch-1.X/).
Option 2: Set up Nutch from a source distribution
Advanced users may also use the source distribution:
- Download a source package (apache-nutch-1.X-src.zip)
- Unzip
- cd apache-nutch-1.X/
- Run ant in this folder (cf. RunNutchInEclipse)
- Now there is a directory runtime/local which contains a ready to use Nutch installation.
When the source distribution is used ${NUTCH_RUNTIME_HOME} refers to apache-nutch-1.X/runtime/local/. Note that
- config files should be modified in apache-nutch-1.X/runtime/local/conf/
- ant clean will remove this directory (keep copies of modified config files)
Verify your Nutch installation
- run "bin/nutch" - You can confirm a correct installation if you see something similar to the following:
Usage: nutch COMMAND where command is one of:
readdb read / dump crawl db
mergedb merge crawldb-s, with optional filtering
readlinkdb read / dump link db
inject inject new urls into the database
generate generate new segments to fetch from crawl db
freegen generate new segments to fetch from text files
fetch fetch a segment's pages
...
Some troubleshooting tips:
- Run the following command if you are seeing "Permission denied":
chmod +x bin/nutch
- Setup JAVA_HOME if you are seeing JAVA_HOME not set. On Mac, you can run the following command or add it to ~/.bashrc:
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.7/Home
# note that the actual path may be different on your system
On Debian or Ubuntu, you can run the following command or add it to ~/.bashrc:
export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")
You may also have to update your /etc/hosts file. If so you can add the following
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost.localdomain localhost LMC-032857
::1 ip6-localhost ip6-loopback
fe80::1%lo0 ip6-localhost ip6-loopback
Note that the LMC-032857 above should be replaced with your machine name.
Crawl your first website
Nutch requires two configuration changes before a website can be crawled:
- Customize your crawl properties, where at a minimum, you provide a name for your crawler for external servers to recognize
- Set a seed list of URLs to crawl
Customize your crawl properties
- Default crawl properties can be viewed and edited within conf/nutch-default.xml - where most of these can be used without modification
- The file conf/nutch-site.xml serves as a place to add your own custom crawl properties that overwrite conf/nutch-default.xml. The only required modification for this file is to override the value field of the http.agent.name
- i.e. Add your agent name in the value field of the http.agent.name property in conf/nutch-site.xml, for example:
<property>
<name>http.agent.name</name>
<value>My Nutch Spider</value>
</property>
Create a URL seed list
- A URL seed list includes a list of websites, one-per-line, which nutch will look to crawl
- The file conf/regex-urlfilter.txt will provide Regular Expressions that allow nutch to filter and narrow the types of web resources to crawl and download
Create a URL seed list
- mkdir -p urls
- cd urls
- touch seed.txt to create a text file seed.txt under urls/ with the following content (one URL per line for each site you want Nutch to crawl).
http://nutch.apache.org/
(Optional) Configure Regular Expression Filters
Edit the file conf/regex-urlfilter.txt and replace
# accept anything else
+.
with a regular expression matching the domain you wish to crawl. For example, if you wished to limit the crawl to the nutch.apache.org domain, the line should read:
+^http://([a-z0-9]*\.)*nutch.apache.org/
This will include any URL in the domain nutch.apache.org.
NOTE: Not specifying any domains to include within regex-urlfilter.txt will lead to all domains linking to your seed URLs file being crawled as well.
Using Individual Commands for Whole-Web Crawling
NOTE: If you previously modified the file conf/regex-urlfilter.txt as covered here you will need to change it back.
Whole-Web crawling is designed to handle very large crawls which may take weeks to complete, running on multiple machines. This also permits more control over the crawl process, and incremental crawling. It is important to note that whole Web crawling does not necessarily mean crawling the entire World Wide Web. We can limit a whole Web crawl to just a list of the URLs we want to crawl. This is done by using a filter just like the one we used when we did the crawl command (above).
Step-by-Step: Concepts
Nutch data is composed of:
- The crawl database, or crawldb. This contains information about every URL known to Nutch, including whether it was fetched, and, if so, when.
- The link database, or linkdb. This contains the list of known links to each URL, including both the source URL and anchor text of the link.
- A set of segments. Each segment is a set of URLs that are fetched as a unit. Segments are directories with the following subdirectories:
- a crawl_generate names a set of URLs to be fetched
- a crawl_fetch contains the status of fetching each URL
- a content contains the raw content retrieved from each URL
- a parse_text contains the parsed text of each URL
- a parse_data contains outlinks and metadata parsed from each URL
- a crawl_parse contains the outlink URLs, used to update the crawldb
Step-by-Step: Seeding the crawldb with a list of URLs
Option 1: Bootstrapping from the DMOZ database.
The injector adds URLs to the crawldb. Let's inject URLs from the DMOZ Open Directory. First we must download and uncompress the file listing all of the DMOZ pages. (This is a 200+ MB file, so this will take a few minutes.)
wget http://rdf.dmoz.org/rdf/content.rdf.u8.gz
gunzip content.rdf.u8.gz
Next we select a random subset of these pages. (We use a random subset so that everyone who runs this tutorial doesn't hammer the same sites.) DMOZ contains around three million URLs. We select one out of every 5,000, so that we end up with around 1,000 URLs:
mkdir dmoz
bin/nutch org.apache.nutch.tools.DmozParser content.rdf.u8 -subset 5000 > dmoz/urls
The parser also takes a few minutes, as it must parse the full file. Finally, we initialize the crawldb with the selected URLs.
bin/nutch inject crawl/crawldb dmoz
Now we have a Web database with around 1,000 as-yet unfetched URLs in it.
Option 2. Bootstrapping from an initial seed list.
This option shadows the creation of the seed list as covered here.
bin/nutch inject crawl/crawldb urls
Step-by-Step: Fetching
To fetch, we first generate a fetch list from the database:
bin/nutch generate crawl/crawldb crawl/segments
This generates a fetch list for all of the pages due to be fetched. The fetch list is placed in a newly created segment directory. The segment directory is named by the time it's created. We save the name of this segment in the shell variable s1:
s1=`ls -d crawl/segments/2* | tail -1`
echo $s1
Now we run the fetcher on this segment with:
bin/nutch fetch $s1
Then we parse the entries:
bin/nutch parse $s1
When this is complete, we update the database with the results of the fetch:
bin/nutch updatedb crawl/crawldb $s1
Now the database contains both updated entries for all initial pages as well as new entries that correspond to newly discovered pages linked from the initial set.
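Optionally, the state of the crawldb can be checked at this point with the standard readdb command; the exact counts will depend on your crawl:
bin/nutch readdb crawl/crawldb -stats   # shows totals of fetched and unfetched URLs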
Now we generate and fetch a new segment containing the top-scoring 1,000 pages:
bin/nutch generate crawl/crawldb crawl/segments -topN 1000
s2=`ls -d crawl/segments/2* | tail -1`
echo $s2
bin/nutch fetch $s2
bin/nutch parse $s2
bin/nutch updatedb crawl/crawldb $s2
Let's fetch one more round:
bin/nutch generate crawl/crawldb crawl/segments -topN 1000
s3=`ls -d crawl/segments/2* | tail -1`
echo $s3
bin/nutch fetch $s3
bin/nutch parse $s3
bin/nutch updatedb crawl/crawldb $s3
By this point we've fetched a few thousand pages. Let's invert links and index them!
Step-by-Step: Invertlinks
Before indexing we first invert all of the links, so that we may index incoming anchor text with the pages.
bin/nutch invertlinks crawl/linkdb -dir crawl/segments
We are now ready to search with Apache Solr.
Step-by-Step: Indexing into Apache Solr
Note: For this step you should have a Solr installation. If you have not yet integrated Nutch with Solr, you should read the section on integrating Solr with Nutch below.
Now we are ready to go on and index all the resources. For more information see the command line options.
Usage: Indexer <crawldb> [-linkdb <linkdb>] [-params k1=v1&k2=v2...] (<segment> ... | -dir <segments>) [-noCommit] [-deleteGone] [-filter] [-normalize] [-addBinaryContent] [-base64]
Example: bin/nutch index http://localhost:8983/solr crawl/crawldb/ -linkdb crawl/linkdb/ crawl/segments/20131108063838/ -filter -normalize -deleteGone
Step-by-Step: Deleting Duplicates
Once the entire contents have been indexed, duplicate URLs must be removed; deduplicating in this way ensures that the indexed URLs are unique.
- Map: Identity map where keys are digests and values are SolrRecord instances (which contain id, boost and timestamp)
- Reduce: After map, SolrRecords with the same digest will be grouped together. Now, of these documents with the same digests, delete all of them except the one with the highest score (boost field). If two (or more) documents have the same score, then the document with the latest timestamp is kept. Again, every other is deleted from solr index.
Usage: bin/nutch dedup <solr url>
Example: /bin/nutch dedup http://localhost:8983/solr
For more information see dedup documentation.
Step-by-Step: Cleaning Solr
The class scans a crawldb directory looking for entries with status DB_GONE (404) and sends delete requests to Solr for those documents. Once Solr receives the request the aforementioned documents are duly deleted. This maintains a healthier quality of Solr index.
Usage: bin/nutch clean <crawldb> <index_url>
Example: /bin/nutch clean crawl/crawldb/ http://localhost:8983/solr
For more information see clean documentation.
Using the crawl script
If you have followed the section above on how the crawling can be done step by step, you might be wondering how a bash script can be written to automate all the process described above.
Nutch developers have written one for you :), and it is available at bin/crawl.
Usage: crawl [-i|--index] [-D "key=value"] <Seed Dir> <Crawl Dir> <Num Rounds>
-i|--index   Indexes crawl results into a configured indexer
-D           A Java property to pass to Nutch calls
Seed Dir     Directory in which to look for a seeds file
Crawl Dir    Directory where the crawl/link/segments dirs are saved
Num Rounds   The number of rounds to run this crawl for
Example: bin/crawl -i -D solr.server.url=http://localhost:8983/solr/ urls/ TestCrawl/ 2
The crawl script has lot of parameters set, and you can modify the parameters to your needs. It would be ideal to understand the parameters before setting up big crawls.
Setup Solr for search
- download binary file from here
- unzip to $HOME/apache-solr, we will now refer to this as ${APACHE_SOLR_HOME}
- cd ${APACHE_SOLR_HOME}/example
- java -jar start.jar
Verify Solr installation
After you started Solr admin console, you should be able to access the following links:
http://localhost:8983/solr/#/
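From the command line, the example core can also be pinged; a quick sketch (collection1 is the default core shipped with the Solr 4.x example, which is an assumption about your setup):
curl "http://localhost:8983/solr/collection1/admin/ping?wt=json"   # a healthy instance reports "status":"OK"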
Integrate Solr with Nutch
We have both Nutch and Solr installed and setup correctly. And Nutch already created crawl data from the seed URL(s). Below are the steps to delegate searching to Solr for links to be searchable:
- Backup the original Solr example schema.xml:
mv ${APACHE_SOLR_HOME}/example/solr/collection1/conf/schema.xml ${APACHE_SOLR_HOME}/example/solr/collection1/conf/schema.xml.org
- Copy the Nutch specific schema.xml to replace it:
cp ${NUTCH_RUNTIME_HOME}/conf/schema.xml ${APACHE_SOLR_HOME}/example/solr/collection1/conf/
- Open the Nutch schema.xml file for editing:
vi ${APACHE_SOLR_HOME}/example/solr/collection1/conf/schema.xml
- Comment out the following lines (53-54) in the file by changing this:
<filter class="solr. EnglishPorterFilterFactory" protected="protwords.txt"/>
to this<!-- <filter class="solr. EnglishPorterFilterFactory" protected="protwords.txt"/> -->
- Add the following line right after the line <field name="id" ... /> (probably at line 69-70)
<field name="_version_" type="long" indexed="true" stored="true"/>
- If you want to see the raw HTML indexed by Solr, change the content field definition (line 80) to:
<field name="content" type="text" stored="true" indexed="true"/>
- Save the file and restart Solr under ${APACHE_SOLR_HOME}/example:
java -jar start.jar
- run the Solr Index command from ${NUTCH_RUNTIME_HOME}:
bin/nutch solrindex http://127.0.0.1:8983/solr/ crawl/crawldb -linkdb crawl/linkdb crawl/segments/
* Note: If you are familiar with past version of the solrindex, the call signature for running it has changed. The linkdb is now optional, so you need to denote it with a "-linkdb" flag on the command line.
This will send all crawl data to Solr for indexing. For more information please see bin/nutch solrindex
If all has gone to plan, you are now ready to search with http://localhost:8983/solr/admin/.
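A command-line query can also confirm that crawled pages are searchable; a hedged sketch (the collection1 core name and the query term are assumptions about your setup):
curl "http://localhost:8983/solr/collection1/select?q=nutch&wt=json&indent=true"   # returns matching documents as JSON if indexing succeeded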
Whats Next
You may want to check out the documentation for the Nutch 1.X REST API to get an overview of the work going on towards providing Apache CXF based REST services for Nutch 1.X branch.
Reposted from: https://www.cnblogs.com/yanyue/p/6828724.html?utm_source=itdadao&utm_medium=referral