...
- Have a local Nutch crawler set up and configured to crawl on one machine
- Learned how to understand and configure Nutch runtime configuration including seed URL lists, URLFilters, etc.
- Have executed a Nutch crawl cycle and viewed the results of the Crawl Database
- Indexed Nutch crawl records into Apache Solr for full text search
Any issues with this tutorial should be reported to the Nutch user@ list.
Steps
Note: This tutorial describes the installation and use of Nutch 1.x (e.g. a release cut from the master branch). For a similar Nutch 2.x with HBase tutorial, see Nutch2Tutorial.
...
- Unix environment, or Windows-Cygwin environment
- Java Runtime/Development Environment (JDK 11 / Java 11)
- (Source build only) Apache Ant: https://ant.apache.org/
Install Nutch
Option 1: Setup Nutch from a binary distribution
- Download a binary package (apache-nutch-1.X-bin.zip) from here.
- Unzip your binary Nutch package. There should be a folder apache-nutch-1.X.
- cd apache-nutch-1.X/

From now on, we are going to use ${NUTCH_RUNTIME_HOME} to refer to the current directory (apache-nutch-1.X/).
Option 2: Set up Nutch from a source distribution
...
- Download a source package (apache-nutch-1.X-src.zip)
- Unzip
- cd apache-nutch-1.X/
- Run ant in this folder (cf. RunNutchInEclipse)
- Now there is a directory runtime/local which contains a ready to use Nutch installation.

When the source distribution is used, ${NUTCH_RUNTIME_HOME} refers to apache-nutch-1.X/runtime/local/. Note that:
- config files should be modified in apache-nutch-1.X/runtime/local/conf/
- ant clean will remove this directory (keep copies of modified config files)
Option 3: Set up Nutch from source
See UsingGit#CheckingoutacopyofNutchandmodifyingit
Verify your Nutch installation
- run "
bin/nutch
" - You can confirm a correct installation if you see something similar to the following:
No Format |
---|
Usage: nutch COMMAND where command is one of: readdb read / dump crawl db mergedb merge crawldb-s, with optional filtering readlinkdb read / dump link db inject inject new urls into the database generate generate new segments to fetch from crawl db freegen generate new segments to fetch from text files fetch fetch a segment's pages ... |
...
- Run the following command if you are seeing "Permission denied":

    chmod +x bin/nutch
- Set up JAVA_HOME if you are seeing "JAVA_HOME not set". On Mac, you can run the following command or add it to ~/.bashrc:

    export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/11/Home # note that the actual path may be different on your system
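On Linux, a common way to derive JAVA_HOME from whichever java binary is on your PATH is the following one-liner (a sketch; it assumes GNU readlink and a standard JDK layout):

    # resolve the java symlink and strip the trailing /bin/java
    export JAVA_HOME=$(dirname $(dirname $(readlink -f $(which java))))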
...
- Customize your crawl properties, where, at a minimum, you provide a name for your crawler so that external servers can recognize it
- Set a seed list of URLs to crawl
Customize your crawl properties
- Default crawl properties can be viewed and edited within conf/nutch-default.xml - most of these can be used without modification
- The file conf/nutch-site.xml serves as a place to add your own custom crawl properties that overwrite conf/nutch-default.xml. The only required modification for this file is to override the value field of the http.agent.name property, i.e. add your agent name in the value field of the http.agent.name property in conf/nutch-site.xml, for example:
    <property>
     <name>http.agent.name</name>
     <value>My Nutch Spider</value>
    </property>
- Ensure that the plugin.includes property within conf/nutch-site.xml includes the indexer as indexer-solr (a quick check is sketched below)
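One minimal way to see which plugin list is actually in effect (run from ${NUTCH_RUNTIME_HOME}; this is only a convenience check, not an official Nutch command) is to grep both config files and make sure indexer-solr appears in the value that wins:

    # show the value line following each plugin.includes property name
    grep -A 1 '<name>plugin.includes</name>' conf/nutch-default.xml conf/nutch-site.xml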
Create a URL seed list
- A URL seed list includes a list of websites, one per line, which Nutch will look to crawl
- The file conf/regex-urlfilter.txt will provide Regular Expressions that allow Nutch to filter and narrow the types of web resources to crawl and download (an illustrative filter rule follows this list)
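For example, to limit the crawl to a single domain you could replace the catch-all accept rule at the bottom of conf/regex-urlfilter.txt with something more specific. The exact default rule differs between Nutch versions, so treat this only as an illustrative sketch:

    # accept URLs within the nutch.apache.org domain only (illustrative)
    +^https?://([a-z0-9-]*\.)*nutch\.apache\.org/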
Create a URL seed list
- mkdir -p urls
- cd urls
- touch seed.txt to create a text file seed.txt under urls/ with the following content (one URL per line for each site you want Nutch to crawl).
    http://nutch.apache.org/
...
- The crawl database, or crawldb. This contains information about every URL known to Nutch, including whether it was fetched, and, if so, when.
- The link database, or linkdb. This contains the list of known links to each URL, including both the source URL and anchor text of the link.
- A set of segments. Each segment is a set of URLs that are fetched as a unit. Segments are directories with the following subdirectories:
- a crawl_generate names a set of URLs to be fetched
- a crawl_fetch contains the status of fetching each URL
- a content contains the raw content retrieved from each URL
- a parse_text contains the parsed text of each URL
- a parse_data contains outlinks and metadata parsed from each URL
- a crawl_parse contains the outlink URLs, used to update the crawldb
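Once a crawl round has completed (see the steps below), you can list the newest segment to see these subdirectories on disk; this is only a convenience check and assumes the crawl/ directory layout used throughout this tutorial:

    # list the contents of the most recently created segment
    ls `ls -d crawl/segments/2* | tail -1`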
Step-by-Step: Seeding the crawldb with a list of URLs
...
Option 1: Bootstrapping from the DMOZ database.

The injector adds URLs to the crawldb. Let's inject URLs from the DMOZ Open Directory. First we must download and uncompress the file listing all of the DMOZ pages. (This is a 200+ MB file, so this will take a few minutes.)

    wget http://rdf.dmoz.org/rdf/content.rdf.u8.gz
    gunzip content.rdf.u8.gz

Next we select a random subset of these pages. (We use a random subset so that everyone who runs this tutorial doesn't hammer the same sites.) DMOZ contains around three million URLs. We select one out of every 5,000, so that we end up with around 1,000 URLs:

    mkdir dmoz
    bin/nutch org.apache.nutch.tools.DmozParser content.rdf.u8 -subset 5000 > dmoz/urls

The parser also takes a few minutes, as it must parse the full file. Finally, we initialize the crawldb with the selected URLs.

    bin/nutch inject crawl/crawldb dmoz

Now we have a Web database with around 1,000 as-yet unfetched URLs in it.

Option 2: Bootstrapping from an initial seed list.

This option shadows the creation of the seed list as covered here.

    bin/nutch inject crawl/crawldb urls

Now we have a Web database with your unfetched URLs in it.
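To confirm what the injector wrote, you can dump crawldb statistics at any point; this is only a convenience check, not a required step of the tutorial:

    bin/nutch readdb crawl/crawldb -stats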
Step-by-Step: Fetching
To fetch, we first generate a fetch list from the database:

    bin/nutch generate crawl/crawldb crawl/segments

This generates a fetch list for all of the pages due to be fetched. The fetch list is placed in a newly created segment directory. The segment directory is named by the time it's created. We save the name of this segment in the shell variable s1:

    s1=`ls -d crawl/segments/2* | tail -1`
    echo $s1

Now we run the fetcher on this segment with:

    bin/nutch fetch $s1

Then we parse the entries:

    bin/nutch parse $s1

When this is complete, we update the database with the results of the fetch:

    bin/nutch updatedb crawl/crawldb $s1

Now the database contains both updated entries for all initial pages as well as new entries that correspond to newly discovered pages linked from the initial set.

Now we generate and fetch a new segment containing the top-scoring 1,000 pages:

    bin/nutch generate crawl/crawldb crawl/segments -topN 1000
    s2=`ls -d crawl/segments/2* | tail -1`
    echo $s2

    bin/nutch fetch $s2
    bin/nutch parse $s2
    bin/nutch updatedb crawl/crawldb $s2

Let's fetch one more round:

    bin/nutch generate crawl/crawldb crawl/segments -topN 1000
    s3=`ls -d crawl/segments/2* | tail -1`
    echo $s3

    bin/nutch fetch $s3
    bin/nutch parse $s3
    bin/nutch updatedb crawl/crawldb $s3
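Repeating these generate/fetch/parse/updatedb rounds by hand quickly gets tedious; the bin/crawl script described later in this tutorial automates exactly this loop. As a rough, illustrative sketch of what each round does (same crawl/ layout as above; the -topN value and number of rounds are chosen arbitrarily):

    # one generate/fetch/parse/updatedb cycle per round (illustrative)
    for round in 1 2 3; do
      bin/nutch generate crawl/crawldb crawl/segments -topN 1000
      segment=`ls -d crawl/segments/2* | tail -1`
      bin/nutch fetch $segment
      bin/nutch parse $segment
      bin/nutch updatedb crawl/crawldb $segment
    done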
By this point we've fetched a few thousand pages. Let's invert links and index them!

Step-by-Step: Invertlinks
Before indexing we first invert all of the links, so that we may index incoming anchor text with the pages.

    bin/nutch invertlinks crawl/linkdb -dir crawl/segments
We are now ready to search with Apache Solr.
Step-by-Step: Indexing into Apache Solr
Note: For this step you need a Solr installation. If you have not yet integrated Nutch with Solr, you should read here.
Now we are ready to go on and index all the resources. For more information see the command line options.
Usage: Indexer (<crawldb> | -nocrawldb) (<segment> ... | -dir <segments>) [general options]
Index given segments using configured indexer plugins
The CrawlDb is optional but it is required to send deletion requests for duplicates
and to read the proper document score/boost/weight passed to the indexers.
Required arguments:
<crawldb> path to CrawlDb, or
-nocrawldb flag to indicate that no CrawlDb shall be used
<segment> ... path(s) to segment, or
-dir <segments> path to segments/ directory,
(all subdirectories are read as segments)
General options:
-linkdb <linkdb> use LinkDb to index anchor texts of incoming links
-params k1=v1&k2=v2... parameters passed to indexer plugins
(via property indexer.additional.params)
-noCommit do not call the commit method of indexer plugins
-deleteGone send deletion requests for 404s, redirects, duplicates
-filter skip documents with URL rejected by configured URL filters
-normalize normalize URLs before indexing
-addBinaryContent index raw/binary content in field `binaryContent`
-base64 use Base64 encoding for binary content
Example:
bin/nutch index crawl/crawldb/ -linkdb crawl/linkdb/ crawl/segments/20131108063838/ -filter -normalize -deleteGone
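If you prefer to index every segment in one go rather than naming them individually, the -dir form documented above can be combined with the same flags; an illustrative invocation using the crawl/ layout from this tutorial:

    bin/nutch index crawl/crawldb/ -linkdb crawl/linkdb/ -dir crawl/segments/ -filter -normalize -deleteGone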
Step-by-Step: Deleting Duplicates
Duplicates (identical content but different URL) are optionally marked in the CrawlDb and are deleted later in the Solr index.
MapReduce "dedup" job:
- Map: Identity map where keys are digests and values are CrawlDatum records
- Reduce: CrawlDatums with the same digest are marked (except one of them) as duplicates. There are multiple heuristics available to choose the item which is not marked as duplicate - the one with the shortest URL, fetched most recently, or with the highest score.
    Usage: bin/nutch dedup <crawldb> [-group <none|host|domain>] [-compareOrder <score>,<fetchTime>,<httpsOverHttp>,<urlLength>]
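An illustrative invocation, assuming the crawl/ layout used throughout this tutorial and grouping duplicates by host:

    bin/nutch dedup crawl/crawldb/ -group host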
Deletion in the index is performed by the cleaning job (see below) or if the index job is called with the command-line flag -deleteGone.
For more information see dedup documentation.
Step-by-Step: Cleaning Solr
The class scans a crawldb directory looking for entries with status DB_GONE (404), duplicates or optionally redirects and sends delete requests to Solr for those documents. Once Solr receives the request the aforementioned documents are duly deleted. This maintains a healthier quality of Solr index.
    Usage: bin/nutch clean <crawldb> [-noCommit]
    Example: bin/nutch clean crawl/crawldb/
For more information see clean documentation.
Using the crawl script
If you have followed the section above on how the crawling can be done step by step, you might be wondering how a bash script can be written to automate the whole process described above.
Nutch developers have written one for you, and it is available at bin/crawl. Here are the most common options and parameters:
Usage: crawl [options] <crawl_dir> <num_rounds>
Arguments:
<crawl_dir> Directory where the crawl/host/link/segments dirs are saved
<num_rounds> The number of rounds to run this crawl for
Options:
-i|--index Indexes crawl results into a configured indexer
-D A Nutch or Hadoop property to pass to Nutch calls overwriting
properties defined in configuration files, e.g.
increase content limit to 2MB:
-D http.content.limit=2097152
(distributed mode only) configure memory of map and reduce tasks:
-D mapreduce.map.memory.mb=4608 -D mapreduce.map.java.opts=-Xmx4096m
-D mapreduce.reduce.memory.mb=4608 -D mapreduce.reduce.java.opts=-Xmx4096m
-w|--wait <NUMBER[SUFFIX]> Time to wait before generating a new segment when no URLs
are scheduled for fetching. Suffix can be: s for second,
m for minute, h for hour and d for day. If no suffix is
specified second is used by default. [default: -1]
-s <seed_dir> Path to seeds file(s)
-sm <sitemap_dir> Path to sitemap URL file(s)
--hostdbupdate Boolean flag showing if we either update or not update hostdb for each round
--hostdbgenerate Boolean flag showing if we use hostdb in generate or not
--num-fetchers <num_fetchers> Number of tasks used for fetching (fetcher map tasks) [default: 1]
Note: This can only be set when running in distributed mode and
should correspond to the number of worker nodes in the cluster.
--num-tasks <num_tasks> Number of reducer tasks [default: 2]
--size-fetchlist <size_fetchlist> Number of URLs to fetch in one iteration [default: 50000]
--time-limit-fetch <time_limit_fetch> Number of minutes allocated to the fetching [default: 180]
--num-threads <num_threads> Number of threads for fetching / sitemap processing [default: 50]
--sitemaps-from-hostdb <frequency> Whether and how often to process sitemaps based on HostDB.
Supported values are:
- never [default]
- always (processing takes place in every iteration)
- once (processing only takes place in the first iteration)
For example, a small crawl of two rounds that also indexes the results into the configured indexer:

    bin/crawl -i -s urls/ TestCrawl/ 2
The crawl script has a lot of parameters set, and you can modify the parameters to your needs. It would be ideal to understand the parameters before setting up big crawls.
...
Every version of Nutch is built against a specific Solr version, but you may also try a "close" version.
Nutch | Solr |
1.19 | 8.11.2 |
1.18 | 8.5.1 |
1.17 | 8.5.1 |
1.16 | 7.3.1 |
1.15 | 7.3.1 |
1.14 | 6.6.0 |
1.13 | 5.5.0 |
1.12 | 5.4.1 |
To install Solr 8.x (or upwards):
- download binary file from here
- unzip to $HOME/apache-solr, we will now refer to this as ${APACHE_SOLR_HOME}
- create resources for a new "nutch" Solr core:

    mkdir -p ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/
    cp -r ${APACHE_SOLR_HOME}/server/solr/configsets/_default/* ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/
- copy the Nutch schema.xml into the Solr conf directory:
  (Nutch 1.15 or prior) copy the schema.xml from the conf/ directory:

    cp ${NUTCH_RUNTIME_HOME}/conf/schema.xml ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/conf/

  (Nutch 1.16 and upwards) copy the schema.xml from the indexer-solr source folder (source package):

    cp .../src/plugin/indexer-solr/schema.xml ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/conf/

  or from the indexer-solr plugins folder (binary package):

    cp .../plugins/indexer-solr/schema.xml ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/conf/
Note for Nutch 1.16: due to NUTCH-2745 the schema.xml is not contained in the 1.16 binary package. Please download the schema.xml from the source repository.
You may also try to use the most recent schema.xml in case of issues launching Solr with this schema.
- make sure that there is no managed-schema "in the way":

    rm ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/conf/managed-schema
- start the Solr server:

    ${APACHE_SOLR_HOME}/bin/solr start
- create the nutch core:

    ${APACHE_SOLR_HOME}/bin/solr create -c nutch -d ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/conf/
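To quickly confirm that the core exists, you can query Solr's CoreAdmin API (the port assumes a default standalone Solr install):

    curl "http://localhost:8983/solr/admin/cores?action=STATUS&core=nutch"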
...
- (Nutch 1.15 and later) edit the file conf/index-writers.xml, see IndexWriters
- (until Nutch 1.14) add the core name to the Solr server URL: -Dsolr.server.url=http://localhost:8983/solr/nutch
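For example, with Nutch 1.14 and earlier the Solr URL can be passed straight to the crawl script via its -D option (an illustrative two-round crawl reusing the urls/ seed directory from above):

    bin/crawl -i -D solr.server.url=http://localhost:8983/solr/nutch -s urls/ crawl/ 2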
Verify Solr installation
After you have started the Solr admin console, you should be able to access the following links:
...