
Bootstrapping from DMOZ

...

The injector adds URLs to the crawldb. Let's inject URLs from the DMOZ Open Directory. First we must download and uncompress the file listing all of the DMOZ pages. (This is a 200+ MB file, so this will take a few minutes.)

...

Next we select a random subset of these pages. (We use a random subset so that everyone who runs this tutorial doesn't hammer the same sites.) DMOZ contains around three million URLs. We select one out of every 5,000, so that we end up with around 1,000 URLs:
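In the full tutorial this selection is performed by a Nutch parser tool with a subset option; purely as an illustrative sketch (not Nutch code), the 1-in-5,000 random selection can be mimicked in plain shell:

```shell
# Keep each of 3,000,000 dummy lines with probability 1/5000; on real DMOZ
# data the surviving lines would be the sampled URLs (a few hundred here).
seq 1 3000000 | awk 'BEGIN { srand(42) } rand() < 1/5000' | wc -l
```

The fixed seed is used only to make the sketch reproducible; the real selection is random so that different runs pick different URLs.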

...

The parser also takes a few minutes, as it must parse the full file. Finally, we initialize the crawldb with the selected URLs.

...

Now we have a Web database with around 1,000 as-yet unfetched URLs in it.

...

Now we are ready to go on and index all the resources. For more information see the command line options.

No Format

Usage: bin/nutch index (<crawldb> | -nocrawldb) (<segment> ... | -dir <segments>) [general options]

Index given segments using configured indexer plugins

The CrawlDb is optional but it is required to send deletion requests for duplicates
and to read the proper document score/boost/weight passed to the indexers.

Required arguments:

        <crawldb>       path to CrawlDb, or
        -nocrawldb      flag to indicate that no CrawlDb shall be used

        <segment> ...   path(s) to segment, or
        -dir <segments> path to segments/ directory,
                        (all subdirectories are read as segments)

General options:

        -linkdb <linkdb>        use LinkDb to index anchor texts of incoming links
        -params k1=v1&k2=v2...  parameters passed to indexer plugins
                                (via property indexer.additional.params)

        -noCommit       do not call the commit method of indexer plugins
        -deleteGone     send deletion requests for 404s, redirects, duplicates
        -filter         skip documents with URL rejected by configured URL filters
        -normalize      normalize URLs before indexing
        -addBinaryContent       index raw/binary content in field `binaryContent`
        -base64         use Base64 encoding for binary content

Example:
   bin/nutch index crawl/crawldb/ -linkdb crawl/linkdb/ crawl/segments/20131108063838/ -filter -normalize -deleteGone

Step-by-Step: Deleting Duplicates

Duplicates (identical content but different URL) are optionally marked in the CrawlDb and are deleted later in the Solr index.

MapReduce "dedup" job:

  • Map: Identity map where keys are digests and values are CrawlDatum records
  • Reduce: All CrawlDatums with the same digest except one are marked as duplicates. Multiple heuristics are available to choose the record that is kept unmarked: the one with the highest score, the most recently fetched one, HTTPS over HTTP, or the one with the shortest URL.
No Format
Usage: bin/nutch dedup <crawldb> [-group <none|host|domain>] [-compareOrder <score>,<fetchTime>,<httpsOverHttp>,<urlLength>]
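As a toy illustration (not Nutch code; the digests, scores, and URLs below are made up), a compare order of score, then fetch time, then URL length can be sketched with sort: the first record per digest is kept, and the rest would be marked as duplicates.

```shell
# sample records: digest, score, fetch time, URL
cat > records.txt <<'EOF'
d1 0.8 20200101 http://example.com/a/long/path
d1 0.8 20200301 http://example.com/a
d1 0.9 20200101 http://example.com/b
d2 0.5 20200101 http://example.org/
EOF
# prepend the URL length, sort by digest, score (desc), fetch time (desc),
# URL length (asc), then keep only the first record of each digest group
awk '{ print length($4), $0 }' records.txt \
  | sort -k2,2 -k3,3gr -k4,4nr -k1,1n \
  | awk '!seen[$2]++ { print $5 }'
# -> http://example.com/b
# -> http://example.org/
```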

Deletion in the index is performed by the cleaning job (see below) or if the index job is called with the command-line flag -deleteGone.

For more information see dedup documentation.

Step-by-Step: Cleaning Solr

The cleaning job scans the CrawlDb for entries with status DB_GONE (404), duplicates, or optionally redirects, and sends delete requests to Solr for the corresponding documents. Once Solr receives the requests, those documents are deleted. This keeps the Solr index healthy.

No Format
Usage: bin/nutch clean <crawldb> [-noCommit]
Example: bin/nutch clean crawl/crawldb/

For more information see clean documentation.

Using the crawl script

If you have followed the section above describing how crawling is done step by step, you might be wondering how to automate the whole process with a bash script.

Nutch developers have written one for you (smile), and it is available at bin/crawl. Here are the most common options and parameters:

No Format
Usage: crawl [options] <crawl_dir> <num_rounds>

Arguments:
  <crawl_dir>                           Directory where the crawl/host/link/segments dirs are saved
  <num_rounds>                          The number of rounds to run this crawl for

Options:
  -i|--index                            Indexes crawl results into a configured indexer
  -D                                    A Nutch or Hadoop property to pass to Nutch calls overwriting
                                        properties defined in configuration files, e.g.
                                        increase content limit to 2MB:
                                          -D http.content.limit=2097152
                                        (distributed mode only) configure memory of map and reduce tasks:
                                          -D mapreduce.map.memory.mb=4608    -D mapreduce.map.java.opts=-Xmx4096m
                                          -D mapreduce.reduce.memory.mb=4608 -D mapreduce.reduce.java.opts=-Xmx4096m
  -w|--wait <NUMBER[SUFFIX]>            Time to wait before generating a new segment when no URLs
                                        are scheduled for fetching. Suffix can be: s for second,
                                        m for minute, h for hour and d for day. If no suffix is
                                        specified second is used by default. [default: -1]
  -s <seed_dir>                         Path to seeds file(s)
  -sm <sitemap_dir>                     Path to sitemap URL file(s)
  --hostdbupdate                        Boolean flag showing if we either update or not update hostdb for each round
  --hostdbgenerate                      Boolean flag showing if we use hostdb in generate or not
  --num-fetchers <num_fetchers>         Number of tasks used for fetching (fetcher map tasks) [default: 1]
                                        Note: This can only be set when running in distributed mode and
                                              should correspond to the number of worker nodes in the cluster.
  --num-tasks <num_tasks>               Number of reducer tasks [default: 2]
  --size-fetchlist <size_fetchlist>     Number of URLs to fetch in one iteration [default: 50000]
  --time-limit-fetch <time_limit_fetch> Number of minutes allocated to the fetching [default: 180]
  --num-threads <num_threads>           Number of threads for fetching / sitemap processing [default: 50]
  --sitemaps-from-hostdb <frequency>    Whether and how often to process sitemaps based on HostDB.
                                        Supported values are:
                                          - never [default]
                                          - always (processing takes place in every iteration)
                                          - once (processing only takes place in the first iteration)
Example: bin/crawl -i -s urls/ TestCrawl/ 2

The crawl script has a lot of parameters set, and you can modify them to suit your needs. It is best to understand these parameters before setting up big crawls.
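For instance, the NUMBER[SUFFIX] syntax accepted by -w/--wait maps to seconds as sketched below (to_seconds is our own illustrative helper, not part of bin/crawl):

```shell
# Convert a NUMBER[SUFFIX] wait value to seconds:
# s = seconds, m = minutes, h = hours, d = days; no suffix means seconds.
to_seconds() {
  v=$1
  case $v in
    *s) echo $(( ${v%s} ))         ;;
    *m) echo $(( ${v%m} * 60 ))    ;;
    *h) echo $(( ${v%h} * 3600 ))  ;;
    *d) echo $(( ${v%d} * 86400 )) ;;
    *)  echo "$v"                  ;;
  esac
}
to_seconds 90s   # -> 90
to_seconds 5m    # -> 300
to_seconds 2h    # -> 7200
```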

...

Every version of Nutch is built against a specific Solr version, but you may also try a "close" version.

Nutch   Solr
1.17    8.5.1
1.16    7.3.1
1.15    7.3.1
1.14    6.6.0
1.13    5.5.0
1.12    5.4.1

To install Solr 7.x (or upwards):

  • download the binary distribution from the Solr download page
  • unzip to $HOME/apache-solr, we will now refer to this as ${APACHE_SOLR_HOME}
  • create resources for a new "nutch" Solr core

    No Format
    mkdir -p ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/
    cp -r ${APACHE_SOLR_HOME}/server/solr/configsets/_default/* ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/
    


  • copy Nutch's schema.xml into the Solr conf directory

    • (Nutch 1.15 or prior) copy the schema.xml from the conf/ directory:

      No Format
      cp ${NUTCH_RUNTIME_HOME}/conf/schema.xml ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/conf/
      


    • (Nutch 1.16) copy the schema.xml from the indexer-solr source folder (source package):

      No Format
      cp .../src/plugin/indexer-solr/schema.xml ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/conf/
      

      Note: due to NUTCH-2745 the schema.xml is not contained in the binary package. Please download the schema.xml from the source repository.

    • You may also try to use the most recent schema.xml in case of issues launching Solr with this schema.
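Once the configset contains the schema, the remaining steps are typically to start Solr and create the core from the configset; a sketch assuming a default Solr 7.x+ layout under ${APACHE_SOLR_HOME} (verify the commands against your Solr version's documentation):

```shell
# start Solr on the default port (8983) and create a "nutch" core
# using the configset prepared above
${APACHE_SOLR_HOME}/bin/solr start
${APACHE_SOLR_HOME}/bin/solr create -c nutch \
    -d ${APACHE_SOLR_HOME}/server/solr/configsets/nutch/conf/
```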

...