...
- Unix environment, or Windows-Cygwin environment
- Java Runtime/Development Environment (JDK 1.8 or 11 / Java 8 or 11)
- (Source build only) Apache Ant: https://ant.apache.org/
...
- Download a source package (`apache-nutch-1.X-src.zip`)
- Unzip
- `cd apache-nutch-1.X/`
- Run `ant` in this folder (cf. RunNutchInEclipse)
- Now there is a directory `runtime/local` which contains a ready-to-use Nutch installation.

When the source distribution is used, `${NUTCH_RUNTIME_HOME}` refers to `apache-nutch-1.X/runtime/local/`. Note that:

- config files should be modified in `apache-nutch-1.X/runtime/local/conf/`
- `ant clean` will remove this directory (keep copies of modified config files)
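The steps above as a shell transcript (the 1.19 version number is only an example; substitute the release you downloaded):

```shell
# Build Nutch from a source release (version number is illustrative)
unzip apache-nutch-1.19-src.zip
cd apache-nutch-1.19/
ant                      # compiles and assembles runtime/local
ls runtime/local/bin     # the ready-to-use installation, containing the 'nutch' script
```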
Option 3: Set up Nutch from source
See UsingGit#CheckingoutacopyofNutchandmodifyingit
Verify your Nutch installation
...
```
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.8/Home # note that the actual path may be different on your system
```
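The framework path above is macOS-specific. One way to locate the JDK path on common systems (a sketch; the exact commands vary by platform and JDK packaging):

```shell
# macOS: ask the system for the active JDK home
export JAVA_HOME="$(/usr/libexec/java_home)"
# Typical Linux: resolve it from the java binary on PATH
export JAVA_HOME="$(dirname "$(dirname "$(readlink -f "$(which java)")")")"
echo "$JAVA_HOME"
```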
...
NOTE: If you previously modified the file `conf/regex-urlfilter.txt` as covered here, you will need to change it back.
...
- The crawl database, or crawldb. This contains information about every URL known to Nutch, including whether it was fetched, and, if so, when.
- The link database, or linkdb. This contains the list of known links to each URL, including both the source URL and anchor text of the link.
- A set of segments. Each segment is a set of URLs that are fetched as a unit. Segments are directories with the following subdirectories:
- a `crawl_generate` names a set of URLs to be fetched
- a `crawl_fetch` contains the status of fetching each URL
- a `content` contains the raw content retrieved from each URL
- a `parse_text` contains the parsed text of each URL
- a `parse_data` contains outlinks and metadata parsed from each URL
- a `crawl_parse` contains the outlink URLs, used to update the crawldb
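For illustration, the subdirectory layout of a single segment looks like this (the timestamp-style segment name is hypothetical; real segments are created by the generate/fetch/parse steps, not by hand):

```shell
# Recreate the directory skeleton of one (hypothetical) segment by hand,
# purely to visualize the layout described above.
SEG=crawl/segments/20240101000000
mkdir -p "$SEG"/crawl_generate "$SEG"/crawl_fetch "$SEG"/content \
         "$SEG"/parse_text "$SEG"/parse_data "$SEG"/crawl_parse
ls "$SEG"
```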
Step-by-Step: Seeding the crawldb with a list of URLs
Bootstrapping from an initial seed list.
This option shadows the creation of the seed list as covered here.
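If you have not created a seed list yet, a minimal one is simply a directory containing a text file with one URL per line (the directory and file names here are conventional, not required):

```shell
# Create a seed directory with one URL per line
mkdir -p urls
echo "https://nutch.apache.org/" > urls/seed.txt
cat urls/seed.txt
```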
```
bin/nutch inject crawl/crawldb urls
```
Bootstrapping from DMOZ
Note: DMOZ closed in 2017. The steps below will not work unless you obtain DMOZ's content.rdf.u8.gz from elsewhere.
The injector adds URLs to the crawldb. Let's inject URLs from the DMOZ Open Directory. First we must download and uncompress the file listing all of the DMOZ pages. (This is a 200+ MB file, so this will take a few minutes.)
```
wget http://rdf.dmoz.org/rdf/content.rdf.u8.gz
gunzip content.rdf.u8.gz
```
Next we select a random subset of these pages. (We use a random subset so that everyone who runs this tutorial doesn't hammer the same sites.) DMOZ contains around three million URLs. We select one out of every 5,000, so that we end up with a few hundred URLs:
```
mkdir dmoz
bin/nutch org.apache.nutch.tools.DmozParser content.rdf.u8 -subset 5000 > dmoz/urls
```
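The 1-in-5,000 sampling that `-subset` performs can be mimicked on any plain URL list with awk; this toy example keeps 1 of every 5 lines (a sketch of the idea, not DmozParser's actual implementation):

```shell
# Build a toy list of 10 URLs, then keep every 5th line,
# analogous to DmozParser's -subset 5000 (1 of every 5,000)
printf 'url%d\n' 1 2 3 4 5 6 7 8 9 10 > all_urls.txt
awk 'NR % 5 == 0' all_urls.txt > subset.txt
cat subset.txt   # url5 and url10
```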
The parser also takes a few minutes, as it must parse the full file. Finally, we initialize the crawldb with the selected URLs.
```
bin/nutch inject crawl/crawldb dmoz
```
Now we have a Web database with your unfetched URLs in it.
Step-by-Step: Fetching
To fetch, we first generate a fetch list from the database:
...
Note: for this step you need a working Solr installation. If you have not yet integrated Nutch with Solr, you should read here.
Now we are ready to go on and index all the resources. For more information see the command line options.
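As a sketch, the indexing invocation looks like this (the segment timestamp is a placeholder; in recent Nutch versions the Solr endpoint is configured in `conf/index-writers.xml` rather than on the command line):

```shell
# Index everything fetched so far into Solr (segment name is illustrative)
bin/nutch index crawl/crawldb/ -linkdb crawl/linkdb/ \
  crawl/segments/20240101000000/ -filter -normalize -deleteGone
```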
...
Every version of Nutch is built against a specific Solr version, but you may also try a "close" version.
| Nutch | Solr   |
|-------|--------|
| 1.19  | 8.11.2 |
| 1.18  | 8.5.1  |
| 1.17  | 8.5.1  |
| 1.16  | 7.3.1  |
| 1.15  | 7.3.1  |
| 1.14  | 6.6.0  |
| 1.13  | 5.5.0  |
| 1.12  | 5.4.1  |
...