...

Here's how to get some useful feedback about crawler configurations:

./crawler_launcher --printSupportedActions
./crawler_launcher --printSupportedCrawlerActions
./crawler_launcher --printSupportedPreconditions

There were two crawlers that I was particularly interested in using - the MetExtractorProductCrawler and the AutoDetectProductCrawler (the StdProductCrawler does not support metadata extraction).

So, now you want to know more about how to get these crawlers up and running? Ask the crawler!

./crawler_launcher --operation --launchStdCrawler
./crawler_launcher --operation --launchMetCrawler
./crawler_launcher --operation --launchAutoCrawler

As you can see, the command line options that need to be specified are listed after running the command. My approach was to add the command line options iteratively. The simplest command that you can get some useful feedback from is one that just specifies which crawler to launch.

MetExtractorProductCrawler

To get the metadata extractor product crawler working, I ran:

./crawler_launcher --operation --launchMetCrawler

The crawler then failed, since there was a command line option that needed to be specified. So I added that option and ran the command again to see where it failed next.

This is the complete met extractor command that I eventually ran:

./crawler_launcher \
--filemgrUrl http://localhost:9000 \
--operation --launchMetCrawler \
--clientTransferer org.apache.oodt.cas.filemgr.datatransfer.LocalDataTransferFactory \
--productPath /usr/local/meerkat/data/staging/products/hdf5 \
--metExtractor org.apache.oodt.cas.metadata.extractors.ExternMetExtractor \
--metExtractorConfig /usr/local/meerkat/extractors/katextractor/katextractor.config

...

  1. I had a file manager listening on http://localhost:9000.
  2. I've used an external metadata extractor (written in Python) to extract metadata from HDF5 files (see the sketch below).
  3. An example MetExtractorProductCrawler configuration can be found in the source (it allows you to specify how the crawler will run your extractor): https://svn.apache.org/repos/asf/oodt/trunk/metadata/src/main/resources/examples/extern-config.xml
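
For reference, here is a minimal sketch of what such an external extractor could look like. This is only an illustration, not the extractor I actually used: it assumes the extractor config passes the data file path as the first argument and the output .met path as the second, it assumes the interesting metadata lives in the HDF5 root attributes (read here with h5py), and the katextractor.py file name is hypothetical. The output is written in the standard CAS metadata XML layout (keyval entries) that the file manager expects.

katextractor.py (sketch):

#!/usr/bin/env python
# Hypothetical external HDF5 metadata extractor for use with ExternMetExtractor.
# Assumes argv[1] is the data file and argv[2] is the .met file to write;
# the actual arguments are whatever your extractor config passes in.
import os
import sys
from xml.sax.saxutils import escape

import h5py  # assumption: the metadata of interest lives in the HDF5 root attributes


def main():
    data_file, met_file = sys.argv[1], sys.argv[2]

    # Seed a couple of basic fields, then copy across the root attributes.
    metadata = {
        "Filename": os.path.basename(data_file),
        "ProductType": "MyCustomProductType",  # illustrative value only
    }
    with h5py.File(data_file, "r") as hdf:
        for key, value in hdf.attrs.items():
            metadata[key] = str(value)

    # Write the result as CAS metadata XML (key/val pairs) to the .met file.
    with open(met_file, "w") as out:
        out.write('<cas:metadata xmlns:cas="http://oodt.jpl.nasa.gov/1.0/cas">\n')
        for key, value in metadata.items():
            out.write("  <keyval>\n")
            out.write("    <key>%s</key>\n" % escape(key))
            out.write("    <val>%s</val>\n" % escape(value))
            out.write("  </keyval>\n")
        out.write("</cas:metadata>\n")


if __name__ == "__main__":
    main()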

...

MetExtractorProductCrawler, using the TikaCmdLineMetExtractor (an easier approach)

...

NOTE: This extractor is only available in 0.7-SNAPSHOT and later.

Without having to create your own custom MetExtractor, you can leverage OODT's Tika extractor to automatically extract as much metadata as it can gather for you. The only thing you need to do is specify a configuration file and the ProductType you want your products ingested as. Below is an example of the steps you could perform:

Invocation command:
./crawler_launcher \
--filemgrUrl http://localhost:9000 \
--operation --launchMetCrawler \
--clientTransferer org.apache.oodt.cas.filemgr.datatransfer.LocalDataTransferFactory \
--productPath /usr/local/meerkat/data/staging/products/hdf5 \
--metExtractor org.apache.oodt.cas.metadata.extractors.TikaCmdLineMetExtractor \
--metExtractorConfig /usr/local/meerkat/extractors/tikaextractor/tikaextractor.config

Associated configuration file:

tikaextractor.config:

ProductType=MyCustomProductType

AutoDetectProductCrawler

To get the auto detect product crawler working, I ran:

./crawler_launcher --operation --launchAutoCrawler

I followed a similar approach to the one I used to get the MetExtractorProductCrawler working. For completeness, here is my complete command line:

./crawler_launcher \
--operation --launchAutoCrawler \
--filemgrUrl http://localhost:9000 \
--clientTransferer org.apache.oodt.cas.filemgr.datatransfer.LocalDataTransferFactory \
--productPath /usr/local/meerkat/data/staging/products/hdf5 \
--mimeExtractorRepo ../policy/mime-extractor-map.xml

...

In the mime-extractor-map.xml file I needed to configure:

  • magic = "true" or "false". Not sure what that does yet.
  • a mime type. I used product/hdf5.
  • a mimeRepo file. I called mine /usr/local/meerkat/cas-crawler/policy/mimetypes.xml (see the sketch after this list).
  • a preCondComparator. I used CheckThatDataFileSizeIsGreaterThanZero.
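
As far as I can tell, the mimeRepo file uses the same mime-info XML layout as Tika's own tika-mimetypes.xml. As a rough sketch under that assumption, a mimetypes.xml entry for the product/hdf5 type could look like the following (the *.h5 glob pattern is only an illustrative choice, not from my actual setup):

<?xml version="1.0" encoding="UTF-8"?>
<mime-info>
  <!-- Map files matching the glob below to the custom product/hdf5 mime type.
       Adjust the pattern to match your own file naming convention. -->
  <mime-type type="product/hdf5">
    <glob pattern="*.h5"/>
  </mime-type>
</mime-info>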

...

  1. I had a file manager listening on http://localhost:9000.
  2. I've used an external metadata extractor (written in Python) to extract metadata from HDF5 files.
  3. AutoDetectProductCrawler example configuration can be found in the source:
    • Uses the same metadata extractor specification file (you will have one of these for each mime-type).
    • Allows you to define your mime-types – that is, give a mime-type for a given filename regular expression.
    • Maps your mime-types to extractors.