Extract Metadata From Files: ImageCat

2015-05-11T14:59:05

Description


This is an OODT RADIX application that uses Apache Solr, Apache Tika, and Apache OODT to ingest tens of millions of files (images, though it could be extended to other file types) in place, and to extract metadata and OCR information from those files using Tika and Tesseract OCR.

Shell Prerequisites

Some programs used by ImageCat require the /bin/tcsh shell. On Linux you can usually install it via:

  1. yum install tcsh; or
  2. apt-get install tcsh

Python Prerequisites

  1. pip install xmlrpc
  2. pip install solrpy

Useful Environment Variables

The following environment variables are used in ImageCat; a quick liveness check that uses them follows the listing.

setenv JAVA_HOME `readlink -f /usr/bin/java | sed "s:bin/java::"`  # e.g., on Linux
setenv OODT_HOME ~/imagecat
setenv GANGLIA_URL http://zipper.jpl.nasa.gov/ganglia/
setenv FILEMGR_URL http://localhost:9000
setenv WORKFLOW_URL http://localhost:9001
setenv RESMGR_URL http://localhost:9002
setenv WORKFLOW_HOME $OODT_HOME/workflow
setenv FILEMGR_HOME $OODT_HOME/filemgr
setenv PGE_ROOT $OODT_HOME/pge
setenv PCS_HOME $OODT_HOME/pcs
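
With these set, you can verify that the OODT daemons are answering before going further. Below is a minimal sketch using Python's standard XML-RPC client; it assumes the File Manager exposes the stock OODT XML-RPC method filemgr.isAlive, which holds for a default RADIX deployment, but treat the method name as an assumption:

# filemgr_check.py -- minimal sketch; assumes the stock OODT XML-RPC
# method name "filemgr.isAlive" (an assumption, not taken from ImageCat).
import os
import xmlrpc.client  # on Python 2, use: import xmlrpclib

url = os.environ.get("FILEMGR_URL", "http://localhost:9000")
proxy = xmlrpc.client.ServerProxy(url)
try:
    # ServerProxy turns attribute access into the dotted method name.
    print("File Manager alive: %s" % proxy.filemgr.isAlive())
except Exception as exc:
    print("File Manager at %s not reachable: %s" % (url, exc))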

Installation

mkdir deploy
git clone https://github.com/chrismattmann/imagecat.git
cd imagecat
mvn install
cp -R distribution/target/*.tar.gz ../deploy
cd ../deploy && tar xvzf *.tar.gz
cp -R ../imagecat/solr4 ./solr4 && cp -R ../imagecat/tomcat7 ./tomcat7
edit tomcat7/conf/Catalina/localhost/solr.xml and replace [OODT_HOME] with the path to your deploy dir.
edit deploy/bin/env.sh and deploy/bin/imagecatenv.sh to make sure OODT_HOME is set to the path to your deploy dir.
start /bin/bash, then run: source bin/imagecatenv.sh
Copy cas-filemgr-VERSION.jar, cas-workflow-VERSION.jar, cas-crawler-VERSION.jar and cas-pge-VERSION.jar to the resmgr/lib directory.
cd $OODT_HOME/bin && ./oodt start
cd $OODT_HOME/tomcat7/bin && ./startup.sh
cd $OODT_HOME/resmgr/bin/ && ./start-memex-stubs
download roxy-image-list-jpg-nonzero.txt and place it in $OODT_HOME/data/staging
$OODT_HOME/bin/chunker
#win

Observing what’s going on

ImageCat runs two Solr deployments and a full OODT stack. The URLs are below; a quick Python check of the two Solr cores follows the list.

  • http://localhost:8081/solr/imagecatdev – Solr 4.10.3-fork core where SolrCell runs for image extraction.
  • http://localhost:8081/solr/imagecatoodt – Solr 4.10.3-fork core where OODT’s file catalog lives; home to ChunkFiles, each representing a 50k-path slice of the original file list.
  • http://localhost:8080/opsui/ – Apache OODT OPSUI cockpit for observing the ingestion of ChunkFiles and the jobs ingesting into SolrCell.
  • http://localhost:8080/pcs/services/health/report – Apache OODT PCS REST services showing system health and provenance.
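
Since solrpy is already installed as a prerequisite, a quick way to confirm that both Solr cores are up is to ask each one for its document count, as sketched below; this assumes solrpy's standard SolrConnection/query API and the core URLs listed above:

# core_check.py -- minimal sketch using solrpy (pip install solrpy)
import solr

for core in ("imagecatdev", "imagecatoodt"):
    conn = solr.SolrConnection("http://localhost:8081/solr/" + core)
    response = conn.query("*:*", rows=0)  # fetch only the count, no documents
    print("%s: %d documents" % (core, response.numFound))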

The recommended way to see what’s going on is to check the OPSUI, and then periodically examine $OODT_HOME/data/jobs/crawl/*/logs (where the ingest-into-SolrCell jobs execute). By default ImageCat uses 8 ingest processes, so 8 parallel ingests into SolrCell can run at a time, with 24 jobs on deck in the Resource Manager waiting to get in.
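
If you prefer the command line to the OPSUI, a small script can watch those job logs for you. Here is a minimal standard-library sketch that prints the last line of every log file under the crawl jobs area, newest first (the directory layout is the one described above):

# watch_jobs.py -- minimal sketch; tails $OODT_HOME/data/jobs/crawl/*/logs
import glob
import os

pattern = os.path.join(os.environ["OODT_HOME"],
                       "data", "jobs", "crawl", "*", "logs", "*")
for logfile in sorted(glob.glob(pattern), key=os.path.getmtime, reverse=True):
    with open(logfile) as f:
        lines = f.readlines()
    print("%s: %s" % (logfile, lines[-1].strip() if lines else "(empty)"))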

Each directory in $OODT_HOME/data/jobs/crawl/ is an independent, fully detached job that can be executed independently of OODT to ingest 50k image files into SolrCell and to perform Tesseract OCR and EXIF metadata extraction.

Note that sometimes images will fail to ingest, e.g., with a message such as the following in the Solr Tomcat logs:

INFO: on.SolrException: org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from org.apache.tika.parser.jpeg.JpegParser@5c0bae4a
OUTPUT:         at org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:225)
OUTPUT:         at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
OUTPUT:         at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
OUTPUT:         at org.apache.solr.core.RequestHandler
Apr 15, 2015 9:18:29 PM org.apache.oodt.commons.io.LoggerOutputStream flush

This is normal; sometimes the JpegParser will fail to parse an image.

Chunk Files

The overall workflow is as follows:

  1. OODT starts with the original large file that contains _full file paths_. It then chunks this file into sizeof(file) / ChunkSize ChunkFiles, where ChunkSize is set in $OODT_HOME/workflow/policy/tasks.xml (urn:memex:Chunker/ChunkSize). A minimal sketch of this step appears after this list.
  2. Each resultant _ChunkFile_ is then ingested into OODT by the OODT crawler, which triggers the OODT Workflow Manager to process a job called _IngestInPlace_.
  3. Each _IngestInPlace_ job grabs its ingested _ChunkFile_ (stored in $OODT_HOME/data/archive/chunks/) and runs it through $OODT_HOME/bin/solrcell_ingest, which sends the 50k full file paths to http://localhost:8081/solr/imagecatdev/extract (the ExtractingRequestHandler).
  4. Up to 8 IngestInPlace jobs can run at a time.
  5. You can watch http://localhost:8081/solr/imagecatdev build up while this is going on. The document count grows in bursts because $OODT_HOME/bin/solrcell_ingest ingests all 50k files in memory and sends a single commit at the end for efficiency (resulting in 50k * 8 files every ~30-40 minutes).
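
To make step 1 concrete, here is a minimal sketch of the chunking logic using only the standard library. The 50,000-line chunk size mirrors the ChunkSize setting described above; the output file names are illustrative and not those produced by the real $OODT_HOME/bin/chunker:

# chunk_paths.py -- illustrative sketch of step 1, not the real bin/chunker
CHUNK_SIZE = 50000  # paths per ChunkFile, per urn:memex:Chunker/ChunkSize

def write_chunk(prefix, num, lines):
    with open("%s-%05d.txt" % (prefix, num), "w") as out:
        out.writelines(lines)

def chunk(path_list_file, prefix="chunk"):
    """Split one large list of file paths into ChunkFiles of CHUNK_SIZE lines."""
    num, lines = 0, []
    with open(path_list_file) as f:
        for line in f:
            lines.append(line)
            if len(lines) == CHUNK_SIZE:
                write_chunk(prefix, num, lines)
                num, lines = num + 1, []
    if lines:  # trailing partial chunk
        write_chunk(prefix, num, lines)

chunk("roxy-image-list-jpg-nonzero.txt")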

Cleaning up and checking any failed ingestions

Sometimes ingests fail. If that happens, just run:

$OODT_HOME/bin/check_failed

This program verifies all ChunkFiles in Solr, making sure every path in each one was actually ingested. For any paths that weren’t, new ChunkFiles with the extension _missing.txt are created and the remaining files are ingested.
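
Conceptually the check works like this: for each path recorded in an archived ChunkFile, ask the imagecatdev core whether a matching document exists, and write any misses to a _missing.txt ChunkFile. Below is an illustrative solrpy sketch of that idea; the assumption that ingested images are keyed by their full file path in the id field is mine, not taken from the ImageCat source:

# check_missing.py -- illustrative sketch, not the real bin/check_failed
import solr

conn = solr.SolrConnection("http://localhost:8081/solr/imagecatdev")

def find_missing(chunk_file):
    missing = []
    with open(chunk_file) as f:
        for path in (line.strip() for line in f):
            # assumes (illustratively) docs are keyed by file path in "id"
            if path and conn.query('id:"%s"' % path, rows=0).numFound == 0:
                missing.append(path)
    if missing:  # write a new ChunkFile holding only the unfetched paths
        with open(chunk_file + "_missing.txt", "w") as out:
            out.write("\n".join(missing) + "\n")
    return missing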
