(ASCII art: the rtika logo)
Apache Tika is similar to the Babel fish in Douglas Adams's book, "The Hitchhiker's Guide to the Galaxy" (C. Mattmann and Zitting 2011, 3). The Babel fish translates any natural language to any other. While Apache Tika does not yet translate natural languages, it starts to tame the Tower of Babel of digital document formats. As the Babel fish allowed a person to understand Vogon poetry, Tika allows a computer to extract text and objects from Microsoft Word.
The world of digital file formats is like a place where each community has their own language. Academic, business, government, and online communities use anywhere from a few file types to thousands. Unfortunately, attempts to unify groups around a single format are often fruitless (C. A. Mattmann 2013).
This plethora of document formats has become a common concern. Tika is a common library to address this issue. Starting in Apache Nutch in 2005, Tika became its own project in 2007 and then a component of other Apache projects including Lucene, Jackrabbit, Mahout, and Solr (C. Mattmann and Zitting 2011, 17).
With the increased volume of data in digital archives, and terabyte-sized data becoming common, Tika's design goals include keeping complexity at bay, low memory consumption, and fast processing (C. Mattmann and Zitting 2011, 18). The rtika package is an interface to Apache Tika that leverages Tika's batch processor module to parse many documents fairly efficiently. Therefore, I recommend using batches whenever possible.
Video, sound and images are important, and yet much meaningful data remains numeric or textual. Tika can parse many formats and extract alpha-numeric characters, along with a few characters to control the arrangement of text, like line breaks.
I recommend an analyst start with a directory on the computer and get a vector of paths to each file using base::list.files(). The commented code below has a recipe. Here, I use test files that are included with the package.
library('rtika')
library('magrittr')

# Code to get ALL the files in my_path:
# my_path <- "~"
# batch <- file.path(my_path,
#                    list.files(path = my_path,
#                               recursive = TRUE))

# pipe the batch into tika_text() to get plain text

# test files
batch <- c(
  system.file("extdata", "jsonlite.pdf", package = "rtika"),
  system.file("extdata", "curl.pdf", package = "rtika"),
  system.file("extdata", "table.docx", package = "rtika"),
  system.file("extdata", "xml2.pdf", package = "rtika"),
  system.file("extdata", "R-FAQ.html", package = "rtika"),
  system.file("extdata", "calculator.jpg", package = "rtika"),
  system.file("extdata", "tika.apache.org.zip", package = "rtika")
)

text <- batch %>%
  tika_text()

# normal syntax also works:
# text <- tika_text(batch)
The output is an R character vector of the same length and order as the input files.
In the example above, there are several seconds of overhead to start up the Tika batch processor and then process the output. The most costly file was the first one. Large batches are parsed more quickly. For example, when parsing thousands of 1-5 page Word documents, I’ve measured 1/100th of a second per document on average.
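To gauge throughput on a particular archive, it is easy to time a whole batch; a minimal sketch, where dividing the elapsed time by the batch size gives the per-document average:

# time the batch; elapsed seconds / number of files = average cost
timing <- system.time(
  text <- tika_text(batch)
)
timing[['elapsed']] / length(batch)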
Occasionally, files are not parsable and the returned value for the file will be NA. The reasons include corrupt files, disk input/output issues, empty files, password protection, an unhandled format, a broken document structure, or an unexpected variation within the document.
These issues should be rare. Tika works well on most documents, but if an archive is very large there may be a small percentage of unparsable files, and you might want to handle those.
# Find which files had an issue
# Handle them if needed
batch[which(is.na(text))]
#> character(0)
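One way to handle them, sketched below, is simply to retry the failed paths; files that failed for transient reasons, such as disk input/output, may succeed on a second pass:

# retry files that came back NA; persistent failures remain NA
failed <- is.na(text)
if (any(failed)) {
  text[failed] <- tika_text(batch[failed])
}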
Plain text is easy to search using base::grep().
length(text)
#> [1] 7

search <- text[grep(pattern = ' is ', x = text)]

length(search)
#> [1] 6
With plain text, a variety of interesting analyses are possible, ranging from word counting to constructing matrices for deep learning. Much of this text processing is handled easily with the well-documented tidytext package (Silge and Robinson 2017). Among other things, it handles tokenization and creating term-document matrices.
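As a quick illustration of the tokenization step, here is a hedged sketch that counts words per document (assuming the tidytext and dplyr packages are installed):

library('tidytext')
library('dplyr')

# one row per word, keyed by the source file
tokens <- tibble(document = batch, text = text) %>%
  filter(!is.na(text)) %>%
  unnest_tokens(word, text)

# word counts per document
tokens %>%
  count(document, word, sort = TRUE)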
A general suggestion is to use tika_fetch() when downloading files from the Internet, to preserve the server's Content-Type information in a file extension. Tika's Content-Type detection is improved with file extensions (Tika also relies on other features, such as magic bytes, which are unique control bytes in the file header). The tika_fetch() function tries to preserve Content-Type information from the download server by finding the matching extension in Tika's database.
download_directory <- tempfile('rtika_')

dir.create(download_directory)

urls <- c('https://tika.apache.org/',
          'https://cran.rstudio.com/web/packages/keras/keras.pdf')

downloaded <- urls %>%
  tika_fetch(download_directory)

# it will add the appropriate file extension to the downloads
downloaded
#> [1] "/private/var/folders/r9/svkzrjgd2b550nl7cs6rdxph0000gn/T/Rtmpt8ekM8/rtika_2622743d3a8/rtika_file262275fcbe51.html"
#> [2] "/private/var/folders/r9/svkzrjgd2b550nl7cs6rdxph0000gn/T/Rtmpt8ekM8/rtika_2622743d3a8/rtika_file26222432f45b.pdf"
This tika_fetch() function is used internally by the tika() functions when processing URLs. By using tika_fetch() explicitly with a specified directory, you can also save the files and return to them later.
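Returning to them later only requires passing the saved paths back to a parser, as in this minimal sketch:

# the fetched files can be parsed later, like any local batch
text_from_downloads <- downloaded %>%
  tika_text()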
Large jobs are possible with rtika. However, with hundreds of thousands of documents, the R object returned by the tika() functions can be too big for RAM. In such cases, it is good to use the computer's disk more, since running out of RAM slows the computer.

I suggest changing two parameters in any of the tika() parsers. First, set return = FALSE to prevent returning a big R character vector of text. Second, specify an existing directory on the file system using output_dir, pointing to where the processed files will be saved. The files can be dealt with in smaller batches later on.

Another option is to increase the number of threads, setting threads to something like the number of processors minus one.
# create a directory not already in use.
my_directory <- tempfile('rtika_')

dir.create(my_directory)

# pipe the batch to tika_text()
batch %>%
  tika_text(threads = 4,
            return = FALSE,
            output_dir = my_directory)

# list all the file locations
processed_files <- file.path(
  normalizePath(my_directory),
  list.files(path = my_directory,
             recursive = TRUE)
)
The location of each file in output_dir follows a convention from the Apache Tika batch processor: the full path to each file mirrors the original file's path, only within the output_dir.
processed_files
#> [1] "/private/var/folders/r9/svkzrjgd2b550nl7cs6rdxph0000gn/T/Rtmpt8ekM8/rtika_262218299c12/private/var/folders/r9/svkzrjgd2b550nl7cs6rdxph0000gn/T/RtmpSYQOHQ/Rinst25f47fc37a2c/rtika/extdata/R-FAQ.html.txt"
#> [2] "/private/var/folders/r9/svkzrjgd2b550nl7cs6rdxph0000gn/T/Rtmpt8ekM8/rtika_262218299c12/private/var/folders/r9/svkzrjgd2b550nl7cs6rdxph0000gn/T/RtmpSYQOHQ/Rinst25f47fc37a2c/rtika/extdata/calculator.jpg.txt"
#> [3] "/private/var/folders/r9/svkzrjgd2b550nl7cs6rdxph0000gn/T/Rtmpt8ekM8/rtika_262218299c12/private/var/folders/r9/svkzrjgd2b550nl7cs6rdxph0000gn/T/RtmpSYQOHQ/Rinst25f47fc37a2c/rtika/extdata/curl.pdf.txt"
#> [4] "/private/var/folders/r9/svkzrjgd2b550nl7cs6rdxph0000gn/T/Rtmpt8ekM8/rtika_262218299c12/private/var/folders/r9/svkzrjgd2b550nl7cs6rdxph0000gn/T/RtmpSYQOHQ/Rinst25f47fc37a2c/rtika/extdata/jsonlite.pdf.txt"
#> [5] "/private/var/folders/r9/svkzrjgd2b550nl7cs6rdxph0000gn/T/Rtmpt8ekM8/rtika_262218299c12/private/var/folders/r9/svkzrjgd2b550nl7cs6rdxph0000gn/T/RtmpSYQOHQ/Rinst25f47fc37a2c/rtika/extdata/table.docx.txt"
#> [6] "/private/var/folders/r9/svkzrjgd2b550nl7cs6rdxph0000gn/T/Rtmpt8ekM8/rtika_262218299c12/private/var/folders/r9/svkzrjgd2b550nl7cs6rdxph0000gn/T/RtmpSYQOHQ/Rinst25f47fc37a2c/rtika/extdata/tika.apache.org.zip.txt"
#> [7] "/private/var/folders/r9/svkzrjgd2b550nl7cs6rdxph0000gn/T/Rtmpt8ekM8/rtika_262218299c12/private/var/folders/r9/svkzrjgd2b550nl7cs6rdxph0000gn/T/RtmpSYQOHQ/Rinst25f47fc37a2c/rtika/extdata/xml2.pdf.txt"
Note that tika_text() produces .txt files, tika_xml() produces .xml files, tika_html() produces .html files, and both tika_json() and tika_json_text() produce .json files.
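The saved files can then be dealt with in smaller batches, as suggested above. A minimal sketch that reads the .txt output back in, a few files at a time:

# read the processed text back in manageable chunks
chunk_size <- 2
for (start in seq(1, length(processed_files), by = chunk_size)) {
  end <- min(start + chunk_size - 1, length(processed_files))
  text_chunk <- vapply(processed_files[start:end], function(path) {
    readChar(path, file.size(path))
  }, character(1))
  # ... search, tokenize, or aggregate text_chunk here ...
}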
Plain text falls short for some purposes. For example, pagination might be important for selecting a particular page in a PDF. The Tika authors chose HTML as a universal format because it offers semantic elements that are common or familiar. For example, the hyperlink is represented in HTML as the anchor element <a> with the attribute href. The HTML in Tika preserves this metadata:
library('xml2')

# get XHTML text
html <- batch %>%
  tika_html() %>%
  lapply(xml2::read_html)

# parse links from documents
links <- html %>%
  lapply(xml2::xml_find_all, '//a') %>%
  lapply(xml2::xml_attr, 'href')

sample(links[[1]], 10)
#> [1] "https://arxiv.org/abs/1403.2805"
#> [2] "https://arxiv.org/abs/1403.2805"
#> [3] "http://jsonlines.org/"
#> [4] "http://github.com/jeroen/jsonlite/issues"
#> [5] "http://ndjson.org"
#> [6] "https://arxiv.org/abs/1403.2805"
#> [7] "http://docs.mongodb.org/manual/reference/program/mongoexport/#cmdoption--query"
#> [8] "http://en.wikipedia.org/wiki/Singleton_(mathematics)"
#> [9] "https://www.opencpu.org/posts/jsonlite-a-smarter-json-encoder"
#> [10] "http://ndjson.org"
Each type of file has different information preserved by Tika's internal parsers. The particular aspects vary. Some notes:

- Pages are rendered as <div class="page">.
- Hyperlinks are rendered as <a> with the attribute href.
- Tables are rendered as the <table> element. The rvest package has a function to get tables of data with rvest::html_table(), as shown in the sketch below.

Note that tika_html() and tika_xml() both produce the same strict form of HTML called XHTML, and either works essentially the same for all the documents I've tried.
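For example, a hedged sketch that pulls the table out of the test .docx file (assuming the rvest package is installed; table.docx is the third file in the earlier batch):

library('rvest')

# data frames from any <table> elements in the .docx rendition
tables <- batch[3] %>%
  tika_html() %>%
  lapply(xml2::read_html) %>%
  lapply(rvest::html_table)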
The tika_html() and tika_xml() functions are focused on extracting strict, structured HTML as XHTML. In addition, metadata can be accessed in the meta tags of the XHTML. Common metadata fields include Content-Type, Content-Length, Creation-Date, and Content-Encoding.
# Content-Type
html %>%
  lapply(xml2::xml_find_first, '//meta[@name="Content-Type"]') %>%
  lapply(xml2::xml_attr, 'content') %>%
  unlist()
#> [1] "application/pdf"
#> [2] "application/pdf"
#> [3] "application/vnd.openxmlformats-officedocument.wordprocessingml.document"
#> [4] "application/pdf"
#> [5] "text/html; charset=UTF-8"
#> [6] "image/jpeg"
#> [7] "application/zip"
# Creation-Date
html %>%
  lapply(xml2::xml_find_first, '//meta[@name="Creation-Date"]') %>%
  lapply(xml2::xml_attr, 'content') %>%
  unlist()
#> [1] NA NA NA NA NA NA NA
Metadata can also be accessed with tika_json() and tika_json_text(). Consider all that can be found from a single image:
library('jsonlite')

# batch <- system.file("extdata", "calculator.jpg", package = "rtika")

# a list of data.frames
metadata <- batch %>%
  tika_json() %>%
  lapply(jsonlite::fromJSON)

# look at metadata for an image
str(metadata[[6]])
#> 'data.frame': 1 obs. of 118 variables:
#> $ Compression Type : chr "Baseline"
#> $ X-TIKA:Parsed-By-Full-Set :List of 1
#> ..$ : chr "org.apache.tika.parser.CompositeParser" "org.apache.tika.parser.DefaultParser" "org.apache.tika.parser.image.JpegParser"
#> $ X-TIKA:content_handler : chr "ToXMLContentHandler"
#> $ Number of Components : chr "3"
#> $ Exif SubIFD:Subject Location : chr "1956 873 610 612"
#> $ Component 2 : chr "Cb component: Quantization table 1, Sampling factors 1 horiz/1 vert"
#> $ Component 1 : chr "Y component: Quantization table 0, Sampling factors 2 horiz/2 vert"
#> $ Exif IFD0:X Resolution : chr "72 dots per inch"
#> $ tiff:ResolutionUnit : chr "Inch"
#> $ Exif SubIFD:Scene Type : chr "Directly photographed image"
#> $ Exif SubIFD:Exposure Mode : chr "Auto exposure"
#> $ tiff:Make : chr "Apple"
#> $ Component 3 : chr "Cr component: Quantization table 1, Sampling factors 1 horiz/1 vert"
#> $ Exif SubIFD:Components Configuration: chr "YCbCr"
#> $ Exif SubIFD:Metering Mode : chr "Spot"
#> $ Exif SubIFD:White Balance Mode : chr "Auto white balance"
#> $ tiff:BitsPerSample : chr "8"
#> $ Unknown tag (0x0002) : chr "[558 values]"
#> $ Caption Digest : chr "158 218 29 133 38 133 242 158 59 205 21 44 236 23 180 123"
#> $ Exif SubIFD:Sub-Sec Time Original : chr "854"
#> $ Unknown tag (0x0009) : chr "19"
#> $ tiff:Orientation : chr "1"
#> $ tiff:Software : chr "7.1.1"
#> $ X-TIKA:embedded_depth : chr "0"
#> $ geo:long : chr "-118.449578"
#> $ Unknown tag (0x0001) : chr "0"
#> $ tiff:YResolution : chr "72.0"
#> $ Y Resolution : chr "72 dots"
#> $ Coded Character Set : chr "UTF-8"
#> $ Exif SubIFD:Flash : chr "Flash did not fire"
#> $ Thumbnail Height Pixels : chr "0"
#> $ exif:ExposureTime : chr "0.03333333333333333"
#> $ File Size : chr "83654 bytes"
#> $ Exif SubIFD:Exif Version : chr "2.21"
#> $ Exif SubIFD:Focal Length : chr "4.1 mm"
#> $ Exif IFD0:Resolution Unit : chr "Inch"
#> $ Exif SubIFD:Lens Model : chr "iPhone 5s back camera 4.12mm f/2.2"
#> $ Exif SubIFD:Date/Time Original : chr "2014:07:01 09:49:22"
#> $ Exif SubIFD:Sub-Sec Time Digitized : chr "854"
#> $ Unknown tag (0x0007) : chr "1"
#> $ Resolution Units : chr "none"
#> $ File Modified Date : chr "Thu May 04 15:09:51 -07:00 2023"
#> $ Exif SubIFD:Sensing Method : chr "One-chip color area sensor"
#> $ Epoch : chr "0"
#> $ Flags : chr "Valid"
#> $ Image Height : chr "800 pixels"
#> $ Thumbnail Width Pixels : chr "0"
#> $ GPS:GPS Longitude : chr "-118° 26' 58.48\""
#> $ GPS:GPS Longitude Ref : chr "W"
#> $ tiff:Model : chr "iPhone 5s"
#> $ Exif SubIFD:Brightness Value : chr "3.455"
#> $ exif:IsoSpeedRatings : chr "50"
#> $ Exif SubIFD:Exposure Program : chr "Program normal"
#> $ Exif IFD0:Make : chr "Apple"
#> $ GPS:GPS Altitude Ref : chr "Sea level"
#> $ X-TIKA:parse_time_millis : chr "119"
#> $ Exif SubIFD:Aperture Value : chr "f/2.2"
#> $ Exif SubIFD:Date/Time Digitized : chr "2014:07:01 09:49:22"
#> $ Run Time : chr "[104 values]"
#> $ tiff:ImageWidth : chr "600"
#> $ GPS:GPS Altitude : chr "95 metres"
#> $ Exif IFD0:Y Resolution : chr "72 dots per inch"
#> $ Unknown tag (0x0006) : chr "163"
#> $ Exif SubIFD:ISO Speed Ratings : chr "50"
#> $ Number of Tables : chr "4 Huffman tables"
#> $ Exif SubIFD:Exif Image Width : chr "600 pixels"
#> $ X Resolution : chr "72 dots"
#> $ Version : chr "1.1"
#> $ Application Record Version : chr "2"
#> $ Time Created : chr "09:49:22"
#> $ Exif SubIFD:Unique Image ID : chr "bdeb111183eae36c0000000000000000"
#> $ exif:FNumber : chr "2.2"
#> $ Exif SubIFD:Shutter Speed Value : chr "1/30 sec"
#> $ Digital Date Created : chr "2014:07:01"
#> $ resourceName : chr "calculator.jpg"
#> $ GPS:GPS Time-Stamp : chr "16:49:21.000 UTC"
#> $ Exif IFD0:Orientation : chr "Top, left side (Horizontal / normal)"
#> $ Exif SubIFD:F-Number : chr "f/2.2"
#> $ exif:FocalLength : chr "4.12"
#> $ X-TIKA:Parsed-By :List of 1
#> ..$ : chr "org.apache.tika.parser.CompositeParser" "org.apache.tika.parser.DefaultParser" "org.apache.tika.parser.image.JpegParser"
#> $ XMP Value Count : chr "4"
#> $ Exif IFD0:Software : chr "7.1.1"
#> $ tika:file_ext : chr "jpg"
#> $ Value : chr "131501 seconds"
#> $ X-TIKA:content : chr "<html xmlns=\"http://www.w3.org/1999/xhtml\">\n<head>\n<meta name=\"Compression Type\" content=\"Baseline\" />\"| __truncated__
#> $ Exif IFD0:Date/Time : chr "2014:07:01 09:49:22"
#> $ Date Created : chr "2014:07:01"
#> $ Unknown tag (0x0005) : chr "159"
#> $ GPS:GPS Version ID : chr "2.200"
#> $ Exif SubIFD:Scene Capture Type : chr "Standard"
#> $ geo:lat : chr "34.072006"
#> $ Data Precision : chr "8 bits"
#> $ tika_batch_fs:relative_path : chr "private/var/folders/r9/svkzrjgd2b550nl7cs6rdxph0000gn/T/RtmpSYQOHQ/Rinst25f47fc37a2c/rtika/extdata/calculator.jpg"
#> $ tiff:ImageLength : chr "800"
#> $ Exif SubIFD:Lens Specification : chr "4.12mm f/2.2"
#> $ Exif IFD0:Model : chr "iPhone 5s"
#> $ dcterms:created : chr "2014-07-01T09:49:22"
#> $ dcterms:modified : chr "2014-07-01T09:49:22"
#> $ exif:Flash : chr "false"
#> [list output truncated]
In addition, each specific format can have its own specialized metadata fields. For example, photos sometimes store latitude and longitude:
metadata[[6]]$'geo:lat'
#> [1] "34.072006"

metadata[[6]]$'geo:long'
#> [1] "-118.449578"
Some types of documents can have multiple objects within them. For example, a .zip file may contain many other files. The tika_json() and tika_json_text() functions have a special ability that others do not: they will recurse into a container and examine each file within. The Tika authors call the format jsonRecursive for this reason.
In the following example, I created a compressed archive of the
Apache Tika homepage, using the command line programs wget
and zip
. The small archive includes the HTML page, its
images, and required files.
# wget gets a webpage and other files.
# sys::exec_wait('wget', c('--page-requisites', 'https://tika.apache.org/'))

# Put it all into a .zip file
# sys::exec_wait('zip', c('-r', 'tika.apache.org.zip', 'tika.apache.org'))

batch <- system.file("extdata", "tika.apache.org.zip", package = "rtika")
# a list of data.frames
metadata <- batch %>%
  tika_json() %>%
  lapply(jsonlite::fromJSON)
# The structure is very long. See it on your own with: str(metadata)
Here are some of the main metadata fields of the recursive json output:
# the 'X-TIKA:embedded_resource_path' field
embedded_resource_path <- metadata %>%
  lapply(function(x){ x$'X-TIKA:embedded_resource_path' })

embedded_resource_path
#> [[1]]
#> [1] NA "/index.html" "/site.css"
#> [4] "/external.png" "/icon_info_sml.gif" "/icon_warning_sml.gif"
#> [7] "/icon_error_sml.gif" "/icon_success_sml.gif" "/tika.png"
#> [10] "/mattmann_cover150.jpg" "/asf-logo.gif"
The X-TIKA:embedded_resource_path field tells you where in the document hierarchy each object resides. The first item in the character vector is the root, which is the container itself. The other items are embedded one layer down, as indicated by the forward slash /. In the context of the X-TIKA:embedded_resource_path field, paths are not literal directory paths like in a file system (in reality, the image icon_info_sml.gif is within a folder called images). Rather, the number of forward slashes indicates the level of recursion within the document. One slash / reveals a first set of embedded documents. Additional slashes / indicate that the parser has recursed into an embedded document within an embedded document.
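A minimal sketch making that concrete: counting the forward slashes recovers each embedded object's recursion depth (the root container, listed as NA, stays NA):

# recursion depth = number of forward slashes in the resource path
depth <- lapply(embedded_resource_path, function(paths) {
  nchar(gsub('[^/]', '', paths))
})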
content_type <- metadata %>%
  lapply(function(x){ x$'Content-Type' })

content_type
#> [[1]]
#> [1] "application/zip"
#> [2] "application/xhtml+xml; charset=UTF-8"
#> [3] "text/css; charset=ISO-8859-1"
#> [4] "image/png"
#> [5] "image/gif"
#> [6] "image/gif"
#> [7] "image/gif"
#> [8] "image/gif"
#> [9] "image/png"
#> [10] "image/jpeg"
#> [11] "image/gif"
The Content-Type metadata reveals that the first item is the container and has the type application/zip. The items after that are deeper and include web formats such as application/xhtml+xml, image/png, and text/css.
content <- metadata %>%
  lapply(function(x){ x$'X-TIKA:content' })
str(content)
#> List of 1
#> $ : chr [1:11] "<html xmlns=\"http://www.w3.org/1999/xhtml\">\n<head>\n<meta name=\"X-TIKA:Parsed-By\" content=\"org.apache.tik"| __truncated__ "<html xmlns=\"http://www.w3.org/1999/xhtml\">\n<head>\n<link rel=\"icon\" type=\"image/png\" href=\"./tikaNoTex"| __truncated__ "<html xmlns=\"http://www.w3.org/1999/xhtml\">\n<head>\n<meta name=\"embeddedRelationshipId\" content=\"tika.apa"| __truncated__ "<html xmlns=\"http://www.w3.org/1999/xhtml\">\n<head>\n<meta name=\"Transparency Alpha\" content=\"nonpremultip"| __truncated__ ...
The X-TIKA:content field includes the XHTML rendition of an object. It is possible to extract plain text in the X-TIKA:content field by calling tika_json_text() instead. That is the only difference between tika_json() and tika_json_text().
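A sketch of the plain-text variant, reusing the same pipeline:

# with tika_json_text(), X-TIKA:content holds plain text instead of XHTML
plain <- batch %>%
  tika_json_text() %>%
  lapply(jsonlite::fromJSON) %>%
  lapply(function(x){ x$'X-TIKA:content' })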
It may be surprising to learn that Word documents are containers (at least the modern .docx variety are). By parsing them with tika_json() or tika_json_text(), the various images and embedded objects can be analyzed. However, there is an added complexity: each document produces a vector of Content-Type values, one per embedded file, instead of the single Content-Type for the container that tika_xml() and tika_html() return.
Out of the box, rtika uses all the available Tika Detectors and Parsers and runs with sensible defaults. For most users, this will work well.
In future versions, rtika may support Tika's configuration file for customizing parsing. This config file option is on hold for now, because Tika's batch module is still new and the config file format will likely change in backward-incompatible ways. Please stay tuned.
There is also room for improvement with the document formats common in the R community, especially LaTeX and Markdown. Tika currently reads and writes these formats just fine, captures metadata, and recognizes the MIME type when downloading with tika_fetch(). However, Tika does not have parsers to fully understand the LaTeX or Markdown document structure, render it to XHTML, and extract the plain text while ignoring markup. For these cases, Pandoc will be more useful (see: https://pandoc.org/demos.html).
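For instance, a sketch that shells out to Pandoc the same way the earlier example shelled out to wget and zip (assuming Pandoc is installed; notes.md is a hypothetical Markdown file):

# convert Markdown to plain text with Pandoc, ignoring markup
sys::exec_wait('pandoc', c('notes.md', '-t', 'plain', '-o', 'notes.txt'))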