The 'jstor_ocr' function in the 'r7283' package for concatenating OCR and metadata from JSTOR's Data for Research
Digital Text Investigations
The digital humanities continues to change the ways in which we draw conclusions about social phenomena. This condition starts from the understanding that, for the first time in history, humans can potentially examine the totality of a social phenomenon's appearance. This continuing evolution provides new ways to examine data. A key idea in this evolution is the ability to pull together unstructured data and their accompanying metadata as a rejoinder to older forms of content analysis and its related approaches.

The JSTOR Data for Research (DfR) arrangement presents just such an opportunity to work with unstructured data. Subscribers can request large, carefully delineated corpora for academic investigation. At the time of writing, there are two options for data requests. The first allows the subscriber to create search terms, scale down the results, and download n-grams (roughly unigrams through trigrams are available), all without a signed contract. The second allows the subscriber to save a larger search and request optical character recognition (OCR) files along with n-grams. The latter arrangement is somewhat more formal, but both arrangements open new opportunities to work with big data in drawing conclusions about social, literary, or otherwise discursive phenomena.
Existing Functions
The R programming language has a peer-reviewed, CRAN-documented package dedicated to viewing JSTOR DfR data, aptly called jstor (Klebel, 2018). Together with the tidyverse and knitr (Wickham et al., 2020; Xie, 2019), it enables the user to view zipped JSTOR DfR data and combine the information into data frames. The jstor package's dependencies on other packages allow for powerful views of the data; for discovery, this makes sense. However, in some cases, after signing a contract with JSTOR for OCR files and advanced n-grams, unzipping the files and working through them is as straightforward as simply going to work on the data. The job at hand is equally straightforward: paste metadata into the appropriate files, save the files, and coerce the individual files into a data frame with metadata as variables (a minimal sketch follows). In short, after unzipping manageable n's (viz., n = 726), elaborate viewing tools might not be needed (and to a certain extent, even medial n sizes would do well enough to be left alone).
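As an illustration of that job, the following sketch (not part of the jstor or r7283 packages) reads an unzipped DfR delivery and coerces the paired metadata and OCR files into a data frame. The folder paths and the JATS-style article-title node are assumptions about the delivery's layout and may need adjusting.

    library(xml2)

    ## Assumed layout of an unzipped DfR delivery: "metadata/" holds one
    ## .xml file per document and "ocr/" holds the matching .txt files.
    meta_files <- list.files("dfr/metadata", pattern = "\\.xml$", full.names = TRUE)
    ocr_files  <- list.files("dfr/ocr", pattern = "\\.txt$", full.names = TRUE)

    ## Coerce the paired files into a data frame with metadata as variables.
    corpus <- data.frame(
      id    = tools::file_path_sans_ext(basename(meta_files)),
      title = vapply(meta_files, function(f)
                xml_text(xml_find_first(read_xml(f), "//article-title")),
                character(1)),
      text  = vapply(ocr_files, function(f)
                paste(readLines(f, warn = FALSE), collapse = " "),
                character(1)),
      stringsAsFactors = FALSE
    )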
Forward Movement
After observing several deprecations of jstor's original code following its own submission to peer review, I decided that a serviceable alternative protocol would be to combine OCR and metadata into .xml formatting as a coding goal. Peeking into zipped files, although considered good quality-control practice, did not merit the lavish data frame visualizations of the original jstor vignettes. Viewed from the business of statistical hypothesis testing with manageable n sizes, it was more relevant to simply unzip the files and work with the goods, as it were. Then, in the spirit of keeping metadata multi-purposive, further downstream processing could yield data frames and variables as needed for text analytics, content analysis, and quantitative linguistics. This meant that the unit of analysis, in content analytic parlance, would be the individual .xml file. Data preservation techniques were used so that as many variables as possible were included in the final .xml files.

Fig. 1. Documentation for jstor_ocr in the r7283 package.
Procedural Automations for jstor_ocr and its Dependencies
The xml2 package was considered a vital dependency to retain within the jstor_ocr code from the r7283 package (Martinez, 2018), mostly because of xml2's memory management features (Wickham, Hester and Ooms, 2018). The resulting jstor_ocr code represents an intermediary coder's work (note well: functional programming did not improve processing speeds at medial n levels, several tests notwithstanding). Assuming an unzipped file location, the code takes the unadulterated main file folder structure, which opens onto n-grams, metadata, and ocr subfolders, and reads the subfolder contents in as lists. It then parses the text files, deletes all tags in the .txt files (as tag quality was found to be precarious across files), surrounds all text with beginning and ending text tags, cleans hyphenated words, substitutes out unreadable UTF-8 text, and finally adopts the resulting nodes as xml children of the designated xml metadata files. It performs a quality control check, assuring that the names of the paired .txt and .xml files match exactly, a latent indication that the respective file contents match, according to vector indexes; it does so by comparing indexes, one within a loop against one found through lexical scoping inside the function. It then exports the resulting .xml files to a predesignated folder supplied in the function's second argument. These files are then released to further text cleansing and staging downstream, in preparation for final statistical modeling according to content analytic or quantitative linguistic statistical methods. A sketch of this pipeline follows.
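The following is a minimal sketch of the pipeline just described, not the published jstor_ocr source; the folder names, cleaning regexes, and the <text> wrapper element are assumptions, and readers should consult the r7283 repository for the canonical implementation.

    library(xml2)

    jstor_ocr_sketch <- function(dfr_dir, out_dir) {
      ## Assumed unzipped DfR layout: "metadata/" (.xml) and "ocr/" (.txt)
      ocr_files  <- list.files(file.path(dfr_dir, "ocr"),
                               pattern = "\\.txt$", full.names = TRUE)
      meta_files <- list.files(file.path(dfr_dir, "metadata"),
                               pattern = "\\.xml$", full.names = TRUE)
      stopifnot(length(ocr_files) == length(meta_files))
      dir.create(out_dir, showWarnings = FALSE, recursive = TRUE)

      for (i in seq_along(ocr_files)) {
        ## Quality control: paired file names must match exactly
        stopifnot(identical(
          tools::file_path_sans_ext(basename(ocr_files[i])),
          tools::file_path_sans_ext(basename(meta_files[i]))
        ))

        txt <- paste(readLines(ocr_files[i], warn = FALSE), collapse = " ")
        txt <- gsub("<[^>]*>", "", txt)            # delete stray tags in the OCR
        txt <- gsub("-\\s+", "", txt)              # rejoin hyphenated line breaks
        txt <- iconv(txt, to = "UTF-8", sub = "")  # drop unreadable bytes

        ## Adopt the cleaned OCR as a <text> child of the metadata file
        meta <- read_xml(meta_files[i])
        xml_add_child(meta, "text", txt)
        write_xml(meta, file.path(out_dir, basename(meta_files[i])))
      }
      invisible(out_dir)
    }

A call such as jstor_ocr_sketch("dfr_unzipped", "dfr_combined") would then leave one combined .xml file per document in the output folder, ready for downstream staging.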
Conclusion
The constant evolution of archival sharing arrangements makes possible the rapid implementation of standardized code, either through peer review or through archived binary packages, the latter of which might invite problem-in-practice documentation and usage. JSTOR's recent, forward-thinking arrangement of providing scaled, raw digital humanities data beckons code-writing methodologists to construct pragmatic code for small-n to medial-n (and arguably large-n) text analytic processing. A major assumption of the code presented here is that quality control can be attained with older, albeit basic, functions, eliminating some newer published functions entirely. The application of the code is mostly restricted to JSTOR's DfR archival work, featuring the function's ability to pull together unstructured data and their accompanying metadata in a seamless set of files.

References
Klebel, T. (2018). jstor: Import and analyse data from scientific texts. Journal of Open Source Software, 3(28), 883. https://doi.org/10.21105/joss.00883
Martinez, M. (2018). r7283: A miscellaneous toolkit. Retrieved from http://github.com/cownr10r/r7283
Wickham, H. et al. (2020). Tidyverse. Retrieved from http://tidyverse.org
Wickham, H., Hester, J. and Ooms, J. (2018). xml2: Parse XML. Retrieved from https://CRAN.R-project.org/package=xml2
Xie, Y. (2019). knitr: A general-purpose package for dynamic report generation in R. Retrieved from https://cran.r-project.org/web/packages/knitr/index.html