
The 'jstor_ocr' function in the 'r7283' package for concatenating OCR and metadata from JSTOR's Data for Research

Digital Text Investigations

The digital humanities continues to change the ways in which we draw conclusions about social phenomena. The premise is that, for the first time in history, researchers can potentially examine a social phenomenon at the scale of its entire recorded appearance. This ongoing evolution provides new ways to examine data, and a key idea within it is the ability to pull together unstructured data and their accompanying metadata as a rejoinder to older forms of content analysis and related approaches.

The JSTOR Data for Research (DfR) service presents just such an opportunity to work with unstructured data. Subscribers can request large, carefully delineated corpora for academic investigations. At the time of writing there are two options for data requests. The first allows the subscriber to create search terms, scale down the results without a signed contract, and download n-grams (roughly, unigrams through trigrams are available). The second allows the subscriber to save a larger search and request optical character recognition (OCR) files along with the n-grams. The latter arrangement is a little more formal, but both open new opportunities to work with big data in coming to conclusions about social, literary, or otherwise discursive phenomena.

Existing Functions

The R programming language has a peer-reviewed, CRAN-documented package dedicated to viewing JSTOR DfR data, aptly called jstor (Klebel, 2018). Together with the tidyverse and knitr (Wickham et al., 2020; Xie, 2019), it enables the user to view zipped JSTOR DfR data and combine the information into data frames. The jstor package's dependencies on other packages allow for powerful views of the data; for discovery, this makes sense. However, in some cases, after signing a contract with JSTOR for OCR and advanced n-grams, it is just as straightforward to unzip the files and go to work on the data directly. The job at hand is equally straightforward: paste metadata into the appropriate files, save the files, and coerce the individual files into a data frame with the metadata as variables. In short, after unzipping a manageable n (viz., n = 726), such binocular views may not be needed (and, to an extent, even medial n sizes would do well enough to be left alone).
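To make that last step concrete, the block below is a minimal sketch, not the jstor package and not jstor_ocr itself: it pairs each unzipped OCR .txt file with its metadata .xml by shared file stem and coerces the results into a data frame. The folder names (dfr_download/metadata, dfr_download/ocr) and the JATS-style XPath expressions are assumptions about a typical DfR download, not guaranteed paths.

```r
library(xml2)

# A minimal sketch, assuming conventional DfR folder names: pair each ocr .txt
# with its metadata .xml by file stem and coerce the results into a data frame.
meta_dir <- "dfr_download/metadata"   # hypothetical unzip location
ocr_dir  <- "dfr_download/ocr"

meta_files <- list.files(meta_dir, pattern = "\\.xml$", full.names = TRUE)
ocr_files  <- list.files(ocr_dir,  pattern = "\\.txt$", full.names = TRUE)

# match on shared file stems so metadata rows line up with their ocr text
stems <- intersect(tools::file_path_sans_ext(basename(meta_files)),
                   tools::file_path_sans_ext(basename(ocr_files)))

dfr <- do.call(rbind, lapply(stems, function(s) {
  meta <- read_xml(file.path(meta_dir, paste0(s, ".xml")))
  data.frame(
    id      = s,
    title   = xml_text(xml_find_first(meta, ".//article-title")),  # JATS-style path, assumed
    journal = xml_text(xml_find_first(meta, ".//journal-title")),
    text    = paste(readLines(file.path(ocr_dir, paste0(s, ".txt")), warn = FALSE),
                    collapse = " "),
    stringsAsFactors = FALSE
  )
}))
```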

Forward Movement

After watching several deprecations of jstor's original code following its submission to peer review, I decided that a serviceable alternative protocol would be to combine the OCR and metadata into .xml files as the coding goal. Peeking into zipped files, although good quality-control practice, did not merit the lavish data frame visualizations of the original jstor vignettes. Viewed from the business of statistical hypothesis testing with manageable n sizes, it was more relevant to simply unzip and work with the goods, as it were. Then, in the spirit of keeping the metadata multi-purpose, further downstream processing could yield data frames and variables as needed for text analytics, content analysis, and quantitative linguistics. This meant that the unit of analysis, in content analytic parlance, would be the individual .xml file. Data preservation techniques were used so that as many variables as possible were carried into the final .xml files.

Fig. 1 Documentation for jstor_ocr in the r7283 package

Procedural Automations for jstor_ocr and its Dependencies

The xml2 package was considered a vital dependency to retain within the jstor_ocr code (from the r7283 package) (Martinez, 2018), mostly because of xml2's memory management features (Wickham, Hester and Ooms, 2018). The resulting jstor_ocr code represents an intermediate coder's work (note well: functional programming did not improve processing speeds at medial n levels, several tests notwithstanding). Assuming an unzipped file location, the code takes the unaltered main folder structure, which opens onto the n-grams, metadata, and ocr subfolders, and reads those subfolder contents in as lists. It then parses the text files, deletes all tags found inside the .txt files (their quality proved too precarious to trust), wraps each text in opening and closing text tags, rejoins hyphenated words, substitutes out unreadable UTF-8 characters, and finally adopts the resulting nodes as XML children of the designated XML metadata files. It performs a quality-control check, assuring that the names of the .txt and .xml files match exactly, a latent indication that their respective contents match as well; it does so by comparing vector indexes, one inside the loop against one found through lexical scoping inside the function. It then exports the resulting .xml files to a predesignated folder supplied in the function's second argument. These files are then released to further text cleansing and staging downstream, in preparation for final statistical modeling according to content analytic or quantitative linguistic methods.
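Because the published source lives in the r7283 repository, the block below is only a compressed sketch of the procedure just described, written against xml2. The folder layout, the <text> wrapper tag, the cleaning regexes, and the simple file-stem comparison standing in for the index check are illustrative assumptions rather than the function's actual internals.

```r
library(xml2)

# Sketch only: combine each ocr .txt with its metadata .xml and write the result out.
jstor_ocr_sketch <- function(in_dir, out_dir) {
  meta_files <- list.files(file.path(in_dir, "metadata"),
                           pattern = "\\.xml$", full.names = TRUE)
  ocr_files  <- list.files(file.path(in_dir, "ocr"),
                           pattern = "\\.txt$", full.names = TRUE)
  dir.create(out_dir, showWarnings = FALSE)

  for (i in seq_along(ocr_files)) {
    stem_txt <- tools::file_path_sans_ext(basename(ocr_files[i]))
    stem_xml <- tools::file_path_sans_ext(basename(meta_files[i]))
    # quality-control check: matching file names stand in for matching contents
    if (!identical(stem_txt, stem_xml)) stop("File ", i, ": .txt and .xml names differ")

    txt <- paste(readLines(ocr_files[i], warn = FALSE), collapse = " ")
    txt <- gsub("<[^>]+>", "", txt)                            # delete stray tags inside the .txt
    txt <- gsub("-\\s+", "", txt)                              # crude rejoin of hyphenated line breaks
    txt <- iconv(txt, from = "UTF-8", to = "UTF-8", sub = "")  # drop unreadable UTF-8 bytes

    meta <- read_xml(meta_files[i])
    xml_add_child(xml_root(meta), "text", txt)                 # adopt the ocr as a child node
    write_xml(meta, file.path(out_dir, paste0(stem_xml, ".xml")))
  }
  invisible(length(ocr_files))
}

# Usage, assuming an unzipped DfR download in "dfr_download":
# jstor_ocr_sketch("dfr_download", "dfr_combined")
```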

Conclusion

The constant evolution of archival sharing arrangements makes possible the rapid implementation of standardized code, whether through peer review or through archived binary packages, the latter of which may lean on problem-in-practice documentation and usage. JSTOR's recent, forward-thinking arrangement of providing scaled, raw digital humanities data beckons code-writing methodologists to construct pragmatic code for small-n to medial-n (and arguably large-n) text analytic processing. A major assumption of the code presented here is that quality control can be attained with older, albeit basic, functions, eliminating some newer published functions entirely. The application of the code is mostly restricted to JSTOR's DfR archival work, featuring the function's ability to pull together unstructured data and their accompanying metadata into a seamless set of files.

References

Klebel, T. (2018). jstor: Import and analyse data from scientific texts. Journal of Open Source Software, 3(28), 883. Retrieved from https://doi.org/10.21105/joss.00883

Martinez, M. (2018). r7283: A miscellaneous toolkit. Retrieved from http://github.com/cownr10r/r7283

Wickham, H. et al. (2020). Tidyverse. Retrieved from http://tidyverse.org

Wickham, H., Hester, J. and Ooms, J. (2018). xml2: Parse XML. Retrieved from https://CRAN.R-project.org/package=xml2

Xie, Y. (2019). knitr: A general-purpose package for dynamic report generation in R. Retrieved from https://cran.r-project.org/web/packages/knitr/index.html

