
Persisting through reading technical CRAN documentation

In my pursuit of teaching myself the R programming language, I have mostly mastered the art of reading through the CRAN documentation of R libraries as they are published. I have worked through everything from mediocre to very well documented pages and everything in between. Here I share one example of a very good, well documented function in the 'survey' library by Dr. Thomas Lumley that, for some reason, I initially could not process and make work with my data. No finger pointing here: my brain simply was not ready to wrap itself around the idea that the function takes another function as one of its arguments.


fig. 1: the svyby function in the 'survey' library by Thomas Lumley, filled in with variables for my study

Readers familiar with base R will be reminded of the aggregate function, which svyby mirrors: both take data and both take a function toward the end of their arguments. The difference is that svyby is designed to give results on subsets of the survey as defined by a factor. Slot 1 is a formula naming the variable(s) to summarise, slot 2 is a formula for the factor we're slicing the data by, slot 3 is the survey design object, slot 4 is the survey-aware function, and a fifth argument, na.rm, allows for removal of NAs (see fig. 1 and the sketch below). In retrospect, it is an ingeniously designed little function: sleek, smart, and available.
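To make the slot layout concrete, here is a minimal sketch using the api data set that ships with the 'survey' package; the variable names (api00, stype, pw, fpc) are illustrative stand-ins from that example data, not the ones from my study.

library(survey)
data(api)                                      # example data shipped with 'survey'

# Slot 3: a design object, here a stratified sample of California schools
dstrat <- svydesign(id = ~1, strata = ~stype, weights = ~pw,
                    data = apistrat, fpc = ~fpc)

# Slots 1-4 in order: outcome formula, grouping formula, design, survey-aware
# function; na.rm is passed through to svymean
svyby(~api00, ~stype, dstrat, svymean, na.rm = TRUE)

# The base R analogue with aggregate(), which ignores the survey design
aggregate(api00 ~ stype, data = apistrat, FUN = mean)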

However, at first I was unable to break through the library to find the function that I needed. I tried other functions and added to those snippets to see whether I was doing something right, with no luck. Weeks of this went by before I finally discovered svyby.

Then, when I discovered this code, the tildes threw me off. In many cases, referring to an object or part of an object in R with quoted names is how arguments to functions are satisfied. The use of the tilde is admittedly foreign to my eye as an R programmer, but I do not say that as a criticism, as the writer of this package is part of the R core team. Maybe there is something in the 'survey' library that is truer to R than what the rest of us have been doing elsewhere? I'm not sure. It's just a different experience.
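To show the contrast that confused me, here is a small, hedged illustration using the same illustrative api data: base R is happy to index a column by a quoted name, while the 'survey' functions expect a one-sided formula built with the tilde.

library(survey)
data(api)

# Base R habit: index a data frame column with a quoted name
mean(apistrat[["api00"]])

# 'survey' habit: build a design object, then pass one-sided formulas
dstrat <- svydesign(id = ~1, strata = ~stype, weights = ~pw,
                    data = apistrat, fpc = ~fpc)
svymean(~api00, dstrat)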

However, the use of the term "formula" in the CRAN documentation for slots 1, 2, and 4 threw me off once I realized that svyby was the function I needed to break descriptive statistics down into subsets of the survey I was using for my analysis. In some cases, useRs will need a formula to specify subsets in a very particular form because their survey sampling is complex; that was not the case for me, as I was drawing from a very simply sampled survey. Therefore, I did not need a complicated formula, and I could substitute a different word in my head altogether for slots 1 and 2 (viz. vector). Slot 4 was easy enough to figure out once I realized what was needed for slots 1 and 2 in the arguments to svyby.
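As a sketch of the "simple" case I am describing, again with the package's bundled api data standing in for my own survey: apisrs is a simple random sample, so the design object needs almost nothing, and the formulas in slots 1 and 2 each just name a single column.

library(survey)
data(api)

# apisrs is a simple random sample, so the design object is minimal
dsrs <- svydesign(ids = ~1, fpc = ~fpc, data = apisrs)

# Slots 1 and 2 are still written with tildes, but each names one column, so
# reading them as "the vector to summarise" and "the factor to group by"
# works in practice
svyby(~api00, ~stype, dsrs, svymean, na.rm = TRUE)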

All of this did not come easily. It took weeks of problem-solving, persistent reading, perspective-changing, drilling through technical CRAN documentation, and trying variations on code to get to results that worked. I can say that in the time it took to understand the svyby function, I learned more about myself, about using functions as arguments, and about my data than I thought possible. I am grateful to Dr. Thomas Lumley and the 'survey' package.


