
Thoughts on publishing in a mega-journal

Results of publishing my latest paper

Just recently I published my first scientific (as opposed to anthropological) article in a mega-journal (see the article at https://doi.org/10.1016/j.heliyon.2022.e08888). I had searched for a home for the article in two other journals, but it was rejected by both. The first editor noted that the article was technically sound but that its topic was too narrow for their readership; the other editor stated that the topic modeling would be of no interest to readers. Considering that fewer than a handful of journals existed that would handle this article, one of the editors suggested I consider its present home, Heliyon.

The mega-journal Heliyon

Having taken the editor's advice, I looked into Heliyon, knowing very little about the journal other than that it is run by Elsevier. I could see that it published articles across different academic subjects, which makes it a mega-journal according to Wikipedia (Wikipedia Authors, 2021). Further looking around revealed that it had its own editorial team for education: 95% of the team held positions in universities, and the other 5% held PhDs and worked at education research institutes. Without any other guidance, I looked at the ranking metrics, and with a 2020 Scimago journal rank of 0.46 (Elsevier, 2022), I decided to take a calculated risk. I realized that the ranking was not very high by what I knew of journals, but I did not have many other places to submit this paper for peer review.


The state of science and p-values

My paper has non-significant p-values, which is part of the case it makes. As of 2022, there has been only slow change in publishing papers with non-significant p-values. The scientific process of hypothesis testing and publishing still revolves largely around positive results, and failed interventions, or non-significant results, can have a harder time being published: this was the message I received in one of the earlier rejections of this manuscript. Part of the argument that ultimately came out of the hypothesis test was explaining the non-significant results. This was very important, and it painted a picture of the landscape for the population I studied.

The other part of my paper did not rest on non-significant p-values but instead reported beta values. These amounted to important findings and described the constituent parts that make up the whole, so the paper as a whole did not consist only of non-significant findings. There was therefore room for editors to consider the article as one that mixed non-significant results with beta-value-based findings.

To be balanced, some psychology journals now accept paper proposals (sometimes called registered reports) instead of full-blown article write-ups: the introduction, literature review, methodology, and design are reviewed while the results and conclusions have yet to be produced and written. What this means is that those journals are attempting to blind themselves to bias in null hypothesis testing and to publish whatever results the experiments yield. But not all journals have this scheme in mind when processing papers for publication.


My experience with Heliyon

Submission to the journal was straightforward. Because Heliyon uses Elsevier's familiar platform and resources, there were no glitches; registration and uploading the paper's components felt routine. For the bibliographic details, I was able to compile a .Rmd file into a .docx file for upload, the latter of which is increasingly the requested format (incidentally, I used a LaTeX file set to compile and send the final corrected files for final review). In both the .Rmd and .tex files I used the APA format with the Elsevier template to compile the document's bibliography, with professional results in both formats.
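For readers curious about the .Rmd-to-.docx side of that workflow, here is a minimal sketch of what the setup can look like; the file names (manuscript.Rmd, references.bib, apa.csl) are hypothetical stand-ins rather than the actual files I used.

    # Minimal sketch: render an R Markdown manuscript to .docx with an
    # APA-formatted bibliography. File names below are hypothetical.
    #
    # The YAML header of manuscript.Rmd would point to the bibliography and
    # an APA CSL style file, for example:
    #
    #   output: word_document
    #   bibliography: references.bib
    #   csl: apa.csl
    #
    library(rmarkdown)

    # Rendering then produces the .docx that the journal requests for upload.
    render("manuscript.Rmd", output_format = "word_document")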

The peer review process went as predicted. I have now undergone peer review on five occasions. Usually there are three reviewers, and most of the time constructive comments are given. Of those constructive comments, roughly 85% drill down deeply and are extremely useful; because they are so constructive, they must be explored to the fullest, and each one deserves a specific response.

In this case the comments were all of high quality and pushed the paper into its final published form. I was made aware of blind spots in the paper, and there were points to consider that I had not conceived of. My personal litmus test for the quality of reviewer comments is whether, several months after publication, I can look at the published manuscript, see that it is in good form, and feel personally happy with the results; only then can I say the comments were solid. In this case, I feel the reviewers did a good job and pushed me to get the paper to a solid, coherent argument.

When the paper was accepted for publication, the editorial and typesetting team, together with the academic editor, made suggestions that I accepted. From there it was a matter of days for the manuscript to take on its final form for its online and PDF appearance. I missed two or three errors (I was doing final edits at 8 a.m. after only a few hours of sleep), but they are small, do not affect the statistics or numbers, and are easily overlooked.

Overall, the experience with Heliyon and its editors did not differ from my experiences with other journals and their editors. This signaled to me that the peer review experience with Heliyon mirrored that of a traditional academic journal, as it should, and it was further proof that I was submitting to a journal with a real process.


Going forward...

Reflecting on where this manuscript landed, would I go with a mega-journal again for publication? This paper presented a special case: fewer than a handful of journals would potentially accept the article, it had been rejected by the ones I had targeted, and I was advised to go with Heliyon because the paper was technically sound but did not fit within the narrow parameters of the last journal's requirements. Because this was a special case, I made the choice I did.

There are advantages to going with a mega-journal. The journal allows open access as a default. This means that, in the name of science, readers do not hit a paywall; if they want to read the paper, they can read it for free, so long as they have access to the internet and a printer. The downside is that, unlike with traditional publishing, you as the author must pay for the publishing, a different model from the one most academics know. The upshot is a kind of scientific liberation: anyone can come to their own conclusions about the legitimacy of your work. Peer review has already vetted the veracity of the methods and claims, so that is accomplished at an academic level.

There are some who claim that, in the future, the work of academia will be swallowed up into the sphere of mega-journals (Wikipedia Authors, 2021). Whether this would be an advance for knowledge has yet to be seen, as does whether it would solve the problems presently experienced with paywalls, the high cost of publishing, and so on. In the final analysis, it is difficult to move beyond the status quo because of the prestige attached to working with what already exists, although the Elsevier imprint and the peer review process make the two routes (traditional publishing and a mega-journal) feel much the same to a greater or lesser degree.


Bottom line

The final word on this is that I feel the paper received due scrutiny during the peer review process. Compared with other peer review feedback I have received, the comments were fair and robust and made sense in light of what was on the page. The critique asked for some major revisions, and the paper became stronger because of it. The registration process was smooth, as was the publication process. I feel the paper found its rightful home.


References

Elsevier. (2022). Heliyon. https://www.cell.com/heliyon/home

Wikipedia Authors. (2021). Heliyon. https://en.wikipedia.org/wiki/Heliyon
