The rapid and widespread transmission of the coronavirus disease (COVID-19) is a major threat to humankind. People from all walks of life are exposed to infection. To cope with this pandemic, the World Health Organization and other health organizations have recommended social distancing. Consequently, governments and autonomous organizations have placed restrictions on people’s movement and employment. Researchers and academics face similar limitations. In the social and behavioral sciences in particular, where some research requires interaction with human subjects, it may not be possible to conduct further studies or to access laboratories. These impediments imply that considerable social and behavioral research is contracting globally.
In response, research methods are shifting to online experiments and surveys. For example, several large psychological multi-lab collaborations, such as the COVIDiSTRESS project (https://covidistress.github.io/) and the Psychological Science Accelerator’s COVID-Rapid project (https://psysciacc.org/), are considering implementing online questionnaires. Although various limitations must be taken into consideration (for example, prolonged experiments result in a substantial number of dropouts: Sasaki & Yamada, 2019), online experiments and surveys enable researchers to collect diverse and comprehensive data in a short period of time (Grootswagers, 2020). Thus, some social and behavioral researchers who are currently unable to conduct laboratory experiments have been able to continue collecting data.
However, publishing limitations persist. While the number of manuscripts uploaded as preprints is growing, the bottleneck of publication outlets has left many unpublished. Some publishers and printers are offering expedited publishing options for medical journals (Beeby, 2020). For example, some journals associated with social and behavioral studies are willing to publish COVID-19-related research through fast peer review (e.g., Chambers, 2020; “Science in the time of COVID-19,” 2020), and others have waived publication fees (e.g., Frontiers, 2020). At the same time, it is evident that only a limited number of papers will be published through these emergency measures. The Psychology of COVID-19 Preprint Tracker indicates that a very large number of COVID-19 preprints in psychology are awaiting publication (Syed, 2020).
In this article, I focus on “micropublishing,” which is a publishing method that specializes in the accumulation of data with minimal text. It is not yet popular among social and behavioral scientists. I argue that introducing micropublishing to these fields will remedy the lack of publishing options available during the immediate COVID-19 crisis and will establish a paradigm-shifting publishing method that could be used even after this pandemic is addressed.
Academic publishing has changed drastically in the last few decades. The publication process now includes many features not traditionally associated with the final publication: open access, open materials/data, preprints, pre-registration, and post-publication peer review. In this context, the micropublishing of academic papers has emerged. Micropublishing is a term originally coined by independent (mostly private) publishers that print niche, shorter material and publish on demand for target markets. This method is agile, and there has been a movement to take advantage of it for academic publishing. The launch of microPublication Biology (https://www.micropublication.org/) is emblematic of this trend. microPublication Biology offers fast and flexible publication with minimal text (only a description[1] and method; minimal or no introduction and discussion), rapid peer review, rapid publication (within one week),[2] and direct registration of findings in genomic databases (for example, WormBase[3]). The contents of a paper can be any generic novelty-oriented research, replication experiments, experiments with negative results, proposals, or even methodology. This publication format itself could be disseminated across many academic disciplines as academic micropublication (hereafter simply “micropublication”). Table 1 shows a short summary of the characteristics of micropublication and other formats. In some respects, micropublication is a middle ground between preprints and journal full papers. Registrability refers to whether pre-registration is possible; non-refereed pre-registration is available in all three formats. This is to be expected, since authors only need to use a third-party registration site before submission.
If authors desire peer review for pre-registration, they currently have the option of Registered Reports, a format in which experiments are conducted after the protocol has been accepted through peer review and are ultimately reported as a full paper. In micropublications, the pre-registration protocol is peer-reviewed and published as a paper upon acceptance. This could be a new option in the pre-registration system, and it will be discussed in more detail later.
| | Preprint | Micropublication | Journal full paper |
| --- | --- | --- | --- |
| Structure | Complete IMRAD or Protocol | Description and Method | Complete IMRAD |
| Registrability | Pre-registration | Pre-registration | Pre-registration and Registered Report |
Social and behavioral research can also adopt this format. The introduction of pre-registration (and Registered Reports) and open data in these fields is progressing rapidly (e.g., Nosek, Ebersole, DeHaven, & Mellor, 2018; van ‘t Veer & Giner-Sorolla, 2016). Publishing direct replication reports of previous experiments has also become popular since famous mega-replication papers were published in Science (Open Science Collaboration, 2015). Similarly, findings in the social and behavioral sciences, as well as opinion papers such as this one, are micropublishable. For empirical studies, readers do not need a detailed introduction and can concentrate on examining the methods and results. Likewise, reviewers can focus on methods and results in their peer review. Micropublication journals maintain peer-review quality and standards equal to those of other journals, but because manuscripts are written in a minimalistic form, reviews inevitably take less time to complete.
To demonstrate this, I have created a micropublication journal in psychology, my area of expertise, entitled “Psychological Micro Reports” (https://sites.google.com/view/pmr-journal/). I have published sample papers as well as fact-based experimental data. This demo journal does not have a Digital Object Identifier (DOI) but has all the components of a psychology journal. For example, the journal employs fast-track peer review all year round, and accepted manuscripts are published (released) immediately.
With the COVID-19 situation, evidence in the social and behavioral sciences is accumulating rapidly through online data collection, but the options for sharing this information are scarce. Micropublishing would make evidence available quickly. This publication method will be imperative to revitalize academic debate around the COVID-19 situation. An overview of uploaded preprints of COVID-19-related psychological research indicates that there are many overlapping research themes. For example, several studies have been conducted regarding risk perception (Gerhold, 2020; Niepel, Kranz, Borgonovi, & Greiff, 2020; Raude, Debin, Souty, Guerrisi, Turbelin, Falchi, … Colizza, 2020; Wise, Zbozinek, Michelini, Hagan, & Mobbs, 2020; Zettler, Schild, Lilleholt, & Böhm, 2020), anxiety (Šrol, Mikušková, & Cavojova, 2020; Tabri, Hollingshead, & Wohl, 2020; Zheng, Yao, & Narayanan, 2020), controllability (Everett, Colombatto, Chituc, Brady, & Crockett, 2020; Li, Yang, Dou, & Cheung, 2020; Li, Yang, Dou, Wang, Zhang, & Lin, 2020; Šrol et al., 2020), stress and coping (Gerhold, 2020; Sweeny, Rankin, Cheng, Hou, Long, Meng, … Zhang, 2020), and others that fall into the COVID-19-related research categories recently exemplified by Van Bavel, Baicker, Boggio, Capraro, Cichocka, Cikara, et al. (2020). Distributed and overlapping findings should be aggregated in a series. In the current environment, it would be desirable to micropublish a first proposal paper that describes an experiment’s methodology or survey method and then publish data based on that protocol in a series of subsequent papers. Any deviations can be noted. Furthermore, as my demo journal’s instructions make clear, constraints on generality and other limitations should be mentioned in the text.
Disclosing these deviations and limitations should not require much space or detract from the brevity of a micropublication, because the experiments and data in micropublications are simpler than those in full papers, so there should be little to disclose.
Fast publishing may seem to imply inaccurate and sloppy research, but this publishing system promotes cumulative science. In other words, one paper may not have strong value on its own, but the combination of multiple papers can be stronger. In addition, since publications remain open for editing in micropublication journals, problems in individual papers can easily be corrected.
After the COVID-19 crisis is over, will micropublishing survive? Micropublishing offers many advantages for open science. It allows flexible paper types (breaking through the traditional format restrictions on original articles, short reports, and reviews) and opens the possibility of publishing trivial or null results; this change mitigates publication biases (Mahoney, 1977). In addition, this system has the potential to extend Registered Reports. According to the Center for Open Science, Registered Reports are defined as “peer review before results are known to align scientific values and practices” (https://cos.io/rr/). Registered Reports are now published in more than 240 journals and are expected to become an important norm for open and reproducible science by suppressing p-hacking (Simmons, Nelson, & Simonsohn, 2011) and HARKing (“hypothesizing after the results are known,” coined by Kerr, 1998). In Registered Reports, the hypothesis and the experimental protocol are peer reviewed before the experiment (Stage 1), and no changes are allowed thereafter. Then, the results of the experiment and their interpretations are added, and the manuscript is peer reviewed again (Stage 2). Finally, both stages are compiled into a single paper. This system can prevent a variety of questionable research practices (QRPs) by limiting the researcher’s freedom to establish hypotheses and choose analysis methods after the experiment is complete.
How, then, does micropublishing implement Registered Reports? Take the example of my demo journal. The e0001 paper only describes the method of the experiment; this corresponds to the Stage 1 protocol of a Registered Report. The e0002 paper, on the other hand, is based on the results; this corresponds to Stage 2. In other words, Stages 1 and 2 belong to one study but are separate micro papers: the final form of a Registered Report need not be a single paper. Moreover, the authors of each micro paper can be different. I have proposed a division of labor between the pre-registration and experimental groups in Registered Reports (Ikeda, Xu, Fuji, Zhu, & Yamada, 2019; Yamada, 2018), and this format of micropublication is an easy way to implement that recommendation. The division of labor renders QRPs ineffective, as the experiment performers and the hypothesis builders are different people. It also allows each researcher to focus on their research strengths, changing the status quo whereby only all-round research experts can become published researchers (Yamada, 2019).
The promotion of micropublication raises three major concerns. The first stems from a science communication perspective. In recent years, scholarly information sharing via preprints has flourished, but preprints are not peer-reviewed. Nevertheless, there are cases where strong arguments are made on the basis of preprints. For example, misplaced preprint-based policy recommendations in relation to COVID-19 caused considerable confusion and possibly resulted in some lives being lost; many such COVID-19-related papers were retracted in a short period of time, as Retraction Watch reported (https://retractionwatch.com/retracted-coronavirus-covid-19-papers/). A vitally important paper released during the COVID-19 pandemic points out that psychology does not yet have the evidentiary capacity to inform COVID-19 policy and needs to start with an evidence-level classification (IJzerman et al., 2020). The present proposal is based on the idea that micropublication ensures the quality of the evidence while achieving fast publication, and I believe the format can also be applied to the reproducibility problem in psychology as it stands. Unlike preprints, micropublications are peer-reviewed. However, neither micropublications nor preprints should be used to make policy recommendations, and both should be distinguished from journal full papers. More generally, strong arguments based on a single publication should be avoided.
The second concern relates to researcher evaluation. Because of the process’s rapidity and brevity, researchers could quickly pad their CVs by micropublishing exclusively. It will thus be necessary, at the very least, to clearly mark micropublications in CVs so they can be identified (this is also recommended by microPublication Biology). However, this will only be understood by evaluators familiar with micropublication. In fact, this issue is connected to structural reforms of researcher evaluation. Recently, many have questioned whether researchers should be evaluated based on impact factors, but no new measure has emerged at a level that can realistically be adopted. In recent years, a service called Plaudit (https://plaudit.pub/) has provided a system in which individual articles receive endorsements from researchers, verified via ORCID. If this becomes widespread, journal-based assessments such as impact factors will gradually fade away, and the sheer number of micropublications will become irrelevant in the evaluation of researchers.
It should be noted that post-publication peer review may be important in any case. The high time pressure of micropublication peer review can lead to cursory checks, resulting in a flood of low-quality publications. However, as mentioned earlier, micropublication reviewers are not distracted by superfluous sections (typical introduction and discussion sections) and can focus on examining the key points (methods, analysis, etc.). Furthermore, elsewhere I have proposed a new post-publication peer review model in which open peer review comments on already-published articles are themselves published in journals with a DOI (Ikeda, Yamada, & Takahashi, 2020). We proposed this publishing model for general journals, but combining it with the idea of micropublication journals opens up very interesting possibilities. That is, a post-publication peer review comment can itself be a micropublication paper. Such chains of micropublications will be the hallmark of future cumulative and self-correcting science.
A final concern is the trade-off between brevity and theory. In other words, the minimization of the introduction and discussion, as described above, dilutes the description of the relationship between background theory and the experiment. This problem is virtually unavoidable, but any experiment that requires so much careful introduction and discussion that it cannot be micropublished should be submitted to a journal as a full paper. Micropublication is simply one option alongside full papers. Every paper should be written in its proper format; the defining feature of micropublication is that it deliberately minimizes the theoretical aspects, which is acceptable as long as researchers correctly understand this trade-off.
I recommend that micropublishing be adopted in the social and behavioral sciences, because it would bring numerous benefits during and after the COVID-19 era. It is not necessary to launch a new micropublication journal (as I have with the demonstration of Psychological Micro Reports). Instead, existing journals could introduce a policy to accept such papers. The journal Molecular Brain has an article type similar to micropublication, known as the Micro Report (https://molecularbrain.biomedcentral.com/submission-guidelines/preparing-your-manuscript/micro-report). Therefore, the Micro Report model could be replicated and implemented instantly. The spread of COVID-19 has dramatically changed research practices. It is imperative that journals adapt accordingly.
[2] One of the reviewers kindly raised the following concern. Micropublication tends to accelerate the frequency of publication, which in some cases may result in an overwhelming quantity of papers; the current state of preprint servers is a comparable phenomenon. This may reduce the accessibility of each paper and lower the signal-to-noise ratio when searching the literature. Therefore, micropublication journals will have to implement an easy-to-use search or filtering system. We need to take advantage of tags, and we should have a state-of-the-art recommendation and curation system in place.
[3] One of the reviewers kindly informed me that there is a large-scale, micropublication-like example in genomic databases, although it is not peer-reviewed. Benjamin Neale’s Lab (http://www.nealelab.is/uk-biobank) releases very large-scale GWAS data (4,203 phenotypes; 361,194 individuals) using UK Biobank, but there is only a minimal description of the methods and results for each result set. In addition, a similar effort exists for brain imaging GWAS data (https://open.win.ox.ac.uk/ukbiobank/big40/). In view of this situation, I expect that the demand for micropublications will continue to increase in the future.
The author would like to thank the three reviewers and members of the Japanese Community for Open and Reproducible Science (JCORS) for very constructive discussions.
This research was supported by JSPS KAKENHI (16H03079, 17H00875, 18K12015, and 20H04581).
The author has no competing interests to declare. The author YY is an Associate Editor at Collabra: Psychology and was not involved in the review process of this article.
Drafted and/or revised the article, Approved the submitted version for publication: YY.
Beeby, R. (2020, April 6). Medical journal fast-tracks free publication of Covid-19 research. Research Professional News. https://researchprofessionalnews.com/rr-news-australia-industry-2020-4-medical-journal-fast-tracks-free-publication-of-covid-19-research/
Chambers, C. (2020, March 16). CALLING ALL SCIENTISTS: Rapid evaluation of COVID19-related Registered Reports at Royal Society Open Science. NeuroChambers. http://neurochambers.blogspot.com/2020/03/calling-all-scientists-rapid-evaluation.html
Everett, J. A. C., Colombatto, C., Chituc, V., Brady, W. J., & Crockett, M. (2020, March 20). The effectiveness of moral messages on public health behavioral intentions during the COVID-19 pandemic. DOI: https://doi.org/10.31234/osf.io/9yqs8
Frontiers. (2020). Coronavirus Knowledge Hub: A trusted source for the latest science on SARS-CoV-2 and COVID-19. Retrieved from https://coronavirus.frontiersin.org/
Gerhold, L. (2020, March 25). COVID-19: Risk perception and coping strategies. DOI: https://doi.org/10.31234/osf.io/xmpk4
Grootswagers, T. (2020, March 18). A primer on running human behavioural experiments online. DOI: https://doi.org/10.31234/osf.io/wvm3x
IJzerman, H., Lewis, N. A., Jr., Weinstein, N., DeBruine, L. M., Ritchie, S. J., Vazire, S., … Przybylski, A. K. (2020, April 27). Psychological science is not yet a crisis-ready discipline. DOI: https://doi.org/10.31234/osf.io/whds4
Ikeda, A., Xu, H., Fuji, N., Zhu, S., & Yamada, Y. (2019). Questionable research practices following pre-registration. Japanese Psychological Review, 62(3), 281–295. DOI: https://doi.org/10.31234/osf.io/b8pw9
Ikeda, K., Yamada, Y., & Takahashi, K. (2020, May 26). Post-publication peer review for real. DOI: https://doi.org/10.31234/osf.io/sp3j5
Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2(3), 196–217. DOI: https://doi.org/10.1207/s15327957pspr0203_4
Li, J., Yang, A., Dou, K., & Cheung, R. Y. M. (2020, March 11). Self-control moderates the association between perceived severity of the coronavirus disease 2019 (COVID-19) and mental health problems among the Chinese public. DOI: https://doi.org/10.31234/osf.io/2xadq
Li, J., Yang, A., Dou, K., Wang, L., Zhang, M., & Lin, X. (2020, February 28). Chinese public’s knowledge, perceived severity, and perceived controllability of the COVID-19 and their associations with emotional and behavioural reactions, social participation, and precautionary behaviour: A national survey. DOI: https://doi.org/10.31234/osf.io/5tmsh
Mahoney, M. J. (1977). Publication prejudices: An experimental study of confirmatory bias in the peer review system. Cognitive Therapy and Research, 1(2), 161–175. DOI: https://doi.org/10.1007/BF01173636
Niepel, C., Kranz, D., Borgonovi, F., & Greiff, S. (2020, March 30). Coronavirus (SARS-CoV-2) fatality risk perception in US adult residents. DOI: https://doi.org/10.31234/osf.io/w52e9
Nosek, B. A., Ebersole, C. R., DeHaven, A. C., & Mellor, D. T. (2018). The preregistration revolution. Proceedings of the National Academy of Sciences of the United States of America, 115(11), 2600–2606. DOI: https://doi.org/10.1073/pnas.1708274114
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. DOI: https://doi.org/10.1126/science.aac4716
Raude, J., Debin, M., Souty, C., Guerrisi, C., Turbelin, C., Falchi, A., … & Colizza, V. (2020, March 8). Are people excessively pessimistic about the risk of coronavirus infection? DOI: https://doi.org/10.31234/osf.io/364qj
Sasaki, K., & Yamada, Y. (2019). Crowdsourcing visual perception experiments: A case of contrast threshold. PeerJ, 7, e8339. DOI: https://doi.org/10.7717/peerj.8339
Science in the time of COVID-19. (2020). Nature Human Behaviour, 4, 327–328. DOI: https://doi.org/10.1038/s41562-020-0879-9
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366. DOI: https://doi.org/10.1177/0956797611417632
Šrol, J., Mikušková, E. B., & Cavojova, V. (2020, March 31). When we are worried, what are we thinking? Anxiety, lack of control, and conspiracy beliefs amidst the COVID-19 pandemic. DOI: https://doi.org/10.31234/osf.io/f9e6p
Sweeny, K., Rankin, K., Cheng, X., Hou, L., Long, F., Meng, Y., … & Zhang, W. (2020, March 26). Flow in the time of COVID-19: Findings from China. DOI: https://doi.org/10.31234/osf.io/e3kcw
Syed, M. [syeducation] (2020, April 8). COVID-19 preprints are regularly being posted on @PsyArXiv. We need timely and open reviews of them given the higher interest. This doc lists the preprints and tracks reviews. Choose a paper, complete a review, and link it in the doc. Details in the file [Tweet]. Retrieved from https://mobile.twitter.com/syeducation/status/1247561969466576907
Tabri, N., Hollingshead, S., & Wohl, M. J. A. (2020, March 31). Framing COVID-19 as an existential threat predicts anxious arousal and prejudice towards Chinese people. DOI: https://doi.org/10.31234/osf.io/mpbtr
Van Bavel, J. J., Baicker, K., Boggio, P. S., Capraro, V., Cichocka, A., Cikara, M., … & Willer, R. (2020). Using social and behavioural science to support COVID-19 pandemic response. Nature Human Behaviour, 4, 460–471. DOI: https://doi.org/10.1038/s41562-020-0884-z
van ‘t Veer, A. E., & Giner-Sorolla, R. (2016). Pre-registration in social psychology—A discussion and suggested template. Journal of Experimental Social Psychology, 67(C), 2–12. DOI: https://doi.org/10.1016/j.jesp.2016.03.004
Wise, T., Zbozinek, T. D., Michelini, G., Hagan, C. C., & Mobbs, D. (2020, March 19). Changes in risk perception and protective behavior during the first week of the COVID-19 pandemic in the United States. DOI: https://doi.org/10.31234/osf.io/dz428
Yamada, Y. (2018). How to crack pre-registration: Toward transparent and open science. Frontiers in Psychology, 9, 1831. DOI: https://doi.org/10.3389/fpsyg.2018.01831
Yamada, Y. (2019). Publish but perish regardless in Japan. Nature Human Behaviour, 3, 1035. DOI: https://doi.org/10.1038/s41562-019-0729-9
Zettler, I., Schild, C., Lilleholt, L., & Böhm, R. (2020, March 23). Individual differences in accepting personal restrictions to fight the COVID-19 pandemic: Results from a Danish adult sample. DOI: https://doi.org/10.31234/osf.io/pkm2a
Zheng, M. X., Yao, J., & Narayanan, J. (2020, March 20). Mindfulness buffers the impact of COVID-19 outbreak information on sleep duration. DOI: https://doi.org/10.31234/osf.io/wuh94
The author(s) of this paper chose the Open Review option, and the peer review comments can be downloaded at: http://doi.org/10.1525/collabra.370.pr