The community of research psychologists has access to a large and diverse pool of resources (e.g., time, participants, expertise, geographical locations) that, collectively, have the potential to produce considerable gains in knowledge, shape public policy, and improve human lives. Despite this potential, these resources may not be used in the most effective manner: Individual researchers often have access to samples that are too small to properly detect the phenomenon of interest, idiosyncrasies in individual samples may not provide evidence about the generalizability of an effect, there may be redundancies in data collection procedures across studies, and so on. In addition to impeding scientific progress, these inefficiencies can be viewed as a disservice to the participants who volunteer their time and effort in the belief that their data will contribute to reliable and generalizable knowledge, and to the public who entrust researchers to be good stewards of research resources (e.g., Crutzen & Peters, 2017). Thus, it is imperative for research psychologists to explore methodological approaches that address these inefficiencies.
One promising methodological approach is “crowdsourced” research, in which several researchers coordinate their resources to achieve goals that would otherwise be difficult to attain individually. For example, several recent, high-profile, large-scale research projects have demonstrated the potential of crowdsourcing research resources to make substantial contributions (e.g., the “Many Labs” projects, Ebersole et al., 2016; Klein et al., 2014; Registered Replication Reports [RRR], Alogna et al., 2014; Cheung et al., 2016; Eerland et al., 2016; Hagger et al., 2016; Wagenmakers et al., 2016; “The Pipeline Project,” Schweinsberg et al., 2016; the “ManyBabies” project, Frank et al., 2017; see also Schmalz, 2016). In each of these projects, several research teams conducted studies (a) following the same methods and (b) at different locations with different samples, and (c) the results from each team were aggregated into a common analysis that was planned from the project’s inception.
Despite the aforementioned examples of multi-site collaborative projects, we believe most researchers either do not view such projects as a methodological approach available for addressing their research questions or believe these projects are used only for replications of previously published studies. Neither belief is necessarily true. To that end, we are initiating a Nexus—a collection of empirical and theoretical articles that will be published in Collabra: Psychology. The goals of this Nexus are to (a) provide an outlet for crowdsourced empirical projects, (b) assist authors in developing and executing their crowdsourced projects, and (c) demonstrate the flexibility and range of research questions that can be addressed with crowdsourced research methods.
In contrast to many special issues in traditional journals, the common theme of the articles included in this Nexus is the methodological approach rather than the substantive topic. Specifically, each empirical paper will involve several researchers who each collect data at independent sites and who aggregate all of the collected data into a common analysis (most likely a meta-analysis). For the current Nexus, we call each multi-site study a “Collection²” (pronounced simply as “collection” but denoted as a type of crowdsourced research project by the capital C and the exponent). The name “Collection²” succinctly describes the methodological approach of these projects because a collection of researchers each collect data at their individual sites (i.e., these are collections of collections of data). The Nexus will also be open to, for example, theoretical critiques of crowdsourced research, commentaries on the promise or benefits of crowdsourced research, re-analyses of already-completed Collections², and meta-science articles relevant to crowdsourced research.
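To make the "common analysis" concrete: a typical way to aggregate multi-site data is a random-effects meta-analysis of site-level effect sizes. The sketch below (in Python) uses the DerSimonian–Laird estimator with entirely hypothetical effect sizes and variances; it is one illustrative possibility, not an analysis prescribed by the Nexus.

```python
import math

def random_effects_meta(effects, variances):
    """DerSimonian-Laird random-effects pooling of site-level effect sizes.

    effects: per-site effect size estimates (e.g., Cohen's d)
    variances: per-site sampling variances of those estimates
    Returns (pooled_effect, pooled_se, tau2).
    """
    k = len(effects)
    w = [1.0 / v for v in variances]                # fixed-effect weights
    fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q: weighted squared deviations from the fixed-effect mean
    q = sum(wi * (yi - fe) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)              # between-site variance
    w_re = [1.0 / (v + tau2) for v in variances]    # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, tau2

# Hypothetical results from five contributing labs
effects = [0.60, 0.15, 0.45, 0.05, 0.70]
variances = [0.02, 0.03, 0.05, 0.02, 0.04]
pooled, se, tau2 = random_effects_meta(effects, variances)
print(f"pooled effect = {pooled:.2f}, 95% CI half-width = {1.96 * se:.2f}, tau^2 = {tau2:.3f}")
```

The estimate of tau² is itself informative for crowdsourced projects: it quantifies how much the effect varies across contributing sites, which bears directly on the generalizability questions raised above.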
Here is a non-exhaustive list of the types of Collections² that could be included in the current Nexus, along with a hypothetical example of each.
Unlike special issues in traditional journals, the Nexus format allows articles to be submitted for publication as they are completed. In short, authors do not have to wait for their projects to be evaluated or published at the same time as other projects (i.e., there is no submission deadline or single “publication date” for the Nexus). We believe this open-ended timeline will be beneficial because we expect variability in how long different projects will take to complete. Importantly, researchers need not opt out of these projects out of concern that they would not be completed in time for inclusion in the Nexus.
The Article Processing Charges (APC) and APC waiver process for Collabra: Psychology also apply to this Nexus. This Nexus is open to all areas of psychological research. In fact, we hope to see submissions with diversity in the areas of psychology, the research questions addressed, the methods, etc. Randy McCarthy and Chris Chartier will be the lead editors for this Nexus and will serve as the point of contact for general questions about it. However, we may enlist ad-hoc editors to provide domain-specific knowledge for some proposals outside our areas of expertise. These ad-hoc editors would serve as the point of contact for the specific projects they are handling.
Each Collection² will have a researcher who is designated as the corresponding researcher. Corresponding researchers will be responsible for submitting the articles through the Collabra: Psychology submission portal, will be the point of contact between Collabra: Psychology and all of the contributing researchers within each Collection², and will take a leadership role in coordinating their Collection². There are two ways to lead a Collection²: lead a non-Registered Reports Collection² or lead a Registered Reports Collection².
Some Collections² that are eligible for inclusion in the Nexus may be in progress at the time of this announcement. Or, for whatever reason, authors may choose to design their study, collect data, and submit a manuscript to the Nexus as a traditional, post-data-collection submission (i.e., non-Registered Reports). These submissions will be given full consideration for inclusion in the Nexus, and the submitted manuscript will be evaluated in a traditional peer-review process. In this case, the corresponding researcher would serve the same role as a corresponding author in the traditional publication process.
Some Collections² may be submitted as Registered Reports. A description of the general Registered Reports track at Collabra: Psychology can be found here: https://docs.google.com/document/d/1eeXPEC_oc4OYlywC9pRt6seL2xazlT58jAtRsCuEdCI/edit. In a notable departure from the 2-stage Registered Reports process described in that document, the Registered Reports format for this Nexus will involve a 3-stage process: a pre-data Collection² proposal stage, a stage for recruiting and registering contributing labs, and a post-data Collection² manuscript stage. Thus, the first and third stages will follow Collabra: Psychology’s general Registered Reports process, and the second stage will be unique to this Nexus. Figure 1 shows the Registered Reports format for this Nexus.
An abbreviated description of this process is provided below with an emphasis on points where the Registered Reports process for the current Nexus departs from Collabra: Psychology’s general Registered Reports process. Authors should consult the Registered Reports process for Collabra: Psychology for more detail.
In Stage 1, the corresponding researcher will submit a proposed crowdsourced research project. Based on the Stage 1 peer-review process, an in-principle acceptance (IPA) may be extended. This IPA gives the researcher permission to move on to Stage 2.
What should be included in the Stage 1 proposal? Broadly, proposed projects will be evaluated like any other study. Namely, what is the research question? How is it (theoretically, practically) relevant? And is the proposed design of the study appropriate for addressing the research question? Aside from these broader issues, there are specifics that must be included in each proposed Collection². Proposals will include an Introduction section, a Planned Methods section, and a Planned Analyses section. Each of these is described in detail below.
Introduction section for a proposed Collection². Just like any empirical study, the Introduction section should give readers the background necessary to evaluate the proposed study. Additionally, Collections² typically require considerable research resources (e.g., participants, time); therefore, proposing authors must provide a compelling justification for why their research question would best be addressed using a “crowdsourced” methodological approach rather than a traditional single-site approach. For example, it may be difficult to justify a Collection² to study an effect that is already sufficiently established (e.g., a common Stroop task) or one that is extremely speculative, because such projects may not be a wise use of the collective research resources.
Planned Methods section for a proposed Collection². In addition to describing a method that addresses the research question, the Planned Methods for Collections² must address the following points:
Planned Analyses section for a proposed Collection². The Planned Analyses for Collections² must address the following points:
In Stage 2, the corresponding researcher will identify all of the contributing labs for their Collection². Prior to data collection, the corresponding researcher must provide a list of contributing labs and a confirmation that each contributing lab understands how authorship will be determined. Once the list of contributing labs and the agreement on authorship criteria are approved by the editors, a date- and time-stamped document will be posted on the Open Science Framework. The project will then be approved to begin data collection.
Recruiting contributors. Proposing authors may choose to recruit contributing labs on their own. If they would like assistance, the editors can help find contributing labs by advertising on social media, using a mailing list of labs that have expressed interest in contributing to crowdsourced research, and so on. Collections² with an IPA can also post their projects to StudySwap, an online platform for researchers to find potential collaborators; the editors can assist researchers in using the StudySwap platform. Our goal is to help make each Collection² a success, and we understand that recruiting contributing labs is an essential component of making these projects successful.
Sticks and Carrots. We believe that recruiting contributing labs may be perceived as a challenge of these crowdsourced research projects. We hope this perception does not deter researchers who want to lead a Collection². It is worth noting that there are several things that can be done to encourage other researchers to contribute to a proposed Collection².
First, for authors who choose the Registered Reports route, an IPA is likely a great way to assuage researchers’ hesitations about contributing to a Collection². Once an IPA is obtained, the corresponding researcher can approach potential contributors and assure them that a successfully executed project will be accepted for publication. Second, some researchers may offer time or space in the data collection procedure as an inducement for collaborators (as long as this approach is approved in the initial review process). For example, suppose a primary research project takes 40 minutes and the goal is to recruit five total labs. The corresponding researcher may propose a one-hour data collection procedure in which the first 40 minutes are devoted to the primary project and four 5-minute blocks are offered to the other four contributors. Finally, other recruitment tools (e.g., funding) may be used so long as they are approved during the Stage 1 review process.
In Stage 3, the corresponding researcher will submit a post-data-collection manuscript. This round of reviews will not focus on the observed results. Instead, it will focus on whether the proposed methods were properly executed, the planned analyses were properly conducted, the results were properly interpreted, and the proposed data sharing plan was followed. Additionally, authors must include a section detailing any deviations from the proposed study. Unforeseeable things happen during data collection; that is fine and cannot be completely avoided. What is important is that these deviations are properly documented and transparently communicated to readers.
When a successfully completed manuscript is ready to be accepted, authors will post the Stage 1 proposal that received an IPA, the list of contributing labs, the pre-data authorship inclusion criteria (showing that each contributing lab understood the authorship criteria), and any shareable stimuli and data to an online repository (e.g., the Open Science Framework). This will allow readers to transparently compare the final manuscript to what was planned and approved. Links to these materials must be included in the final manuscript.
Notably, if a Collection² is a replication of a previously published article or is closely aligned with the work of an individual researcher who was not part of the project, we may solicit commentaries from the original authors for inclusion in the Nexus. We will consider these invited commentaries on a case-by-case basis, but will be transparent with all parties involved.
This Nexus will provide many opportunities for individuals to contribute to crowdsourced research. Please consider joining a Collection² as a contributor, or mentor a student who wants to contribute to an appropriate Collection². Follow StudySwap on Twitter (@Study_Swap), like StudySwap on Facebook, or contact us to join our mailing list to receive updates.
The Nexus will also offer other ways for authors to contribute. If you are interested in writing an opinion piece or commentary on crowdsourced research in general, or on a specific Collection², please contact us with your idea. Also, given our encouragement to openly share data, we hope this Nexus is a boon to meta-science. Researchers are also free to submit re-analyses of Collections² to the Nexus. If you are interested in submitting a re-analysis, please contact us to discuss your idea prior to submission.
And, of course, an invaluable way to help make this Nexus a success is by getting the word out.
We are very excited about the possibilities of this Nexus. We hope it will generate a lot of informative data and demonstrate how crowdsourced research can be a useful methodological tool for the future of psychological science.
The authors have no competing interests to declare.
Drafting the article or revising it critically for important intellectual content: RM/CC
Final approval of the version to be published: RM/CC
Alogna, V. K., Attaya, M. K., Aucoin, P., Bahník, Š., Birch, S., Birt, A. R., Buswell, K., et al. (2014). Registered Replication Report: Schooler and Engstler-Schooler (1990). Perspectives on Psychological Science, 9, 556–578. DOI: https://doi.org/10.1177/1745691614545653
Cheung, I., Campbell, L., LeBel, E. P., Ackerman, R. A., Aykutoǧlu, B., Bahník, Š., Carcedo, R. J., et al. (2016). Registered Replication Report: Study 1 from Finkel, Rusbult, Kumashiro, & Hannon (2002). Perspectives on Psychological Science, 11(5), 750–764.
Crutzen, R., & Peters, G. J. Y. (2017). Targeting next generations to change the common practice of underpowered research. Frontiers in Psychology, 8. DOI: https://doi.org/10.3389/fpsyg.2017.01184
Ebersole, C. R., Atherton, O. E., Belanger, A. L., Skulborstad, H. M., Allen, J. M., Banks, J. B., Brown, E. R., et al. (2016). Many Labs 3: Evaluating participant pool quality across the academic semester via replication. Journal of Experimental Social Psychology, 67, 68–82. DOI: https://doi.org/10.1016/j.jesp.2015.10.012
Eerland, A., Sherrill, A. M., Magliano, J. P., Zwaan, R. A., Arnal, J. D., Aucoin, P., Crocker, C., et al. (2016). Registered Replication Report: Hart & Albarracín (2011). Perspectives on Psychological Science, 11, 158–171. DOI: https://doi.org/10.1177/1745691615605826
Frank, M. C., Bergelson, E., Bergmann, C., Cristia, A., Floccia, C., Gervain, J., Yurovsky, D., et al. (2017). A collaborative approach to infant research: Promoting reproducibility, best practices, and theory-building. DOI: https://doi.org/10.17605/OSF.IO/27B43. Retrieved from https://osf.io/preprints/psyarxiv/27b43/
Hagger, M. S., Chatzisarantis, N. L., Alberts, H., Anggono, C. O., Batailler, C., Birt, A. R., Calvillo, D. P., et al. (2016). A multilab preregistered replication of the ego-depletion effect. Perspectives on Psychological Science, 11, 546–573. DOI: https://doi.org/10.1177/1745691616652873
Klein, R., Ratliff, K., Vianello, M., Adams, R., Jr., Bahník, Š., Bernstein, M., Cemalcilar, Z., et al. (2014). Data from investigating variation in replicability: A “Many Labs” Replication Project. Social Psychology, 45, 142–152. DOI: https://doi.org/10.1027/1864-9335/a000178
Schmalz, X. (2016). The power is in collaboration: Developing international networks to increase the reproducibility of science. The Winnower: Social Science. DOI: https://doi.org/10.15200/winn.146178.82672
Wagenmakers, E. J., Beek, T., Dijkhoff, L., & Gronau, Q. F. (2016). Registered Replication Report: Strack, Martin, & Stepper (1988). Perspectives on Psychological Science, 11(6), 917–928. DOI: https://doi.org/10.1177/1745691616674458
The author(s) of this paper chose the Open Review option, and the peer review comments are available at: http://doi.org/10.1525/collabra.107.pr