Review article

A Practical Guide for Transparency in Psychological Science

Authors: Olivier Klein, Tom E. Hardwicke, Frederik Aust, Johannes Breuer, Henrik Danielsson, Alicia Hofelich Mohr, Hans IJzerman, Gustav Nilsonne, Wolf Vanpaemel, Michael C. Frank


The credibility of scientific claims depends upon the transparency of the research products upon which they are based (e.g., study protocols, data, materials, and analysis scripts). As psychology navigates a period of unprecedented introspection, user-friendly tools and services that support open science have flourished. However, the plethora of decisions and choices involved can be bewildering. Here we provide a practical guide to help researchers navigate the process of preparing and sharing the products of their research (e.g., choosing a repository, preparing their research products for sharing, structuring folders, etc.). Being an open scientist means adopting a few straightforward research management practices, which lead to less error-prone, reproducible research workflows. Further, this adoption can be piecemeal – each incremental step towards complete transparency adds positive value. Transparent research practices not only improve the efficiency of individual researchers, they enhance the credibility of the knowledge generated by the scientific community.

Keywords: transparency, open science, tutorial
Submitted on 25 Mar 2018; Accepted on 29 May 2018

… until recently I was an open-data hypocrite. Although I was committed to open data, I was not implementing it in practice. … Some of it was a lack of effort. It was a pain to document the data; it was a pain to format the data; it was a pain to contact the library personnel; it was a pain to figure out which data were indeed published as part of which experiments. Some of it was forgetfulness. I had neither a routine nor any daily incentive to archive data. (Rouder, 2016, p. 1063)


Science is a cumulative and self-corrective enterprise; over time the veracity of the scientific literature should gradually increase as falsehoods are refuted and credible claims are preserved (Merton, 1973; Popper, 1963). These processes can optimally occur when the scientific community is able to access and examine the key products of research (materials, data, analyses, and protocols), enabling a tradition where results can be truly cumulative (Ioannidis, 2012). Recently, there has been growing concern that self-correction in psychological science (and scientific disciplines more broadly) has not been operating as effectively as assumed, and a substantial proportion of the literature may therefore consist of false or misleading evidence (Ioannidis, 2005; Johnson, Payne, Wang, Asher, and Mandal, 2016; Klein et al., 2014; Open Science Collaboration, 2015; Simmons, Nelson, & Simonsohn, 2011; Swiatkowski & Dompnier, 2017). Many solutions have been proposed; we focus here on the adoption of transparent research practices as an essential way to improve the credibility and cumulativity of psychological science.

There has never been an easier time to embrace transparent research practices. A growing number of journals, including Science, Nature, and Psychological Science, have indicated a preference for transparent research practices by adopting the Transparency and Openness Promotion guidelines (Nosek et al., 2015). Similarly, a number of major funders have begun to mandate open practices such as data sharing (Houtkoop et al., 2018). But how should individuals and labs make the move to transparency?

The level of effort and technical knowledge required for transparent practices is rapidly decreasing with the exponential growth of tools and services tailored towards supporting open science (Spellman, 2015). While a greater diversity of tools is advantageous, researchers are also faced with a paradox of choice. The goal of this paper is thus to provide a practical guide to help researchers navigate the process of preparing and sharing the products of research, including materials, data, analysis scripts, and study protocols. In the supplementary material, readers can find concrete procedures and resources for integrating the principles we outline in their own research.1 Our view is that being an open scientist means adopting a few straightforward research management practices, which lead to less error-prone, reproducible research workflows. Further, this adoption can be piecemeal – each incremental step towards complete transparency adds positive value. These steps not only improve the efficiency of individual researchers, they enhance the credibility of the knowledge generated by the scientific community.

Why Share?

Science is based on verifiability, rather than trust. Imagine an empirical paper with a Results section that claimed that “statistical analyses, not reported for reasons of brevity, supported our findings (details are available upon request)”. Such opaque reporting would be unacceptable, because readers lack essential information to assess or reproduce the findings, namely the analysis methods and their results. Although publication norms for print journals previously supported sharing only verbal descriptions, rather than a broader array of research products, the same logic applies equally to those other products.

When study data and analysis scripts are openly available, a study’s analytic reproducibility can be established by re-running the reported statistical analyses, facilitating the detection and correction of any unintended errors in the analysis pipeline (Hardwicke et al., 2018; Peng, 2006; Stodden, 2015; Stodden, Seiler, & Ma, 2018; see supplementary material [SM]: Promoting analytic reproducibility). Once analytic reproducibility has been established, researchers can examine the analytic robustness of the reported findings, by employing alternative analysis specifications (Silberzahn et al., in press; Simonsohn, Simmons, & Nelson, 2015; Steegen, Tuerlinckx, Gelman, & Vanpaemel, 2016), highlighting how conclusions depend on particular choices in data processing and analysis. When stimuli and other research materials are openly available, researchers can conduct replication studies where new data are collected and analyzed using the same procedures to assess the replicability of the finding (Simons, 2014). And once a finding has been shown to be replicable, researchers can investigate its generalisability: how it varies across different contexts and methodologies (Brandt et al., 2014).

Transparency also enhances trust in the validity of statistical inference. Across statistical frameworks, conducting multiple tests and then selectively reporting only a subset may lead to improper and ungeneralisable conclusions (Goodman et al., 2016; Wasserstein & Lazar, 2016). Even if only a single analysis is conducted, selecting it based on post-hoc examination of the data can undermine the validity of inferences (Gelman & Loken, 2014). Transparency regarding analytic planning is thus critical for assessing the status of a particular statistical test on the continuum between exploratory and confirmatory analysis (De Groot, 2014; Wagenmakers, Wetzels, Borsboom, van der Maas, & Kievit, 2012). Such transparency can be readily achieved by publicly documenting one’s hypotheses, research design, and analysis plan before conducting a study in a process called pre-registration (De Angelis et al. 2004; Nosek, Ebersole, DeHaven, & Mellor, 2018; see SM: Preregistration).

Besides increasing the credibility of scientific findings, transparency also boosts the efficiency of scientific discovery. When information is not shared, the value of a study is limited because its products cannot be reused. In contrast, when research products are shared, subsequent researchers can avoid duplication of effort in data collection and decrease the expense involved in creating stimulus materials and analytic code. Further, sharing research products allows researchers to explore related hypotheses and can inspire new research questions. Shared research products can also be important models for researchers, especially trainees, to use in the development of their own materials and analyses. And, in the case of publicly funded research, there is also an ethical impetus to make the results of this work available to the public.

Finally, there are practical reasons to embrace transparency. First, public sharing is probably the best protection against data loss, since – as we will discuss – best practices require sharing in durable repositories. Second, open research practices increase visibility and facilitate access to unique opportunities for collaboration, jobs, and funding (McKiernan et al., 2016). Third, data sharing has been associated with a citation benefit (Piwowar & Vision, 2013). Fourth, and perhaps most importantly: in our own anecdotal experience (cf. Lowndes et al., 2017), a research workflow designed at its core to be shared with others is far more efficient and sustainable to use oneself. Accessing an old project to find data, code, or materials need not trigger nightmares. A useful saying to keep in mind is that “your closest collaborator is you six months ago, but you don’t reply to emails” (Broman, 2016). Research is a finite enterprise for everyone: collaborators leave projects, change jobs, and even die. If work is not shared, it is often lost.

What to Share?

In this section, we review the different parts of the scientific process that can be shared. Our primary recommendations are:

  1. Make transparency a default: If possible, share all products of the research process for which there are no negative constraints (due to e.g., funder, IRB/ethics, copyright, or other contract requirements). While attributes of the data, such as disclosure risk, sensitivity, or size, may limit sharing, there are many options for granting partial and restricted access to the data and associated materials.
  2. If negative constraints prohibit transparency, explicitly declare and justify these decisions in the manuscript (Morey et al., 2016).
  3. Any shared material incrementally advances the goals of increasing verifiability and reuse. Authors need not wait to resolve uncertainty about sharing all products before beginning the process: Bearing in mind any negative constraints (e.g., privacy of participants), any product that is shared is a positive step.

Navigating this space can be difficult (see Figure 1). For this reason, we recommend that lab groups discuss and develop a set of “Standard Operating Procedures” (SOP) to guide the adoption of transparent research practices in a manner that is well-calibrated to their own unique circumstances.2 One part of that organisation scheme is a consistent set of naming conventions and a consistent project structure (see SM: Folder structure); an example of a project created in accordance with our recommendations is available on the Open Science Framework. Below, we review each of the different products that can be shared.
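A consistent project structure can even be set up programmatically, so that every study in a lab starts from the same skeleton. The following Python snippet is a minimal illustration; the folder names are hypothetical examples, not a prescribed standard.

```python
from pathlib import Path

# Hypothetical top-level layout: adapt the names to your lab's SOP.
SUBFOLDERS = [
    "materials",        # stimuli, questionnaires, consent forms
    "data/raw",         # data exactly as recorded (treat as read-only)
    "data/processed",   # anonymised, analysis-ready tables (e.g., CSV)
    "analysis",         # scripts that turn raw data into reported results
    "documentation",    # codebook, protocol, data management plan
]

def create_project(root):
    """Create a standard folder skeleton under `root` and return
    the resulting directory paths (relative to `root`)."""
    root = Path(root)
    for sub in SUBFOLDERS:
        (root / sub).mkdir(parents=True, exist_ok=True)
    return sorted(p.relative_to(root).as_posix()
                  for p in root.rglob("*") if p.is_dir())
```

Calling, for example, `create_project("stroop_study_2018")` then yields the same layout for every project, which makes old projects navigable for collaborators and for your future self.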

Figure 1 

Decision flowchart outlining important considerations when sharing research products.

Study Protocol. A study protocol consists of a detailed written specification of hypotheses, methods, and analysis. For relatively straightforward studies, it may be reasonable to include all of this information in the main body of the primary report of the study. Alternatively, you may wish to share a separate protocol document and provide a higher-level verbal summary in the main report. For certain experimental procedures it may also be beneficial to include instructive video footage. One way to view the study protocol is as a verbal layer that collates, describes, and organises more specific research products, such as materials, software, and analysis code, and informs the reader how they were implemented during your study. Either way, the level of detail should be sufficient to allow others to replicate your work without direct instruction from you.

Materials. What constitutes materials differs widely from application to application, even within psychology. In simpler studies, the materials may be a list of questionnaire items or stimuli presented to participants manually (videos, images, sounds, etc.). In other studies, materials may include elaborate video stimuli, (video-taped) procedures for an interaction with a confederate or participants (Grahe, Brandt, & IJzerman, 2015), or computer code to present stimuli and collect responses. For clinical studies, materials may include case report forms and materials for informed consent. Sharing these materials is valuable for both interpretation of research results and for future investigators. A detailed examination of stimulus materials can lead to insights about a particular phenomenon or paradigm by readers or reviewers. In addition, since these materials are often costly and difficult to produce, lack of sharing will be a barrier for the replication and extension of findings.

Data and Metadata. Sharing data is a critical part of transparency and openness, but investigators must make decisions regarding what data to share. “Raw data” are the data as originally recorded, whether by software, an experimenter, a video camera, or other instrument (Ellis & Leek, 2017). Sharing such data can raise privacy concerns due to the presence of identifying or personal information. In some cases, anonymisation may be possible, while in others (e.g., video data), anonymity may be impossible to preserve and permission for sharing may not be granted. Regardless of the privacy concerns surrounding raw data, it should almost always be possible to share anonymised data in tabular format as they are used in statistical analyses (see SM: Anonymisation). Such data should typically be shared in an easily readable format that does not rely on proprietary software (e.g., comma-separated values, or CSV). Ideally, the script for generating these processed data from the raw data should be made available as well to ensure full transparency (see SM: Automate or thoroughly document all analyses).
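As a minimal illustration of such a processing script, the following Python sketch turns a hypothetical raw export into an anonymised, analysis-ready table; the column names, response-time cutoff, and exclusion rule are invented for this example.

```python
import csv
import io

# Hypothetical raw export: in practice this would be read from a file
# in data/raw rather than from an inline string.
RAW = """participant,email,rt_ms,accuracy
p01,anna@example.org,432,1
p02,ben@example.org,8999,1
p03,cleo@example.org,517,0
"""

def process(raw_text, rt_cutoff_ms=3000):
    """Produce an anonymised, analysis-ready table: drop the identifying
    column (email) and flag trials slower than the cutoff."""
    rows = csv.DictReader(io.StringIO(raw_text))
    processed = []
    for row in rows:
        processed.append({
            "participant": row["participant"],  # already a pseudonymous code
            "rt_ms": int(row["rt_ms"]),
            "accuracy": int(row["accuracy"]),
            "excluded": int(row["rt_ms"]) > rt_cutoff_ms,
        })
    return processed
```

Because the script itself is shared alongside the processed CSV, readers can verify exactly how the public dataset was derived from the raw records.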

One critical ingredient of data sharing is often overlooked: the need for metadata. Metadata is a term describing documentation that accompanies and explains a dataset (see SM: Data documentation). In psychology, metadata typically include information on who collected the data, how and when they were collected, the number of variables and cases in each data file, and dataset-level information such as verbose variable and value labels. Although they can also be shared in standardized, highly structured, machine-readable formats, often metadata are simply a separate document (called a “codebook” or “data dictionary”; see SM: Data documentation) that gives verbal descriptions of variables in the dataset. Researchers do not need to be experts in metadata standards: Knowing the structure of metadata formats is less important than making sure the information is recorded and shared. Machine-readable structure can always be added to documentation by metadata experts after the information is shared.
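A simple machine-readable codebook can be produced in a few lines of code. The following Python sketch is illustrative only: the variable names and descriptions are hypothetical, and the JSON layout follows no formal metadata standard.

```python
import json

# Illustrative codebook entries for a hypothetical reaction-time dataset.
CODEBOOK = {
    "participant": {
        "type": "string",
        "description": "Pseudonymous participant code",
    },
    "rt_ms": {
        "type": "integer",
        "unit": "milliseconds",
        "description": "Response time from stimulus onset to keypress",
    },
    "accuracy": {
        "type": "integer",
        "values": {"0": "incorrect", "1": "correct"},
        "description": "Whether the response matched the target",
    },
}

def codebook_json(codebook):
    """Serialise the codebook as JSON that is readable both by
    humans and by machines."""
    return json.dumps(codebook, indent=2, sort_keys=True)
```

Even a plain-text document with the same information is valuable; the point is that each variable's meaning, unit, and value labels are recorded somewhere alongside the data.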

Analysis Procedure. To ensure full transparency and reproducibility of research findings it is critical to share detailed documentation of how the analytic results reported in a research project were obtained (see SM: Analytic reproducibility). Researchers analyze their data in many different ways, and so the precise product(s) to be shared will vary. Nevertheless, the aim is to provide an exact specification of how to move from raw data to final descriptive and statistical analyses, ensuring complete documentation of any cleaning or transformation of data. For some researchers, documenting analyses will mean sharing, for example, R scripts or SPSS syntax; for others it may mean writing a step-by-step description of analyses performed in non-scriptable software programs such as spreadsheets. In all cases, however, the goal is to provide a recipe for reproducing the precise values in the research report.
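As an illustration of such a “recipe”, the following Python sketch moves from cleaned trial-level rows to a reported statistic in explicit, re-runnable steps; the data and exclusion flags are invented for this example.

```python
from statistics import mean

# Hypothetical cleaned rows, as produced by an earlier processing step.
ROWS = [
    {"condition": "congruent", "rt_ms": 430, "excluded": False},
    {"condition": "congruent", "rt_ms": 450, "excluded": False},
    {"condition": "incongruent", "rt_ms": 510, "excluded": False},
    {"condition": "incongruent", "rt_ms": 9000, "excluded": True},
]

def condition_means(rows):
    """Step 1: drop excluded trials. Step 2: average RT per condition.
    Every value in the research report should be traceable to an
    explicit step like this, rather than to manual spreadsheet edits."""
    kept = [r for r in rows if not r["excluded"]]
    conditions = sorted({r["condition"] for r in kept})
    return {c: mean(r["rt_ms"] for r in kept if r["condition"] == c)
            for c in conditions}
```

Running the script end-to-end regenerates the reported numbers exactly, which is the standard analytic reproducibility asks of a shared analysis.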

One challenge for sharing analyses is the rapid pace of change in hardware and software (SM: Avoid “works on my machine” errors). Some researchers may find it discouraging to try and create a fully-reproducible analytic ecosystem with all software dependencies completely specified (e.g., Boettiger, 2015; SM: Sharing software environments). We have several recommendations. First, do not let the perfect be the enemy of the good. Share and document what you can, as it will provide a benefit compared with not sharing. Second, document the specific versions of the analysis software and packages/add-ons that were used (American Psychological Association, 2010; Eubank, 2016). And finally, when possible, consider using open source software (e.g., R, Python) as accessing and executing code is more likely to be possible in the future compared with commercial packages (Huff, 2017; Morin et al., 2012).
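Recording software versions can itself be scripted. The following Python sketch is one illustrative way to collect version information for inclusion alongside an analysis; the function name and output format are our own invention.

```python
import platform

def environment_report(packages=()):
    """Collect interpreter, OS, and package version strings so they can
    be saved next to shared analysis scripts. `packages` should list
    whatever modules the analysis actually imports."""
    report = {
        "python": platform.python_version(),
        "platform": platform.platform(),
    }
    for name in packages:
        try:
            module = __import__(name)
            report[name] = getattr(module, "__version__", "unknown")
        except ImportError:
            report[name] = "not installed"
    return report
```

Saving the output of such a report with each analysis run documents the environment even when a fully reproducible software ecosystem is out of reach.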

Research Reports. While we primarily focus on research products beyond the standard written report in this guide, research reports themselves (i.e., published papers) also provide important information about how materials were used, how data were collected, and the myriad other details that are required to understand other products. Making research reports publicly available (through “Open Access”) greatly facilitates the use of shared research products. Two main options exist to publish Open Access: Green (posting research online through preprint repositories, like PsyArXiv) or Gold (full open access via the publisher, most of which currently still charge Article Processing Charges). For the Green route, preprints do not typically affect the traditional publication process, as most journals do not consider them a ‘prior publication’ (Bourne, Polka, Vale, & Kiley, 2017). A further discussion of Open Access is beyond the scope of this article. However, you can always check a particular journal’s stance on open access by searching for its name in the SHERPA/RoMEO database. This will also tell you whether the journal makes the final article publicly available on its website, and whether this will require you to pay a fee.

When to Share

When it comes to the question of when to share, any time is better than never. However, benefits are maximised when sharing occurs as soon as possible. We consider the possibilities for sharing 1) before data collection, 2) during data collection, 3) when submitting a paper, 4) when the paper is published, 5) at the end of a project, and 6) after a specified embargo period. Figure 2 presents a typical workflow.

Figure 2 

Typical workflow indicating when to share research products at different stages of the research process.

Planning to share. Sharing research products is easier when you have planned for it in advance. For example, it makes sense to store and structure your data files in a systematic manner throughout the data collection period (i.e., to have a basic “data management plan”). Sharing then only requires a few clicks to upload the files. Many researchers justify not sharing their data because of the time and effort it takes (Borgman, 2012; Houtkoop, Chambers, Macleod, Bishop, Nichols, & Wagenmakers, 2018) – starting early helps avoid this problem. Ideally, researchers should create a data management plan at the beginning of their study (for information on how to create one, see, e.g., Jones, 2011).

Before data collection. Sharing key study design and analysis information prior to data collection can confer a number of significant benefits, such as mitigating selective reporting bias or “p-hacking” (Nosek et al., 2018; Simmons et al., 2011). Your study protocol – hypotheses, methods, and analysis plan – can be formally “registered” by creating a time-stamped, read-only copy in a public registry (e.g., The Open Science Framework), such that they can be viewed by the scientific community (a “pre-registration”; see SM: Pre-registration). If you wish, it is possible to pre-register the study protocol, but keep it private under an “embargo” for a specified period of time (e.g., until after your study is published).

While embargoes on preregistrations can mitigate the fear of being scooped, flexibility in the release of pre-registered documents limits transparency. For example, researchers may strategically release only those documents that fit the narrative they wish to convey once the results are in. It is therefore preferable to encourage transparency from the outset. At the very least, the scientific community should be able to check whether a study was preregistered and, preferably, have access to the content of this preregistration, regardless of whether it is communicated in the final paper.

“Registered Reports” (Chambers, 2013; Hardwicke & Ioannidis, 2018) address this concern by embedding the pre-registration process directly within the publication pipeline. Researchers submit their study protocol to a journal where it undergoes peer-review, and may be offered in principle acceptance for publication before the study has even begun. This practice could yield additional advantages beyond standard pre-registration, such as mitigation of publication bias (because publication decisions are not based on study outcomes), and improved study quality (because authors receive expert feedback before studies begin).

The central purpose of pre-registration is transparency with respect to which aspects of the study were pre-planned (confirmatory) and which were not (exploratory). Viewed from this perspective, pre-registration does not prevent researchers from making changes to their protocol as they go along, or from running exploratory analyses, but simply maintains the exploratory-confirmatory distinction (Wagenmakers et al., 2012). When used appropriately, pre-registration has the potential to reduce bias in hypothesis-testing (confirmatory) aspects of research. This ambition, however, does not preclude opportunities for exploratory research when it is explicitly presented as such (Ioannidis, 2014; Nosek et al., 2018).

During data collection. Study protocols and materials can be readily shared once data collection commences. Rouder (2016) has, additionally, advocated sharing data while they are being collected, a concept he calls “born-open data” (see SM: Born-open data). Born-open data are automatically uploaded to a public repository, for example, after every day of data collection. Besides the obvious advantages of greater transparency and immediate accessibility, born-open data can simplify data management (e.g., the published data constitute an off-site backup on professionally managed storage). Because of technical and privacy issues, this approach may not be right for every project. However, once the system is set up, sharing data requires minimal effort (other than appropriate maintenance and periodic checking).
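As an illustration of how a born-open workflow might be automated, the following Python sketch builds the version-control commands for a daily upload. It assumes, hypothetically, that the project folder is a git repository with a public remote named origin and a main branch; the paths are placeholders.

```python
import subprocess
from datetime import date

def publish_commands(data_dir="data/raw", remote="origin"):
    """Build the git commands for a daily 'born-open' upload of newly
    collected data (after Rouder, 2016). Intended to be run by a
    scheduled task (e.g., cron) at the end of each collection day."""
    message = f"Data collected through {date.today().isoformat()}"
    return [
        ["git", "add", data_dir],
        ["git", "commit", "-m", message],
        ["git", "push", remote, "main"],
    ]

def publish(data_dir="data/raw", remote="origin"):
    """Execute the upload commands in order, stopping on any failure."""
    for cmd in publish_commands(data_dir, remote):
        subprocess.run(cmd, check=True)
```

Only anonymised data should flow through such a pipeline; the automation removes the daily effort, but the privacy checks described above still have to happen before files reach the watched folder.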

Upon paper submission or publication. A more common practice is to share research products when submitting a paper to a journal or when the paper is published. Of these two possibilities, we recommend sharing on submission. First, editors/reviewers may need access to this information in order to properly evaluate your study. Second, sharing on submission adds value to your paper by demonstrating that your research products will be made available to the scientific community. Finally, sharing on submission allows for errors to be caught before publication, reducing the possibility of later public correction. If ‘blind reviewing’ is important and author names are displayed alongside shared research products, some repositories, such as The Open Science Framework, offer a “view-only” link option that (partially) circumvents this problem.

After an embargo period. Finally, there may be reasons why researchers cannot or do not want to share all research products immediately. It is possible to archive products in an accessible repository right away, and temporarily delay their release by placing them under an embargo.

How to Share?

Once a researcher has decided to share research products, one of the most important decisions to make is where and how to share. Journals and professional societies often recommend that data and other research products be made “available upon request” (Vasilevsky et al., 2017). However, a number of studies suggest that data requests typically do not result in data access (Alsheikh-Ali et al., 2011; Dehnhard et al., 2013; Stodden et al., 2018; Vanpaemel et al., 2015; Vines et al., 2014; Wicherts et al., 2006). Similarly, sharing via personal websites is very flexible and increases accessibility and discoverability compared to sharing on request, but is also not a sustainable solution: Research products may become inaccessible when the personal website is deleted, overhauled, or moved to a different hosting service. Thus we do not recommend either of these options.

Instead, we recommend the use of independent public repositories for sharing research products. When choosing a repository, researchers should consider whether the repository:

  1. Uses persistent and unique identifiers for products (such as DOIs).
  2. Accommodates structured metadata to maximize discoverability and reuse.
  3. Tracks data re-use (e.g., citations, download counts).
  4. Accommodates licensing (e.g., provides the ability to place legal restrictions on data reuse or signal there are no restrictions).
  5. Features access controls (e.g., allows restriction of access to a particular set of users).
  6. Has some persistence guarantees for long-term access.
  7. Stores data in accordance with local legislation (e.g., the new General Data Protection Regulation for the EU).

Within this category, we highlight the Open Science Framework. This repository satisfies the first six criteria above (the last one being dependent on the exact location of the researchers),3 is easy to use, and provides for sharing the variety of products listed above (for a detailed tutorial on using the Open Science Framework to share research products, see Soderberg, 2018). Note that some research communities make use of specialized repositories, for example, for brain imaging data or for video and audio recordings. Such repositories are more likely to have metadata standards and storage capacity calibrated to specific data types. For an overview of other public repositories, see Table 1.

Table 1

Features of selected public repositories that hold psychological data.

Repository | Operator(s) | For-profit/Non-profit | Country/jurisdiction | Focus/specialization | Costs for data sharer | Self-deposit1 | Private (i.e., non-public) storage/projects possible | Restrictions of access possible (for published/public projects) | Embargo period possible | Content types2

Code Ocean | Code Ocean | Non-profit | USA | None | 50 GB of storage, 10 hrs/month cloud computing time (with up to 5 concurrent runs) free for academic users | Yes | Only before publication of the project | No | No | Software applications, Source code, Structured graphics, Configuration data
DANS EASY | Netherlands Organisation for Scientific Research (NWO) & Royal Netherlands Academy of Arts and Sciences (KNAW) | Non-profit | Netherlands | None | Free for up to 50GB | Yes | No | Yes | Yes | Scientific and statistical data formats, Standard office documents, Plain text, Images, Audiovisual data, Raw data, Structured text, Structured graphics, Databases
Dryad | Dryad Digital Repository | Non-profit | USA | Medical and life sciences | $120 for every data package up to 20GB (+ $50 for each additional 10 GB); no costs for data sharers if charges are covered by a journal or the submitter is based in a country that qualifies for a fee waiver | Yes | No | No | Yes | Scientific and statistical data formats, Standard office documents, Plain text, Software applications, Source code, Structured text, other
figshare | Digital Science | For-profit | UK | None | Free for up to 100GB | Yes | Yes (up to 20GB for free accounts) | Yes3 | Yes3 | Scientific and statistical data formats, Standard office documents, Plain text, Images, Audiovisual data, Raw data, Archived data, Source code, Structured graphics
GESIS datorium | GESIS – Leibniz Institute for the Social Sciences | Non-profit | Germany | Social sciences | Free | Yes | No | Yes | Yes | Scientific and statistical data formats, Standard office documents, Plain text, Raw data, Structured graphics, other
GESIS standard archiving | GESIS – Leibniz Institute for the Social Sciences | Non-profit | Germany | Social sciences, survey data (esp. from large or repeated cross-sectional or longitudinal studies) | Free | No | No | Yes | Yes | Scientific and statistical data formats, Standard office documents, Plain text, Archived data
Harvard Dataverse | Harvard University, Institute for Quantitative Social Sciences | Non-profit | USA4 | None | Free | Yes | Yes | Yes | Yes | Scientific and statistical data formats, Standard office documents, Raw data, Archived data, Software applications, Source code, Databases
Mendeley Data | Elsevier (in cooperation with DANS) | For-profit | Netherlands | None | Free5 | Yes | Yes | No | Yes | Scientific and statistical data formats, Standard office documents, Plain text, Software applications, Structured text, Configuration data, other
openICPSR | Inter-university Consortium for Political and Social Research | Non-profit | USA | Political and social research, social and behavioral sciences | Free up to 2GB6 | Yes | Yes | Yes | Yes | Scientific and statistical data formats, Standard office documents, Plain text, Archived data, Structured text, Structured graphics
Open Science Framework | Center for Open Science | Non-profit | USA | None | Free | Yes | Yes | No | Yes | Scientific and statistical data formats, Standard office documents, Plain text, other
PsychData | ZPID – Leibniz Institute for Psychology Information | Non-profit | Germany | Psychology, data for peer-reviewed publications | Free | No | No | Yes | Yes | Scientific and statistical data formats, Standard office documents, Plain text
UK Data Service standard archiving | UK Data Archive & Economic and Social Research Council (ESRC) | Non-profit | UK | Social research, esp. large-scale surveys, longitudinal, and qualitative studies | Free | No | No | Yes | Yes | Scientific and statistical data formats, Standard office documents, Plain text, Images, Audiovisual data, Raw data, Structured graphics
UK Data Service ReShare | UK Data Archive & Economic and Social Research Council (ESRC) | Non-profit | UK | Social sciences | Free | Yes | No | Yes | Yes | Scientific and statistical data formats, Standard office documents, Plain text, Images, Audiovisual data, Raw data, Structured graphics
Zenodo | European Organization for Nuclear Research & Open Access Infrastructure for Research in Europe (OpenAIRE) | Non-profit | EU | None | Free7 | Yes | No | Yes | Yes | Scientific and statistical data formats, Standard office documents, Plain text, Images, Audiovisual data, Raw data, Archived data, Source code, Structured text, Structured graphics, Network-based data, other

1 If self-deposit (i.e., researchers can directly upload their own materials) is not possible, this means that the repository is curated (or at least more strongly curated than the others). The advantage of these repositories is that they offer additional help and services by professional archiving staff (e.g., in the creation of study- and variable-level documentation or the conversion of files to nonproprietary formats).

2 We used the content type category from the re3data schema v3.0 here (see Rücknagel et al., 2015).

3 Individual files can be embargoed or made confidential.

4 Dataverse is a special case in several regards. There is the overall Dataverse Project, then there are different Dataverse repositories (e.g., the Harvard Dataverse, or DataverseNL by the Dutch Data Archiving and Networked Services), which host multiple individual Dataverses (e.g., by individual universities, research groups, or researchers). If the institution a researcher is affiliated with does not have its own Dataverse repository or Dataverse, it is possible to create a Dataverse within the Harvard Dataverse repository. For a more detailed description of Dataverse and its organizational architecture, see King (2007) and Leeper (2014).

5 The FAQ on the Mendeley Data website states that they may introduce a freemium model in the future, “for instance charging for storing and posting data, above a certain dataset size threshold”.

6 If more storage space or additional services are needed, researchers or their institutions can choose to pay for branded OpenICPSR hosting or the “Professional Curation Package”, which provides access to all of the ICPSR (curation) services.

7 The Zenodo terms of use state that “content may be uploaded free of charge by those without ready access to an organized data centre”.

We recommend sharing on a platform, such as the OSF, that makes it possible to assign a unique and persistent identifier (such as a DOI) to the project. Several studies have indicated that the regular URLs journals use to link to supplementary files often break over time, severing access to research products (Evangelou, Trikalinos, & Ioannidis, 2005; Gertler & Bullock, 2017). Using persistent identifiers increases the chances that research products will remain accessible over the long term.
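What makes a DOI persistent is the indirection provided by the doi.org resolver: the registered target can be updated if the hosting location changes, while the DOI-based link itself stays stable. As a minimal illustration (the DOI shown is hypothetical, not a real record):

```python
def persistent_url(doi: str) -> str:
    """Build a stable, citable link from a DOI via the doi.org resolver."""
    return f"https://doi.org/{doi.strip()}"

# Hypothetical DOI, for illustration only:
print(persistent_url("10.1234/example.5678"))  # → https://doi.org/10.1234/example.5678
```

Citing this resolver-based form in papers, rather than a repository's internal URL, is what protects readers from link rot.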

Sharing can raise a number of legal and ethical issues, and these vary between countries and between institutions. Handling these is vastly simplified by addressing them ahead of time. For example, consent forms (see SM: Informed consent) can explicitly request participant consent for public data sharing, as it can be hard or impossible to obtain retroactive consent. Additionally, researchers should always clarify any requirements of their institution, granting agency, and intended publication venue. Below we review issues related to privacy and licensing.

Considering participants’ privacy can be both an ethical issue and a legal requirement (for example, The United States’ Health Insurance Portability and Accountability Act and The European Union’s General Data Protection Regulation, see SM: EU Data Protection Guidelines). In short, researchers must take appropriate precautions to protect participant privacy prior to sharing data. Fortunately, many datasets generated during psychological research either do not contain identifying information, or can be anonymised (“de-identified”) relatively straightforwardly (see SM: Anonymisation). However, some forms of data can be quite difficult to anonymise (e.g., genetic information, video data, or structural neuroimaging data; Gymrek et al., 2013; Sarwate et al., 2014), and require special considerations beyond the scope of this article. Because it is often possible to identify individuals based on minimal demographic information (e.g., postal code, profession, age; Sweeney, 2000), researchers should consult with their ethics board to find out the appropriate legal standard for anonymisation.
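For datasets that can be de-identified straightforwardly, the core steps are removing direct identifiers and coarsening quasi-identifiers such as age. A minimal sketch of this idea, in which the field names ("name", "postcode", "age", etc.) are hypothetical examples rather than any standard:

```python
# Fields treated as direct identifiers in this sketch (hypothetical list):
DIRECT_IDENTIFIERS = {"name", "email", "postcode"}

def anonymise(record):
    """Drop direct identifiers and coarsen age into a 10-year band."""
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "age" in out:
        lower = (out["age"] // 10) * 10
        out["age"] = f"{lower}-{lower + 9}"  # e.g., 34 becomes "30-39"
    return out

raw = {"id": 1, "name": "A. Example", "postcode": "1000", "age": 34, "score": 7}
print(anonymise(raw))  # → {'id': 1, 'age': '30-39', 'score': 7}
```

This is only an illustration of the principle; whether a given transformation meets the applicable legal standard is exactly the question to settle with one's ethics board.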

One further legal concern when sharing research products is their ownership. Researchers often assume that publicly available research products have been placed in the public domain, that is, that the authors have waived all property rights. However, by default, researchers retain full copyright of the products they create. Unless they are published with a permissive license, the products technically cannot be used or redistributed without approval from the authors – despite scientific norms to the contrary. Thus, to reduce uncertainty about copyright, shared products should ideally be licensed using an easy-to-understand standard license, such as a Creative Commons (CC) or Open Data Commons (ODC) license. In the spirit of openness, we recommend releasing research products into the public domain using maximally permissive licenses, such as CC0 and ODC-PDDL, or conditioning re-use only on attribution (e.g., CC-BY and ODC-BY). Licensing research products is as easy as including a text file alongside them containing a statement such as “All files in this repository are licensed under a Creative Commons Zero License (CC0 1.0)”.
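Dropping that text file next to the shared materials can even be scripted as part of a project-setup routine. A small sketch, in which the folder name "my_project" is a placeholder for the directory holding the shared files:

```python
from pathlib import Path

# The statement mirrors the example given in the text above.
LICENSE_STATEMENT = (
    "All files in this repository are licensed under a "
    "Creative Commons Zero License (CC0 1.0)."
)

# "my_project" is a placeholder; point this at the folder you share.
project = Path("my_project")
project.mkdir(exist_ok=True)
(project / "LICENSE.txt").write_text(LICENSE_STATEMENT)
```

Keeping the license as a plain-text file means it travels with the data wherever the folder is copied or re-deposited.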

So, why not share?

Given all of the arguments we and others have presented, why would researchers still not share their data? Beyond the concerns described above (privacy, etc.), one commonly heard worry is that others will make use of shared resources to gain academic precedence (“scooping”; Houtkoop et al., 2018). In our view, this worry is usually unwarranted. Most subfields of psychology are not so competitive as to be populated with investigators racing to publish a particular finding. In addition, in many cases the possibility of being scooped is likely outweighed by the benefits of increased exposure, as noted by Gary King:4 “The thing that matters the least is being scooped. The thing that matters the most is being ignored”. Researchers who are truly concerned about being scooped – whether justifiably or not – can simply wait to share their materials, code, and data until after they publish, or release research products under a temporary embargo. Such embargoes slow verification and reuse, but they are far better than not sharing at all.

Another worry is that errors will be revealed by others checking original data, or that original conclusions will be challenged by alternative analyses (Houtkoop et al., 2018). Indeed, it seems likely that errors will be discovered and conclusions will be challenged as widespread adoption of transparent research practices adds fuel to the idling engines of scientific self-correction and quality control, such as replication and peer review. It is understandable that researchers worry about errors being discovered in their own work, but such errors are inevitable – we are, after all, only human. A rise in informed critical discourse will be healthy for science and make discovery of such errors normative. We believe that more, rather than less, transparency is the best response. Honesty and transparency are likely to enhance – rather than diminish – one’s standing as a scholar (Fetterman & Sassenberg, 2015).

Researchers may also be concerned that learning and then implementing transparent research practices will be too time-consuming (Houtkoop et al., 2018). In our experience, there is indeed a significant time cost to learning such practices. Nonetheless, they need not all be embraced and mastered at once. It is often through “baby steps”, via trial and error, that the practice of open science becomes natural and habitual. It helps to include, for example, “research milestones” in one’s workflow. Adding such milestones also contributes to an optimal teaching strategy, with students learning how to engage in open science practices in small steps. Moreover, there are major benefits that make this time well spent for the individual researcher. First, transparent research practices are often synonymous with good research management practices, and therefore increase efficiency in the longer term. For example, it is much easier to locate stimuli from an old project or re-use analysis code when it is well documented and available in a persistent online repository. Second, transparent practices can lead to benefits in terms of citation and reuse of one’s work (see SM: Incentivising sharing). Finally, transparent research practices inspire confidence in one’s own research findings, allowing one to more readily identify fertile avenues for future studies that are truly worth investing resources in.


The field of psychology is engaged in an urgent conversation about the credibility of the extant literature. Numerous research funders, institutions, and scientific journals have endorsed transparent and reproducible research practices through the TOP guidelines (Nosek et al., 2015)5 and major psychology journals have begun implementing policy changes that encourage or mandate sharing (see e.g., Kidwell et al., 2016; Nuijten et al., 2017). Meanwhile, the scientific ecosystem is shifting and evolving: a new open science frontier has opened and is flourishing with a plethora of tools and services that help researchers adopt transparent research practices.

Here we have sketched out a map to help researchers navigate this exciting new terrain. Like any map, some aspects will become outdated as the landscape evolves over time: Exciting new tools to make research transparency even more user-friendly and efficient are already on the horizon. Nevertheless, many of the core principles will remain the same, and we have aimed to capture them here. Our view is that being an open scientist means adopting a few straightforward research management practices, which lead to less error-prone, reproducible research workflows with each incremental step adding positive value. Doing so will improve the efficiency of individual researchers and it will enhance the credibility of the knowledge generated by the scientific community.

Additional Files

The additional file for this article can be found as follows:


1 One of the major burdens facing scientists is keeping up with the evolution of standards and resources. That is why the SM will be updated regularly and collaboratively; this “live” version will therefore differ from the publisher’s version. 

2 For example, see the SOPs of the Nosek group and the Green group. 

3 OSF, which is located in the United States, satisfies US legislation. It has also adapted its privacy policy and terms of use to comply with the GDPR. Note, however, that compliance with these regulations depends on the use of proper anonymisation procedures by the researchers (see Supplementary Material). 

5 See also guidelines for specific fields and types of research: the CONSORT statement (randomized controlled trials), the ARRIVE guidelines (animal research), or the PRISMA statement (meta-analyses). The EQUATOR website lists the main reporting guidelines. 


The authors thank Tim van der Zee and Kai Horstmann for helpful comments and Daniël Lakens for his help at the start of this project. We also thank Christoph Stahl and Tobias Heycke for allowing us to use their data and materials for the example project (from Heycke, Aust, & Stahl, 2017) and Luce Vercammen for proofreading the manuscript. Any remaining errors are the authors’ responsibility.

Funding Information

This work was partly funded by the French National Research Agency in the framework of the “Investissements d’avenir” program (ANR-15-IDEX-02) awarded to Hans IJzerman. Tom Hardwicke was supported by a general support grant awarded to METRICS from the Laura and John Arnold Foundation.

Competing Interests

The authors have no competing interests to declare.

Author Contributions

  • OK initiated the project and coordinated it with MF.
  • All authors contributed to the writing and commented on previous versions.
  • FA designed the example project.

Author Information

Olivier Klein, Center for Social and Cultural Psychology, Brussels, Belgium; Tom E. Hardwicke, Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, California, USA; Frederik Aust, Department of Psychology, University of Cologne, Cologne, Germany; Johannes Breuer, Data Archive for the Social Sciences, GESIS – Leibniz Institute for the Social Sciences, Cologne, Germany; Alicia Hofelich Mohr, Liberal Arts Technologies and Innovation Services, College of Liberal Arts, University of Minnesota, Minneapolis, Minnesota, USA; Hans IJzerman, LIP/PC2S, Université Grenoble Alpes, Grenoble, Isère, France; Gustav Nilsonne, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden, Stress Research Institute, Stockholm University, Stockholm, Sweden, and Department of Psychology, Stanford University, Stanford, USA; Wolf Vanpaemel, Research Group of Quantitative Psychology and Individual Differences, University of Leuven, Leuven, Belgium; Michael C. Frank, Department of Psychology, Stanford University, Stanford, California, USA.


  1. American Psychological Association. (2010). Publication Manual of the American Psychological Association (6th edition). Washington, DC: American Psychological Association. 

  2. Boettiger, C. (2015). An Introduction to Docker for Reproducible Research. ACM SIGOPS Operating Systems Review, 49(1), 71–79. DOI: 

  3. Borgman, C. L. (2012). The conundrum of sharing research data. Journal of the Association for Information Science and Technology, 63(6), 1059–1078. DOI: 

  4. Bourne, P. E., Polka, J. K., Vale, R. D., & Kiley, R. (2017). Ten simple rules to consider regarding preprint submission. PLoS Computational Biology, 13(5), e1005473–6. DOI: 

  5. Brandt, M. J., IJzerman, H., Dijksterhuis, A., Farach, F. J., Geller, J., Giner-Sorolla, R., van’t Veer, A., et al. (2014). The replication recipe: What makes for a convincing replication?, Journal of Experimental Social Psychology, 50, 217–224. DOI: 

  6. Broman, K. (2016). Steps toward reproducible research [Slides]. Retrieved from: 

  7. Chambers, C. D. (2013). Registered Reports: A new publishing initiative at Cortex. Cortex, 49(3), 609–610. DOI: 

  8. De Angelis, C., Drazen, J. M., Frizelle, F. A. P., Haug, C., Hoey, J., Horton, R., Weyden, M. B. V. D., et al. (2004). Clinical trial registration: A statement from the International Committee of Medical Journal Editors. New England Journal of Medicine, 351(12), 1250–1251. DOI: 

  9. De Groot, A. D. (2014). The meaning of “significance” for different types of research [translated and annotated by Eric-Jan Wagenmakers, Denny Borsboom, Josine Verhagen, Rogier Kievit, Marjan Bakker, Angelique Cramer, Dora Matzke, Don Mellenbergh, and Han LJ van der Maas]. Acta psychologica, 148, 188–194. DOI: 

  10. Dehnhard, I., Weichselgartner, E., & Krampen, G. (2013). Researcher’s willingness to submit data for data sharing: A case study on a data archive for psychology. Data Science Journal, 12, 172–180. DOI: 

  11. Ellis, S. E., & Leek, J. T. (2017). How to share data for collaboration. PeerJ Preprints, 5, e3139v5. DOI: 

  12. Eubank, N. (2016). Lessons from a Decade of Replications at the Quarterly Journal of Political Science. PS: Political Science & Politics, 49(2), 273–276. DOI: 

  13. Evangelou, E., Trikalinos, T. A., & Ioannidis, J. P. A. (2005). Unavailability of online supplementary scientific information from articles published in major journals. FASEB Journal: Official Publication of the Federation of American Societies for Experimental Biology, 19(14), 1943–1944. DOI: 

  14. Fetterman, A. K., & Sassenberg, K. (2015). The reputational consequences of failed replications and wrongness admission among scientists. PLoS ONE, 10(12): e0143723. DOI: 

  15. Gelman, A., & Loken, E. (2014). The statistical crisis in science. American Scientist, 102(6), 460–465. DOI: 

  16. Gertler, A. L., & Bullock, J. G. (2017). Reference Rot: An Emerging Threat to Transparency in Political Science. PS: Political Science & Politics, 50(01), 166–171. DOI: 

  17. Goodman, S. N., Fanelli, D., & Ioannidis, J. P. A. (2016). What does research reproducibility mean? Science Translational Medicine, 8, 1–6. DOI: 

  18. Grahe, J., Brandt, M. J., & IJzerman, H. (2015). The Collaborative Education and Replication Project. Retrieved from: DOI: 

  19. Gymrek, M., McGuire, A. L., Golan, D., Halperin, E., & Erlich, Y. (2013). Identifying personal genomes by surname inference. Science, 339(6117), 321–324. DOI: 

  20. Hardwicke, T. E., & Ioannidis, J. P. A. (2018, April 16). Mapping the universe of Registered Reports. DOI: 

  21. Hardwicke, T. E., Mathur, M. B., MacDonald, K. E., Nilsonne, G., Banks, G. C., Frank, M. C., et al. (2018, March 19). Data availability, reusability, and analytic reproducibility: Evaluating the impact of a mandatory open data policy at the journal Cognition. Retrieved from: 

  22. Heycke, T., Aust, F., & Stahl, C. (2017). Subliminal influence on preferences? A test of evaluative conditioning for brief visual conditioned stimuli using auditory unconditioned stimuli. Royal Society Open Science, 4. DOI: 

  23. Houtkoop, B., Chambers, C., Macleod, M., Bishop, D., Nichols, T., & Wagenmakers, E. J. (2018). Data sharing in psychology: A survey on barriers and preconditions. Advances in Methods and Practices in Psychological Science. Advance online publication. DOI: 

  24. Huff, K. (2017). Lessons Learned. In: Kitzes, J., Turek, D., & Deniz, F. (Eds.), The Practice of Reproducible Research: Case Studies and Lessons from the Data-Intensive Sciences. Oakland, CA: University of California Press. Retrieved from: 

  25. Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124. DOI: 

  26. Ioannidis, J. P. A. (2012). Why science is not necessarily self-correcting. Perspectives on Psychological Science, 7(6), 645–654. DOI: 

  27. Ioannidis, J. P. A. (2014). Clinical trials: what a waste. British Medical Journal, 349. DOI: 

  28. Johnson, V. E., Payne, R. D., Wang, T., Asher, A., & Mandal, S. (2016). On the reproducibility of psychological science. Journal of the American Statistical Association, 5(4). DOI: 

  29. Jones, S. (2011). How to Develop a Data Management and Sharing Plan. Retrieved from: 

  30. Kidwell, M. C., Lazarevic, L. B., Baranski, E., Hardwicke, T. E., Piechowski, S., Falkenberg, L.-S., Nosek, B. A., et al. (2016). Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency. PLoS biology, 1–15. DOI: 

  31. Klein, R. A., Ratliff, K. A., Vianello, M., Adams, R. B., Jr., Bahník, Š., Bernstein, M. J., et al. (2014). Investigating variation in replicability: A Many Labs replication project. Social Psychology, 45, 142–152. DOI: 

  32. Lowndes, J. S. S., Best, B. D., Scarborough, C., Afflerbach, J. C., Frazier, M. R., O’Hara, C. C., et al. (2017). Our path to better science in less time using open data science tools. Nature Ecology & Evolution, 1(6), 0160–7. DOI: 

  33. McKiernan, E. C., Bourne, P. E., Brown, C. T., Buck, S., Kenall, A., Lin, J., et al. (2016). How open science helps researchers succeed. eLife, 5, 1–19. DOI: 

  34. Merton, R. K. (1973). The Sociology of Science. Theoretical and Empirical Investigations. Chicago: University of Chicago Press. 

  35. Morey, R. D., Chambers, C. D., Etchells, P. J., Harris, C. R., Hoekstra, R., Lakens, D., Lewandowsky, S., Morey, C. C., Newman, D. P., Schönbrodt, F., Vanpaemel, W., Wagenmakers, E.-J., & Zwaan, R. A. (2016). The peer reviewers’ openness initiative: Incentivising open research practices through peer review. Royal Society Open Science, 3, 1–7. DOI: 

  36. Morin, A., Urban, J., Adams, P. D., Foster, I., Sali, A., Baker, D., & Sliz, P. (2012). Shining Light into Black Boxes. Science, 336(6078), 159–160. DOI: 

  37. Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., et al. (2015). Promoting an open research culture. Science, 348(6242), 1422–1425. DOI: 

  38. Nosek, B. A., Ebersole, C. R., DeHaven, A., & Mellor, D. (2018). The preregistration revolution. PNAS. Advance online publication. DOI: 

  39. Nuijten, M. B., Borghuis, J., Veldkamp, C. L., Dominguez-Alvarez, L., Van Assen, M. A., & Wicherts, J. M. (2017). Journal Data Sharing Policies and Statistical Reporting Inconsistencies in Psychology. Collabra: Psychology, 3(1). DOI: 

  40. Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), 1–8. DOI: 

  41. Peng, R. D. (2006). Reproducible epidemiologic research. American Journal of Epidemiology, 163(9), 783–789. DOI: 

  42. Piwowar, H. A., & Vision, T. J. (2013). Data reuse and the open data citation advantage. Peer J, 1, e175. DOI: 

  43. Popper, K. (1963). Conjectures and refutations: The growth of scientific knowledge. London: Routledge and Kegan Paul. 

  44. Rouder, J. N. (2016). The what, why, and how of born-open data. Behavior research methods, 48(3), 1062–1069. DOI: 

  45. Rücknagel, J., Vierkant, P., Ulrich, R., Kloska, G., Schnepf, E., Fichtmüller, D., Reuter, E., Semrau, A., Kindling, M., Pampel, H., Witt, M., Fritze, F., van de Sandt, S., Klump, J., Goebelbecker, H.-J., Skarupianski, M., Bertelmann, R., Schirmbacher, P., Scholze, F., Kramer, C., Fuchs, C., Spier, S., & Kirchhoff, A. (2015). Metadata Schema for the Description of Research Data Repositories: version 3.0. DOI: 

  46. Sarwate, A. D., Plis, S. M., Turner, J. A., Arbabshirani, M. R., & Calhoun, V. D. (2014). Sharing privacy-sensitive access to neuroimaging and genetics data: a review and preliminary validation. Frontiers in neuroinformatics, 8, 35. DOI: 

  47. Silberzahn, R., Uhlmann, E. L., Martin, D., Anselmi, P., Aust, F., Awtrey, E. C., Carlsson, R., et al. (in press). Many analysts, one dataset: Making transparent how variations in analytical choices affect results. Advances in Methods and Practices in Psychological Science. 

  48. Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366. DOI: 

  49. Simons, D. J. (2014). The value of direct replication. Perspectives on Psychological Science, 9(1), 76–80. DOI: 

  50. Soderberg, C. K. (2018). Using OSF to Share Data: A Step-by-Step Guide. Advances in Methods and Practices in Psychological Science. Advance online publication. DOI: 

  51. Spellman, B. A. (2015). A short (personal) future history of revolution 2.0. Perspectives on Psychological Science, 10(6), 886–899. DOI: 

  52. Steegen, S., Tuerlinckx, F., Gelman, A., & Vanpaemel, W. (2016). Increasing transparency through a multiverse analysis. Perspectives on Psychological Science, 11(5), 702–712. DOI: 

  53. Stodden, V. (2015). Reproducing statistical results. Annual Review of Statistics and Its Application, 2(1), 1–19. DOI: 

  54. Stodden, V., Seiler, J., & Ma, Z. (2018). An empirical analysis of journal policy effectiveness for computational reproducibility. PNAS, 115(11), 2584–2589. DOI: 

  55. Sweeney, L. (2000). Simple demographics often identify people uniquely. Carnegie Mellon University, Data Privacy Working Paper 3. Pittsburgh, PA. 

  56. Świątkowski, W., & Dompnier, B. (2017). Replicability crisis in social psychology: Looking at the past to find new pathways for the future. International Review of Social Psychology, 30(1), 111–124. DOI: 

  57. Vanpaemel, W., Vermorgen, M., Deriemaecker, L., & Storms, G. (2015). Are we wasting a good crisis? The availability of psychological research data after the storm. Collabra, 1, 1–5. DOI: 

  58. Vasilevsky, N. A., Minnier, J., Haendel, M. A., & Champieux, R. E. (2017). Reproducible and reusable research: Are journal data sharing policies meeting the mark? PeerJ, 5, e3208. DOI: 

  59. Vines, T. H., Albert, A. Y., Andrew, R. L., Débarre, F., Bock, D. G., Franklin, M. T., Rennison, D. J., et al. (2014). The availability of research data declines rapidly with article age. Current Biology, 24(1), 94–97. DOI: 

  60. Wagenmakers, E.-J., Wetzels, R., Borsboom, D., van der Maas, H. L. J., & Kievit, R. A. (2012). An agenda for purely confirmatory research. Perspectives on Psychological Science, 7(6), 632–638. DOI: 

  61. Wasserstein, R. L., & Lazar, N. A. (2016). The ASA’s statement on p-values: Context, process, and purpose. The American Statistician, 70(2), 129–133. DOI: 

  62. Wicherts, J., Borsboom, D., Kats, J., & Molenaar, D. (2006). The poor availability of psychological research data for reanalysis. American Psychologist, 61, 726–728. DOI: 

Peer review comments

The author(s) of this paper chose the Open Review option, and the peer review comments are available alongside the published article.