SUMMARY

Scientific Integrity

Between 2% and 4% of researchers admit to having falsified or fabricated their data. The prevalence of such unethical behavior can reach 10% in some disciplines or countries. Data falsification is the extreme end of a spectrum of questionable research practices that are individually less serious but much more widespread: surveys across disciplines have shown that more than half of researchers engage in some form of selective reporting or add new data until they obtain significant results. These unethical practices harm the overall quality of research.

Once they have passed peer review, fabricated, distorted or selectively reported data find their way into literature reviews and meta-analyses. They can in turn influence directions for future research or even policy decisions with wide-ranging implications in health, economics or politics.

Negative incentives have attracted significant attention: publishers, institutions and research evaluators tend to favor unprecedented research over work that simply confirms common hypotheses. The lack of proper tools, standards and workflows for dealing efficiently with data is also a fundamental issue. In most disciplines, data collection is neither well organized nor well maintained: it has been estimated that as much as half of the life-science datasets created since the 1990s are already lost. Questionable research practices thus partly stem from common deficiencies in scientific data management.

Open science and data sharing have recently emerged as a common framework for addressing issues of research integrity. While initially focused on access to publications, the open science movement is more broadly concerned with transparency at all stages of the research lifecycle. The diffusion of datasets through open repositories and infrastructures has already largely solved major issues of long-term preservation. It also ensures that potential errors or adjustments to statistical indicators can be subsequently corrected, as later analyses and replications have access to the original data source.

Open science is now increasingly integrated into ethical standards, following community-led initiatives like the TOP Guidelines (2014). The European Code of Conduct for Research Integrity of 2017 includes full requirements for open access, open data and reproducible workflows: “Researchers, research institutions and organizations ensure access to data is as open as possible, as closed as necessary”. The Hong Kong Principles for assessing researchers (2020) acknowledge open science as one of the five pillars of scientific integrity.

Open science is changing the nature of the debate over research integrity, which had until now remained largely detached from the public space. Between 60% and 90% of the audience of open scientific platforms comes from non-academic professionals and private citizens. This increased diffusion creates new responsibilities but also new opportunities to involve non-academic stakeholders in the spirit of citizen science.

This article is published on this website and as an independently updated Wikipedia article.


Notes

Books & Theses

  • Babbage, Charles (1830). Reflections on the Decline of Science in England: And on Some of Its Causes, by Charles Babbage (1830). To which is Added On the Alleged Decline of Science in England, by a Foreigner (Gerard Moll) with a Foreword by Michael Faraday (1831). B. Fellowes.
  • Broad, William J.; Wade, Nicholas (1983). Betrayers of the Truth. Simon and Schuster. ISBN 978-0-671-44769-4.
  • Pimple, Kenneth D., ed. (2017-05-15). Research Ethics. Routledge. ISBN 978-1-351-90400-1.

Reports

  • Pauly, Gerhard (2021). OSCAR open science code of conduct (Report). Fraunhofer-Gesellschaft zur Förderung der Angewandten Forschung e.V.
  • Henriet, M Pierre; Ouzoulias, M Pierre (2021). Promouvoir et protéger une culture partagée de l’intégrité scientifique (Report). Assemblée nationale.

Journal articles

Other sources

Scientific Integrity

Langlais, Pierre-Carl
CC BY 4.0
published on 2 May 2024

Langlais, Pierre-Carl, « Scientific Integrity », Petite encyclopédie de la science ouverte / Small encyclopedia of Open Science, published on 2 May 2024.
DOI : https://doi.org/10.52949/59
URL : https://encyclo.ouvrirlascience.fr/articles/scientific-integrity/


ARTICLE

Research integrity or scientific integrity is a form of scientific ethics that deals with the “best practices” or rules of professional conduct of researchers.

First introduced in the 19th century by Charles Babbage, the concept of research integrity became a prominent issue in the late 1970s. A series of publicized scandals in the United States led to heightened debates on the ethical norms of science and the limitations of the self-regulation processes implemented by scientific communities and institutions. Formalized definitions of scientific misconduct and codes of conduct became the main policy response after 1990. In the 21st century, codes of conduct on research integrity are widespread. Along with institutional-level and national-level codes, major international texts include the European Charter for Researchers (2005), the Singapore Statement on Research Integrity (2010), the European Code of Conduct for Research Integrity (2011 & 2017) and the Hong Kong Principles for assessing researchers (2020).

Scientific literature on research integrity falls mostly into two categories: mappings of definitions and categories, especially with regard to scientific misconduct, and empirical surveys of the attitudes and practices of scientists.[footnote “Laine 2018, p. 52”] Following the development of codes of conduct, taxonomies of unethical practices have been significantly expanded beyond the long-established forms of scientific fraud (plagiarism, falsification and fabrication of results). Definitions of questionable research practices and the debate over reproducibility also target a grey area of dubious scientific results, which may not be the outcome of deliberate manipulation.

The concrete impact of codes of conduct and other measures put in place to ensure research integrity remains uncertain. Several case studies have highlighted that while the principles of these codes adhere to common scientific ideals, they are seen as remote from actual work practices, and their efficacy has been criticized.

After 2010, debates on research integrity have been increasingly linked with open science. International codes of conduct and national legislation on research integrity have officially endorsed the open sharing of scientific outputs (publications, data or code) as a way to limit questionable research practices and to enhance reproducibility. References to open science have incidentally opened up the debate over scientific integrity beyond academic communities, as it increasingly concerns a wider audience of scientific readers.

Definition and history

Research integrity or scientific integrity became an autonomous concept within scientific ethics in the late 1970s. In contrast with other forms of ethical misconduct, the debate over research integrity focuses on a “victimless offence” that only hurts “the robustness of scientific record and public trust in science”.[footnote “Laine 2018, p. 50”] Infractions against research integrity chiefly include “data fabrication, falsification, or plagiarism”.[footnote “Laine 2018, p. 50”] In that sense, research integrity mostly deals with the internal processes of science. It can be treated as a community issue that need not involve external observers: “research integrity is more autonomously defined and regulated by the community, while research ethics (again, a narrow definition) has closer links to legislation”.[footnote “Laine 2018, p. 50”]

Emergence of the issue (1970-1980)

Before the 1970s, ethical issues largely focused on the conduct of medical experiments, especially with regard to tests on human subjects. In 1803, the “code” of Thomas Percival created a moral foundation for experimental treatments that “was built upon fairly regularly” throughout the next two centuries, notably by Walter Reed in 1898 and by the Berlin code in 1900.[footnote “Pimple 2017, p. XV”] After the Second World War, the Nazi human experiments motivated the development of international, widely acknowledged codes of research ethics, such as the Nuremberg Code (1947) or the World Medical Association Declaration of Helsinki.[footnote “Pimple 2017, p. XVI”]

According to Kenneth Pimple, Charles Babbage was the first author to single out the specific issue of scientific integrity.[footnote “Pimple 2017, p. XVI”] In Reflections on the Decline of Science in England, and on Some of its Causes, first published in 1830, Babbage identified four classes of scientific fraud,[footnote “Babbage 1830, p. 176”] from outright forging to varying degrees of arrangement and cooking of the data or methods.

Research integrity became a major topic of debate in the biological sciences after 1970, due to a combination of factors: the development of advanced data analysis methods, the growing commercial relevance of fundamental research[footnote “Löppönen & Vuorio 2013, p. 3”] and the increased focus of federal funding agencies in the context of big science.[footnote “Pimple 2017, p. XVIII”] In 1974, the “painted mouse incident” attracted unprecedented media attention: William Summerlin inked a black dot on a mouse to claim that a treatment had been a success.[footnote “Woolf 1988, p. 69”] Between 1979 and 1981, several major cases of scientific fraud and plagiarism drew greater attention to the issue from researchers and policy-makers in the United States[footnote “Löppönen & Vuorio 2013, p. 3”]: no fewer than four major frauds came to light in the summer of 1980.

At the time, the “scientific community responded to reports of ‘scientific fraud’ (as it was often called) by asserting that such cases are rare and that neither errors nor deception can be hidden for long because of science’s self-correcting nature”.[footnote “Pimple 2017, p. XVIII”] A journalist at Science, William Broad, took the opposite position and made an influential contribution to the issue of research integrity. Testifying before the US House of Representatives Science and Technology subcommittee, he highlighted that “cheating in science was nothing new” but, until recently, “had been handled as an internal affair”. In a detailed investigation co-written with Nicholas Wade, Betrayers of the Truth, Broad described scientific fraud as a structural problem: “As more cases of frauds broke into public view (…) we wondered if fraud wasn’t a quite regular minor feature of the scientific landscape (…) Logic, replication, peer review — all had been successfully defied by scientific forgers, often for extended periods of time”.[footnote “Broad & Wade 1983, p. 8”] Other early assessments of the systemic nature of scientific fraud presented a more nuanced picture.[footnote “Pimple 2017, p. XIX”] For Patricia Woolf, alongside a few obvious manipulations there was a wide range of grey areas, owing to the complexity of fundamental research: “the boundaries between egregious self-deception, culpable carelessness, fraud, and just plain error, can be very blurred indeed”.[footnote “Woolf 1988, p. 80”] Characteristically, the debate led to a reevaluation of past scientific practices. In 1913, a well-known experiment on the electron charge by Robert Millikan was explicitly based on discarding results that did not agree with the underlying theory: well received at the time, this work came to be considered a textbook example of scientific manipulation by the 1980s.[footnote “Whitbeck 2004, p. 49”]

Formalization of research integrity (1990-2020)

By the end of the 1980s, the accumulation of misconduct scandals and heightened political and public scrutiny put scientists in a difficult position in the United States and elsewhere: “The tone of the 1988 US congressional oversight hearings, chaired by Rep. John Dingell (D-MI), that investigated how research institutions were responding to misconduct allegations reinforced many scientists’ view that both they and scientific research itself were under siege.”[footnote “Whitbeck 2004, p. 50”] The main answer was procedural: research integrity has “been codified into numerous codes of conduct field specific, national, and international alike.”[footnote “Laine 2018, p. 49”] This policy response largely stemmed from research communities, funders and scientific administrators. In the United States, the Public Health Service and the National Science Foundation adopted “similar definitions of misconduct in science” in 1989 and 1991.[footnote “Pimple 2017, p. XIX”] The concept of research integrity and its converse, scientific misconduct, were especially relevant from the perspective of funding bodies, since they made it possible to “delineate the research-related practices that merit intervention”[footnote “Pimple 2017, p. XX”]: a lack of integrity leads not only to unethical but also to inefficient research, and funds are better allocated elsewhere.

After 1990, there was a “veritable explosion of scientific codes of conduct”.[footnote “Schuurbiers et al. 2009”] In 2007, the OECD published a report on best practices for promoting scientific integrity and preventing misconduct in science (Global Science Forum). Major examples of international texts include:

  • European Charter for Researchers (2005)
  • the Singapore statement on research integrity (2010)[footnote “Singapore Statement on Research Integrity (PDF). 2010.”]
  • the European Code of Conduct for Research Integrity of All European Academies (ALLEA) and the European Science Foundation (ESF) (2011, revised in 2017[footnote “ALLEA publishes revised edition of The European Code of Conduct for Research Integrity. All European Academies (ALLEA). 2017.”]).

There are no global estimates of the total number of codes of conduct related to research integrity.[footnote “Laine 2018, p. 53”] A UNESCO project, the Global Ethics Observatory (no longer accessible after 2021), referenced 155 codes of conduct[footnote “Database 5: code of conduct, Unesco, archived in 2021 by Internet Archive”] but “this is probably just a fraction of the total number of codes produced in recent years.”[footnote “Schuurbiers et al. 2009”] Codes have been created in highly diverse settings and vary widely in scale and ambition: along with national-scale codes, there are codes for scientific societies, institutions or R&D departments.[footnote “Laine 2018, p. 53”] While these normative texts frequently share a core of common principles, there has been a growing concern over “fragmentation, lack of interoperability and varying understandings of central terms”.[footnote “Laine 2018, p. 52”]

Taxonomy and classification

In codes of conduct, the definition of research integrity is usually negative: the collection of norms aims to single out different forms of unethical research and scientific misconduct with varying degrees of gravity.

The multiplication of codes of conduct has also corresponded with an expansion of scope. While the initial debate focused on the “three deadly sins of scientific and scholarly research: fabrication, falsification and plagiarism”, attention later shifted “to the lesser breaches of research integrity”.[footnote “Bouter 2020, p. 2364”] In 1830, Charles Babbage introduced the first taxonomy of scientific frauds, one that already covered some forms of questionable research practices: hoaxing (a voluntary fraud “far from justifiable”[footnote “Babbage 1830, p. 176”]), forging (“the forger is one who, wishing to acquire a reputation for science, records observations which he has never made”[footnote “Babbage 1830, p. 177”]), trimming (which “consists in clipping off little bits here and there from those observations which differ most in excess from the mean”[footnote “Babbage 1830, p. 178”]) and cooking. Cooking is Babbage’s main focus, as an “art of various forms, the object of which is to give to ordinary observations the appearance and character of those of the highest degree of accuracy”.[footnote “Babbage 1830, p. 178”] It falls into several sub-cases such as data selection (“if a hundred observations are made, the cook must be very unlucky if he cannot pick out fifty or twenty to do the serving up”[footnote “Babbage 1830, p. 179”]), model or formula selection (“another approved receipt is to calculate them by two different formulae”[footnote “Babbage 1830, p. 179”]) and the use of different constants.[footnote “Babbage 1830, p. 180”]

In the late 20th century, this classification was greatly expanded and came to encompass a wider range of deficiencies than intentional fraud. The formalization of research integrity entailed a structural change in the vocabulary and concepts associated with it.[footnote “Pimple 2002, p. 199”] By the end of the 1990s, use of the expression “scientific fraud” was discouraged in the United States in favor of a “semi-legal term”: scientific misconduct. The scope of scientific misconduct is expansive: along with data fabrication, falsification and plagiarism, it includes “other serious deviations” that are demonstrably done in bad faith.[footnote “Pimple 2002, p. 200”] The associated concept of questionable research practices, first introduced in a 1992 report of the Committee on Science, Engineering, and Public Policy, has an even broader scope, as it also encompasses potentially non-intentional research failures (such as inadequacies in the research data management process).[footnote “Pimple 2002, p. 202”] In 2016, a study identified as many as 34 questionable research practices or “degrees of freedom” that can occur at every step of a project (the initial hypothesis, the design of the study, the collection of the data, the analysis and the reporting).[footnote “Wicherts et al. 2016”]

After 2005, research integrity was additionally redefined through the lens of research reproducibility and, more specifically, of the “reproducibility crisis”. Studies of reproducibility suggest that there is a continuum between irreproducibility, questionable research practices and scientific misconduct: “Reproducibility is not just a scientific issue; it is also an ethical one. When scientists cannot reproduce a research result, they may suspect data fabrication or falsification.”[footnote “Resnik Shamoo 2017”] In this context, ethical debates focus less on a few highly publicized scandals and more on the suspicion that the standard scientific process is broken and fails to meet its own standards.

Current landscape and issues

Prevalence of ethical issues

In 2009, a meta-analysis of 18 surveys estimated that less than 2% of scientists “admitted to have fabricated, falsified or modified data or results at least once”. Real prevalence may be underestimated due to self-reporting: regarding “the behaviour of colleagues admission rates were 14.12%”.[footnote “Fanelli 2009”] Questionable research practices are more widespread, as more than one third of respondents admit to having engaged in them at least once.[footnote “Fanelli 2009”] A large 2021 survey of 6,813 respondents in the Netherlands found significantly higher estimates, with 4% of respondents engaging in data fabrication and more than half engaging in questionable research practices.[footnote “Gopalakrishna et al. 2021”] The higher rates can be attributed either to a deterioration of ethical norms or to “the increased awareness of research integrity in recent years”.[footnote “Gopalakrishna et al. 2021, p. 5”] The highest rates of self-declared scientific misconduct are found in the medical and life sciences, with as many as 10.4% of respondents surveyed in the Netherlands admitting to scientific fraud (either fabrication or falsification of data).[footnote “Gopalakrishna et al. 2021, p. 5”]

Other forms of scientific misconduct or questionable research practices are both less problematic and much more widespread. A 2012 survey of 2,000 psychologists found that “the percentage of respondents who have engaged in questionable practices was surprisingly high”,[footnote “John, Loewenstein Prelec 2012, p. 525.”] especially with regard to selective reporting.[footnote “John, Loewenstein Prelec 2012, p. 525.”] A 2018 survey of 807 researchers in ecology and evolutionary biology showed that 64% “did not report results because they were not statistically significant”, 42% had decided to collect additional data “after inspecting whether results were statistically significant” and 51% “reported an unexpected finding as though it had been hypothesised from the start”.[footnote “Fraser et al. 2018, p. 1.”] As these figures come from self-reported surveys, they are likely underestimates, and questionable research practices may be even more mainstream.[footnote “Fraser et al. 2018, p. 12.”]

Implementation and assessment of code of conducts

Several case studies and retrospective analyses have been devoted to the reception of codes of conduct in scientific communities. They frequently highlight a discrepancy between the theoretical norms and “the lived morality of researchers”.[footnote “Laine 2018, p. 54”]

In 2004, Caroline Whitbeck underlined that the enforcement of a few formal rules had overall failed to address a structural “erosion or neglect” of scientific trust.[footnote “Whitbeck 2004, p. 48”] In 2009, Schuurbiers, Osseweijer and Kinderlerer conducted a series of interviews in the aftermath of the Dutch code of conduct on research integrity, first introduced in 2005. Overall, most respondents were unaware of the code and other complementary ethical recommendations.[footnote “Schuurbiers et al. 2009, p. 218”] While its principles “were seen to reflect the norms and values within science rather well”, they seemed detached from actual work practices, which “may lead to morally complex situations”.[footnote “Schuurbiers et al. 2009, p. 222”] Respondents were also critical of the underlying individualist philosophy of the code, which shifted the entire blame onto individual researchers without taking into account institutional or community-wide issues.[footnote “Schuurbiers et al. 2009, p. 224”] In 2015, a survey of “64 faculty members at a large southwestern university” in the United States “yielded similar results”:[footnote “Laine 2018, p. 54”] a significant share of the respondents were not aware of the existing ethical guidelines, and the communication process remained poor.[footnote “Giorgini et al. 2015, p. 10”] In 2019, a case study on Italian universities noted that the proliferation of research codes “has a reactive nature because codes of ethics are drawn up in response to scandals and as a result are punitive and negative, with lists of prohibitions”.[footnote “Mion et al. 2019”]

Codes of conduct on research integrity may have a more significant impact on professional identity. The development of research codes has been equated with an internalization of research-integrity issues within scientific social circles and their close association with disputed results, which makes them a typical form of “knowledge club” governance. In contrast with a wider range of ethical issues that may intersect with broader social debates (such as gender equality), research integrity belongs to a form of professional ethics analogous to the standards applied by journalists or medical practitioners.[footnote “Laine 2018, p. 51”] As such, codes do not only create a common moral framework but also, incidentally, “justify the existence of the profession as separate from other professions”.[footnote “Laine 2018, p. 51”] While the impact of codes on actual ethical practices remains difficult to assess, they have a more measurable impact on the professionalization of research, by transforming informal norms and uses into a set of predefined principles: “codes in general are supported both by those pursuing them as a vehicle to encourage the greater professionalization of biologists (e.g., an initial stage to introducing professional licensing) and those seeking them to forestall any further regulation.”[footnote “Rappert 2007, p. 8”]

Research integrity and open science

In the 2000s and 2010s, scientific integrity was gradually reframed in the context of open science and increased accessibility of scientific publications. The debate on research reproducibility significantly contributed to this evolution. While not explicitly mentioned in John Ioannidis’s seminal essay, Why Most Published Research Findings Are False, data sharing has become a leading recommendation for enhancing research reproducibility, notably in the TOP Guidelines.

Ethics of open science

The underlying ethical principles of open science predate the development of an organized open science movement. In 1973, Robert K. Merton theorized a normative “ethos of science” structured around a “norm of disclosure”. This norm “was far from universally accepted” in the early development of scientific communities and has remained “one of the many ambivalent precepts contained in the institution of science.”[footnote “Merton 1973, p. 337.”] Disclosure was counterbalanced by the limitations of the publication and evaluation process, which tended to slow down the divulgation of research results.[footnote “Merton 1973, p. 337.”] In the early 1990s, this norm of disclosure was reframed as a norm of “openness” or “open science”.[footnote “Partha David 1994.”]

The early open access and open science movements emerged partly as a reaction against the large corporate model that had come to dominate scientific publishing since the Second World War.[footnote “Suber 2012, p. 29”] Open science was framed not as a radical transformation of scientific communication but as a realization of core underlying principles already visible at the start of the scientific revolution of the 17th and 18th centuries: the autonomy and self-governance of scientific communities and the divulgation of research results.[footnote “Rentier 2019, p. 19.”]

Since 2000, the open science movement has expanded beyond access to scientific outputs (publications, data or software) to encompass the entire process of scientific production. The reproducibility crisis has been an instrumental factor in this development, as it moved debates over the definition of open science further from scientific publishing. In 2018, Vicente-Saez and Martinez-Fuentes attempted to map the common values shared by the standard definitions of open science in the English-speaking scientific literature indexed in Scopus and the Web of Science.[footnote “Vicente-Saez Martinez-Fuentes 2018.”] Access is no longer the main dimension of open science, as it has been extended by more recent commitments toward transparency, collaborative work and social impact.[footnote “Vicente-Saez Martinez-Fuentes 2018, p. 2.”] This conceptual mapping also “encompasses (Graph 5) the emerging trends on Open Science such as open code, open notebooks, open lab books, science blogs, collaborative bibliographies, citizen science, open peer review, or pre-registration”.[footnote “Vicente-Saez Martinez-Fuentes 2018, p. 7.”]

Through this process, open science has been increasingly structured around a consistent set of ethical principles: “novel open science practices have developed in tandem with novel organising forms of conducting and sharing research through open repositories, open physical labs, and transdisciplinary research platforms. Together, these novel practices and organising forms are expanding the ethos of science at universities.”[footnote “Vicente-Saez, Gustafsson Van den Brande 2020, p. 1.”]

Codification of open science ethics

The translation of the ethical values of open science into applied recommendations was mostly undertaken by institutional and community initiatives throughout the 2010s. The TOP Guidelines were elaborated in 2014 by a committee for Transparency and Openness Promotion that included “disciplinary leaders, journal editors, funding agency representatives, and disciplinary experts largely from the social and behavioral sciences”.[footnote “Nosek et al. 2015, p. 1423.”] The guidelines rely on eight standards, each with different levels of compliance. While the standards are modular, they also aim to articulate a consistent ethos of science, as “they also complement each other, in that commitment to one standard may facilitate adoption of others.”[footnote “Nosek et al. 2015, p. 1423.”] The highest level of compliance for each standard includes the following requirements:

  • Citation standards (1), providing “appropriate citation for data and materials” for each publication.[footnote “Nosek et al. 2015, p. 1424.”]
  • Data transparency (2), Analytic methods transparency (3) and Research materials transparency (4), with all the relevant data, code and research materials stored in a “trusted repository” and all analyses independently reproduced prior to publication.[footnote “Nosek et al. 2015, p. 1424.”]
  • Design and analysis transparency (5) with dedicated standards for “review and publication”.[footnote “Nosek et al. 2015, p. 1424.”]
  • Preregistration of studies (6) and Preregistration of analysis plans (7) with publications providing “link and badge in article to meeting requirements”.[footnote “Nosek et al. 2015, p. 1424.”]
  • Replication (8) with the journal using “Registered Reports as a submission option for replication studies with peer review”.[footnote “Nosek et al. 2015, p. 1424.”]

In 2018, Heidi Laine attempted to establish a nearly-exhaustive list of “ethical principles associated with open science”[footnote “Laine 2018, p. 58”]:

Operationalization of open science principles in ethical codes of conduct (Laine, 2018; full titles of the four codes are given below):

  • Publication (Open Access): full requirements in all four codes (Singapore Statement 2010, Montreal Statement 2013, Finnish Code of Conduct 2012, European Code of Conduct 2017).
  • Research data (open scientific data): partial requirements in the Singapore Statement; mention/encouragement in the Montreal Statement and the Finnish Code; full requirements in the European Code.
  • Research methods (reproducibility): mention/encouragement in the Singapore Statement, the Montreal Statement and the Finnish Code; full requirements in the European Code.
  • Evaluation (open evaluation): no mention in any of the four codes.
  • Collaboration (citizen science, open collaboration): no mention in any of the four codes.
  • Communication (citizen science, science communication): mention/encouragement in the Singapore Statement, the Finnish Code and the European Code; no mention in the Montreal Statement.

This categorization has to contend with the diversity of approaches and values associated with the open science movement and their ongoing evolution, as the “term will likely remain as fluid as any other attempt to coin a complex system of practices, values and ideologies in one term”.[footnote “Laine 2018, p. 56”] Laine identified significant variation in the way open science principles have been embedded in four major codes of conduct and statements on research integrity: the Singapore Statement on Research Integrity (2010), the Montreal Statement on Research Integrity in Cross-Boundary Research Collaborations (2013), the Responsible Conduct of Research and Procedures for Handling Allegations of Misconduct in Finland (2012) and the European Code of Conduct for Research Integrity (2017). Access to research publications is recommended in all four codes. The integration of data sharing and reproducibility practices is less consistent, varying from tacit approval to detailed support in the case of the more recent European Code of Conduct: “The European code pays data management almost an equal amount of attention as publishing and is also in this sense the most advanced of the four CoCs.”[footnote “Laine 2018, p. 65”] Yet important areas of open science are consistently ignored, especially the development of open science infrastructure, increased transparency of evaluation and support for citizen science and wider social impact. Overall, Laine found “none of the evaluated CoCs to be in blatant contradiction with the ethical principles of open science, but only the European code of conduct can be said to actively support and give guidance on open science.”
After 2020, new forms of open science codes of conduct have explicitly claimed to “foster the ethos of open scientific practices”.[footnote “Pauly 2021, p. 5”] First adopted in July 2020, the Hong Kong principles for assessing researchers acknowledge open science as one of the five pillars of scientific integrity: “It seems clear that the various modalities of open science need to be rewarded in the assessment of researchers because these behaviors strongly increase transparency, which is a core principle of research integrity.”[footnote “Moher et al. 2020, p. 6”]

Research integrity and society

While there is still a continuum between the procedural norms of the codes of conduct and the range of values encompassed by open science, open science has significantly altered the setting and context of the ethical debate. Open scientific productions can in theory be universally shared: their dissemination is not constrained by the classic membership model of the “knowledge club”. The implications are wider as well, as potential misuse of scientific publications is no longer limited to professional scientists. The discrepancy was already visible in the late 2000s, although it was framed under “different buzzwords”[footnote “Laine 2018, p. 54”]: in a case study on the implementation of the Dutch code of conduct, Schuurbiers, Osseweijer and Kinderlerer already identified a “shift in practices” that “goes by many names like Mode 2 science, post-normal science, or post-academic science” and stems from a diverse array of transformations such as technological evolution in the management of research, the increased involvement of private actors, open innovation or open access.[footnote “Schuurbiers et al. 2009, p. 229.”] These structural trends were not well covered by the existing codes of conduct.[footnote “Schuurbiers et al. 2009, p. 229.”]

In the 1990s and the 2000s, the conversation over research integrity became increasingly professionalized and detached from the public space. The shift toward open science may counteract this trend, as the range of interested parties and potential reusers of scientific production has expanded well beyond professional academic circles. In 2018, Heidi Laine underlined that established codes of conduct had not yet taken this decisive step: “The one aspect where even the European code falls short of a full recognition of open science is in crossing the traditional professional borders of the research community, i.e. citizen science, open collaboration and science communication.”[footnote “Laine 2018, p. 68”] By not taking this new framework into account, existing codes of conduct risk becoming increasingly out of touch with the reality of scientific practices:

If the ethical aspects of open science continue to be left out of RCR (Responsible Code of Research) guidance and ponderings, the research community risks losses on both fronts: open science as well as RI (Research integrity). Open science is just as much about values and ethics as it is about technology. Most of all it is about the role of science in society. It is perhaps the most all-encompassing value discussion that the research community has ever known, and the research integrity angle and community of experts risks being side-lined.[footnote “Laine 2018, p. 69”]

The broadened conversation over scientific integrity has led to an increased involvement of political institutions and representatives, beyond specialized scientific committees and funders. In 2021, the French government passed a decree on scientific integrity that called for a generalization of open science practices[footnote “Décret n° 2021-1572 du 3 décembre 2021 relatif au respect des exigences de l’intégrité scientifique”].