Commentary / Policy

Institutional Review Boards, Professionalism, and the Internet


Science Translational Medicine, 05 May 2010: Vol. 2, Issue 30, p. 30cm15
DOI: 10.1126/scitranslmed.3000884

Abstract

Even as the Internet generates pressures that erode professional authorities of all kinds, it also provides opportunities for researchers and their institutional review boards to bolster their status as trusted sources. To this end, we must work to improve clinical protocol design and approval procedures and maintain the integrity of the study participant recruitment process in clinical trials.

The translational enterprise by definition requires collaboration among multiple components within organizations and between those organizations and the public. From research design onward through study participant recruitment and physician/patient treatment decisions, translational medicine depends on a modicum of trust among the various parties. If an ability to provide sound judgments in the face of the uncertainties of disease is the sine qua non of being a professional in health care (1), what challenges and opportunities does the Internet pose for translational medicine researchers, clinical trial managers, and sponsors who wish to be perceived as trusted sources (Fig. 1)? This Commentary focuses on two: (i) communication and workflow between researchers and institutional review boards (IRBs) and (ii) the fruits of their collaborations, namely clinical trials and study participant recruitment. In this context, the term IRB represents IRBs both as entities in themselves and as vital components of human research protection programs.

Fig. 1. Gutenberg to Google.

For its potential to transform authority and trust relations of all kinds, nothing like the Internet in scale has occurred since the invention of mass printing by Gutenberg around the year 1440.

CREDIT: C. BICKEL/SCIENCE TRANSLATIONAL MEDICINE

Professionalism in IRB review and recruitment of clinical trial participants is challenged by the Internet’s ability to expose and create variation, a seemingly innocuous word that lies at the heart of science and its management. According to W. Edwards Deming, a statistician who wrote extensively on management: “The central problem of management in all its aspects, including planning, procurement, manufacturing, research, sales, personnel, accounting, and law, is to understand better the meaning of variation. …” (2)

RESEARCHERS AND IRBS

Most readers would agree, I expect, that researchers want to select clinical research participants by the most expedient means possible that are consistent with the safety of the participants. Clinical principal investigators (PIs) come from diverse disciplines and engage in clinical research with wide ranges of risk and potential benefit. Clinical trial protocols may be devised for a single study or may consist of prepackaged documents for participation in multicenter trials. Regardless of the risk-to-benefit ratio or protocol type, PIs typically submit their protocols, wait weeks for review by an IRB, receive suggestions and corrections from the IRB, revise and resubmit, wait for another IRB review, and perhaps revise and resubmit and wait for an additional review. Most IRB requests for revision take one of three forms: (i) changes in the logistics and supervision of the research, as in requesting that someone with an MD degree be present while participants undergo a physical test; (ii) alterations in the research design, such as suggesting that researchers collect different data; or (iii) revisions to patient consent forms.

What do researchers say about this process? The phrase “death by a thousand duck bites” seems to describe the experience of many (3). Writing in the Annals of Internal Medicine, a Stanford team estimated that it took $56,000 in administrative costs and more than 15,000 pages of paper to tweak an already-approved research protocol that simply compared the progress made by patients who had attended two different types of addiction treatment programs (4). And a 2008 report in Science found that paperwork is among the factors threatening to price large clinical trials out of reach (5). Researchers complain that obtaining participant consent has become onerous as well. In cancer drug trials, for example, prospective participants are routinely asked to sign complicated forms that can run to dozens of pages, often written in legalese that obscures rather than clarifies the nature of the experiment. Senior researchers say morale is low among clinical and translational scientists, especially those early in their careers (5).

Whether cumbersome or not, IRBs tend to vary considerably in their responses to a given protocol, a variation that the Internet easily exposes. Marked variation in IRB responses challenges their professionalism on at least two counts: (i) fairness, the expectation that like should be treated alike by deliberative bodies operating in the public sphere, and (ii) the capacity for sound judgment. A study of the responses of three Baltimore-area IRBs to a single protocol concluded the following: “Inconsistencies in these reviews raise questions as to the validity and efficiency of the IRB process … Validity can be defined as the ‘extent to which any measuring instrument measures what it is intended to measure.’ It is important that the IRB process reliably measure with adequate validity the degree of safety of scientific experiments in order to preclude harm to subjects.” (6)

IRB administrators tend to assume that IRB members deliberate within a framework based on logical positivism. That is, they believe that members apply fixed regulations more or less accurately, resulting in objectively right and wrong decisions. The prevalence of variation troubles these administrators because it flies in the face of their positivistic worldview. When they examine examples of variation, the physicians and researchers who conduct audits of IRB studies conclude that the variation comes from two sources: dubious application of federal regulations and hasty judgments made by overburdened board members and administrators. In short, they tend to assume that the variation results from members’ lack of time and knowledge. Almost regardless of the nature of the critique, IRB leaders tend to propose the same remedy: greater financial support for staff training and staff expansion.

But what if the standard administrative account of the IRB process is wrong? Evidence from interviews with IRB chairs across the United States suggests that differences in decisions from one board to the next are products of how boards deliberate, not of board members’ mistaken judgments (7). What matters, it turns out, is an IRB’s pattern of “local precedents,” which are previous decisions made by the IRB that board members use to guide their evaluations of subsequent protocols. By drawing on them, members tend to read new protocols as permutations of studies that they have previously debated and settled. Instead of working from general rules to specific cases, these IRB members tend to work from case to case. IRBs may vary not because their judgments are mistaken but because they strive to make locally consistent decisions over time. Local precedents tend to be idiosyncratic to an individual IRB but stable within each board. Thus, when a local precedent has not been established, a PI may help shape the solution. Once precedents are set and decision-making inertia is established, future PIs lose influence over decisions made later on the same recurring issue.

From a local-precedents perspective, two common IRB behaviors may account for both variability and lengthy review timelines. First, IRBs disagree as to the amount of scrutiny a given protocol will require: whether a protocol is expedited depends on how risky IRB leaders consider it to be for participants’ bodies and minds. Second, in full-board reviews (as opposed to expedited reviews, which do not require a quorum), IRBs can arrive at different overall decisions concerning the approval of similar protocols. Requested modifications to previously approved protocols can differ substantially across IRBs in multicenter trials and can even be contradictory. Without disputing the importance of adequate material resources, IRBs seeking less variability might look to bolster their conceptual resources (7).

SPONSOR COMMUNICATION IN PARTICIPANT RECRUITMENT

Once an IRB has approved a protocol, the study makes its way onto the Internet in the form of participant recruitment ads. Operationally, the IRB may believe its work is done, but that does not mean the world is done with the trial or with the IRB that approved the trial protocol. Although sponsors take over public communication, the Internet public has little means of distinguishing between a trial sponsor and the IRB that approved the trial. How does the Internet affect the perception of sponsors and, by extension, of the IRB approval process as trusted sources during participant recruitment?

Everyone who uses the Internet gets information from multiple sources with disparate interests, values, and voices. Whether a translational entity is officially nonprofit or not, financial conflicts of interest (COIs) suffuse all aspects of U.S. health care. Sometimes these are disclosed, sometimes not. Even when the existence of a financial COI is disclosed, its amount is not. Therefore, no one outside the sponsors has the means to assess a COI’s potential “oomph” to distort sound judgment in conveying risks and benefits to prospective trial participants. According to the U.S. Food and Drug Administration (FDA), “Direct advertising for study subjects [is] the start of the informed consent process” (8), and the Office for Human Research Protections, which is part of the U.S. Department of Health and Human Services (DHHS), has stated that “the information provided on these [patient recruitment] Web sites may constitute the earliest components of the informed consent process” (9). Not surprisingly, recruitment practices on the Web vary considerably by sponsor. In a 2002 DHHS study of 22 Web sites for patient recruitment for 110 clinical trials, only a quarter mentioned potential benefits and none mentioned potential risks of the protocol (10). More recent studies (11) suggest that there have been no significant changes in sponsor behavior since that time. Furthermore, euphemisms are common, such as when recruitment Web sites refer to “new drug treatments” rather than using the phrase “experimental or unproven drug.”

Another problem with patient recruitment Web sites is that balanced presentations, those that offer complete and noneuphemistic information of clinical significance, are not the norm, even though unbiased information is crucial to a potential participant’s decision-making process. A recent report by the Hastings Center (an independent bioethics research institute) that analyzed 171 diabetes and 184 depression clinical trials noted that “38% of the [enrollment Web] sites … did not appear to provide balanced descriptions of the studies.” Moreover, the report stressed that “nearly 75% provided some description of incentives … yet roughly half of these failed to mention risks or what the study involved.” No sites used the term “risk(s),” and only one used the term “side effect(s)” (and then only to note their absence). The study also found that 67% of the diabetes trials and 77% of the depression studies involved “more than minimal risk”; most of these studies had for-profit sponsors. Recruitment Web sites also often omit basic study details, such as the length of the study, the source(s) of funding, and the specifics of subject participation. According to Hastings Center researchers, “Many online recruiting sites include some—but not full—descriptions of what will be involved. … Thus the information provided appears inconsistent with federal guidance and weighted toward encouraging research participation” (11). Given the genuine uncertainty and clinical risk faced by trial participants, we must ask whether trial sponsors’ blatant use of euphemism and elision is professional and ethical.

SOME MODEST RECOMMENDATIONS

In terms of conceptual resources, suppose all IRBs had access to a Web resource that contained exemplar protocols and problems as sources of model decisions that allowed IRB members to address ethical concerns efficiently: to identify a subsequent protocol’s essential problem amid all of its particulars, to clarify its resonance with a prior case, and to render a consistent decision. Embodying this wisdom on the Internet might eliminate administrative bottlenecks at the pre-review, initial review, and full review levels. The proposed Web resource could be established and maintained at an IRB or consortium level as part of the research infrastructure.
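To make the idea concrete, a shared precedent resource of this kind could have a very simple structure. The sketch below, written in Python purely for illustration, shows one way such a registry might index local precedents by recurring issue and research type so that reviewers could retrieve prior model decisions when a new protocol raises a familiar problem. The field names, categories, and example entry are hypothetical and are not drawn from any existing IRB system or from this Commentary.

from dataclasses import dataclass, field

# Hypothetical data model for a shared registry of IRB "local precedents."
# Names and the example entry are invented for illustration; the Commentary
# proposes the idea of such a Web resource but specifies no implementation.

@dataclass
class Precedent:
    issue: str           # recurring problem (e.g., supervision of a physical test)
    research_type: str   # category of research (e.g., behavioral, drug trial)
    risk_level: str      # "minimal" or "more than minimal"
    model_decision: str  # the board's settled resolution of the issue
    rationale: str = ""  # brief explanation that reviewers can reuse

@dataclass
class PrecedentRegistry:
    precedents: list = field(default_factory=list)

    def add(self, precedent: Precedent) -> None:
        self.precedents.append(precedent)

    def find(self, issue: str, research_type: str) -> list:
        # Return prior model decisions matching a new protocol's essential problem.
        return [p for p in self.precedents
                if p.issue == issue and p.research_type == research_type]

# Example usage with an invented entry.
registry = PrecedentRegistry()
registry.add(Precedent(
    issue="physical test supervision",
    research_type="behavioral",
    risk_level="minimal",
    model_decision="Require an MD or other licensed clinician on site during testing",
    rationale="Consistent with this board's prior exercise-physiology approvals."))

for precedent in registry.find("physical test supervision", "behavioral"):
    print(precedent.model_decision)

In practice, the value of such a registry would lie less in the software than in consortium-level agreement on how recurring issues and model decisions are named and categorized.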

At the pre-review level, it would not be difficult to provide researchers with Internet “protocol navigators” that outline preparatory steps and provide acceptable precedents organized by type of research and typical protocols. An alternative might be to establish a Central Protocol Service Center that includes live advisers and Internet interaction. Detailed pre-review within the PI’s home department, which is routine in some organizations and not in others, makes sense, especially concerning research design and trial logistics.

At the initial review, instead of an extended back-and-forth exchange of drafts and edits, why not provide an option to PIs that authorizes IRB staff to insert default edits—acceptable precedents—as part of the initial review process? Applicants could bring their computers to the initial meeting with IRB staff and edit their applications right then and there. Researchers could choose or decline this service. Those who opted in would receive an initial review report immediately after the IRB meeting that focused only on outstanding issues, if any.

At the formal review, why not observe limits on inquiry? Given that the primary purpose of the IRB review is human subject protection, what is the logic for having IRB members discuss a protocol’s research design, as when IRBs suggest that the PI gather additional data? Especially when the protocol has been pre-reviewed by the PI’s home department, should not questions of research design be moot for the IRB unless they undermine safety or consent standards?

To maintain the integrity of the participant recruitment process, IRB leaders might insist as part of protocol approval that sponsors provide balanced presentations to prospective participants and then follow through with compliance monitoring and enforcement mechanisms. Small research centers may find this difficult to accomplish on their own. The Internet, however, makes virtual consortiums straightforward, just as it facilitates ongoing surveillance. Large regional players with hundreds or thousands of trials, such as the Partners system at Harvard, control access to participants and researchers alike. If translational leaders have the will, then ensuring professional standards in communication should be doable. Being known as the trusted source for participant recruitment in its geographical region might even enhance a clinical research organization’s marketing to prospective participants. At the moment, “selling” trial participation to participants may not be ubiquitous, but it is common, from trial recruitment ads on Craigslist to slick Web presentations by large pharmaceutical concerns, such as Wyeth (12).

Vibrant social contracts of small and large scope depend on trust between the parties. When researchers don’t trust the IRB process, and when IRBs respond routinely to researchers with unreasonable demands, translational throughput suffers. When research overseers routinely turn a blind eye to the proliferation of corrupted speech on Internet trial recruitment sites, the public has less of a reason to trust professional authorities.

In the broadest terms, the Internet multiplies the facts, interests, values, and voices at play in many domains, including translational medicine. Instead of professionally controlled speech, heteroglossia prevails. Dissidents can and do organize themselves easily. Learned knowledge is widely available, but medical illiteracy is prevalent. Private interests and governments tend to cherish secrets, but the Internet promotes transparency. As for its potential to transform authority and trust relations of all kinds, nothing like the Internet in scale has occurred since the invention of mass printing. Societies changed more slowly then, but not long after the mass circulation of vernacular Bibles, books, and pamphlets took off in the 16th and early 17th centuries, established authorities in religion, politics, and medicine across Europe wobbled or toppled. Those at the top then felt that their world had been turned upside down, and they were right (13).

Even as the Internet generates pressures that erode professional authorities of all kinds, it also provides opportunities for researchers to bolster their status as trusted sources. Whether local and national leaders of translational medicine are willing to accept that challenge as a genuine calling remains to be seen. At the moment—and regardless of institutional rhetoric to the contrary—translational throughput stalls, atomized scramble and hustle seem ascendant, dissidents proliferate, and trust seems in short supply. Systematic attention to variations in translational processes worldwide would ameliorate important problems that now arise routinely at each stage in the translational enterprise.

Footnotes

  • Citation: R. Martensen, Institutional review boards, professionalism, and the Internet. Sci. Transl. Med. 2, 30cm15 (2010).

References and Notes

  1. The author’s views are personal and do not necessarily represent those of the U.S. National Institutes of Health or the U.S. government.
