Rethinking the Human Subjects Process
Get a group of social scientists together to talk about prospective research and it won't take long before the conversation turns to the question of human subjects board approval. Most researchers have a war story, and all have an opinion of the Institutional Review Board (IRB), the committee in US universities that must approve any planned investigation to make certain that the subjects of the research are protected. Before too long, someone will suggest doing away with the IRB, or avoiding human subjects altogether.
Research in the field of Digital Media and Learning (DML) tends to focus on youth participants, to occur in dynamic, mediated environments, and to involve researchers working in different locations and sharing their observations. All of these factors can complicate the process of seeking and receiving approval from local IRBs, leading to substantial effort on the part of researchers and unnecessary delays in conducting good research. Particularly vexing is the difficulty of sharing data among researchers at different universities, a vital prerequisite to collaborative social science. In the hope of improving this process for everyone involved—the researchers, members of the IRBs, the participants in the research, and the public at large—the Digital Media and Learning Research Hub supported the first of a pair of one-day workshops intended to discuss potential solutions. A number of groups have been looking at how IRBs are working and how they might work better, and we were lucky to be able to bring to Irvine a group of people with significant experience working with the IRB process in various contexts, including Tom Boellstorff, Alex Halavais, Heather Horst, Montana Miller, Dan Perkel, Ivor Pritchard, Jason Schultz, Laura Stark, and Michael Zimmer. Each of the participants shared their research and other materials with the group beforehand, as did others who were unable to join us.
We found that while there may be some fairly intractable issues, as there are for any established institution, some of the difficulties that IRBs and investigators encounter result from reinventing the wheel locally and from a general lack of transparency in the process of approving human subjects research. The elements required to make good decisions about planned research tend to be obscure and unevenly distributed across IRBs. From shared vocabularies between IRBs and investigators, to knowledge of social computing contexts, to a clear understanding of the regulations and of empirical evidence of risk, many of the elements that delay the approval of protocols and frustrate researchers and IRBs alike could be addressed if the necessary information were more widely accessible and easily discoverable.
Rather than encouraging the creation of national or other centralized IRBs, we believe greater awareness and transparency would allow local solutions to be shared widely. Essentially, this is a problem of networked learning: how can investigators, IRB members, and administrators quickly come to terms with best practices in DML research? Not surprisingly, we think digital media in some form can be helpful in that learning process.
The devil, of course, is in the details: identifying what should be shared, how to share that information in the way that is most helpful, and how to get from where we are now to that point. Much of this information sharing already takes place informally, with colleagues contacting one another for advice on protocols, technologies, and the like. Our hope is to create a resource that opens this sharing up a bit more, highlights a core set of ideas held in common across the disciplines that make up DML, and makes the IRB process quicker and more effective.
As a group, we would love to hear your suggestions on how best to improve the IRB process, or any questions you might have, in the comments below.