Assessing Outcomes of Ethics & Professionalism Training
Our long-term goal is to contribute to the development of evidence-based training programs and policies that help researchers do good research.
Since 1989, NIH has required instruction in the responsible conduct of research for all funded trainees.
We have examined how institutions funded with an NIH Clinical and Translational Science Award (CTSA) discharge this responsibility:
DuBois JM, Schilling D, Heitman E, Steneck N, Kon A. Instruction in the Responsible Conduct of Research: An Inventory of Programs and Materials within CTSAs. Clinical and Translational Science. 2010; 3(3): 109-111.
We found significant variability in the kinds of programming offered at CTSA institutions.
We have conducted Delphi consensus surveys to establish consensus on the appropriate goals and content for such programs.
Very few validated measures exist for assessing the outcomes of RCR training or the factors related to research integrity. We recently published a review of existing measures.
We have developed a test of general knowledge of the responsible conduct of research (ORI), a measure of professional decision making in research, and a test of biased thinking about regulatory requirements. The Professional Decision-Making in Research (PDR) measure builds upon the work done on ethical decision-making by the Mumford lab at the University of Oklahoma. It assesses the use of strategies for making decisions that help to compensate for uncertainty, the potential for bias, the negative impact of stress, and the complexity of situations.
The How I Think about Research (HIT-Res) test measures the degree to which thinking about research compliance reveals self-serving biases, such as minimizing the harmfulness of violations or assuming the worst. It builds upon previous work by John Gibbs at Ohio State. These two measures have been validated in studies involving 700 NIH-funded researchers. An article on the HIT-Res is currently in press with Science and Engineering Ethics.
Previously, we assessed the impact of one training program using multiple measures and found what others have found: changes are modest.
More recently, we have conducted a needs assessment for remediation education.
We currently have funding from the Office of Research Integrity to examine outcomes of the PI Program.
Additionally, in 2015 and 2016, we will explore the role of experience, culture, and acculturation in professional decision-making in research. This project will include the development and validation of the Rating Values in Science measure, an adaptation of the Schwartz Portrait Values Questionnaire.