Is Research Impact Observable?
This is the fourth post in the series Bracing for Impact by Dr. Elizabeth N. Farley-Ripple. The first in the series was Exploring the Concept of Research Impact; the second asked Is research impact different than research use?; and the third, A layer cake of motivations, discussed how research impact matters.
Welcome to the fourth post on research impact. The focus of this week’s blog is a common challenge in the research use space: measurement. I’ve blogged on this before, as have others, for the William T. Grant Foundation as part of its Use of Research Evidence initiative. But, as we’ve already discussed, research use and research impact are not the same. So I posed the question to the thought leaders I spoke with – leaders of government agencies, funding organizations, institutions of higher education, innovative programs, research-practice partnerships, and professional associations: Is research impact observable? How might we capture it? Like the importance of impact, this was a point of convergence: all respondents agreed that research impact is observable and that a number of methods are available to capture it. At that point, however, convergence ends.
Measuring Research Impact
Respondents varied widely in their perspectives on how impact might be measured. At one end is the argument that impact in and of itself implies causality, and that only through methods suited to causal inference can we really capture impact. This, however, is regarded as challenging at best, in no small part because of the absence of a counterfactual: if the research had not been introduced, would the decision have been different? Given the highly situated, accreting nature of decision-making and the widespread belief in the importance of research relevance to use, it is hard to imagine estimating a counterfactual except under rare conditions – for example, information interventions delivered in an experimental format. I’ll leave it to you to imagine the possibilities of this for understanding research impact at scale.
Less rigorous evaluation methods were also raised, and were held to be equally problematic. References to research or explicit citations in decision-making or policy were suggested, but as one respondent noted, “90% of what we use, you will never know we used.” In other words, citations may indicate impact, but their absence does not indicate the opposite, leading to an inestimable number of false negatives.
Promising Options
In spite of repeated concerns, two lines of thinking emerged as promising (in my view). First, there is perhaps a natural sequencing of indicators to which we ought to pay attention. For research to have impact, it must first be seen, read, or otherwise engaged with; prior to that, it must be accessed; and prior to that, it must be made accessible. So although we have a set of highly imperfect measures of impact, we may have indicators that help us move through that sequence. For example, publishing in a journal, a magazine, or any other outlet is a far cry from use, but if we don’t put research out to be consumed, it can’t be consumed. As I tell my son, there is a zero percent chance of getting a hit if you don’t swing the bat (I say this a lot). And citation rates, downloads, and views – also imperfect – at least mean that someone is accessing the research. You can see where I’m going.

The second is that this approach to measurement demands a well-thought-out (ideally, well-researched) logic model with corresponding indicators at each point, from inputs to outputs to short- and long-term outcomes (what Dan Goldhaber refers to as deathbed impact – the impact we actually care about). While I’m certain I am not the first to suggest this, none of those I spoke with referred to any such tool that guided their thinking or their work.
Measurement of research use and research impact will always be challenging. I’ll add that the measures of impact emerging in the UK and elsewhere in response to accountability requirements are themselves widely debated. But ideas about how to do this, albeit imperfectly, abound. To turn the discussion over, let me pose these questions to you: How is research impact measured in your institution? What tools, especially well-articulated logic models, exist to help us develop shared understandings about measures of impact?
Author’s Note: I’d like to acknowledge the support of the William T. Grant Foundation for creating the opportunities that resulted in this line of work, and Vivian Tseng and Mark Rickinson for their generosity in letting me bounce ideas off them. I’d especially like to acknowledge the six thought leaders who volunteered their valuable time to contribute to this project.
About the Author:
Dr. Elizabeth N. Farley-Ripple is an Associate Professor in the School of Education at the University of Delaware. Her research focuses on policy analysis and evidence-based decision-making in schools. She can be reached at enfr@udel.edu.