What makes for an effective research-practice partnership?
Begin the assessment or evaluation by clarifying the definition of “effective.” Effective at what? A new partnership, for example, might focus on how well partners work together. More mature RPPs are better poised for evaluations of impact. For example, these partnerships might be ready to assess an agency’s or organization’s capacity to turn research findings into improvements. Much of this capacity depends on how individual leaders engage with research findings and whether they can apply them.
Researcher Cynthia Coburn suggests, “We know that good working relationships are essential. Having structures and processes to facilitate open and active communication is important, and having regular meetings and processes in place to ensure mutuality are signs that partnerships might be effective.” She adds that cooperation in understanding research findings is an additional sign of effectiveness. “A lot of reports go into piles because people don’t have time to make meaning of them.”
Coburn and Joshua Glazer have assessed partnerships in terms of their “absorptive capacity,” or the capacity of an organization to incorporate knowledge from external sources. This concept is promising because it may help partnerships better support partners’ capacity to use research.
What qualities of RPPs might be evaluated to inform improvement efforts?
Ultimately, partnerships want to know whether their efforts are making a difference for youth and whether they could operate more effectively. A recent paper by Henrick, Cobb, Penuel, Jackson, and Clark, Assessing Research-Practice Partnerships: Five Dimensions of Effectiveness, describes the qualities that cut across different types of partnerships, and that partners—on both the practice and research side—say are essential to their work. The paper offers a compelling framework for evaluating RPPs in ways that can inform improvement efforts.
As outlined by Henrick and colleagues, partnership members frequently want to know about five key dimensions of their work together. These include:
- the quality of the relationships within the partnership;
- whether the research produced is relevant, timely, and rigorous;
- the availability of supports that aid the use of the research produced;
- whether the capacity of the participating researchers, practitioners, and organizations is advancing; and,
- the value of the research produced for other improvement efforts.
Thoughtful evaluations that probe deeply into the challenging work of RPPs, paired with honest conversations about what is learned, will ultimately aid the functioning and impact of RPPs.
What kinds of data will aid research-practice partnerships as they aim to improve?
Once the broad questions of interest are identified, more focused questions are needed to structure data collection efforts in ways that aid understanding and provide clues about ways to improve. Data should be trustworthy and responsive to the questions at hand.
It is good to consider a range of sources and tools for gathering data that can support, contradict, or reveal insights about the RPP’s progress and/or effectiveness. A key task before selecting or developing relevant measures is to identify the indicators of each dimension that are relevant to the RPP’s work. For example, to gauge the quality of relationships within the partnership, an evaluation might examine how routinely researchers and practitioners work together; whether these routines promote joint decision making and an understanding of the constraints and resources of each partner’s role; and whether the routines for working together guard against or introduce power imbalances. These indicators are all feasible to measure, but they may require data from different sources. Some RPPs, for instance, have collected annual survey data on whether structures are in place to ensure that all partners have meaningful opportunities to inform the work, exchange information, and communicate with each other through both informal and formal channels. Others might use interviews to assess each partner’s satisfaction with the routines and perceived respect for their expertise.
There are multiple ways to gather data, and each comes with its own tradeoffs. Data might be gathered using quantitative measures, such as surveys, and/or through qualitative protocols. Each offers strengths and limitations. Quantitative measures are often easy to administer and analyze, but they assume a common language. Qualitative protocols allow for more probing questions about the nuances of how and why RPPs work, but they are more time intensive to administer and analyze. For example, to evaluate how participating in a partnership affects those doing the work, interviews might be conducted to assess whether researchers are pursuing different types of research questions or adopting more participatory approaches to conducting their research. Alternatively, brief online assessments might be administered to evaluate changes in the accuracy of researchers’ understanding of core problems of practice, or administrative data might be analyzed to determine the alignment between current practices and the research findings. At the organizational level, budget data might be collected to assess whether investments have shifted to bring on staff or bolster the data infrastructure to meet the agency’s research and implementation needs.
Another consideration is whether to use well-established measures that are general enough to describe a wide range of RPPs, or measures and data designed to inform targeted aspects of a specific partnership or a particular stage in an RPP’s development. For example, an RPP may be asked to provide evidence of its productivity using standard indicators, such as a count of reports, policy briefs, peer-reviewed publications, and presentations. Or, more nuanced data may be needed about which stakeholders attended the presentations, which media sources covered the research findings, or how the findings were reflected in recent policies or practices, in which case document reviews or interviews with practitioners and researchers may be appropriate.
Lastly, and similar to the options described above, data about youth functioning and outcomes can come from a range of sources. The RPPs interviewed reported relying on routinely collected administrative data, supplemented with more targeted surveys and interviews. The specific data to be collected and the mode of data collection should be determined by the driving questions and intended impact.