What kinds of data will aid research-practice partnerships as they aim to improve?

Once the broad questions of interest are identified, more focused questions are needed to structure data collection efforts in ways that aid understanding and provide clues about ways to improve. Data should be trustworthy and responsive to the questions at hand.

It is good to consider a range of sources and tools for gathering data that can support, contradict, or reveal insights about the RPP’s progress and/or effectiveness. A key task before selecting or developing relevant measures is to identify the indicators of each dimension that are relevant to the RPP’s work. For example, to gauge the quality of relationships within the partnership, an evaluation might examine how routinely researchers and practitioners work together; whether these routines promote joint decision making and an understanding of the constraints and resources of each partner’s role; and whether the routines for working together guard against or introduce power imbalances. These indicators are all feasible to measure, but may require data from different sources. Some RPPs, for instance, have collected annual survey data on whether there are structures in place to ensure that all partners have meaningful opportunities to inform the work, exchange information, and communicate with each other through both informal and formal channels. Others might use interviews to assess each partner’s satisfaction with the routines and perceptions of respect for their expertise.

There are multiple ways to gather data, and each comes with its own set of tradeoffs. Data might be gathered using quantitative measures, such as a survey, and/or through qualitative protocols. Each offers strengths and limitations. Quantitative measures are often easy to administer and analyze, but assume a common language. Qualitative protocols allow for more probing questions about the nuances of RPPs and the how and why behind them, but they are more time intensive to administer and analyze. For example, to evaluate how participating in a partnership affects those doing the work, interviews might be conducted to assess whether researchers are pursuing different types of research questions or adopting more participatory approaches to conducting their research. Alternatively, brief online assessments might be administered to evaluate changes in the accuracy of researchers’ understanding of core problems of practice, or administrative data might be analyzed to determine the alignment between current practices and the research findings. At the organizational level, budget data might be collected to assess whether investments have shifted to bring on staff or bolster the data infrastructure to meet the agency’s research and implementation needs.

Another consideration is whether to use well-established measures that are general enough to describe a wide range of RPPs, or measures and data that are designed to inform targeted aspects of a specific partnership or a particular stage in an RPP’s development. For example, an RPP may be asked to provide evidence of its productivity using standard indicators, such as a count of reports, policy briefs, peer-reviewed publications, and presentations. More nuanced data may be needed about which stakeholders attended the presentations, which media sources covered the research findings, or how the findings of the research were reflected in recent policies or practices, in which case document reviews or interviews with practitioners and researchers may be appropriate.

Lastly, data about youth functioning and outcomes can also come from a range of sources. RPPs report relying on routinely collected administrative data as well as supplementing this information with more targeted surveys and interviews. The specific data to be collected and the mode of data collection should be determined by the driving questions and intended impact.

More about this topic:

Evaluating Partnerships for Improvement and Impact