◻ Strategic management: We have a well-written strategic plan, updated regularly and based on broad stakeholder input. It includes our mission statement, goals and strategies to achieve them, since results-focused government requires a clear sense of where we are aiming.
◻ Learning agendas: We develop a multi-year learning agenda, updated each year, that identifies the most important research questions facing our organization and, in doing so, helps us prioritize our evidence and evaluation resources.
◻ Performance leadership: We have a data-driven leadership strategy, often referred to as a PerformanceStat initiative, which we use to identify key challenges, diagnose problems, devise solutions and track results. It is a strategy that drives results, not just a "show and tell" exercise.
◻ Evaluation capacity: We have a chief evaluation office (or something similar) that has the staff capacity and skills for rigorous and independent assessments and is valued by leadership as a resource to inform evidence-based decision-making.
◻ Evaluation funding: We set aside programmatic funds - perhaps a half-percent to 1 percent - for evaluation and analytics activities.
◻ Data sharing: We have a strategy for making our administrative data accessible to program managers and qualified researchers, while protecting privacy, to shed light on program trends, dynamics and impacts, and to identify ways to improve those programs.
◻ Collaboration: We have at least one ongoing initiative to partner across agencies to strengthen results, whether within our own jurisdiction or with a different level of government. An example of the latter is a pilot program that allows a local government more flexibility with federal or state rules (such as the ability to blend funds) in exchange for clear goals, accountability for results and evidence-building to learn what works.
◻ Rapid experimentation: Also called A/B testing, this is a low-cost way to compare the impact of our proposed operational improvements, including those that draw on behavioral insights such as nudges. That could include, for example, testing multiple versions of an email to see which is most effective in an outreach campaign (see the sketch after this list).
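To make the rapid-experimentation item concrete, here is a minimal sketch of how one might compare two email versions with a standard two-proportion z-test. The reply counts and sample sizes are hypothetical, and this particular test is a common choice for A/B comparisons rather than anything the checklist prescribes.

```python
import math

def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for a difference between two proportions.

    Uses the pooled-variance normal approximation, which is reasonable
    when each arm has at least ~10 successes and ~10 failures.
    """
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Hypothetical outreach campaign: version A of the email drew 120
# replies from 1,000 sends; version B drew 90 replies from 1,000.
z, p = two_proportion_ztest(120, 1_000, 90, 1_000)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")  # z ≈ 2.19, p ≈ 0.029
```

In practice an agency would also fix the sample size and comparison in advance; the point here is only that the statistics behind a simple A/B test are lightweight enough for routine operational use.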
There is growing demand for policy based on rigorous evidence. Many consider the strongest evidence to come from studies that identify causality with high internal validity - such as randomized controlled trials (RCTs) - and from systematic reviews of such studies. If policies are based strictly on such rigorous evidence, there is a risk of bias towards simple, discrete, measurable interventions and away from complex interventions. Rigorous evidence is also better suited to some questions than others: evaluations may provide stronger conclusions about impact than about the mechanisms, implementation, context, generalisability and scaling of interventions. For these reasons, policy-making does – and should – consider issues for which there is no conclusive evidence. However, there is little guidance as to how and when such inconclusive evidence can be used. We present a framework for considering inconclusive evidence, applied to examples from evidence-based education in low- and middle-income countries. The framework involves a systematic consideration of the estimated costs, benefits and potential harm of a policy, along with the uncertainty in those estimates. This analysis is conducted using standard decision theory and an examination of the utility of policies. We argue that it is rational to pursue a policy with uncertain outcomes if there is a reasonable probability of large positive utility (compared to the cost of the intervention) and a low probability of negative utility. The decision to act under uncertainty is also influenced by other considerations, including the potential to improve the evidence base, the urgency of the decision and the analysis of alternative options. The framework further calls for systematic analysis of the uncertainty associated with all components of a policy decision: for example, some interventions may have robust evidence of impact but considerable uncertainty about the generalisability of that evidence to a new context, or about the scalability of the intervention. We discuss our approach to measuring and reducing uncertainty in policy decisions and its implications for evaluation and research. The overall aim of this work is to make evidence-based decision-making more effective and applicable to a wider range of problems.
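As a rough illustration of the decision-theoretic argument, the sketch below computes the expected utility of a hypothetical policy across three outcome scenarios and compares it with the cost of intervening. All probabilities, utilities and the cost figure are invented for illustration; they are not taken from the framework itself.

```python
# Hypothetical scenario analysis for a policy with uncertain outcomes.
# The probabilities, utilities and cost below are invented for
# illustration; they do not come from the paper.
scenarios = [
    (0.3, 100.0),  # 30% chance of a large positive utility
    (0.6, 20.0),   # 60% chance of a modest positive utility
    (0.1, -10.0),  # 10% chance of a small negative utility (harm)
]
cost = 15.0  # cost of the intervention, in the same arbitrary units

expected_utility = sum(p * u for p, u in scenarios)  # 30 + 12 - 1 = 41
print(f"expected utility:  {expected_utility:.1f}")
print(f"intervention cost: {cost:.1f}")
print(f"expected net gain: {expected_utility - cost:.1f}")  # 26.0
```

On these made-up numbers the expected utility comfortably exceeds the cost even though one scenario is harmful, which matches the condition the abstract describes: a reasonable probability of large positive utility relative to cost, and a low probability of negative utility.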