What’s Wrong with Public Policy Education

Thursday, August 2, 2018

This article was originally published on August 1, 2018 by The American Interest

Public policy education is ripe for an overhaul. Rethinking how it should be done has been one of my major preoccupations over the past decade, and is the focus of work that we are doing now at Stanford in restructuring our Master's in International Policy (MIP) Program. It is also at the core of several mid-career programs we run at the Center on Democracy, Development, and the Rule of Law (CDDRL), such as the Leadership Academy for Development (LAD). The essence of the needed transformation is to shift the focus from training policy analysts to educating leaders who can accomplish things in the real world.

Public policy education in most American universities today reflects a broader problem in the social sciences: the dominance of economics. Most programs center on teaching students a battery of quantitative methods that are useful in policy analysis: applied econometrics, cost-benefit analysis, decision analysis, and, most recently, the use of randomized experiments for program evaluation. Many schools build their curricula around these methods rather than around substantive areas of policy such as health, education, defense, criminal justice, or foreign policy. Students come out of these programs qualified to be policy analysts: they know how to gather data, analyze it rigorously, and evaluate the effectiveness of different public policy interventions. Historically, this approach began with the RAND Graduate School in the 1970s (which has since undergone a major rethinking of its approach).

There is no question that these skills are valuable and should be part of a public policy education. The world has undergone a revolution in recent decades in terms of the role of evidence-based policy analysis, where policymakers can rely not just on anecdotes and seat-of-the-pants assessments, but on statistically valid inferences that intervention X is likely to result in outcome Y, or that the millions of dollars spent on policy Z have actually had no measurable impact. Evidence-based policymaking is particularly necessary in the age of Donald Trump, amid the broad denigration of inconvenient facts that do not suit politicians' prior preferences.

But being skilled in policy analysis is woefully inadequate to bring about policy change in the real world. Policy analysis will tell you what the optimal policy should be, but it does not tell you how to achieve that outcome.

The world is littered with optimal policies that don't have a snowball's chance in hell of being adopted. Take for example a carbon tax, which a wide range of economists and policy analysts will tell you is the most efficient way to abate carbon emissions, reduce fossil fuel dependence, and achieve a host of other desired objectives. A carbon tax has been a nonstarter for years due to the protestations of a range of interest groups, from oil and chemical companies that do not want to pay more for the fuel inputs to their industrial processes, to truckers, cabbies, and ordinary drivers who do not want to pay more for the gas they use to commute to work. Implementing a carbon tax would require a complex strategy bringing together a coalition of groups that are willing to support it, figuring out how to neutralize the die-hard opponents, and convincing those on the fence that the policy would be a good, or at least a tolerable, thing. How to organize such a coalition, how to communicate a winning message, and how to manage the politics on a state and federal level would all be part of a necessary implementation strategy.

It is entirely possible that an analysis of the implementation strategy, rather than analysis of the underlying policy, will tell you that the goal is unachievable absent an external shock, which might then mean changing the scope of the policy, rethinking its objectives, or even deciding that you are pursuing the wrong objective.

Public policy education that sought to produce change-makers rather than policy analysts would therefore have to be different.  It would continue to teach policy analysis, but the latter would be a small component embedded in a broader set of skills.

The first set of skills would involve problem definition. A change-maker needs to query stakeholders about what they see as the policy problem, understand the local history, culture, and political system, and define a problem that is sufficiently narrow in scope that it can plausibly be solved.

At times reformers start with a favored solution without defining the right problem. A student I know spent a summer working at an NGO in India advocating the use of electric cars in the interest of carbon abatement. It turns out, however, that India's reliance on coal for marginal electricity generation means that more carbon would be put into the air if the country were to switch to electric vehicles, not less, so the group was actually contributing to the problem it was trying to solve.
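The arithmetic behind this example is easy to sketch. The back-of-envelope comparison below uses purely illustrative figures (every constant is an assumption made for the sake of the sketch, not measured Indian data), but it shows how a coal-heavy grid can push an electric car's per-kilometer emissions above a gasoline car's:

```python
# Back-of-envelope CO2-per-kilometer comparison. All constants are
# illustrative assumptions, not measured data.

COAL_GRID_G_CO2_PER_KWH = 950    # assumed: coal-fired marginal generation
EV_KWH_PER_KM = 0.18             # assumed electric-vehicle consumption
GASOLINE_G_CO2_PER_LITER = 2310  # approximate combustion CO2 per liter
CAR_LITERS_PER_KM = 0.07         # assumed ~14 km/l gasoline car

# Upstream emissions for the EV vs. tailpipe emissions for the car
ev_g_per_km = COAL_GRID_G_CO2_PER_KWH * EV_KWH_PER_KM
gas_g_per_km = GASOLINE_G_CO2_PER_LITER * CAR_LITERS_PER_KM

print(f"EV on coal grid: {ev_g_per_km:.0f} g CO2/km")   # 171 g/km
print(f"Gasoline car:    {gas_g_per_km:.0f} g CO2/km")  # 162 g/km
```

With cleaner marginal generation (say, 400 g CO2/kWh), the same arithmetic flips decisively in the EV's favor, which is exactly why problem definition has to start from local context.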

As Matt Andrews, Michael Woolcock, and Lant Pritchett at the Kennedy School have argued, it is important to start from a concrete problem, that is, something that other people believe to be a problem. This means that there is already a constituency in favor of change, which can be the starting point for a coalition of people who want a different outcome.

The second set of skills concerns solutions development. This is where traditional policy analysis comes in: It is important to generate data, come up with a theory of change, and posit plausible options by which reformers can solve the problem they have set for themselves. This is where some ideas from product design, like rapid prototyping and testing, may be relevant.

The third and perhaps most important set of skills has to do with implementation. This begins necessarily with stakeholder analysis: that is, mapping the actors who are concerned with the particular policy problem, either as supporters of a solution or as opponents who want to maintain the status quo. From an analysis of the power and interests of the different stakeholders, one can begin to build coalitions of proponents and think about strategies for expanding the coalition and neutralizing those who are opposed. A reformer needs to think about where resources can be obtained and, very critically, how to communicate one's goals to the stakeholder audiences involved. Finally comes testing and evaluation, in the expectation that there will be a continuous and rapid iterative process by which solutions are tried, evaluated, and modified. Randomized experiments have become the gold standard for program evaluation in recent years, but their cost and length of time to completion are often the enemies of rapid iteration and experimentation.
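One common way to structure such a stakeholder analysis is a power/interest grid. The sketch below is a toy illustration of that heuristic; the stakeholder names, scores, and stances are hypothetical, invented for the example rather than drawn from any real policy fight:

```python
# Toy stakeholder map using a standard power/interest-grid heuristic.
# All stakeholders, scores, and stances below are hypothetical.

def quadrant(power, interest, threshold=5):
    """Classify a stakeholder (scores 0-10) into a grid quadrant."""
    if power >= threshold and interest >= threshold:
        return "manage closely"   # core allies or key opponents
    if power >= threshold:
        return "keep satisfied"   # powerful but not yet engaged
    if interest >= threshold:
        return "keep informed"    # engaged but with little leverage
    return "monitor"

stakeholders = {
    # name: (power, interest, stance)
    "industry association": (8, 9, "opposed"),
    "reform NGO":           (3, 9, "supportive"),
    "finance ministry":     (9, 4, "undecided"),
    "general public":       (2, 3, "undecided"),
}

for name, (power, interest, stance) in stakeholders.items():
    print(f"{name:22s} {stance:10s} -> {quadrant(power, interest)}")
```

The point of the exercise is not the numbers but the conversation they force: which high-power actors must be won over or neutralized, and which supporters need to be mobilized to expand the coalition.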

This three-stage process should not be thought of as a linear one, but rather one in which there is constant circling back to earlier stages in light of what one has learned in later ones. A failed solution may send one back to another option that was less favored but more feasible; indeed, it may force one to redefine the problem altogether. A proper stakeholder analysis should be fed back into the solution development: It may tell you that Option A, which produces the best outcomes, is impossible to implement, requiring moving to Option B or C instead.

This framework for policy problem-solving is not rocket science; rather, it is common sense. Stakeholder analysis is something that every successful politician in the world has performed from the beginning of time. That is the essence of politics: generating sufficient power by gathering allies and undermining opponents. It is just that good politicians do this intuitively rather than by applying a structured methodology.

Accomplishing this properly, however, requires thorough knowledge of the local context, and it is appreciation for context that is most severely lacking in the way public policy is taught today. That will be the subject of a subsequent post.