Friday, 20 July 2012

Influencing public policy Pt 2 - People don't like change!

In my previous post on the series of public policy seminars I've been attending I talked about the challenges around tax reform in Australia at the moment. Today I want to reflect on a more general seminar on the policy process and contemporary policy challenges.

A new Kennedy School of Government?

The seminars I've attended this week have been put on by ANU's Crawford School of Public Policy to celebrate the launch of its new Institute of Public Policy under former Treasury Secretary Ken Henry as Executive Chair.

The Institute of Public Policy apparently has ambitions to rival Harvard's Kennedy School of Government in terms of influence and prestige (wasn't that what ANZSOG was supposed to be doing? And more than a few other ventures since then have made similar claims...?).

In any case, Tuesday's seminar was a series of presentations by some of the newly appointed Public Policy Fellows (basically a rebadging exercise of assorted ANU academics), followed by a contribution from journalist Lenore Taylor and a panel discussion chaired by ex-politician and now (inter alia) ANU Chancellor, Gareth Evans.

There were a couple of extremely entertaining and insightful presentations - notably from Bruce Chapman (on the realities of the policy process), Gabriele Bammer (on why academics aren't good at some types of policy problems, and the case for a new discipline of Integration and Implementation Science), and Peter McDonald (on the demographics of population ageing and its effects on countries).

But some rather less so! 

Ken Baldwin, a scientist (or should one say climate change warrior?), presumably won't be reading this because blogs, or indeed anything not in a peer reviewed journal, have 'no value' in his view. Bureaucrats (and the media), he argued, should take the advice of the academy as to who the real experts on a particular subject are, and accept it unquestioningly!

Thankfully ANU Chancellor and Chair of the seminar Gareth Evans dubbed his approach the priesthood approach to science and challenged it.

Kim Rubenstein, a lawyer, tried to suggest that failed litigation, appealed all the way to the High Court, was a sensible way of getting an entirely unintended minor legislative technical anomaly corrected.

And the argument about the need for 'policy champions' from the other 'gender warrior' (and lawyer) present, Margaret Thornton, just came across as naive. Margaret complained about anti-intellectualism in the media, and the extent to which 'academic' has become a term of abuse. But her view that academics are the 'idea generators' while public servants are the mere 'deliverers', and her complaint about having to actually persuade someone in the system of one's views rather than have them accepted as self-evident, perhaps helps explain just why academics don't get the respect they think they deserve!

The virtue of humility?

There was also an interesting debate on whether being in the media frequently in order to get your views out (as economist Warwick McKibbin is) is helpful or outright counter-productive.

In fact, the consensus seemed to be that in general, one's influence is inversely related to one's media profile!

There was a particularly nice anecdote from Bruce Chapman about an econometrician in his department who didn't view himself as impacting on public policy at all -  but in fact had quietly changed the economic models used by several countries, with significant policy consequences.

Similarly, there was something of a consensus that while saying something public can be helpful early on in a policy development process, there is a point when it becomes counter-productive.

Perspectives our bishops and their apparatchiks might take due note of...

The psychology of change

One of the questioners from the floor queried why there were no psychologists or sociologists amongst the newly selected fellows. I'm not sure whether I agree that is a real concern, but certainly some understanding of basic psychological principles is important if you want to influence public policy.

The reality is that most people, however discontented they may be, are reluctant to move away from the status quo: they are extremely risk averse and work on the 'devil you know' principle.

Gabriele Bammer's presentation suggested some of the reasons why that is the case. In particular, she argued, academics tend to struggle to identify and deal with the effects of the 'unknowns' that lead to unintended consequences when policies are implemented. Academics (and policy makers) also tend to assume universal laws, such that policies will work the same way wherever they are transplanted: but in reality, context matters.

Her presentation was in effect a plea for a degree of humility on the part of academics: an acknowledgement of the limitations of the advice they can offer, and an appreciation of the different skill sets of policy-makers and bureaucrats respectively. But it was also a plea for thinking more broadly, on a cross-disciplinary basis.

The case of nuclear power

Unfortunately, as several speakers pointed out, academia does not typically work well across the silos.

Ken Baldwin, for example, lamented the effects of the Fukushima accident on the nuclear power debate. The earthquake and tsunami in Japan, he noted, had killed thousands; the resulting nuclear accident had killed no one (and, he claimed, was unlikely ever to kill anyone). Yet any prospect of nuclear power in Australia was once more off the agenda as a result.

But he was wrong on the facts. The latest epidemiological study (in a peer reviewed journal too!) suggests that up to 1,300 cancer deaths are possible. And of course there are the other major impacts he might want to consider: there were a number of deaths due to the indirect effects of the accident; there were mass evacuations; it will be decades before the land surrounding the plant is habitable again, so thousands have lost their homes forever; and the shutdown of the reactor is still impacting on electricity availability, and hence production, in Japan.

More fundamentally, there is the trust element involved. The failure of planners to take adequately into account the possibility of a tsunami of the size that occurred, and the subsequent mismanagement of the disaster, illustrate the real problems of effective regulation in areas of this kind: catastrophic risks tend consistently to be underestimated; regulators can be subject to capture by the regulated; and politicians and others are often reluctant to act swiftly and decisively to take prudential measures that impose large costs on the electorate (such as evacuation). Nuclear power itself might in principle be safe if managed properly, but can we know that it will be, given the human factors involved?

Finally, there is a basic tenet of the risk management literature that applies here: people are much more concerned about risks that are not under their own control. In the end we can't prevent an earthquake or tsunami; we can, however, ensure there is never a nuclear accident! In most countries it is many, many times safer to get on board an airplane, for example, than to go for a drive in your car. Why are we willing to spend so much on aviation regulation but not, for example, to take measures such as banning trucks from the roads? The psychologists will tell you that it is because behind the wheel of a car we have at least the illusion of control, whereas as a passenger in a plane we do not. The basic principle: the bigger the potential risk, and the less control an individual has over it, the more risk averse he or she will be.

Now it seems to me that scientists who wish to express views on such subjects, and presumably teach them to their students in the context of public policy development, have a duty to familiarise themselves with the literature not just of their own field, such as the physics of nuclear energy, but also the literature on risk management and regulatory failure.

If you aren't prepared to do that (and not everyone has the time or skills to make that effort), you have to be prepared to accept that your discipline's perspective will need to be integrated with those of others, so that your view of the best possible outcome will not necessarily prevail. A purely technical, pure-science perspective is never going to be enough.

And that's a lesson that applies in many other fields too, because if you want to seriously try and get nuclear energy (or any other 'elephant in the room' type issue such as promoting the virtues of adoption over abortion) back on the table in this country, you need to find a way of addressing those broader contextual issues as well.

Hasten slowly: the case of HECS

But lest it be thought that I'm just picking on the scientist, a similar lack of appreciation of the broader psychology of change seemed to me to underpin former Treasury Secretary Ken Henry's attack on Governments' over-compensating, or 'bribing', of the electorate to get tax reform through.

His comments attacking 'overcompensation' on Monday actually contrasted quite nicely with Bruce Chapman's discussion on Tuesday of how he managed to sell income-contingent loans for higher education (HECS) to the Labor Government in 1988.  The issue was how to make a very big change indeed: to move from a system where Australian students paid no fees whatsoever for higher education (courtesy of Whitlam reforms) to one where (unlike the pre-Whitlam system where around 80% of students received scholarships covering their fees) pretty much everyone paid.

There was a good case for re-introducing fees: the education system needed to be expanded, but no one wanted to increase taxes to pay for it back then; and the system of free higher education had proved to be highly regressive.  But while there would certainly be winners (ie the extra students who would be able to get a University place and the academics employed to teach them), there would also be a lot of losers in the form of students now forced to pay.

Providing income-contingent loans - that is, paying back the fee only once you were earning enough to be able to afford to do so - helped soften the blow. It was in fact a form of 'overcompensation', if you like (particularly since students initially didn't pay any interest), to ease in a major change.
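To make the mechanism concrete, here is a minimal sketch in Python with an entirely hypothetical threshold and repayment rate (not the actual HECS schedule of any year): nothing is repaid until income crosses a threshold, and above it a share of income is repaid, capped at the outstanding debt.

```python
# Illustrative sketch only: hypothetical threshold and rate, not the real HECS schedule.

def annual_repayment(income: float, debt: float,
                     threshold: float = 50_000,
                     rate: float = 0.04) -> float:
    """Income-contingent repayment: nothing owed below the threshold;
    above it, a share of income is repaid, capped at the remaining debt."""
    if income < threshold or debt <= 0:
        return 0.0
    return min(rate * income, debt)

# A graduate on a modest income repays nothing; repayments only begin
# once earnings are high enough to afford them.
print(annual_repayment(income=35_000, debt=20_000))   # 0.0
print(annual_repayment(income=65_000, debt=20_000))   # 2600.0
```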

Moreover, it was a particularly bad form of overcompensation from a Treasury/Finance perspective, because its design meant that while the expansion of higher education was going to start immediately, the revenue raised from fees came some way down the track: students didn't start paying back their loans until they graduated and started earning a reasonable amount of money.

Yet because of these very features, the HECS system has been an enormous success: it now has wide acceptability, even having recently been expanded to TAFE.  It has been adopted overseas.  And it has helped to fund a vast expansion of our higher education system.

Why Ken Henry is wrong on over-compensation...

Turning back to Ken Henry's concerns about 'bribing' people in order to get tax reform through, it may be that Henry's real concern goes to the quantum of the compensation given in the case of the carbon price. But he put it as a general principle, so let's look at it in those terms.

Imagine a Government wants to introduce a measure like carbon pricing. Inevitably there will be some winners (those whose rooftop solar panels now save them more money, or who grow their own food) and some losers (those who have to buy in energy, or products requiring transport).

The Government could take a couple of approaches.

It could, for example, decide not to offer any compensation at all, instead selling the virtues of the reform and attempting to persuade the 'losers' that they are sacrificing for the good of the future of the world.

For that to work, you first have to be able to do an extremely good sales job, something that seems pretty much beyond the capability of Government at the moment. But even then, it will surely only be tenable if the price increases the 'losers' face are relatively small, and fall mostly on people who can afford them.

If, however, as in the case of a carbon price, the effects are actually likely to be highly regressive and quite large, offering compensation to people makes sense.

Now from a purely economic perspective, in an ideal world, you could exactly compensate people for the impacts of the carbon price and still get the desired policy outcome. The idea is that if you give them the exact amount of money they need to maintain their old consumption pattern, they mostly won't actually keep spending their money the way they used to. Because the price of electricity has gone up relative to food, for example, they might decide to turn off the heater for a few extra hours and buy a nice bottle of wine instead!
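A stylised numerical sketch, with made-up numbers and a deliberately simple assumption (the household spends a fixed share of its budget on electricity), shows how this works: even when the household is handed enough cash to buy its old bundle, it still cuts back on electricity because the relative price has changed.

```python
# Toy household splitting a budget between 'electricity' and 'everything else'.
# All figures are invented purely for illustration.

budget = 100.0          # weekly budget
share_elec = 0.5        # share of the budget spent on electricity
p_elec_old, p_other = 1.0, 1.0
p_elec_new = 1.5        # carbon price pushes electricity up 50%

elec_old = share_elec * budget / p_elec_old            # 50 units

# 'Exact' compensation: enough cash to keep buying the old bundle.
compensation = elec_old * (p_elec_new - p_elec_old)    # 25
new_budget = budget + compensation

elec_new = share_elec * new_budget / p_elec_new        # about 41.7 units
other_new = (1 - share_elec) * new_budget / p_other    # 62.5 on other goods

print(elec_old, elec_new, other_new)
# Even fully compensated, the household buys less electricity and more of
# everything else - the price signal still does its work.
```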

In the real world, however, there are a few problems with the notion of 'exact' compensation.

First, outside the purist world of Treasury modelling, it is pretty much impossible to design an effective compensation mechanism tailored to individual household consumption patterns (costs will increase with the size of the family for example), and to take account of the variability of expenditures (say if a winter is particularly cold, or a summer particularly hot).

The reality is that any system is going to be based on averages and assumptions, and the timing of the payments will not match up with when households actually face the higher costs, because of institutional constraints and more.

All of which means that your guesstimates of how much compensation is needed may prove to be quite wrong. The fact that you are unlikely to be able to get it right (or even close to right) for a lot of people is one of the good reasons for 'over-compensating'.
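A toy illustration of that point, using entirely invented figures: a flat payment calibrated to the average household's extra costs will inevitably leave some households over-compensated and others well short.

```python
# Invented figures only: a flat payment based on average energy use
# over- and under-compensates households whose usage differs from the average.

price_rise_per_unit = 0.5      # hypothetical extra cost per unit of energy

# Annual energy use varies with family size, housing, climate...
households = {"single person, small flat": 30,
              "typical family": 60,
              "large family, cold winter": 110}

avg_use = sum(households.values()) / len(households)
flat_payment = avg_use * price_rise_per_unit

for name, use in households.items():
    extra_cost = use * price_rise_per_unit
    gap = flat_payment - extra_cost   # positive = over-compensated
    print(f"{name}: extra cost {extra_cost:.0f}, payment {flat_payment:.0f}, gap {gap:+.0f}")
```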

The second reason for 'overcompensating' seems to me to relate to what economists call adjustment costs. If prices go up, we can't always immediately change our consumption patterns. And doing so is not costless: we need to think about it and make decisions, and we may need to buy new, more energy-efficient fridges and heating systems, for example, or an extra cardigan!

The third reason is that some goods are more substitutable than others, and some dollars are more valuable than others.  In theory as the price goes up we can cut back the length of our showers, turn off the heater and so forth. But there are some kinds of cuts to our lifestyle that we are likely to feel far more acutely than others, and those on low incomes with little disposable income will feel the impact of increases in 'fixed' costs (such as electricity) far more acutely than those on much higher ones. 

All of these factors also influence our reaction to change: most people are risk averse to some degree, and those struggling to get by financially (or who believe that they are struggling) are likely to be far more risk averse than someone on the income of a former Treasury Secretary turned academic!

So all in all, 'bribing' people - or rather, hastening slowly and cushioning the initial impact of major reforms - is, in my view, good policy.

And there are lessons in that in trying to sell some of the kinds of other policy reforms we'd all like to see...

More in the next part of this series.
