The Theory of Macroeconomic Policy


Contents

(A) The Burden of the Debt and Functional Finance
(B) Optimal Macroeconomic Policy
(C) Monetary Policy

(A) The Burden of the Debt and Functional Finance

"Deficit-spending" was the first great political challenge that the Keynesians had to face down from the very beginning. The Employment Act of 1946, for instance, was most fiercely resisted on the basis that it would lead to unbalanced budgets. As quickly as the word spread, "Keynesianism" and "deficit-spending" became virtually synonymous in the lexicon of politicians.

The basic problem with government budget deficits, in the layman's eyes, derived from reasoning by personal analogy: just as any person must pay back his personal debts by curtailing future expenditure, the accumulation of government debt through current deficit-spending would imply that future generations would be burdened with its payment. Thus, the "burden of the national debt" was that future generations would be forced to accept lower standards of living in order to pay back the debt incurred previously by deficit-spending.

This "burden of the debt" notion is, of course, completely ludicrous - regardless of one's theoretical framework. John Maynard Keynes had already railed against this mistaken idea in his fiery pamphlet with Hubert D. Henderson, Can Lloyd George Do It? (1929). As Abba Lerner put it, the "national debt is not a burden on posterity because if posterity pays the debt it will be paying it to the same posterity that will be alive at the time when the payment is made." (Lerner, 1944: p.303). Furthermore, as Evsey Domar (1944) clearly demonstrated, the debt-income ratio would disappear over time anyway, provided the economy grew fast enough (and, as Domar reminds us, it will only grow fast enough if aggregate demand is sufficiently high - thus government expenditures are actually necessary to diminish this "debt burden"!)

However, many early Keynesian economists, like Alvin Hansen, still objected to the maintenance of government budget deficits over the long run. They still considered it a useful policy objective to attempt to eliminate deficits - what can be termed "sound finance". The great Keynesian logician, Abba Lerner (1941, 1943, 1944, 1951, 1973), took pains to demonstrate that this should not be an objective of government policy. Instead, he argued, government policy should be governed entirely by the principles of "functional finance".

In Lerner's perspective, taxing and spending, borrowing and lending, buying and selling are the tools available to the government to influence the economy. The primary question facing governments is how to ensure that their impact on the economy is most beneficial, regardless of whether they increase or decrease government debt. In Lerner's view, there are three policy principles governments should adhere to:

(1) adjust taxation and government spending such that output is at full employment and there are no inflationary pressures (and not with the objective of "raising revenues" or "closing the deficit");

(2) borrow and repay the debt only as a means of changing the proportions by which the public holds bonds and money (and not to "raise funds" or "repay debt");

(3) print and destroy money as necessary to reconcile the policies in (1) and (2).

Lerner argued that the volume and structure of government activity - via its purchases, sales, subsidies, taxes, transfers, borrowing, lending, printing, destroying, etc. - both directly and by impacting incentives, could change the volume and structure of output in order to eliminate the greatest inefficiency of a modern capitalist system - the massive waste of resources known as "unemployment" and "low capacity utilization". He argued that the deficit and national debt can keep rising at no cost to the economy, now or in the future; and, at any rate, there is an automatic tendency for the budget to be balanced in the long run if these guidelines are followed. The main point was that at no point should this spending be funded by taxes; taxes should be imposed only to reduce private spending in inflationary times.

Lerner's proposals were greeted with alarm - even John Maynard Keynes himself initially objected to them. However, Lerner's reasoning was based on a rather clear demonstration that the dangers that "sound finance" was supposed to avert were either illusory or already addressed by "functional finance". Firstly, as we have seen, the "burden of debt" arguments that lay underneath much of the "sound finance" reasoning are really a red herring: deficits do not transfer resources across generations but rather only redistribute purchasing power within generations. Secondly, the fear that debt-financing and/or an increasing money supply is inflationary is eliminated if principle (1) is adhered to. Thirdly, the fear that government debt will "crowd out" private investment is eliminated by principle (2) (although we will have more to say on this later).

In sum, Lerner argued that the functional effects of government policy should be the primary and sole criterion for judging whether it is a good or bad policy. Whether these lead to more or less public debt should not even be considered. This does not mean, Lerner reminds us, that debt has no impact - indeed, it has an allocative impact in the sense of reallocating income from taxpayers to bondholders - but whether the government budget is in deficit or in surplus does not, in and of itself, matter and should not be a policy concern.

Most Neo-Keynesians soon absorbed Lerner's "functional finance" ideas and, for most of the post-war period, government fiscal and monetary policies were indeed assessed largely in terms of their effects on output and employment, regardless of whether these increased or decreased the public debt. This is precisely what is done in standard IS-LM exercises: the purpose of taxation or government spending is to shift the IS curve around, and that of increasing or decreasing the supply of money and bonds is to move the LM curve, so that output and employment will be at full employment - and not because it is necessary to "raise revenue", "raise funds", "close the deficit", etc. However, this was to be disputed in the 1970s, in the analysis of the "long-run" macroeconomy.

(B) Optimal Macroeconomic Policy

One of the advantages of Keynes's General Theory was the provision of a closed, interdependent "general equilibrium" theory based on a few simple, essential relationships, such as that between consumption and income, investment and interest rates, etc. It was fortuitous that national income accounts, which began to be collected in the 1920s by institutions and economists such as Simon Kuznets (1934, 1937, 1941) in the US and Colin Clark (1932, 1937), Richard Stone and James Meade (1941, 1944) in the UK, contained within them these very categories. The still infant discipline of econometrics, bogged down in the estimation of individual demand curves for which data was generally lacking and specification problems were rife, found that Keynes's simple equations and the availability of the relevant data were a heaven-sent pair on which they could apply and hone their techniques - and, to sweeten the task, any results achieved in the process would be highly useful for public policy.

Jan Tinbergen (1939) was among the first to apply the "new econometrics" to macroeconomic aggregates. Admittedly, Keynes (1939) himself was critical of his efforts - "black magic" as he called it. But the econometric tide was unstoppable and Tinbergen's contributions led the way. After Trygve Haavelmo's (1944) "probabilistic revolution" and the advances of the Cowles Commission on simultaneous equations estimation, the first full macroeconometric models -- most notably the Keynesian macromodels of Lawrence Klein (e.g. Klein, 1950; Klein and Goldberger, 1955) -- began to be produced and systematically applied to public policy.

Estimating the parameters of the Keynesian relationships was one thing, using them for policy was another. One of the issues that emerged around that time out of the discussions on international macroeconomics was the issue of the design of macroeconomic policy. Jan Tinbergen (1952, 1956) and James E. Meade (1951) were among the first to establish the general guidelines for policy-execution, i.e. the choosing of "optimal" settings of policy instruments to achieve particular "targets".

The Tinbergen-Meade approach was simple: suppose there is a function that purports to summarize the "real economy" of the following general sort, Y = f(G, M), where variable Y depends on variables G and M in some precise (econometrically-estimated) manner. Suppose G is a policy instrument under the control of government (e.g. government spending). Then one can achieve a particular output level by inputting the target value Y* and inverting the equation for the policy variable, i.e.

G = f⁻¹(Y*, M)

The resulting G will be the optimal setting for the policy instrument.
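In the linear case this inversion is trivial. A minimal sketch in Python (the estimated relationship Y = c + b·G + m·M and all coefficient values are invented for illustration):

    # Tinbergen-Meade in the simplest linear case: invert the estimated
    # relationship Y = c + b*G + m*M for the instrument G, given the
    # target Y* and the current setting of M. Coefficients are assumed.
    def optimal_G(Y_star, M, c=20.0, b=2.0, m=1.5):
        """Solve Y* = c + b*G + m*M for G."""
        return (Y_star - c - m * M) / b

    print(optimal_G(Y_star=100.0, M=30.0))   # 17.5 hits the target exactly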

Of course, there may be multiple policy instruments available to the government and, also, multiple targets. In the simplest case, suppose the government wishes to achieve some target level of output, Y*, and some target exchange rate, E*; then it might have a set of simultaneous equations, Y = f(G, M) and E = g(G, M), which relates the two targets Y and E to a pair of policy instruments, G and M. The Tinbergen-Meade approach also works in this case, provided a few basic conditions are met. Firstly, and most obviously, the policy instruments must affect the targets, i.e. dY/dG ≠ 0 and dE/dM ≠ 0. Secondly, the number of independent targets must be less than or equal to the number of independent instruments.

Independence of instruments is obvious - we cannot, in general, hit a particular set of target values Y*, E* if the instruments G and M are linearly related to one another (think of this in terms of a two-dimensional vector problem where the lengths of the vectors are the policy instrument settings and the target values are a point in the plane). That the number of targets not exceed the number of instruments is also self-evident: we cannot, in general, hit two targets, say Y* and E*, with merely one instrument (say G). Of course, if we have more instruments than targets, then there is no problem: we merely need to use as many instruments as there are targets; the remaining instruments can be left redundant.
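With linear relationships, hitting both targets amounts to solving a 2x2 linear system, and the independence condition is exactly the non-singularity of the coefficient matrix. A sketch (all coefficient and target values assumed for illustration):

    # Two targets (Y*, E*), two instruments (G, M), with estimated
    # relationships Y = aY + b11*G + b12*M and E = aE + b21*G + b22*M.
    import numpy as np

    B = np.array([[2.0, 1.0],     # dY/dG, dY/dM
                  [0.5, 1.5]])    # dE/dG, dE/dM
    intercepts = np.array([10.0, 1.0])
    targets = np.array([100.0, 5.0])    # Y*, E*

    if abs(np.linalg.det(B)) < 1e-12:
        print("instruments linearly dependent: targets unreachable in general")
    else:
        G_star, M_star = np.linalg.solve(B, targets - intercepts)
        print(f"G* = {G_star:.2f}, M* = {M_star:.2f}")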

Another problem with the Tinbergen-Meade approach is the "common funnel" issue, where the targets may be related to one another. For instance, suppose we have a recursive system Y = f(G, M) and P = g(Y). Obviously, we can use instrument G to hit Y* and leave M redundant. But Y also affects P. We may, erroneously, estimate that an instrument, such as G, affects P directly when it only really affects P through Y. A common example of this is the Phillips Curve: suppose G and M are government spending and money supply respectively, Y = f(G, M) is the aggregate demand relationship and P = g(Y) is the Phillips curve (where P is inflation). We can use government spending (G) to target aggregate demand (Y*), but aggregate demand (via the Phillips Curve) also affects inflation (P). Now, we may erroneously assume that, as we have two instruments (G, M) and two targets (Y, P), we can, say, assign government spending to output (G to Y*) and money supply to inflation (M to P*). But, in this model, money supply itself only affects inflation through aggregate demand; thus, in targeting P* with M, we will be upsetting our attempts to target Y* with G. Recognizing and accounting for this "common funnel" is usually quite a complicated affair.
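A tiny numerical illustration of the funnel (the aggregate demand and Phillips curve coefficients below are invented):

    # Recursive system: Y = f(G, M), P = g(Y). Both instruments reach
    # inflation only through output, so Y* and P* cannot be hit
    # independently unless g(Y*) happens to equal P*.
    def Y(G, M):  return 10 + 2.0 * G + 1.0 * M    # aggregate demand
    def P(Y_):    return -3 + 0.05 * Y_            # Phillips curve

    Y_star, P_star = 100.0, 2.5
    print(P(Y_star))            # 2.0: whatever G, M mix hits Y*, P is pinned
    print((P_star + 3) / 0.05)  # 110.0: hitting P* = 2.5 means missing Y*

Despite the appearance of two instruments and two targets, only one target can be hit: Y and P are not independent.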

Is there any criterion for deciding which instruments ought to be assigned to which targets? Robert Mundell (1962, 1968) proposed a simple one: the theory of comparative advantage. If, in relative terms, G affects Y more than it affects E, and if M affects E more than it affects Y, i.e.

(dY/dG)/(dY/dM) > (dE/dG)/(dE/dM)

then G has a comparative advantage at influencing Y and M has a comparative advantage at influencing E. Thus, G should be assigned to Y* and M should be assigned to E*. Mundell (1968) outlined the difficulties that could be created if we did not use this comparative advantage criterion to assign instruments to targets. Specifically, he noted that if we erroneously assigned M to Y* and G to E* when their comparative advantage argued for the opposite, then we could easily have serious instability. An illustration of this is when, say, the government tries to use spending to target the external trade balance and interest rates to target output when (as is usually the case) their comparative influence runs the other way. Such an erroneous assignment can lead to instability in output and exchange rates.
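This instability is easy to reproduce numerically. In the sketch below (coefficients assumed; G has the comparative advantage on Y, M on E), each instrument is repeatedly nudged toward "its" target; under the comparative-advantage assignment the gaps close, under the reverse assignment they explode:

    # Mundell's assignment problem with Y = 2G + M and E = 0.5G + 1.5M.
    def simulate(assign_G_to_Y, steps=200, k=0.1):
        G = M = 0.0
        Y_star, E_star = 100.0, 50.0
        for _ in range(steps):
            Y, E = 2 * G + M, 0.5 * G + 1.5 * M
            if assign_G_to_Y:
                G += k * (Y_star - Y)    # G chases the output target
                M += k * (E_star - E)    # M chases the exchange rate target
            else:
                G += k * (E_star - E)    # the perverse assignment
                M += k * (Y_star - Y)
        return abs(Y_star - Y), abs(E_star - E)

    print("right assignment, final gaps:", simulate(True))    # both near zero
    print("wrong assignment, final gaps:", simulate(False))   # both enormous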

An alternative approach to optimal policy settings, particularly effective when there is uncertainty or randomness involved, was developed by Henri Theil (1957, 1964), William Brainard (1967), Edmond Malinvaud (1969) and William Poole (1970). For instance, suppose the economy is governed by a simple relationship such as:

Y = a + bM

where M is the policy instrument and Y is the target (say Y is output and M is money supply). By Tinbergen-Meade, all we have to do is settle on a target, Y* and then solve for the instrument:

M* = (Y* - a)/b

However, if there is (additive) uncertainty so that the economic relationship is actually:

Y = a + bM + u

where u is a random variable (zero mean, constant variance), then inversion is no longer as clear. What the Theil-Brainard approach proposes is the setting of optimal policy according to the minimization of a quadratic loss function, i.e.

min E(Y - Y*)²

s.t.

Y = a + bM + u

u ~ (0, σu²)

thus, we are attempting to minimize the variance of actual output (Y) around the target output, Y*. The solution to this problem, M*, is actually:

M* = (Y* - a)/b

i.e. exactly the solution we would have if there were no uncertainty. Thus, optimal policy in this framework is "certainty-equivalent". The actual outcome, of course, will be Y = Y* + u, but the expected outcome is Y*. Under the circumstances, the policy-maker cannot do better than this.
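A quick Monte Carlo check of certainty-equivalence (all parameter values below are illustrative):

    # With Y = a + b*M + u, expected squared loss is minimized at the
    # certainty-equivalent setting M* = (Y* - a)/b, where it equals var(u).
    import numpy as np

    rng = np.random.default_rng(0)
    a, b, Y_star, sigma_u = 10.0, 2.0, 100.0, 5.0
    u = rng.normal(0.0, sigma_u, size=100_000)

    def expected_loss(M):
        return np.mean((a + b * M + u - Y_star) ** 2)

    M_ce = (Y_star - a) / b                       # 45.0
    for M in (M_ce - 2, M_ce, M_ce + 2):
        print(f"M = {M:5.1f}  loss ~ {expected_loss(M):6.1f}")
    # Loss is ~25 (= var(u)) at M* and strictly larger on either side.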

Things become quite different if there is also multiplicative uncertainty, e.g. if the original relationship is something like:

Y = a + (b + e)M + u

where e is another random term (zero mean, variance σe²) affecting the coefficient attached to M. In this case, minimizing the same quadratic loss function, the solution is:

M* = b(Y* - a)/(σe² + b²)

which is different from what we had before. Indeed, the optimal instrument setting is less than the instrument setting under certainty or additive uncertainty. Thus, we can see that with multiplicative uncertainty, expected output is actually less than the target outcome, i.e. E(Y) < Y*. The optimal policy is to "undershoot" the target and hope that the random factor pulls one a bit closer to it.
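This policy conservatism is easy to see with numbers (purely illustrative ones):

    # Brainard (1967): under multiplicative uncertainty the optimal
    # setting M* = b(Y* - a)/(sigma_e^2 + b^2) undershoots the
    # certainty-equivalent setting (Y* - a)/b.
    a, b, Y_star = 10.0, 2.0, 100.0
    sigma_e2 = 1.0                       # variance of the coefficient shock

    M_certain = (Y_star - a) / b                       # 45.0
    M_brainard = b * (Y_star - a) / (sigma_e2 + b**2)  # 36.0: more cautious
    print(M_certain, M_brainard)
    print(a + b * M_brainard)            # E(Y) = 82 < Y* = 100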

What if there are multiple instruments? For the additive uncertainty case, suppose the relationship is:

Y = a + bM + gG + u

where M and G are policy instruments. In this case, it turns out that the optimal policy is still certainty-equivalent, i.e. we use a single instrument (say M) to target Y and leave the other instrument (G) redundant. However, if there is multiplicative uncertainty, i.e.

Y = a + (b + e1)M + (g + e2)G + u

where both coefficients are random, then the optimal policy is to use both instruments to target Y. With multiplicative uncertainty, there is no redundancy of instruments. The reasoning is quite "Tobinesque": by using all the instruments we have on a single target, we are effectively using the power of "diversification" to reduce the variance of the target.
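A sketch of the diversification result (parameters invented; the joint optimum comes from the first-order conditions of the quadratic loss, assuming e1, e2 and u are mutually independent):

    # Y = a + (b + e1)*M + (g + e2)*G + u. Written out analytically,
    # E[(Y - Y*)^2] = (a + b*M + g*G - Y*)^2 + s1*M^2 + s2*G^2 + su.
    import numpy as np

    a, b, g, Y_star = 10.0, 2.0, 1.5, 100.0
    s1, s2, su = 1.0, 1.0, 25.0       # variances of e1, e2, u

    def loss(M, G):
        return (a + b*M + g*G - Y_star)**2 + s1*M**2 + s2*G**2 + su

    M_only = b * (Y_star - a) / (s1 + b**2)       # one instrument (Brainard)
    A = np.array([[b**2 + s1, b * g],
                  [b * g, g**2 + s2]])            # first-order conditions
    M_both, G_both = np.linalg.solve(A, (Y_star - a) * np.array([b, g]))
    print(loss(M_only, 0.0), loss(M_both, G_both))   # ~1645 vs ~1142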

A more difficult analysis involves policy in a dynamic setting. Early work on simple Tinbergen-Meade type criteria for setting policy instruments in a dynamic context was done by Alban W. Phillips (1954, 1957). Later on, with the development of the techniques of optimal control theory in a stochastic environment, more complicated criteria for "optimal policy" in a dynamic setting were developed (e.g. H. Theil, 1957; G.C. Chow, 1970, 1975; E.C. Prescott, 1972; S.J. Turnovsky, 1977).

(C) Monetary Policy

Monetary policy before 1936 had been largely confined to maintaining the "Gold Standard". In accordance with the Quantity Theory, it was not believed that the money supply could much influence anything other than prices, at least in the long run. Keynes's General Theory tore down that presumption and highlighted the Central Bank's power to influence real variables such as interest rates and output levels. Nonetheless, in the early post-war period, when Keynesians were mulling over the money wages issue, the hopeful conclusion that investment was interest-insensitive led them to ignore monetary policy and give full attention to fiscal policy. Milton Friedman's (1956) early Monetarist challenge reminded Keynesians of the importance of money and monetary policy. Throughout the 1960s, at least from James Tobin (1961) onwards, attention was again paid to the structure of the LM side of Keynesian theory, and thus the conduct of monetary policy began to be analyzed in earnest.

The theoretical treatment of monetary policy, as laid out in William Poole (1970) and others, was couched in terms of the general theory of optimal macroeconomic policy outlined earlier. As such, the optimization problem of the Central Bank can be seen as the following:

min E(Y - Y*)²

s.t.

Y = a0 + a1r + u

M = b0 + b1r + b2Y + v

where the first constraint is the IS equation relating output to interest rates and the second is the LM equation (where M is money supply and the right side is money demand, which, in turn, is related to interest and output). Of course, u and v are random terms - the former representing shocks to aggregate demand for goods, the latter representing shocks to money demand. Notice that we have only one ultimate target (output) and, apparently, only one policy instrument, M. In fact, however, there are two "intermediate" targets - interest rates (r) and money supply (M) - and the true policy instrument is the "monetary base" H, which is linked to M via the money multiplier and to r via the LM equation. Thus, in order to reach the ultimate target (output), the Central Bank can follow three sorts of "intermediate" target settings: it can attempt a money supply target, an interest rate target, or a mixture of both.

Let us compare the relative strengths of the three types of monetary policy. Suppose the Central Bank wishes to follow a money supply rule. In this case, we combine the IS and LM constraints via the r component and then express output as a function of M and the random terms - heuristically, Y = f(M, z), where z is a composite random term (in fact, z = (b1u - a1v)/(b1 + a1b2)). Substituting f(M, z) for Y in our quadratic loss function and minimizing, we can obtain an expression for the optimal money supply setting, M*. If we work through the algebra, the variance of actual output around the target output Y* under the optimal money supply setting M* will be:

σY² = (b1²σu² + a1²σv²)/(b1 + a1b2)²

where σu² and σv² are the variances of the aggregate demand and money demand shocks respectively. Thus the variance of output under the money supply rule (i.e. how far output will deviate from target output) will depend on the relative volatility of aggregate demand and money demand.

Under an interest rate rule, the Central Bank can ignore the LM equation (since M will be manipulated to yield the target r*) and concentrate wholly on IS. In this case, the optimal interest rate setting will be r* = (Y* - a0)/a1, where Y* is the target output. However, the resulting variance of output under an interest rate rule is, after some calculation:

σY² = σu²

i.e. output variance is wholly (and only) the variance of aggregate demand shocks.

The reasoning for these differing results for money supply and interest rate rules should be obvious. Recall that when the money supply is "controlled" (the Ms curve is vertical), movements in money demand will affect interest rates and hence, via investment, output. Thus, money demand shocks increase the variance of output. Under an interest rate target (where Ms is horizontal), movements in money demand leave r unchanged and thus have absolutely no effect on output - money demand shocks are completely neutralized.

So why not rely solely on interest rate targeting? The reason is obvious from Figure 7: if interest rates are targeted, then the LM curve is completely horizontal (LMi in the diagram) - thus aggregate demand shocks (fluctuations in IS, in our diagram ranging from ISL to ISU) will have an enormous impact on output (ranging from YiL to YiU). If, however, the money supply is targeted, then the LM curve is upward-sloping (LMM in the diagram), so that movements in the IS curve do not move Y over so wide a range (they are restricted to the smaller range from YML to YMU).


Figure 7 - Monetary Policy Targeting

As we see in Figure 7, under money supply rules, aggregate demand shocks are partly absorbed by rising interest rates, whereas under interest rate rules they are allowed to exert their complete impact on output (which is why output variance σY² is equal to σu² under interest rate targeting, but is only related to a fraction of σu² under money supply targeting). However, as mentioned, money supply targeting exposes itself to money demand shocks (thus the entry of σv² in the expression for output volatility in the money supply rule case) while interest rate targeting shields output completely from them (thus they do not enter the volatility expression at all).

Consequently, it is obvious that the "optimal" policy to choose under any particular circumstance depends on the nature of the shocks the economy is experiencing. If the economy is suffering from money demand volatility, then an interest rate policy is preferred; if it is suffering from aggregate demand volatility, then a money supply policy is preferred. Unfortunately, it usually suffers from a good degree of both, so it might be best to follow a combination policy and do a bit of both - in other words, allow interest rates to fluctuate within some reasonable range so that the LM curve is slightly upward-sloping, but not as steep as under a strict money supply targeting regime. Again, it is easy to demonstrate that when both shocks are in operation, the volatility of output σY² under a mixed regime is lower than under either extreme of interest rate targeting or money supply targeting. This, once again, is the old logic of "diversification" applied to policy instruments.
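The comparison is straightforward to compute. In the sketch below (IS-LM coefficients and shock variances are assumed, with a1 and b1 negative as usual), a combination rule M = M0 + c1·r nests the money supply rule (c1 = 0) and approaches the interest rate rule as c1 grows large; the optimal c1 beats both extremes:

    # Poole (1970): output variance under money, interest rate and
    # combination rules. IS: Y = a0 + a1*r + u; LM: M = b0 + b1*r + b2*Y + v.
    import numpy as np

    a0, a1 = 50.0, -2.0
    b0, b1, b2 = 5.0, -1.0, 0.5
    su2, sv2 = 4.0, 4.0               # shock variances (assumed)

    def var_Y(c1):
        # Under M = M0 + c1*r, solving IS-LM gives
        # var(Y) = (D^2*su2 + a1^2*sv2)/(D + a1*b2)^2 with D = b1 - c1.
        D = b1 - c1
        return (D**2 * su2 + a1**2 * sv2) / (D + a1 * b2)**2

    print("money supply rule:", var_Y(0.0))          # 5.0
    print("interest rate rule:", su2)                # 4.0 (c1 -> infinity)
    grid = np.linspace(-20, 20, 4001)
    grid = grid[np.abs(b1 - grid + a1 * b2) > 0.05]  # avoid the singularity
    print("best combination rule:", var_Y(grid).min())   # ~3.2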

All this assumes we can see the fluctuations, but sometimes Central Banks must make decisions before the source of a shock is clear. Taking a leaf from New Classical theory, one way to set optimal policy in such circumstances is via "signal extraction" (as in, for instance, Lucas (1973) or Sargent (1979)). The idea is basically that we use past information to infer the optimal policy settings. Notice that IS shocks change interest rates and output in the same direction, while LM shocks change interest rates and output in opposite directions. Suppose the government can only observe the fluctuations in interest rates - as these are reported daily, whereas fluctuations in output are only observed at wider intervals. On the basis of past experience, the government can "project" (estimate in standard OLS manner) what the change in interest rates it observes implies about the concurrent (but still unseen) change in output, and then set its instruments accordingly to counteract it.

Naturally, the government could be completely wrong and interpret a rise in interest rates as coming from an IS shock when in fact it comes from an LM shock, and thus set its instruments wrongly. However, if it makes a mistake (revealed when the output figures come out), it can correct itself. Thus, the government "learns" over time what the relative frequencies and strengths of aggregate demand and money demand shocks are. It uses this past experience to set up a "signal extraction" function so that, on the basis of an observed change in interest rates, it can "estimate" (using parameters based on past information) what the concurrent but unobserved change in output probably is - and then set its instruments accordingly to counteract it. This use of past information is usually better than blind diversification of instrument settings.
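A sketch of such a signal-extraction rule, reusing the IS-LM structure above (the shock mix is an assumption):

    # The bank sees interest rates daily but output only later. From a
    # simulated "history" it estimates, by OLS, the projection of output
    # deviations on rate deviations, then reads an observed rate change
    # as a forecast of the unseen output change.
    import numpy as np

    rng = np.random.default_rng(1)
    a0, a1, b0, b1, b2, M_bar = 50.0, -2.0, 5.0, -1.0, 0.5, 40.0
    u = rng.normal(0, 2.0, 5000)      # IS (aggregate demand) shocks
    v = rng.normal(0, 1.0, 5000)      # LM (money demand) shocks

    # Reduced form with M held at M_bar:
    r = (M_bar - b0 - b2 * a0 - b2 * u - v) / (b1 + b2 * a1)
    Y = a0 + a1 * r + u

    dr, dY = r - r.mean(), Y - Y.mean()
    beta = (dr @ dY) / (dr @ dr)      # OLS projection coefficient
    print("projection of dY on dr:", beta)
    print("inferred output change from a 0.8 rate jump:", beta * 0.8)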

An additional problem highlighted in monetary policy, notably by Benjamin Friedman (1975, 1977), was the possibility that these intermediate targets - interest rates and money supply - may themselves be stochastic. Recall that the government really only has the monetary base, H, directly under its control. We can thus have two types of volatility - volatility of the ultimate target (i.e. output) around its target value Y*, and volatility of the intermediate targets (e.g. money supply, interest rates) around their target values, M* and r*. In this case, what ought to be the optimal policy procedure? Should governments concentrate on minimizing the first type of variance or the second? In other words, should they try to hit the desired intermediate targets first and foremost and let the rest run its natural course, or should they focus on targeting output above everything and worry less about minimizing volatility around the intermediate targets? Benjamin Friedman (1977) comes down heavily in favor of concentrating on the ultimate target, on the grounds that one can at least gain information from volatility in the intermediate targets, whereas little is gained by trying to keep them stable. Nonetheless, at least in practice, this is a more complicated question than it might seem at first sight.
