Thursday, December 7, 2017

Consumer Inflation Uncertainty Holding Steady

I recently updated my Consumer Inflation Uncertainty Index through October 2017. Data is available here and the official (gated) publication in the Journal of Monetary Economics is here. Inflation uncertainty remains low by historical standards. Uncertainty about longer-run (5- to 10-year ahead) inflation remains lower than uncertainty about next-year inflation.


In recent months, there has been no notable change in inflation uncertainty at either horizon.


The average "less uncertain" type consumer still expects longer-run inflation around 2.4%, while the longer-run inflation forecasts of "highly uncertain" consumers have risen in recent months to the 8-10% range.

Tuesday, October 24, 2017

Is Taylor a Hawk or Not?

Two Bloomberg articles published just a week apart call John Taylor, a contender for Fed Chair, first hawkish, then dovish. The first, by Garfield Clinton Reynolds, notes:
...The dollar rose and the 10-year U.S. Treasury note fell on Monday after Bloomberg News reported Taylor, a professor at Stanford University, impressed President Donald Trump in a recent White House interview. 
Driving those trades was speculation that the 70-year-old Taylor would push rates up to higher levels than a Fed helmed by its current chair, Janet Yellen. That’s because he is the architect of the Taylor Rule, a tool widely used among policy makers as a guide for setting rates since he developed it in the early 1990s.
But the second, by Rich Miller, claims that "Taylor’s Walk on Supply Side May Leave Him More Dove Than Yellen." Miller explains,
"While Taylor believes the [Trump] administration can substantially lift non-inflationary economic growth through deregulation and tax changes, Yellen is more cautious. That suggests that the Republican Taylor would be less prone than the Democrat Yellen to raise interest rates in response to a policy-driven economic pick-up."
What actually makes someone a hawk? Simply favoring rules-based policy is not enough. A central banker could use a variation of the Taylor rule that implies very little response to inflation, or that allows very high average inflation. Beliefs about the efficacy of supply-side policies also do not determine hawk or dove status. Let's look at the Taylor rule from Taylor's 1993 paper:
r = p + .5y + .5(p - 2) + 2,
where r is the federal funds rate, y is the percent deviation of real GDP from target, and p is inflation over the previous 4 quarters. Taylor notes (p. 202) that lagged inflation is used as a proxy for expected inflation, and y=100(Y-Y*)/Y* where Y is real GDP and Y* is trend GDP (a proxy for potential GDP).

The 0.5 coefficients on the y and (p-2) terms reflect how Taylor estimated that the Fed approximately behaved, but in general a Taylor rule could have different coefficients, reflecting the central bank's preferences. The bank could also have an inflation target p* not equal to 2, and replace (p-2) with (p-p*). Simply being committed to following a Taylor rule therefore does not pin down what steady-state inflation rate, or how much volatility, a central banker would allow. For example, a central bank could follow a rule with p*=5, a relatively large coefficient on y, and a small coefficient on (p-5), allowing both high and volatile inflation.
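
To make this concrete, here is a minimal sketch of a generalized rule of this form in Python. The function and parameter names (a_pi, a_y, r_star) are my own notation, the constant 2 in Taylor's rule is treated as an assumed equilibrium real rate, and the second call is a purely hypothetical "dovish" parameterization, not anything attributed to Taylor or Yellen.

```python
# A minimal sketch of a generalized Taylor-type rule, using the notation above.
# Parameter names and the illustrative values are my own assumptions.

def taylor_rule(p, y, p_star=2.0, a_pi=0.5, a_y=0.5, r_star=2.0):
    """Prescribed federal funds rate (percent).

    p      : inflation over the previous 4 quarters (percent)
    y      : output gap, 100*(Y - Y*)/Y* (percent)
    p_star : inflation target
    a_pi   : response to the inflation gap (p - p_star)
    a_y    : response to the output gap
    r_star : assumed equilibrium real rate
    """
    return p + a_pi * (p - p_star) + a_y * y + r_star

# Taylor's 1993 parameterization: 2% inflation and a zero output gap imply a 4% rate.
print(taylor_rule(p=2.0, y=0.0))  # 4.0

# A hypothetical bank that faithfully follows a rule, but with a 5% target and a
# weak response to the inflation gap, tolerates much higher average inflation.
print(taylor_rule(p=7.0, y=1.0, p_star=5.0, a_pi=0.1, a_y=1.0))  # 10.2
```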

What do "supply side" beliefs imply? Well, Miller thinks that Taylor believes the Trump tax and deregulatory policy changes will raise potential GDP, or Y*. For a given value of Y, a higher estimate of Y* implies a lower estimate of y, which implies lower r. So yes, in the very short run, we could see lower r from a central banker who "believes" in supply side economics than from one who doesn't, all else equal.

But what if Y* does not really rise as much as a supply-sider central banker thinks it will? Then the lower r will result in higher p (and Y), to which the central bank will react by raising r. So long as the central bank follows the Taylor principle (so that the sum of the coefficients on p and (p-p*) in the rule is greater than 1), equilibrium long-run inflation is p*.
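
To see the logic, here is a toy backward-looking simulation; it is my own stylized setup (an IS-type relation plus an accelerationist Phillips curve with arbitrary parameters), not a standard calibrated model. With a positive response to the inflation gap, inflation returns to the target; with a zero response, it simply stays wherever it happens to be.

```python
# Toy illustration of the Taylor principle. The dynamics and parameter values
# are arbitrary and purely illustrative.

def simulate(a, p_star=2.0, p0=6.0, periods=60, b=1.0, k=0.3, c=0.5, r_star=2.0):
    p, y = p0, 0.0
    for _ in range(periods):
        r = p + a * (p - p_star) + c * y + r_star   # policy rule
        y = -b * (r - p - r_star)                   # IS-type relation: gap falls with the real-rate gap
        p = p + k * y                               # accelerationist Phillips-type relation
    return p

print(simulate(a=0.5))   # converges to (approximately) the 2% target
print(simulate(a=0.0))   # total response to p is only 1: inflation stays at its initial 6%
```

The point is not the particular dynamics; it is that long-run inflation is anchored by the rule's response coefficients and target, not by rule-following per se.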

The parameters of the Taylor rule reflect the central bank's preferences. The right-hand-side variables, like Y*, are measured or forecasted, and how well that is done reflects the central bank's competence at measuring and forecasting, which depends on a number of factors ranging from the strength of its staff economists to the priors of the Fed Chair to the volatility and unpredictability of other economic conditions and policies.

Neither Taylor nor Yellen seems likely to change the inflation target to something other than 2 (and even if they wanted to, they could not unilaterally make that decision). They do likely differ in their preferences for stabilizing inflation versus stabilizing output, and in that respect I'd guess Taylor is more hawkish.

Yellen's efforts to look at alternative measures of labor market conditions in the past are also about Y*. In some versions of the Taylor rule, you see unemployment measures instead of output measures (where the idea is that they generally comove). Willingness to consider multiple measures of employment and/or output is really just an attempt to get a better measure of how far the real economy is from "potential." It doesn't make a person inherently more or less hawkish.

As an aside, this whole discussion presumes that monetary policy itself (or more generally, aggregate demand shifts) do not change Y*. Hysteresis theories reject that premise. 

Monday, October 23, 2017

Cowen and Sumner on Voters' Hatred of Inflation

A recent Scott Sumner piece has the declarative title, "Voters don't hate inflation." Sumner is responding to a piece by Tyler Cowen in Bloomberg, where Cowen writes:
Congress insists that the Fed is “independent”...But if voters hated what the Fed was doing, Congress could rather rapidly hold hearings and exert a good deal of influence. Over time there is a delicate balancing act, where the Fed is reluctant to show it is kowtowing to Congress, so it very subtly monitors its popularity so it doesn’t have to explicitly do so. 
If we imposed a monetary rule on the Fed, even a theoretically optimal rule, it would stop the Fed from playing this political game. Many monetary rules call for higher rates of price inflation if the economy starts to enter a downturn. That’s often the right economic prescription, but voters hate high inflation. 
Emphasis added, and the emphasized bit is quoted by Scott Sumner, who argues that voters don't hate inflation per se, but hate falling standards of living. He adds, "How people feel about a price change depends entirely on whether it's caused by an aggregate supply shift or a demand shift."

The whole exchange made my head hurt a bit because it turns the usual premise behind macroeconomic policy design--and specifically, central bank independence and monetary policy rules--on its head. The textbook reasoning goes something like this. Policymakers facing re-election have an incentive to pursue expansionary macroeconomic policies (positive aggregate demand shocks). This boosts their popularity, because people enjoy the lower unemployment and don't really notice or worry about the inflationary consequences.

Even an independent central bank operating under discretion faces the classic "dynamic inconsistency" problem if it tries to commit to low inflation, resulting in suboptimally high (expected and actual) inflation. So monetary policy rules (the topic of Cowen's piece) are, in theory, a way for the central bank to "bind its hands" and help it achieve lower (expected and actual) inflation. An alternative that is sometimes suggested is to appoint a central banker who is more inflation averse than the public. If the problem is that the public hates inflation, how is this a solution?

Cowen seems to argue that a monetary rule would be unpopular, and hence not fully credible, exactly when it calls for policy to be expansionary. But such a rule, in theory, would have been put into place to prevent policy from being too expansionary. Without such a rule, policy would presumably be more expansionary, so if voters hate high inflation, they would really hate removing the rule.

One issue that came up frequently at the Rethinking Macroeconomic Policy IV conference was the notion that inflationary bias, and the implications for central banking that come with it, might be a thing of the past. There is certainly something to that story in the recent low inflation environment. But I can still hardly imagine circumstances in which expansionary policy in a downturn would be the unpopular choice among voters themselves. It may be unpopular among members of Congress for other reasons-- because it is unpopular among select powerful constituents, for example-- but that is another issue. And the members of Congress who are most in favor of imposing a monetary policy rule for the Fed are also, I suspect, the most inflation averse, so I find it hard to see how the potentially inflationary nature of rules is what would (a) make them politically unpopular and (b) lead Congress to thus restrict the Fed's independence.

Friday, October 13, 2017

Rethinking Macroeconomic Policy

I had the pleasure of attending “Rethinking Macroeconomic Policy IV” at the Peterson Institute for International Economics. I highly recommend viewing the panels and materials online.

The two-day conference left me wondering what it actually means to “rethink” macro. The conference title refers to rethinking macroeconomic policy, not macroeconomic research or analysis, but of course these are related. Adam Posen’s opening remarks expressed dissatisfaction with DSGE models, VARs, and the like, and these sentiments were occasionally echoed in the other panels in the context of the potentially large role of nonlinearities in economic dynamics. Then, in the opening session, Olivier Blanchard talked about whether we need a “revolution” or “evolution” in macroeconomic thought. He leans toward the latter, while his coauthor Larry Summers leans toward the former. But what could either of these look like? How could we replace or transform the existing modes of analysis?

I looked back on the materials from Rethinking Macroeconomic Policy of 2010. Many of the policy challenges discussed at that conference are still among the biggest challenges today. For example, low inflation and low nominal interest rates limit the scope of monetary policy in recessions. In 2010, raising the inflation target and strengthening automatic fiscal stabilizers were both suggested as possible policy solutions meriting further research and discussion. Inflation and nominal rates are still very low seven years later, and higher inflation targets and stronger automatic stabilizers are still discussed, but what I don’t see is a serious proposal for change in the way we evaluate these policy proposals.

Plenty of papers use basically standard macro models and simulations to quantify the costs and benefits of raising the inflation target. Should we care? Should we discard them and rely solely on intuition? I’d say: probably yes, and probably no. Will we (academics and think tankers) ever feel confident enough in these results to make a real policy change? Maybe, but then it might not be up to us.

Ben Bernanke raised probably the most specific and novel policy idea of the conference, a monetary policy framework that would resemble a hybrid of inflation targeting and price level targeting. In normal times, the central bank would have a 2% inflation target. At the zero lower bound, the central bank would allow inflation to rise above the 2% target until inflation over the duration of the ZLB episode averaged 2%. He suggested that this framework would have some of the benefits of a higher inflation target and of price level targeting without some of the associated costs. Inflation would average 2%, so distortions from higher inflation associated with a 4% target would be avoided. The possibly adverse credibility costs of switching to a higher target would also be minimized. The policy would provide the usual benefits of history-dependence associated with price level targeting, without the problems that this poses when there are oil shocks.
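
The make-up arithmetic behind that description can be sketched in a few lines. This is my own stylization of the averaging idea, not Bernanke's exact proposal, and the inflation path below is invented.

```python
# Back-of-the-envelope "make-up" arithmetic: aim for inflation that brings the
# average over the whole ZLB episode to 2%. All numbers are invented.

TARGET = 2.0

realized = [0.5, 1.0, 1.5]   # hypothetical annual inflation so far during a ZLB episode

def required_catch_up(realized, total_years):
    """Average inflation needed over the remaining years for a 2% episode average."""
    remaining = total_years - len(realized)
    return (TARGET * total_years - sum(realized)) / remaining

print(required_catch_up(realized, total_years=5))  # 3.5% per year for the remaining 2 years
```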

It’s an exciting idea, and intuitively really appealing to me. But how should the Fed ever decide whether or not to implement it? Bernanke mentioned that economists at the Board are working on simulations of this policy. I would guess that these simulations involve many of the assumptions and linearizations that rethinking types love to demonize. So again: Should we care? Should we rely solely on intuition and verbal reasoning? What else is there?

Later, Jason Furman presented a paper titled, “Should policymakers care whether inequality is helpful or harmful for growth?” He discussed some examples of evaluating tradeoffs between output and distribution in toy models of tax reform. He begins with the Mankiw and Weinzierl (2006) example of a 10 percent reduction in labor taxes paid for by a lump-sum tax. In a Ramsey model with a representative agent, this policy change would raise output by 1 percent. When the representative agent is replaced with agents who have the actual 2010 distribution of U.S. incomes, only 46 percent of households would see their after-tax income increase and 41 percent would see their welfare increase. More generally, he claims that “the growth effects of tax changes are about an order of magnitude smaller than the distributional effects of tax changes—and the disparity between the welfare and distribution effects is even larger” (14). He concludes:
“a welfarist analyzing tax policies that entail tradeoffs between efficiency and equity would not be far off in just looking at static distribution tables and ignoring any dynamic effects altogether. This is true for just about any social welfare function that places a greater weight on absolute gains for households at the bottom than at the top. Under such an approach policymaking could still be done under a lexicographic process—so two tax plans with the same distribution would be evaluated on the basis of whichever had higher growth rates…but in this case growth would be the last consideration, not the first” (16).

As Posen then pointed out, Furman’s paper and his discussants largely ignored the discussions of macroeconomic stabilization and business cycles that dominated the previous sessions on monetary and fiscal policy. The panelists conceded that recessions, and hysteresis in unemployment, can exacerbate economic disparities. But the fact that stabilization policy was so disconnected from the initial discussion of inequality and growth shows just how much rethinking still has not occurred.

In 1987, Robert Lucas calculated that the welfare costs of business cycles are minimal. In some sense, we have “rethought” this finding. We know that it is built on assumptions of a representative agent and no hysteresis, among other things. And given the emphasis in the fiscal and monetary policy sessions on avoiding or minimizing business cycle fluctuations, clearly we believe that the costs of business cycle fluctuations are in fact quite large. I doubt many economists would agree with the statement that “the welfare costs of business cycles are minimal.” Yet, the public finance literature, even as presented at a conference on rethinking macroeconomic policy, still evaluates welfare effects of policy using models that totally omit business cycle fluctuations, because, within those models, such fluctuations hardly matter for welfare. If we believe that the models are “wrong” in their implications for the welfare effects of fluctuations, why are we willing to take their implications for the welfare effects of tax policies at face value?
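
As a reference point, the Lucas-style calculation can be sketched in a few lines: with CRRA utility, the compensating consumption variation for eliminating fluctuations is roughly one half times risk aversion times the variance of log consumption around trend. The parameter values below are ballpark illustrations, not Lucas's exact inputs.

```python
# Rough Lucas (1987)-style welfare cost of fluctuations under CRRA utility.
# Parameter values are illustrative, not Lucas's exact inputs.

gamma = 1.0     # relative risk aversion (log utility)
sigma = 0.032   # std. dev. of log consumption around trend (ballpark)

welfare_cost = 0.5 * gamma * sigma**2
print(f"Welfare cost of fluctuations: {100 * welfare_cost:.3f}% of consumption")
# roughly 0.05% of consumption -- "minimal" in the sense described above
```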

I don’t have a good alternative—but if there is a Rethinking Macroeconomic Policy V, I hope some will be suggested. The fact that the conference speakers are so distinguished is both an upside and a downside. They have the greatest understanding of our current models and policies, and in many cases were central to developing them. They can rethink, because they have already thought, and moreover, they have large influence and loud platforms. But they are also quite invested in the status quo, for all they might criticize it, in a way that may prevent really radical rethinking (if it is really needed, which I’m not yet convinced of). (A more minor personal downside is that I was asked multiple times whether I was an intern.)

If there is a Rethinking Macroeconomic Policy V, I also hope that there will be a session on teaching and training. The real rethinking is going to come from the next generations of economists. How do we help them learn and benefit from the current state of economic knowledge without being constrained by it? This session could also touch on continuing education for current economists. What kinds of skills should we be trying to develop now? What interdisciplinary overtures should we be making?

Thursday, September 28, 2017

An Inflation Expectations Experiment

Last semester, my senior thesis advisee Alex Rodrigue conducted a survey-based information experiment via Amazon Mechanical Turk. We have coauthored a working paper detailing the experiment and results titled "Household Informedness and Long-Run Inflation Expectations: Experimental Evidence." I presented our research at my department seminar yesterday with the twin babies in tow, and my tweet about the experience is by far my most popular to date.

Consumers' inflation expectations are very dispersed; on household surveys, many people report long-run inflation expectations that are far from the Fed's 2% target. Are these people unaware of the target, or do they know it but remain unconvinced of its credibility? In another paper in the Journal of Macroeconomics, I provide some non-experimental evidence that public knowledge of the Fed and its objectives is quite limited. In this paper, we directly treat respondents with information about the target and about past inflation, in randomized order, and see how they revise their reported long-run inflation expectations. We also collect some information about their prior knowledge of the Fed and the target, their self-reported understanding of inflation, and their numeracy and demographic characteristics. About a quarter of respondents knew the Fed's target and two-thirds could identify Yellen as Fed Chair from a list of three options.

As shown in the figure above, before receiving the treatments, very few respondents forecast 2% inflation over the long-run and only about a third even forecast in the 1-3% range. Over half report a multiple-of-5% forecast, which, as I argue in a recent paper in the Journal of Monetary Economics, is a likely sign of high uncertainty. When presented with a graph of the past 15 years of inflation, or with the FOMC statement announcing the 2% target, the average respondent revises their forecast around 2 percentage points closer to the target. Uncertainty also declines.

The results are consistent with imperfect information models because the information treatments are publicly available, yet respondents still revise their expectations after the treatments. Low informedness is part of the reason why expectations are far from the target. The results are also consistent with Bayesian updating, in the sense that high prior uncertainty is associated with larger revisions. But equally noteworthy is the fact that even after receiving both treatments, expectations are still quite heterogeneous and many still substantially depart from the target. So people seem to interpret the information in different ways and view it as imperfectly credible.

We look at how treatment effects vary by respondent characteristic. One interesting result is that, after receiving both treatments, the discrepancy between mean male and female inflation expectations (which has been noted in many studies) nearly disappears (see figure below).

There is more in the paper about how treatment effects vary with other characteristics, including respondents' opinion of government policy and their prior knowledge. We also look at whether expectations can be "un-anchored from below" with the graph treatment.



Thursday, September 14, 2017

Consumer Forecast Revisions: Is Information Really so Sticky?

My paper "Consumer Forecast Revisions: Is Information Really so Sticky?" was just accepted for publication in Economics Letters. This is a short paper that I believe makes an important point. 

Sticky information models are one way of modeling imperfect information. In these models, only a fraction (λ) of agents update their information sets each period. If λ is low, information is quite sticky, and that can have important implications for macroeconomic dynamics. There have been several empirical approaches to estimating λ. With micro-level survey data, a non-parametric and time-varying estimate of λ can be obtained by calculating the fraction of respondents who revise their forecasts (say, for inflation) at each survey date. Estimates from the Michigan Survey of Consumers (MSC) imply that consumers update their information about inflation approximately once every 8 months.
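
A minimal sketch of that non-parametric calculation, with invented forecasts: λ is the fraction of panel respondents whose reported forecast changes between consecutive interviews, and 1/λ is the implied average time between information updates.

```python
# Non-parametric estimate of information stickiness from a panel of forecasts.
# The forecasts below are invented for illustration.

forecasts_t  = [2.0, 5.0, 3.0, 2.0, 10.0, 3.0, 2.0, 5.0]   # wave t
forecasts_t1 = [2.0, 5.0, 2.0, 2.0, 10.0, 3.0, 3.0, 5.0]   # wave t+1, same respondents

revised = sum(a != b for a, b in zip(forecasts_t, forecasts_t1))
lam = revised / len(forecasts_t)

print(f"Estimated lambda: {lam:.2f}")                     # 0.25
print(f"Implied periods between updates: {1 / lam:.1f}")  # 4.0 survey periods
```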

Here are two issues that I point out with these estimates, both of which lead to substantial underestimation of the frequency with which consumers update their expectations:
The first issue stems from data frequency. The rotating panel of Michigan Survey of Consumers (MSC) respondents take the survey twice with a six-month gap. A consumer may have the same forecast at months t and t+6 but different forecasts in between. The second issue is that responses are reported to the nearest integer. A consumer may update her information, but if the update results in a sufficiently small revision, it will appear that she has not updated her information.
To quantify how these issues matter, I use data from the New York Fed Survey of Consumer Expectations, which is available monthly and not rounded to the nearest integer. I compute updating frequency with this data. It is very high-- at least 5 revisions in 8 months, as opposed to the 1 revision per 8 months found in previous literature.

Then I transform the data so that it is like the MSC data. First I round the responses to the nearest integer. This makes the updating frequency estimates decrease a little. Then I sample the data at the six-month frequency instead of monthly. This makes the updating frequency estimates decrease a lot, and I find estimates similar to the previous literature-- updates about every 8 months.
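
Here is a stylized simulation of how those two transformations mechanically inflate the implied time between updates. The data-generating process is invented (it is not the SCE data), and the implied duration is computed as the observation gap divided by the fraction revising, a simplified version of the kind of calculation behind such estimates.

```python
# Stylized simulation: frequent small monthly revisions look much "stickier"
# once responses are rounded to integers and observed only six months apart.
# The data-generating process and parameters are invented for illustration.

import random

random.seed(0)

MONTHS = 6          # observe month 0 through month 6
UPDATE_PROB = 0.8   # true monthly probability of updating (high, by assumption)

paths = []
for _ in range(20_000):
    x = random.uniform(0.0, 6.0)
    path = [x]
    for _ in range(MONTHS):
        if random.random() < UPDATE_PROB:
            x += random.gauss(0.0, 1.0)   # revision size, arbitrary
        path.append(x)
    paths.append(path)

def implied_months_between_updates(paths, gap, rounded):
    """Observation gap (months) divided by the fraction whose report changes."""
    def report(v):
        return round(v) if rounded else v
    revised = sum(report(p[0]) != report(p[gap]) for p in paths)
    return gap / (revised / len(paths))

print(implied_months_between_updates(paths, gap=1, rounded=False))  # about 1.25 months
print(implied_months_between_updates(paths, gap=1, rounded=True))   # somewhat longer
print(implied_months_between_updates(paths, gap=6, rounded=True))   # several times longer
```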

So low-frequency data, and, to a lesser extent, rounded responses, result in large underestimates of revision frequency (or equivalently, overestimates of information stickiness). And if information is not really so sticky, then sticky information models may not be as good at explaining aggregate dynamics. Other classes of imperfect information models, or sticky information models combined with other classes of models, might be better.

Read the ungated version here. I will post a link to the official version when it is published.

Monday, August 21, 2017

New Argument for a Higher Inflation Target

On voxeu.org, Philippe Aghion, Antonin Bergeaud, Timo Boppart, Peter Klenow, and Huiyu Li discuss their recent work on the measurement of output and whether measurement bias can account for the measured slowdown in productivity growth. While the work is mostly relevant to discussions of the productivity slowdown and secular stagnation, I was interested in a corollary that ties it to discussions of the optimal level of the inflation target.

The authors note the high frequency of "creative destruction" in the US, which they define as when "products exit the market because they are eclipsed by a better product sold by a new producer." This presents a challenge for statistical offices trying to measure inflation:
The standard procedure in such cases is to assume that the quality-adjusted inflation rate is the same as for other items in the same category that the statistical office can follow over time, i.e. products that are not subject to creative destruction. However, creative destruction implies that the new products enter the market because they have a lower quality-adjusted price. Our study tries to quantify the bias that arises from relying on imputation to measure US productivity growth in cases of creative destruction.
They explain that this can lead to mismeasurement of TFP growth, which they quantify by examining changes in the share of incumbent products over time:
If the statistical office is correct to assume that the quality-adjusted inflation rate is the same for creatively destroyed products as for surviving incumbent products, then the market share of surviving incumbent products should stay constant over time. If instead the market share of these incumbent products shrinks systematically over time, then the surviving subset of products must have higher average inflation than creatively destroyed products. For a given elasticity of substitution between products, the more the market share shrinks for surviving products, the more the missing growth.
From 1983 to 2013, they estimate that "missing growth" averaged about 0.63% per year. This is substantial, but there is no clear time trend (i.e. there is not more missed growth in recent years), so it can't account for the measured productivity growth slowdown.
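
As a rough illustration of the market-share logic in the quoted passage, here is a back-of-the-envelope calculation. The formula below (missing growth proportional to the log decline in surviving incumbents' market share, scaled by 1/(σ-1)) is my reading of the approach rather than the authors' exact estimator, and the elasticity and shares are invented.

```python
# Back-of-the-envelope "missing growth" from a decline in surviving incumbents'
# market share. The formula is my stylization of the approach described above,
# and the elasticity and shares are invented for illustration.

import math

sigma = 4.0          # assumed elasticity of substitution between products
share_start = 1.00   # surviving incumbents' market share at the start of a year
share_end = 0.98     # their market share one year later

missing_growth = (1.0 / (sigma - 1.0)) * math.log(share_start / share_end)
print(f"Implied missing growth: {100 * missing_growth:.2f}% per year")  # about 0.67% with these numbers
```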

The authors suggest that the Fed should consider adjusting its inflation target upwards to "get closer to achieving quality-adjusted price stability." A few months ago, 22 economists including Joseph Stiglitz and Narayana Kocherlakota wrote a letter urging the Fed to consider raising its inflation target, in which they stated:
Policymakers must be willing to rigorously assess the costs and benefits of previously-accepted policy parameters in response to economic changes. One of these key parameters that should be rigorously reassessed is the very low inflation targets that have guided monetary policy in recent decades. We believe that the Fed should appoint a diverse and representative blue ribbon commission with expertise, integrity, and transparency to evaluate and expeditiously recommend a path forward on these questions.
The letter did not mention this measurement bias rationale for a higher target, but the blue ribbon commission they propose should take it into consideration.