The Nature of Stock Market Equilibrium (How This Equilibrium Is Dynamic)
(Caution: This article describes the stock market between 1968 and 1999. It does not describe the stock market after the Financial Crisis of 2008, because the economy’s intrinsic rate of growth is lower.)
The novelist G.K. Chesterton wrote:
“The real
trouble with this world of ours is
not that it is an unreasonable world, nor even
that it is a reasonable one. The commonest
kind
of trouble is that it is nearly reasonable,
but not quite.” In 2003, Clive Granger won the
Nobel Prize for econometric research that
justified
the use of real-world data in theoretical
regression models.
At the core of
economics is the idea of equilibrium, that stable state of affairs to which an
economy tends. Equilibrium makes possible the reasoning of textbooks. There is, however, a problem with this idea: in a state of equilibrium no further trading is needed, so equilibrium itself requires no markets.
In real economies,
people trade freely in markets to increase their welfare. Yet, as we have
discussed in a previous article, a
mathematical model of trading between value and momentum investors verges on chaos, failing to elicit the efficiencies that are at the heart of equilibrium economics.
Econometrics is a
branch of economics concerned with empirical studies. In 1987, Clive Granger
solved a crucial econometric problem, in the process resolving this paradox. His research
allows us to explain exactly why our cyclical modification of the Fed ratio
works.
Background
The econometric
regression model assumes ideal Gaussian (bell-shaped) statistical distributions.
Real world economic data, however, is often not Gaussian. Many economic time
series share common trends due to general economic growth. Over time, these
common trends can swamp the phenomena being studied, producing spurious
regressions with very high statistical significances but with no grounding in
fact. Some non-Gaussian economic data, however, can be usefully analyzed.
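To make the danger concrete, here is a minimal sketch in Python (using numpy and statsmodels; the two random walks are synthetic, not our market data). Regressing one independent random walk on another routinely produces an apparently significant fit:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)

    # Two independent random walks: trending I(1) series with no true relationship.
    x = np.cumsum(rng.normal(size=500))
    y = np.cumsum(rng.normal(size=500))

    # The regression often shows a high R-squared and a tiny p-value,
    # a spurious result driven entirely by the shared trending behavior.
    fit = sm.OLS(y, sm.add_constant(x)).fit()
    print(f"R-squared: {fit.rsquared:.2f}, slope p-value: {fit.pvalues[1]:.4f}")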
The following
discussion is necessarily technical. Those of our readers not acquainted with
econometrics might want to skip to the summary, where we will discuss the practical consequences of dynamic error correction in the stock market.
Analyses
The OLS regression
model assumes a Gaussian normal data distribution, but our primary data is not
Gaussian. We used the Lilliefors statistical test (Conover, 1980), which sets the model data against an ideal Gaussian normal distribution and measures the maximum vertical distance between the two. A Gaussian distribution is automatically stationary, having no serial correlation. Table I presents the resulting analyses of our model data for the period 1968-1999. There are apparent problems.
The standard
econometric technique is then to detrend the data by calculating first
differences, that is, the difference between a data point and its previous
neighbor. The problem with this technique is that it removes long-term
information, in this instance cyclical economic information, and the results are
difficult to interpret because economic theories are usually formulated in terms
of equilibrium levels rather than in rates of change. We also present the first
differenced results in the same table.
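For readers who want to reproduce the procedure, here is a minimal sketch, with a simulated series standing in for any of the Table I variables (the variable name and data are hypothetical):

    import numpy as np
    from statsmodels.stats.diagnostic import lilliefors

    def normality_report(series, label):
        # Lilliefors test: compares the sample with a fitted Gaussian and
        # measures the largest vertical distance between the two distributions.
        stat, pvalue = lilliefors(series, dist="norm")
        print(f"{label}: statistic {stat:.3f}, p-value {pvalue:.3f}")

    # 'ratio' is a hypothetical trending series, standing in for a variable
    # such as the ratio of bond yields to stock earnings yields.
    rng = np.random.default_rng(1)
    ratio = 50.0 + np.cumsum(rng.normal(size=120))

    normality_report(ratio, "Levels")
    normality_report(np.diff(ratio), "First differences")  # detrended data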
Data Analysis Results
Data |
Test Outcome |
Test Outcome After First
Differencing |
Bond
Yields Stock Earnings
Yields
|
Suggestive
Evidence Against
Normality |
No Evidence Against
Normality |
Capacity
Utilization |
Strong
Evidence Against
Normality |
“
|
Inflation* |
“
|
Suggestive
Evidence Against Normality
|
* We included this variable
for theoretical reasons. Current research suggests that inflation is related
more to Fed policy (Gauthier, 2001) and perhaps its statistical distribution
could be modeled otherwise.
In our previous
analysis, we used the primary data directly without first differencing.
Why did we get statistically significant results? Here are the statistics: the overall R² of the model explains 95% of the variation in the data; all variables are statistically significant at essentially the 0% level; and the regression almost perfectly fits the data mean. But the standard Durbin-Watson test for detecting distorting serial correlation in the residuals is inconclusive.
The Durbin-Watson statistic of our residual data is 1.49, just below the 1.50 threshold preferred for a sample of our size. Since this test was inconclusive, we ran a next-neighbor regression on the yearly residuals. There is no serial correlation in the residuals; the primary data is statistically significant. However, there are variations that are persistent in sign. In other words, investors overreact to major short-term changes (Tversky and Kahneman, 1972). Our subsequent analysis will confirm this fact, because we will be able to see exactly how a regression model operating on non-Gaussian data can error-correct, but that is getting ahead of the story.
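The residual diagnostics described above are straightforward to run. A minimal sketch, assuming resid holds the yearly residuals of the primary regression (simulated here for illustration):

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.stattools import durbin_watson

    rng = np.random.default_rng(2)
    resid = rng.normal(size=32)  # stand-in for the yearly model residuals

    # Durbin-Watson statistic: values near 2 suggest no first-order serial
    # correlation; our 1.49 fell in the test's inconclusive region.
    print(f"Durbin-Watson: {durbin_watson(resid):.2f}")

    # Next-neighbor regression of residual(t) on residual(t-1); an
    # insignificant slope supports the absence of serial correlation.
    fit = sm.OLS(resid[1:], sm.add_constant(resid[:-1])).fit()
    print(f"Lag-1 slope: {fit.params[1]:.3f}, p-value: {fit.pvalues[1]:.3f}")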
In a landmark paper
titled “Co-Integration and Error Correction: Representation, Estimation, and
Testing,” Engle and Granger (1987) proved, in greater generality than we need here,
that economic data which is stationary at first differences (as our data
substantially is) can be conventionally analyzed without differencing, provided
the residuals of the model are not serially correlated. Furthermore, if this is
so, there is a dynamic error-correcting model embedded within the data, one that,
in the words of the authors, “allows long-run components of variables to obey
equilibrium constraints while short-run components have a flexible dynamic
specification.” These are the relevant consequences of the theory:
A stationary random
variable has a stable mean and standard deviation.
A variable is
integrated I(0) if it is stationary as is. A variable is integrated I(1) if it
is stationary only after taking first differences (the difference between its
value and its previous neighbor). The primary variables of our study are I(1),
meaning they have large but undefined trend components (Granger, 1981) and large
variances.
Regressing two I(1)
variables will generally produce problematic I(1) residuals unless the variables
are cointegrated. If they are cointegrated, they will produce the statistically
random I(0) residual that the regression model requires. The I(0) residual will be produced by an error correction model that ties the two variables together and that will cause the actual values to reach their equilibrium levels at times.
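These definitions translate directly into standard tests. The sketch below checks the integration order of a variable with the augmented Dickey-Fuller test and then applies the Engle-Granger cointegration test; the two series are synthetic stand-ins sharing a common trend, not our actual data:

    import numpy as np
    from statsmodels.tsa.stattools import adfuller, coint

    rng = np.random.default_rng(3)
    trend = np.cumsum(rng.normal(size=200))             # a shared I(1) trend
    y0 = trend + rng.normal(scale=0.5, size=200)        # stand-in variable 1
    y1 = 0.8 * trend + rng.normal(scale=0.5, size=200)  # stand-in variable 2

    # ADF test: a high p-value on levels together with a low p-value on
    # first differences is the signature of an I(1) variable.
    print(f"ADF p-value, levels: {adfuller(y0)[1]:.3f}")
    print(f"ADF p-value, first differences: {adfuller(np.diff(y0))[1]:.3f}")

    # Engle-Granger test: a low p-value indicates the two I(1) series are
    # cointegrated, so their regression residual is a well-behaved I(0).
    print(f"Engle-Granger p-value: {coint(y0, y1)[1]:.4f}")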
We investigate the
simplest error correction process for our cyclical modification of the Fed
model:
Δ [ Bond Yields(t) / Stock Earnings Yields(t) ] = a + b × residual(t-1)

where: a is a constant
       b is a regression coefficient
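In code, this error correction regression is a single ordinary least squares fit. A minimal sketch, assuming ratio is the yearly Fed ratio and resid the residuals of the primary regression (both simulated stand-ins here):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    resid = rng.normal(scale=0.1, size=32)  # primary-regression residuals
    ratio = 1.0 + np.cumsum(rng.normal(scale=0.05, size=32))  # yearly Fed ratio

    # Regress this year's change in the ratio on last year's residual error.
    # A negative coefficient b means that deviations from equilibrium are
    # partially corrected in the following year.
    delta = np.diff(ratio)
    fit = sm.OLS(delta, sm.add_constant(resid[:-1])).fit()
    a, b = fit.params
    print(f"a = {a:.3f}, b = {b:.3f}, p-value of b: {fit.pvalues[1]:.3f}")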
What this model says
is that a prior year’s residual error will cause a comparable correction in the
Fed ratio. We ran this regression with our data; here are the
results:
1. The sign of the residual coefficient, b, is negative, as we expect.
2. This regression is
significant at the 3% level. There is within the data an error correction model
that keeps the willingness of investors to buy stocks related primarily to
expected industrial capacity utilization.
But:
3. The R² coefficient of determination that measures the overall fit of the yearly error model with the data is only .15. The R² measures the strength of the linear relationship, and it would be 1 if the market corrected perfectly in one
year. Large persistent deviations in the market from the
levels predicted in our model relate to actual events such as the OPEC crisis
and 9/11. The yearly error correction process operates only to a degree because
investors tend to overreact to large changes in the economy.
4. The non-cyclical Fed model tracked the market well between 1982 and 1997, years that excluded the OPEC crisis and the later Internet exuberance. We ran the cyclical error correction model for these years. The R² was a higher .43; the regression was significant at the
1% level. The results are as expected; error correction exists.
5. On the full data
set, the error correction model operates only to a degree. However, because the
mean of our cyclical Fed Model is nearly equal to the mean of the data, the slow
but cumulative effect of error correction eventually dominates over a number of
economic cycles as positive and negative events, or more precisely as investors’
reactions to these events, cancel out. With error correction cointegrating the
independent and dependent variables, the primary regression equation is the best
estimate of the equilibrium relationship.
Summary
In 1936 the British economist John Maynard Keynes, an accomplished investor, published The General Theory. The theory can be
stated in the mathematical form found in the economics texts. Keynes also argued
that, at equilibrium, investors equate the marginal returns of all
investments, whether stocks or bonds.
Nevertheless, in the
oft-quoted Chapter 12, titled “The State of Long-Term Expectation,” Keynes likened stock
selection to a beauty contest, an investor being rewarded if his choice
corresponds to the average preferences of all investors.
Keynes left this
major contradiction unresolved, other than to say that the chapter was at a
different level of abstraction. In 1987, Clive Granger published the theory of
cointegration, the theory that non-Gaussian economic variables will be tied
together by an error correction model that defines equilibrium, provided the primary regression residuals are statistically random.
We have shown that within the data there is a dynamic error correction model that ties the stock market to its long-run equilibrium value. These are the practical consequences:
1. The short-term
error correcting process will return the actual Fed ratio to its equilibrium
value, but only if there are no further events. Events, both positive and
negative, cause the actual ratio to cross its equilibrium value. Major events
prolong the error correction process because investors overreact, sometimes for
a period of several years. Furthermore, good times and bad times alternate, but not with mathematical precision.
Considered over the long term, that is, over several business cycles, the cumulative effect of error correction dominates.
2. As a
practical matter, value investors look to short-term catalysts as well – to
positive changes in the economy and company events.
3. There is
a statistical basis for tactical asset allocation, the marginal adjustment of
balanced portfolios according to relative values.
4. The idea of
dynamic error correction resolves the contradiction between freedom and
necessity. The stock market allows for short-term innovations and also drives
companies towards profitable operating efficiencies over the
long-term.
5. Simple
rules of thumb can contain remarkable logic.
We have stated our
argument in the empirical form. More concisely, Warren Buffett said, “…in the
short run, (the market is) a voting machine; in the long run, it’s a weighing
machine.”
__
Answering our
readers’ questions, here are some additional observations:
The practical
application of this quantitative analysis requires judgment. These are some of
the shorter-term considerations of our analysis:
1) Under substantially
balanced macroeconomic conditions,
error correction will cause the stock market to trade around its equilibrium
value.
2) Under most macroeconomic conditions, more
than a quantitative formula is necessary to describe short-term stock market
behavior. The stock market is also moved by catalytic economic events; but the
calculated equilibrium is still important because the stock market will cross this estimate at some time in the future.
3) Excessive stock markets will overcorrect; actual events also matter.
Considering the long-term, and the importance of appropriate institutions, the formula describes an open economy and universe. Future equilibrium stock market prices depend both upon realized earnings and future interest rates. The future depends upon what you do today. The future also depends upon the patterned facts and circumstances at the time. What are the consequences of this?
__
Error correction makes unnecessary the logical distinction between Gaussian risk, a state where the probabilities can be quantified, and uncertainty, where the probabilities are unknown.
Equilibrium economics
is assumed to be always true. In fact, equilibrium economics, and the associated
math, is true only in a formal sense:
Economic theory
based on utilitarian premises, which is to say all “economic” theory in the
proper sense of the word, is purely abstract and formal… . Any question as to
what resources, technology, etc., are met with at a given time and place must be
answered in terms of institutional (our emphasis) history, since all such
things, in common with the impersonal system of market relations itself, are
obviously culture-history facts and products… The first step toward a practical comprehension of the social system is to isolate and follow out to their logical conclusion...(the) fundamental tendencies discoverable in it.
Frank H. Knight
1921
We have shown that a cyclical modification of the Fed ratio is the best long-run estimate of the relationship between stock and bond prices.
__
Using a cointegration
methodology, Dupuis and Tessier
(2003) find that changes in inflation-adjusted dividends (earnings) account
for 76% of the changes in long-term U.S. stock prices and that changes in
long-term interest rates account for 24%. This is a significant empirical result
for investors. The authors, however, do not use the main regression to directly
calculate the level of the stock market for methodological reasons. They
directly analyze interest rates and stock prices, trending variables whose causes must be further explained if possible (Wickens, 1996).
When we analyze the ratio between bond yields and stock earnings yields, the common long-term trends simply cancel out. We can then use the regression to estimate the long-term
equilibrium level of the stock market. The Fed’s countercyclical monetary policy
is likely the main source of statistical error correction.
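A toy calculation illustrates the point. In the sketch below, two synthetic yield series share a common growth trend; the trend divides out of their ratio, which the augmented Dickey-Fuller test then treats as stationary:

    import numpy as np
    from statsmodels.tsa.stattools import adfuller

    rng = np.random.default_rng(5)
    # A common stochastic growth trend (a random walk in logs).
    trend = np.exp(np.cumsum(rng.normal(0.01, 0.02, size=200)))

    bond_yield = trend * np.exp(rng.normal(scale=0.05, size=200))
    earnings_yield = trend * np.exp(rng.normal(scale=0.05, size=200))

    # The shared trend cancels in the ratio, leaving a stationary series.
    ratio = bond_yield / earnings_yield
    print(f"ADF p-value, bond yield: {adfuller(bond_yield)[1]:.3f}")
    print(f"ADF p-value, ratio: {adfuller(ratio)[1]:.3f}")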