From Chapter 10: Raddr 3, Gummy 2, Bogle 0?

Research on Safe Withdrawal Rates

Moderator: hocus2004

JWR1945
***** Legend
Posts: 1697
Joined: Tue Nov 26, 2002 3:59 am
Location: Crestview, Florida

From Chapter 10: Raddr 3, Gummy 2, Bogle 0?

Post by JWR1945 »

Chapter 10 was John Bogle's discussion On Reversion to the Mean. I had looked forward to reading it and, unfortunately, built up my expectations much too high. The reason that I put the question mark after Bogle's score is that he did draw attention to the Fundamental Return of stocks.

You can visit Gummy's site or look at his CD to learn about the details of the Dividend Discount Model and its adaptations, such as the Gordon equation.

Assume that an investment produces dividends that grow at a steady rate and that the price that people are willing to pay for the (current) value of that income stream remains steady.

Then, approximately, the price paid for a particular investment will grow at a rate = the initial dividend yield + the growth rate of the dividends.

If the payout ratio of dividends remains constant, the growth rates of dividends and earnings are exactly the same. If the payout ratio varies slowly, the growth rate of dividends is approximately, but not exactly, the same as the growth rate of earnings. In that case, a reduction in the payout ratio will increase retained earnings and will tend (or, at least, should tend) to result in faster growth of earnings. The net effect, ideally, is that the dividend growth rate increases (at least partially), which offsets (to some extent) the effect of the reduction in the payout ratio. [I know of no convincing set of assumptions that would cause everything to end up being exactly the same mathematically.]

Because (real) earnings growth rate (with earnings averaged over 5 to 10 years) has been well behaved while the (real) dividend growth rate has been erratic, the equation for price growth is frequently written as:

The price paid for a particular investment will grow at a rate = the initial dividend yield + the growth rate of the earnings.

John Bogle refers to this as the Fundamental Return of stocks. He adds a speculative term because people are willing to pay different prices at different times for the same stream of income. He approximates the effect by using the price to earnings ratio (P/E) at the beginning and at the end of a period.

In its final form, John Bogle presents his mathematical model of the total return from stocks as:

The total return of stocks = the initial dividend yield + the growth rate of the earnings + the speculative return = the Fundamental Return + the Speculative Return.
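As a sketch of the arithmetic (all the input numbers below are hypothetical, and the additive combination of terms is the usual small-rate approximation, not an exact identity):

```python
# A minimal sketch of Bogle's decomposition. The numbers are made up,
# chosen only to illustrate the arithmetic.

def fundamental_return(dividend_yield, earnings_growth):
    """Initial dividend yield plus the growth rate of earnings."""
    return dividend_yield + earnings_growth

def speculative_return(pe_start, pe_end, years):
    """Annualized return contributed by the change in P/E over the period."""
    return (pe_end / pe_start) ** (1.0 / years) - 1.0

def total_return(dividend_yield, earnings_growth, pe_start, pe_end, years):
    # Bogle adds the terms; strictly the combination is multiplicative,
    # but the additive form is the standard approximation for small rates.
    return (fundamental_return(dividend_yield, earnings_growth)
            + speculative_return(pe_start, pe_end, years))

# Example: 3% initial yield, 5% earnings growth, P/E drifting
# from 15 to 20 over a decade.
r = total_return(0.03, 0.05, 15.0, 20.0, 10)
print(round(r, 4))
```

Over a long enough period the speculative term washes out (the P/E ends roughly where it started), leaving only the Fundamental Return.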

What constitutes a fair price for an income stream is what people are able and willing to pay. What constitutes the number to use over the long-term is what people have been able and willing to pay for it over many years. That is, the average long-term Speculative Return should be zero. If the prices that people have been able and willing to pay for stocks had been dominated by a long-term trend instead of fluctuating, the model would have been modified to take that into account. [Demographics is a factor that probably influences the Speculative Return that has not been treated separately. That may change in the future.]

Mathematically, the key set of assumptions are that earnings growth has been reasonably steady and that the Speculative Return has been dominated by fluctuations instead of a long-term trend.

A better way to state this, consistent with what Gummy has mentioned, is that the total return of stocks = the Fundamental Return + a human factor that varies with time.

The Effect of Medium-Term Cycles

Many of the effects that John Bogle describes show nothing more than the presence of long-term cycles that are best explained in terms of human perceptions. If fluctuations dominate trends so that something similar to a mean exists, the comparisons that John Bogle makes cannot help but show a tendency to revert to a mean almost by definition. As more data are collected, they define a new mean.

The first set of comparisons that John Bogle made was in terms of relative fund performance ranked by quartiles from one decade to the next. Consider this alternative to his Reversion to the Mean explanation. Imagine that each fund had its own average (or mean) performance throughout the time period and that superimposed upon each was its own, large fluctuating component. Those funds that were in the top quartile in the first decade should be among the better funds that also were near their peak upward fluctuations. In the following decade, their fluctuating components would not be as favorable and, in some cases, would be unfavorable. The performance of the group during the second decade should be reduced, not because the steady component had diminished, but because circumstances were not all happening in each group member's favor at the same time. The initial separation into quartiles is what caused the observed result. The key requirement is that a large, possibly dominating, fluctuating component is present.
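This alternative explanation can be illustrated with a small Monte Carlo sketch (all distributions and parameters below are hypothetical, chosen only to exhibit the selection effect):

```python
import numpy as np

rng = np.random.default_rng(0)
n_funds = 4000

# Each fund keeps the same true mean in both decades; only the
# fluctuating component changes from decade to decade.
true_means = rng.normal(0.07, 0.01, n_funds)
decade1 = true_means + rng.normal(0.0, 0.04, n_funds)
decade2 = true_means + rng.normal(0.0, 0.04, n_funds)

# Funds that landed in the top quartile of decade 1...
top = decade1 >= np.quantile(decade1, 0.75)

# ...fall back toward the overall average in decade 2, even though no
# fund's true mean changed: the sorting selected lucky fluctuations.
print(decade1[top].mean())   # well above the grand mean
print(decade2[top].mean())   # much closer to it
```

No fund got worse; the apparent "reversion" comes entirely from sorting on a noisy measurement.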

It would be one thing if the top quartile and the bottom quartile consistently changed places. They did not. If they had, a stronger assertion would have made sense: that the means were all the same and that all of the differences were only momentary fluctuations.

One type of behavior shows up consistently. Poor performance often persists. Most frequently, this is related to fees. In some cases, however, it is the result of a special kind of skill: consistently making bad investment choices.

Medium-Term Cycles

There can be many explanations as to why we should expect to see medium-term cycles.

One factor is the amount of time it takes to discern that a particular investment approach is doing better than alternatives. There is a lot of apparently random year-to-year fluctuation in the market in general. Unless an approach is spectacularly successful, which is unlikely, it takes several years at a minimum to show convincingly that there is improvement.

Another factor is that any advantage will diminish as more and more people become aware of it. It can take a long time and it may not disappear completely. With time, however, people will look instead to other factors and a relative advantage can reemerge.

I will mention here my concern about investment style differences. Many people are more willing than I to assume that they will persist.

The most important factor, I imagine, is the common learning experience of each generation. Few people who lived through the Great Depression were ever entirely comfortable in owning stocks. Few people who lived through the stagflation of the 1970s feel comfortable holding anything without an inflation hedge, such as low- or medium-interest, fixed-rate long-term bonds. Many who started in the late 1980s and in the 1990s are as emotionally attached to equity investments as survivors of the Great Depression were attached to bonds.

Raddr 3, Gummy 2

Raddr's precise definition of Reversion to the Mean is a new finding and it is meaningful. The volatility of the stock market has decreased more rapidly than if it were purely random. That is, the variance has fallen faster with time than 1/N, where N is the number of years (and the standard deviation has fallen faster than 1/[the square root of N] ).
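One way to see what "variance falling faster than 1/N" means is to simulate it (a sketch only, not Raddr's calculation; the AR(1) model and its parameters are hypothetical stand-ins for mean-reverting returns):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical yearly returns with mild negative autocorrelation,
# built as an AR(1) process: r_t = phi * r_{t-1} + noise, phi < 0.
phi, years = -0.3, 200000
noise = rng.normal(0.0, 0.15, years)
r = np.empty(years)
r[0] = noise[0]
for t in range(1, years):
    r[t] = phi * r[t - 1] + noise[t]

def variance_of_n_year_mean(returns, n):
    """Variance of the average return over non-overlapping n-year windows."""
    m = len(returns) // n
    blocks = returns[:m * n].reshape(m, n).mean(axis=1)
    return blocks.var()

n = 10
pure_random = r.var() / n          # what 1/N scaling would predict
observed = variance_of_n_year_mean(r, n)
print(observed < pure_random)      # True: the variance falls faster than 1/N
```

For an independent (purely random) series, `observed` would match `pure_random`; the negative autocorrelation pulls it below.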

Gummy has discussed the more traditional Reversion to the Mean issue in quite a bit of detail. He has shown that Reversion to the Mean makes little sense unless you treat it as a separate human factor, not as a mathematical theorem.

Have fun.

John R.
Mike
*** Veteran
Posts: 278
Joined: Sun Jul 06, 2003 4:00 am

Post by Mike »

He has shown that Reversion to the Mean makes little sense unless you treat it as a separate human factor, not as a mathematical theorem.
This would seem to imply that switching models based upon valuation should incorporate crowd psychology into the mix for better results. Evaluating the human element is presently more of an art than a science.
JWR1945
***** Legend
Posts: 1697
Joined: Tue Nov 26, 2002 3:59 am
Location: Crestview, Florida

Post by JWR1945 »

It is very instructive to plot P/E10 versus time. It is a slowly varying function and there is a wealth of history behind those numbers. It gives you a picture of how people have viewed stock market investing in the past.

Since it varies slowly, you can make short-term projections, not of what definitely will happen but of what is reasonably likely.

The increase of demand for investments of any kind from the Baby Boomers should elevate the fair value of P/E10, but not to bubble levels.
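For reference, P/E10 is the current price divided by the average of the previous ten years of (real) earnings. A minimal sketch of the calculation, with made-up numbers:

```python
import numpy as np

def pe10(prices, earnings):
    """P/E10 for each year from the 10th on: price divided by the
    average of the previous ten years of (real) earnings."""
    prices = np.asarray(prices, dtype=float)
    earnings = np.asarray(earnings, dtype=float)
    out = []
    for t in range(10, len(prices) + 1):
        out.append(prices[t - 1] / earnings[t - 10:t].mean())
    return np.array(out)

# Hypothetical data: constant earnings of 5 and a price of 100
# give P/E10 = 100 / 5 = 20 in every year.
print(pe10([100.0] * 12, [5.0] * 12))
```

Averaging earnings over ten years is what makes the series vary slowly: a single bad earnings year moves the denominator only a little.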

Have fun.

John R.
Mike
*** Veteran
Posts: 278
Joined: Sun Jul 06, 2003 4:00 am

Post by Mike »

The increase of demand for investments of any kind from the Baby Boomers should elevate the fair value of P/E10, but not to bubble levels.
The size of the boomer cohort increases this effect. The 50 to 54 age group increased 55% from 1990 to 2000, and the 45 to 49 age group increased 45% during the same period. Contrast this with the actual decline in younger cohorts of 6 and 9 per cent:

http://articles.findarticles.com/p/arti ... i_79627442
bpp
** Regular
Posts: 98
Joined: Tue Nov 26, 2002 6:46 am
Location: Japan

Post by bpp »

Hi John R.,
Raddr's precise definition of Reversion to the Mean is a new finding and it is meaningful. The volatility of the stock market has decreased more rapidly than if it were purely random. That is, the variance has fallen faster with time than 1/N, where N is the number of years (and the standard deviation has fallen faster than 1/[the square root of N] ).
Here's a random, off-the-cuff and not-at-all-thought-out idea: could some of this be the Central Limit Theorem at work? Short-term returns are clearly non-Gaussian, with fat tails:

[Charts from http://www.efficientfrontier.com/ef/104/iid.htm, showing the fat-tailed distribution of short-term returns, did not survive the transfer; see that article for the figures.]

Long-term returns are just the product (sum in logarithms) of a lot of short-term returns, so longer-term returns should approach a log-normal distribution more closely than shorter-term ones do. This should shrink the variance faster than 1/N.
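The distributional part of this intuition can be checked numerically (a sketch only; the Student-t distribution and all parameters are hypothetical stand-ins for fat-tailed short-term returns, and this bears on the shape of the distribution, not on how the variance scales):

```python
import numpy as np

rng = np.random.default_rng(4)

def excess_kurtosis(x):
    """Sample excess kurtosis: 0 for a Gaussian, positive for fat tails."""
    z = (x - x.mean()) / x.std()
    return (z ** 4).mean() - 3.0

# Fat-tailed monthly log returns (Student-t with 10 degrees of freedom).
monthly = rng.standard_t(10, (200000, 12)) * 0.04

# Yearly log returns are sums of 12 monthly log returns.
yearly = monthly.sum(axis=1)

# Summing pushes the distribution toward the Gaussian (Central Limit
# Theorem), so the yearly kurtosis is much smaller than the monthly one.
print(excess_kurtosis(monthly.ravel()))   # clearly positive (fat tails)
print(excess_kurtosis(yearly))            # much closer to zero
```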

Anyway, just throwing this out there. Have to think about this later when I have some time for thinking.

Bpp
JWR1945
***** Legend
Posts: 1697
Joined: Tue Nov 26, 2002 3:59 am
Location: Crestview, Florida

Post by JWR1945 »

Thank you for your post, Bpp.

The probability distribution is not directly related to the behavior of its variance with time. If all samples are independent, the variance of the calculated mean is (1/N)*(the variance of each sample) regardless of the details of the probability distribution.

The key assumption is that each sample is independent.
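A quick numerical illustration of this point (a sketch; the exponential distribution is an arbitrary non-Gaussian choice):

```python
import numpy as np

rng = np.random.default_rng(2)

# Draw from a decidedly non-Gaussian, fat-tailed distribution, then
# check that the variance of the mean of N independent samples is
# still close to (single-sample variance) / N.
N, trials = 25, 200000
samples = rng.exponential(1.0, (trials, N))   # variance of one draw = 1
means = samples.mean(axis=1)

print(means.var())   # close to 1/25 = 0.04, despite the skewed distribution
```

The 1/N scaling comes from independence alone; the shape of the distribution never enters.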

There may be some minor technical errors in the following discussion.

The expectation function is the integral of the product of a random variable x and its probability density function p(x) or, when values are discrete, the sum of the products of the random variable's possible outcomes and its probability of occurrence.

Because we are talking about integrals or sums, the expectation function is linear. That is, E(x+y) = E(x) + E(y) and E(c*x) = c*E(x) where x and y are random variables and c is a constant, not something random.

The expectation of a random variable is its (true) mean. The expectation of the error term, which is how much an actual outcome differs from its mean, is zero. The variance is the expectation of the square of the error term.

We calculate a sample mean by averaging the values of several outcomes. We can write the outcomes in this form: x1 = mean + error1 and x2 = mean + error2 and so forth until xN = mean + errorN, when there are N samples. In our case, N is also the number of years of data.

When we calculate the variance, we end up with a sum of the squares of individual error terms plus twice the sum of all cross products. For example, after the first two samples, the variance of our calculated mean = E[({[x1+x2]/2} - mean)^2] = E[({error1}/2 + {error2}/2)^2] = (1/4)*E[(error1)^2] + (1/2)*E[(error1)*(error2)] + (1/4)*E[(error2)^2] = (1/4)*variance of individual term x1 + (1/2)*E[(error1)*(error2)] + (1/4)*variance of individual term x2.

When we assume that the variance of each sample is the same, variance of individual term x1 = variance of individual term x2 = variance of a single sample, and the variance of the calculated mean = (1/2)*variance of a single sample + (1/2)*E[(error1)*(error2)].

When we assume that each sample is independent, we are assuming that the error terms are independent and E[(error1)*(error2)] = 0.

When we take the third sample, the calculated mean is ([x1+x2+x3]/3). Proceeding as above, there will be three squared terms: (1/9)*E[(error1)^2] and (1/9)*E[(error2)^2] and (1/9)*E[(error3)^2]. Since we assume that the variance of each is the same, this totals (1/3)*variance of a single sample.

There will be two cross terms for each pair of samples: E[(error1)*(error2)] and E[(error2)*(error3)] and E[(error1)*(error3)]. When we assume that each sample is independent, we are assuming that each of these terms equals zero.

These results can be extended to any number of years N.

If the variance falls faster than 1/N, it means that the sum of the cross terms is negative. Loosely stated, this tells us: if an individual sample is larger than the mean, the remaining samples (when taken together) will tend to be smaller than the mean. If the individual sample is smaller than the mean, the remaining samples (when taken together) will tend to be larger than the mean.
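The two-sample case can be checked numerically (a sketch; the covariance values are arbitrary): with a negative cross term, the variance of the pair mean falls below the independent-sample value.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two samples, each with variance 1, with a negative cross term.
cov = np.array([[1.0, -0.5],
                [-0.5, 1.0]])
x = rng.multivariate_normal([0.0, 0.0], cov, 500000)
pair_mean = x.mean(axis=1)

# Variance of the mean = (1/4)*var1 + (1/2)*E[(error1)*(error2)] + (1/4)*var2
predicted = 0.25 * 1.0 + 0.5 * (-0.5) + 0.25 * 1.0   # = 0.25

print(pair_mean.var())   # close to 0.25, below the independent value 0.5
```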

Have fun.

John R.
bpp
** Regular
Posts: 98
Joined: Tue Nov 26, 2002 6:46 am
Location: Japan

Post by bpp »

Hi John R.,

Of course, you're right. As you show, the 1/N behavior is not dependent on the shape of the distribution.:oops:

Bpp
JWR1945
***** Legend
Posts: 1697
Joined: Tue Nov 26, 2002 3:59 am
Location: Crestview, Florida

Post by JWR1945 »

Do not feel embarrassed, Bpp. I have made worse errors myself.

Here is an example of my making a similar error in two threads.

Mean reversion dated Thu, Nov 28, 2002 at 6:28 pm CST.
http://nofeeboards.com/boards/viewtopic.php?t=51
http://nofeeboards.com/boards/viewtopic.php?p=280#p280
Mean Reversion Equivalence dated Fri, Jan 31, 2003 at 2:26 pm CST.
http://nofeeboards.com/boards/viewtopic.php?t=424

Mean reversion implies that the autocorrelation function becomes negative at long time lags. It does not imply that there is a periodic component in the data. A periodic component causes the autocorrelation to alternate between positive and negative values as the time lag varies. It cannot cause it to stay either positive or negative.
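The alternating-sign behavior of a periodic component is easy to verify numerically (a sketch; the cosine series and its period are arbitrary):

```python
import numpy as np

def autocorr(x, lag):
    """Sample autocorrelation of x at the given lag."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

t = np.arange(10000)
periodic = np.cos(2.0 * np.pi * t / 40.0)   # period of 40 steps

# The autocorrelation of a periodic series is itself periodic:
# negative at a half-period lag, positive again at a full period.
print(autocorr(periodic, 20))   # near -1
print(autocorr(periodic, 40))   # near +1
```

A genuinely mean-reverting series, by contrast, can have an autocorrelation that goes negative at long lags and stays there without ever swinging back positive.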

This is technical enough for me to hide behind. But it is the kind of thing that someone with my background should have spotted immediately. One of the first things that you learn when studying the autocorrelation function is how periodic components behave.

Have fun.

John R.

P.S. Your graph is important and it is reasonable that you should routinely draw attention to it. Failure to consider the fat tails almost brought down our financial system. Read about what happened in the sad history of Long-Term Capital Management.
JWR1945
***** Legend
Posts: 1697
Joined: Tue Nov 26, 2002 3:59 am
Location: Crestview, Florida

Post by JWR1945 »

To be more precise, the autocorrelation function can go negative and then trend toward zero. It does not have to persist.

Periodic components must always persist. If a periodic component were to cause mean reversion (by going negative at times), it would also have to cause divergence from the mean when it becomes positive.

[Do not be concerned about an offset to a periodic function. The bias portion of a periodic function (i.e., the component at a frequency of zero) shows up only when the time lag of the autocorrelation function is zero, not at any other time lag.]

Have fun.

John R.