
Our Standard Analysis Procedures

Posted: Sun Mar 20, 2005 12:47 pm
by JWR1945
Our Standard Analysis Procedures

We have been applying these procedures ever since we discovered the tight relationship between Historical Surviving Withdrawal Rates and the percentage earnings yield of the S&P 500, 100E10/P. We have had considerable success.

1) The first step is to collect historical data.
2) The next step is to plot the historical data versus 100E10/P. We use Excel calculations to determine the best linear curve fit (i.e., regression); see the sketch after this list.
3) We determine confidence limits. Most of the time, we use eyeball estimates as a convenience. We have the appropriate formulas that we can use when we need better precision.
4) We refer to the linear curve fit itself as the Calculated Rate. We identify the lower confidence limit as the Safe Withdrawal Rate. We identify the upper confidence limit as the High Risk Rate.
5) We provide a baseline (or baselines) for comparisons. These days, we make sure to include TIPS at a 2% interest rate. We often include a baseline consisting only of 2% TIPS.
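
To make steps 2 through 4 concrete, here is a minimal sketch in Python. The data values are hypothetical placeholders, and the 1.645 multiplier is the coarse Gaussian approximation described later; Excel's linear trend line gives the same slope and intercept.

import numpy as np

# Hypothetical placeholder data: one point per start year.
earnings_yield = np.array([5.2, 6.1, 7.4, 8.8, 10.3, 11.9])  # 100E10/P, percent
hswr = np.array([4.1, 4.6, 5.3, 5.9, 6.8, 7.2])  # Historical Surviving Withdrawal Rates, percent

# Step 2: best linear curve fit (least squares), as Excel's trend line computes it.
slope, intercept = np.polyfit(earnings_yield, hswr, 1)
calculated_rate = slope * earnings_yield + intercept

# Step 3: a stand-in for the eyeball confidence limits, taken from the residual
# scatter about the fitted line (ddof=2 for the two fitted parameters).
residual_std = np.std(hswr - calculated_rate, ddof=2)
half_width = 1.645 * residual_std  # roughly a 90% band under a Gaussian approximation

# Step 4: name the three rates.
safe_withdrawal_rate = calculated_rate - half_width
high_risk_rate = calculated_rate + half_width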

Collecting Historical Data

We collect historical sequence information using a variety of criteria and withdrawal algorithms, following a procedure similar to that used with Historical Surviving Withdrawal Rates. We identify the highest withdrawal rate that satisfies the criteria to a precision of 0.1%: increasing the withdrawal rate by a further 0.1% causes the historical sequence to fail the criteria.

One of our criteria is the Half Failure Surviving Withdrawal Rate over a period of 30 years. The portfolio's balance is allowed to fall to one-half of its initial value within the 30-year period, but it is not allowed to fall below that level.

Another example of our criteria is to maintain a Constant Terminal Value over a period of 30 years. The portfolio's balance at year 30 must be at least as high as the initial balance at this rate. But the portfolio's balance at year 30 must be less than its initial level when we increase the withdrawal rate by 0.1%.

Quite often, we have simply collected the final balances at specific withdrawal rates. This makes the analysis especially easy to perform. It can be a great time saver for making an initial survey.

With rare exceptions, we adjust withdrawals to match inflation in accordance with the CPI. Except when noted, we use an expense ratio of 0.20%.
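
For readers who want to reproduce the collection step, here is a minimal sketch, assuming the inflation-adjusted annual returns of one historical sequence are already in hand. The function names are our own, and the survival test shown is the plain one; the Half Failure and Constant Terminal Value criteria substitute a different test.

def survives(real_returns, withdrawal_rate, expense_ratio=0.002):
    # real_returns are inflation-adjusted, so a constant real withdrawal models
    # withdrawals that match the CPI; 0.002 is the 0.20% expense ratio.
    balance = 1.0
    for r in real_returns:
        balance = balance * (1.0 + r) * (1.0 - expense_ratio) - withdrawal_rate
        if balance < 0.0:
            return False
    return True

def highest_surviving_rate(real_returns, step=0.001):
    # Highest rate, in 0.1% increments, at which the sequence still survives.
    rate = 0.0
    while survives(real_returns, rate + step):
        rate += step
    return rate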

We almost always look at a time frame of 30 years. The reason has to do with data analysis, not with 30 years being the most relevant period for retirees. It is relevant for many retirees, but not all, and for very few early retirees, if any. Rather, it has to do with the stock market.

When we look at Historical Surviving Withdrawal Rates, we find that good times and bad times appear in intervals of 30 to 35+ years. Here are some key dates (rounded) when Historical Surviving Withdrawal Rates were low: 1930, 1965-1970 and 2000. If we extend our period to 40 years, the data show some bimodal statistical effects. That is, a historical sequence can have two bad times and one good time or it can have two good times and one bad time. Bimodal effects make analysis difficult.

Another reason for using 30 years has to do with using complete historical sequences. We have found that sequences that began in 1965-1970 are almost always worst-case sequences. If we extend our time period to 40 years, all of these become partial sequences. They are influenced by dummy data values for 2003-2010.

Plotting the Historical Data versus 100E10/P

In retrospect, it seems obvious that Historical Surviving Withdrawal Rates should follow the market's earnings yield closely. After all, the Dividend Discount Model, the Gordon Equation and John Bogle's preferred variation of them all point to a linear relationship between future stock market returns and dividend yields. Take note that dividends come out of earnings. Then, add Professor Shiller's contribution of P/E10, which he based on Ben Graham's recommendation to average several years of earnings when evaluating companies. Finally, add the observation that overall earnings have grown at a remarkably steady pace when smoothed by averaging ten years of data. It all seems simple now, looking backward. It was not simple looking forward.
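
For reference, the Gordon Equation in its usual form ties the expected return linearly to the yield term: r ≈ D/P + g, where D/P is the dividend yield and g is the growth rate of dividends. Because dividends come out of earnings, and because ten-year averaging makes earnings growth steady, the smoothed earnings yield 100E10/P can stand in for the yield term.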

The greatest difficulty was the anomalous behavior of Historical Surviving Withdrawal Rates with P/E10. There are numerous reasons why such an anomaly might exist. Gummy's recent research sheds additional light. He calculated correlation coefficients for linear curve fits (i.e., regression equations) versus start years of the historical sequences. There is a notch centered at 1881. The correlation recovers and is strong once again after six years.

Gummy's GARCH thread dated Sat Jan 15, 2005.
http://nofeeboards.com/boards/viewtopic.php?t=3264
http://nofeeboards.com/boards/viewtopic ... 209#p26209

Gummy's plot involved Half Failure Withdrawal Rates, the highest withdrawal rate that you can use (in increments of 0.1%) while assuring that all portfolio balances within a historical sequence stay above one-half of the initial balance. Half Failure Withdrawal Rates, Historical Surviving Withdrawal Rates and several additional measures all behave in a similar manner.

I standardized on collecting data for the start years of 1921-1980 before Gummy's discovery. I standardized on using the 1923-1980 data for linear curve fits. The years 1921 and 1922 have exceptionally high earnings yields that take the plot into saturation. That is, their Historical Surviving Withdrawal Rates no longer increase as much with 100E10/P as at smaller levels. (They still increase, just not by as much.)

Confidence Limits

We can calculate confidence limits by making a series of approximations and taking advantage of an analog of a standard statistical problem. The key issue, however, is determining the appropriate number of degrees of freedom when applying Student's t-test. To what extent does the overlap of sequences distort our ability to determine the underlying relationship?

It turns out not to be nearly so bad as it might appear. Every sequence has at least one value of P/E10 that is unique when compared to its nearest neighbor. The randomness in stock prices, even from a single point, overwhelms the other factors. Comparing two sequences always includes at least two unique points with the full randomness of the year-to-year change in stock prices. Comparing every other sequence is more than sufficient to produce unique results. The effective number of degrees of freedom is between what we would use if everything were entirely random and one-half of that number.

The process of determining the number of degrees of freedom involves a couple of iterations. The first step is to make a curve fit and to determine the confidence limits that would apply if the year-to-year variations equaled the total range of the Historical Surviving Withdrawal Rates. The next step is to revise these confidence limits based upon how much year-to-year randomness is actually present. This establishes a revised set of confidence limits, which are accurate enough for us to use.
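
Here is a minimal sketch of that two-pass idea, assuming scipy is available; the dof_fraction parameter is our own name for the judgment call just described, somewhere between 1.0 (every point independent) and 0.5 (half the degrees of freedom).

import numpy as np
from scipy import stats

def confidence_band(x, y, coverage=0.90, dof_fraction=0.75):
    # First pass: fit, and take the spread from the total scatter about the line.
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    spread = np.std(residuals, ddof=2)
    # Second pass: revise the degrees of freedom for overlapping sequences.
    dof = max(int(dof_fraction * (len(x) - 2)), 1)
    t = stats.t.ppf(0.5 + coverage / 2.0, dof)
    return slope, intercept, t * spread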

We have been able to reveal the effect of overlapping sequences in our investigations of A New Tool. By looking at the tightest curve fit (with R-squared above 90%), which occurs at the 14-year point for 30-year survival periods, we can see the small variations of individual sequences relative to their closest neighbors.

Based on this detailed analysis, it is clear that eyeball estimates are reasonably close to accurately calculated confidence limits.

There are times when I limit confidence limits to data at earnings yields below 10% (i.e., P/E10 above 10). I do this most often when making eyeball estimates.

The spread of the data increases with earnings yield, which makes a statistical characterization difficult. Using earnings yield results in linear curves, but with standard deviations that grow along the curve. Using P/E10 produces curves that are distinctly non-linear, but with constant, well-behaved standard deviations.

The Calculated Rate, the Safe Withdrawal Rate and High Risk Rate

I use 90% confidence levels centered about the regression equation (which is the straight-line curve fit and the Calculated Rate), based on the standard approximation that uses a Gaussian distribution. [They actually turn out to be between 85% and 90%.] This level is sufficiently coarse to make it clear that we do not claim exceedingly high levels of accuracy and precision. It is reasonable. The one-sided probabilities of error are 5% at the Safe Withdrawal Rate and 5% at the High Risk Rate.
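
Reusing the hypothetical names from the earlier sketches, the three rates at a given earnings yield are read off the band like this:

# Hypothetical usage at, say, 100E10/P = 6.5.
slope, intercept, half_width = confidence_band(earnings_yield, hswr)
calculated_rate = slope * 6.5 + intercept
safe_withdrawal_rate = calculated_rate - half_width  # 5% one-sided probability of error
high_risk_rate = calculated_rate + half_width  # 5% one-sided probability of error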

Baselines

It is a good idea to include baselines in any investigation.

The most basic baseline related to Safe Withdrawal Rates is what an inflation-matched cash equivalent would do. It allows you to withdraw 1/N times your initial balance each year for exactly N years. This establishes a performance floor. For example, if you cannot depend upon a 3.33% withdrawal rate over 30 years or a 2.50% withdrawal rate over 40 years, what does your strategy have to offer? It may offer a reasonable probability of a much better result. Just be aware of the alternatives.

Most of our data used commercial paper as part of the baseline. Although highly realistic, this turns out to be a bad choice: commercial paper has acted quite differently in three different time frames.

Back in the late 1800s, overall prices declined as a result of increased productivity. The result, which is technically different from deflation, was that commercial paper by itself safely produced 6% withdrawal rates (when adjusted to match inflation). During the Great Depression, commercial paper was a disaster. Those are the years that have made commercial paper look like a horrible choice. More recently, during the 1960s and 1970s, commercial paper behaved similarly to TIPS at a 1% interest rate.

Commercial paper is always of interest. There were a few instances in the 1960s in which a portfolio consisting entirely of commercial paper would have done better than a portfolio that included stocks. These were unusual occurrences, to be sure, but they are important. There have been times when owning stocks has decreased portfolio survivability. As you might expect, those were times when stock prices were very high.

I am now using TIPS with a 2% interest rate in my baseline. I am of the opinion, but without proof, that it will always be possible to construct an equivalent to 2% TIPS. In contrast to commercial paper, TIPS perform consistently. Surprisingly, a portfolio consisting entirely of 2% TIPS makes a formidable baseline, but one that can often be breached. For example, 2% TIPS have a 35-year Safe Withdrawal Rate of 4.00% that is truly safe (not just 95% safe). Portfolios of 50% and 80% stocks plus commercial paper have lasted only 30 years.
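
The 4.00% figure is consistent with the standard level-payout annuity formula; here is a quick check, assuming level annual real withdrawals that exhaust the balance at exactly year 35.

# Level real payout that exhausts a 2% real-yield portfolio in N years.
r, N = 0.02, 35
payout = r / (1.0 - (1.0 + r) ** -N)
print(f"{payout:.2%}")  # about 4.00%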

Have fun.

John R.

Posted: Sun Mar 20, 2005 2:39 pm
by peteyperson
Just a quick note.

I think TIPS yields are even lower than 2% today. It depends on the term you buy, but if you plan to ladder TIPS so that bonds come due each year automatically, only the 5- and 10-year bonds would work to remove price volatility risk. These have the lowest yields, 1.25% and 1.76% today.

As to being 100% safe, I disagree. 100% in US TIPS would carry country and currency risk. One could argue that runaway inflation risk is handled with TIPS as long as the gov't honestly states the CPI, though there will be considerable temptation to misreport it to reduce debt finance costs. One might look to balance this by owning foreign TIPS, but the tax implications are worse because you do not get a tax break on the capital uplift from inflation (as we do in the UK - not sure if you do there). With the questionable safety of the US debt position, I would not put 100% in US TIPS and call it 100% safe instead of 95%; I would give it a far lower safety percentage due to the other problems. Just saying that the US can print more money, cause inflation and it won't matter fails to address the reality, I think. The problem, and thus the risks, are more fundamental than that.

Petey

Posted: Sun Mar 20, 2005 6:30 pm
by unclemick
Shades of the Brit engineers from my working days - the Global view vs the myopic American (aka me).

Anywise - this needs to be posted as a sticky or somewhere handy - so I can easily come back from time to time to refresh my memory.

Good post.

Posted: Sun Mar 20, 2005 8:40 pm
by Mike
...as the gov't honestly states CPI...
Real life retiree inflation appears to be much higher than the CPI for anyone who pays for their own medical insurance.

Posted: Mon Mar 21, 2005 5:09 am
by JWR1945
This is meant to be a sticky post.

Have fun.

John R.

Posted: Mon Mar 21, 2005 3:35 pm
by Norbert Schlenker
JWR1945 wrote:2) The next step is to plot the historical data versus 100E10/P. We use Excel calculations to determine the best linear curve fit (i.e., regression).
3) We determine confidence limits. Most of the time, we use eyeball estimates as a convenience. We have the appropriate formulas that we can use when we need better precision.
You're using Excel to plot regressions and calculate coefficients but Excel presumes some underlying distributions. In other threads, mean reversion is stated, explicitly or implicitly, as an assumption you make. Can you trust Excel's results in the case of mean-reverting Markov chains?

Posted: Mon Mar 21, 2005 5:16 pm
by JWR1945
Norbert Schlenker wrote:
JWR1945 wrote:2) The next step is to plot the historical data versus 100E10/P. We use Excel calculations to determine the best linear curve fit (i.e., regression).
3) We determine confidence limits. Most of the time, we use eyeball estimates as a convenience. We have the appropriate formulas that we can use when we need better precision.
You're using Excel to plot regressions and calculate coefficients but Excel presumes some underlying distributions. In other threads, mean reversion is stated, explicitly or implicitly, as an assumption you make. Can you trust Excel's results in the case of mean-reverting Markov chains?
Excel curve fitting equations do not assume knowledge of an underlying distribution. They simply minimize the least squares error.
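
In symbols, the fit simply chooses the intercept a and slope b that minimize

\min_{a,b} \sum_i \left( y_i - (a + b\,x_i) \right)^2

where the y_i are the Historical Surviving Withdrawal Rates and the x_i are the corresponding values of 100E10/P. No assumption about the distribution of the errors enters the fit itself.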

I use the standard approximation with a Gaussian distribution. I have been very careful to identify this as a coarse approximation.

Mean reverting properties show up as smaller variances in stock market returns than otherwise expected. They are a secondary effect compared to year-to-year price fluctuations.

Our measurements are in terms of portfolio survivability. The dominating source of randomness is the year-to-year variations in stock market returns. As long as you do not press for too much statistical precision, using a Gaussian approximation does a good job.

Have fun.

John R.

Posted: Tue Mar 22, 2005 5:47 am
by hocus2004
This is meant to be a sticky post.

I've sent ES an e-mail asking that a sticky be assigned to the thread, JWR1945.

Posted: Tue Mar 22, 2005 5:59 am
by hocus2004
I appreciate that this is a valuable thread. Much of what is being said is way over my head, but I can tell that it is important stuff.

I just want to make a point aimed at putting things in perspective for those like me who do not have the skill set required to make full sense of what is going on in this particular thread.

The questions that are being asked go to the validity of JWR1945's analytical approach. I have a good bit of confidence in JWR1945's work and believe that it will stand up to reasoned scrutiny.

To put things in perspective, however, I think it is worth noting that we knew that the methodology used in the REHP study was analytically invalid for purposes of determining SWRs long before this board was created and JWR1945's research was posted to it. We knew on August 27, 2002, the day that I posted the "What Bernstein Says" post, that Bernstein found the conventional methodology results "highly misleading" at times of high valuation, and that he calculated (using a different methodology than the JWR1945 methodology) the SWR for a high-stock portfolio to be 2 percent at the top of the recent bubble. Raddr, in his posts to the FIRE board that were put up in the days when he was shooting straight, described yet another analytically valid methodology that also generated a SWR nowhere even remotely in the neighborhood of the number identified by intercst as "100 percent safe."

JWR1945 has done wonderful work that benefits us all. It is a good use of our time to explore his methodology in depth so that the number crunchers among us will be aware of both its strengths and weaknesses. But nothing that we learn about the JWR1945 methodology will change our most important findings--that intercst got the number wrong in the REHP study and then engaged in a 34-month Campaign of Terror to block the community from having the discussions it needs to have to determine what the historical data really says re SWRs.

Posted: Tue Mar 22, 2005 7:01 am
by Norbert Schlenker
JWR1945 wrote:Excel curve fitting equations do not assume knowledge of an underlying distribution. They simply minimize the least squares error.
Very well. Then you're assuming that you can fit a straight line to the empirical data. What justifies that assumption?
Mean reverting properties show up as smaller variances in stock market returns than otherwise expected. They are a secondary effect compared to year-to-year price fluctuations.
Well, you've got me confused. AFAIK, mean reversion has nothing to do with variances per se. Please clarify.
Our measurements are in terms of portfolio survivability. The dominating source of randomness is the year-to-year variations in stock market returns. As long as you do not press for too much statistical precision, using a Gaussian approximation does a good job.
I see two problems here. (1) Market returns are demonstrably not Gaussian. The fat tails bolster your thesis, so you might think about how to incorporate that. (2) Using any sort of iid distribution, including a Gaussian, to project future behavior doesn't work with a mean reverting time series.

Posted: Wed Mar 23, 2005 8:03 am
by JWR1945
Norbert Schlenker wrote:
JWR1945 wrote:Excel curve fitting equations do not assume knowledge of an underlying distribution. They simply minimize the least squares error.
Very well. Then you're assuming that you can fit a straight line to the empirical data. What justifies that assumption?
The lines themselves and their R-squared values.

Look at bpp's graphs in our special SWR Research section.

Have fun.

John R.

Posted: Wed Mar 23, 2005 8:11 am
by JWR1945
Norbert Schlenker wrote:
Mean reverting properties show up as smaller variances in stock market returns than otherwise expected. They are a secondary effect compared to year-to-year price fluctuations.
Well, you've got me confused. AFAIK, mean reversion has nothing to do with variances per se. Please clarify.
Raddr came up with a precise definition of Mean Reversion of the stock market in the early days of the FIRE board. His definition centers on the standard deviation of annualized returns decreasing faster than one over the square root of the number of years, which is the rate that independent year-to-year returns would produce.

Most discussions about Mean Reversion have been meaningless because of the lack of a consistent, precise definition. Raddr filled that gap.

The cause-and-effect relationship behind Mean Reversion is pretty simple: stock prices are related to earnings, albeit loosely at times. This is what draws the overall return toward a central area instead of letting it drift away independently. The probability that stock prices will go up or down ultimately depends upon earnings.
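
Here is a minimal sketch of a test in Raddr's spirit, assuming a numpy array of annual log returns is at hand (the function name is our own, and overlapping windows are used for simplicity): compare the spread of k-year returns with the square root of k times the one-year spread.

import numpy as np

def std_ratio(log_returns, k):
    # Std of k-year cumulative returns relative to the iid prediction.
    # Independent returns give a ratio near 1; mean reversion, by Raddr's
    # definition, pushes the ratio below 1 as k grows.
    k_year = np.array([log_returns[i:i + k].sum()
                       for i in range(len(log_returns) - k + 1)])
    return np.std(k_year) / (np.sqrt(k) * np.std(log_returns))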

Have fun.

John R.

Posted: Wed Mar 23, 2005 8:21 am
by JWR1945
Norbert Schlenker wrote:
JWR1945 wrote:Our measurements are in terms of portfolio survivability. The dominating source of randomness is the year-to-year variations in stock market returns. As long as you do not press for too much statistical precision, using a Gaussian approximation does a good job.
I see two problems here. (1) Market returns are demonstrably not Gaussian. The fat tails bolster your thesis, so you might think about how to incorporate that. (2) Using any sort of iid distribution, including a Gaussian, to project future behavior doesn't work with a mean reverting time series.
We are on the right side of this issue. Fat tails and other effects argue against claiming a high degree of statistical precision.

Remember that our distributions are based on Historical Surviving Withdrawal Rates and not directly on the stock market. These are finite length sequences. The Central Limit Theorem is adequate for our purposes.

We have addressed statistical properties. Visit the Crestmont Research site (see the interview with Ed Easterling, who posted as Crestmont) for what I think is the best simple statistical description. Read Gummy's recent GARCH thread that looks into advanced statistical modeling. Look into my references to Benoit Mandelbrot and visit his website for a more advanced description.

Have fun.

John R.

Posted: Wed Mar 23, 2005 8:22 am
by Norbert Schlenker
JWR1945 wrote:
Norbert Schlenker wrote:Then you're assuming that you can fit a straight line to the empirical data. What justifies that assumption?
The lines themselves and their R-squared values.
What are the r^2's?
Look at bpp's graphs in our special SWR Research section.
I'm a noob. Could you provide a link?
Raddr came up with a precise definition of Mean Reversion of the stock market in the early days of the FIRE board. This definition was centered around the decrease in standard deviation faster than the square root of the number of years.
That's a low powered test but okay. Have you a link?