## Worth a look on the SWR board

John has put up a very interesting post on the SWR Research board. I'd be interested in hearing comments from those who don't post there, especially raddr. :)

"Do not spoil what you have by desiring what you have not; remember that what you now have was once among the things only hoped for." - Epicurus

### Re: Worth a look on the SWR board

BenSolar wrote: John has put up a very interesting post on the SWR Research board. I'd be interested in hearing comments from those who don't post there, especially raddr.

Hi Ben,

I looked at it and I have some problems with it. First of all, statements such as this make me very nervous when statistics are being applied to a problem:

I have restricted my investigations to portfolios beginning in the years 1921-1980. In fact, I have excluded the years 1921 and 1922 in most cases to get a better curve fit.

You can't choose the years you want to use just to get better numbers for your hypothesis. The temptation is always there and is hard to avoid. I'm always on guard for this when I look at a problem.

I also disagree with this:

Since there are 60 data points (or years for starting a retirement from 1921-1980), it is clear that I have treated each year as statistically independent. They are, in fact, very close to being independent. The fact that the sequences overlap strongly is only a secondary consideration. The reason is that the randomness comes from the percentage earnings yield, 100/[P/E10]. The earnings component of P/E10 is relatively stable because E10 is the average of ten years of (trailing, real) earnings. The randomness comes almost entirely from price fluctuations. In the very short-term, price fluctuations are very close to being entirely independent. It is only over longer periods that mean reversion (as properly defined and quantified by raddr) reduces the randomness.

I would recommend looking at this thread from a while back, when much of this was covered:

http://nofeeboards.com/boards/viewtopic ... orrelation

The bottom line is that when each data series shares 29 out of 30 data points with its neighbor, you don't get a Gaussian distribution as a result, and descriptive statistics can't be applied with any precision.

One way to look at this is to compare adjacent years with each other. For example, according to John's hypothesis, at a given PE10 you could predict the SWR with a standard deviation of 0.96% for HDBR80. If you look at the historical data, however, and pick out adjacent years where the PE10 is essentially equal and look at their SWRs, you'll find that they are almost the same, with very little dispersion:

```
PE10    SWR
 8.2    8.9
 8.1    9.2
10.1    9
10.2    8.7
10.4   10.2
10.2   10
18.3    4.9
18.5    4.8
21.5    4.1
21.2    4.1
11.2    6.6
11.4    6.6
 9.2    7.9
 9.3    8.2
```

As you can see, all of the paired SWRs are within 0.3% of each other. If the SD for such a distribution is 0.96%, then you'd expect any two SWRs derived from a single PE10 to land within 0.3% (0.31 SDs) of each other about 12.2% of the time. We have seven such pairs above, and the chances of all seven being within 0.31 SDs of each other are about 1 in 2.4 million. It is clear, then, that this is not a Gaussian distribution and that the culprit is the severe data dependency from overlapping SWR sequences.
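For what it's worth, the arithmetic can be sketched in Python. This is a rough check, not raddr's exact method: here the per-pair probability is computed from the normal difference distribution, which gives about 17% rather than the 12.2% above, yet the joint probability for seven pairs is still vanishingly small, supporting the same conclusion.

```python
from statistics import NormalDist

sd = 0.96  # claimed SD of the SWR at a given PE10

# Under independence, the difference of two draws is N(0, sd * sqrt(2))
diff = NormalDist(0, sd * 2 ** 0.5)
p_pair = diff.cdf(0.3) - diff.cdf(-0.3)  # P(|X - Y| < 0.3)
p_all = p_pair ** 7                      # all seven pairs agreeing that closely
print(f"per pair: {p_pair:.3f}, all seven: about 1 in {1 / p_all:,.0f}")
```

Even with this more generous per-pair figure, seven such coincidences under independence come out to roughly one in a few hundred thousand.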

I read the response to the response. I am still not sure what to make of throwing out data and then announcing that what is left fits great.

Maybe this is the way it is done nowadays (when I was a kid, we thought that if you couldn't find a rationale for throwing out inconvenient data, it just showed a lack of imagination). I have no problem with doing this sort of thing for hypothesis generation when there is a plan for testing it against additional data.

A straight line provides an excellent fit for a scatter plot of Historical Database Rates versus Earnings Yield

Have fun.

Ataloss

ataloss wrote: I read the response to the response. I am still not sure what to make of throwing out data and then announcing that what is left fits great.

Agreed. What's worse, however, is claiming to have 60 independent data points when there are really only 2 or 3. You can't get valid statistics from severely overlapped data.

BTW, John's assertion that the SWR board is the correct place to discuss this is incorrect. BenSolar asked me **on the FIRE board** to comment on a post from the SWR board. I outlined my problems with the post here **on the FIRE board** because that's where the question was asked. No offense to John, but there just isn't much over there that interests me, and I prefer to hang out here at the FIRE and Index boards instead. I don't like being told I have to participate on a board that I don't care to visit.

Agreed. What's worse, however, is claiming to have 60 independent data points when there are really only 2 or 3. You can't get valid statistics from severely overlapped data.

I guess we have differing outlooks on which aspect is most egregious. The overall goal seems to be to generate some data to promote some sort of deterministic, valuation-based withdrawal rate to support some offhand claim that hocus/rob bennett made with no data at all.

I think jwr has confirmed this wisdom:

"Don't gamble. Take all your savings and buy some good stock and hold it till it goes up then sell it. If it don't go up, don't buy it."

Will Rogers

Have fun.

Ataloss

ataloss wrote: Agreed. What's worse, however, is claiming to have 60 independent data points when there are really only 2 or 3. You can't get valid statistics from severely overlapped data.

I guess we have differing outlooks on which aspect is most egregious. The overall goal seems to be to generate some data to promote some sort of deterministic, valuation-based withdrawal rate to support some offhand claim that hocus/rob bennett made with no data at all.

Yeah. Even if you forget about the two or three years in the 20's that he wants to throw out, what about the data from 1871-1920? Why was it tossed out? Maybe it doesn't fit the hypothesis?

*Edited after looking at the 1871-1920 data:*

If you plot HDBR80 vs. PE10 for 1871-1920 you get an R^2 of only 0.37. For 1921-1980 it is 0.77. I think I see now why the pre-1920 data was not included.

The data overlap I've alluded to above invalidates the regressions performed on the other board and linked to by Ben in the first post. From the link I provided in the thread it appears that by year 10 most of the autocorrelation has resolved between consecutive SWR data points. With this in mind it *might* be reasonable to carry out the regression using every tenth data point, which results in this set of pseudo-independent data points:

```
Year   100/PE10   HDBR80
1920     6.00       8.3
1930     4.48       4.5
1940     6.10       5.8
1950     9.35      10.3
1960     5.46       5.1
1970     5.85       4.8
1980    11.24       8.2
```

Running the regression we get this equation:

HDBR80 = 1.958225433 + 0.68678084 * 100/PE10
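As a sketch, the fit can be reproduced with NumPy (ordinary least squares on the seven points above; small differences in the trailing decimals come from the rounded inputs in the table):

```python
import numpy as np

# Decennial (pseudo-independent) data points from the table above
x = np.array([6.00, 4.48, 6.10, 9.35, 5.46, 5.85, 11.24])  # 100/PE10
y = np.array([8.3, 4.5, 5.8, 10.3, 5.1, 4.8, 8.2])         # HDBR80 (%)

slope, intercept = np.polyfit(x, y, 1)
print(f"HDBR80 = {intercept:.4f} + {slope:.4f} * 100/PE10")

# Residual standard error with n - 2 = 5 degrees of freedom
resid = y - (intercept + slope * x)
se = np.sqrt(np.sum(resid**2) / (len(x) - 2))
print(f"standard error ~ {se:.2f}")
```

This reproduces the equation above to within rounding, along with the 1.61 standard error quoted below.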

The standard error is 1.61 and the confidence limits are +/- 3.13% (90%) and +/- 3.94% (95%). The PE10 is currently about 27, so the predicted SWR for an 80% stock portfolio would be 4.44%. So far so good, but the confidence limits look like this:

4.44% +/- 3.13% = **7.57% to 1.31%** (90% confidence level)

4.44% +/- 3.94% = **8.38% to 0.50%** (95% confidence level)

Clearly this is pretty useless when proper confidence limits are applied. Basically it doesn't comfort me much to know that there is a 95% chance that the SWR for the next 30 years will lie somewhere between zero and 8%. Unfortunately there is just not enough data here to warrant drawing any useful conclusions about the predictive ability of PE10 vis a vis the SWR for the next 30 years.
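For reference, the quoted limits are consistent with multiplying the 1.61 standard error by Student-t critical values at 6 degrees of freedom (the degrees of freedom are my assumption, since the post does not state them; the critical values come from a standard t-table):

```python
# Two-sided Student-t critical values for 6 degrees of freedom (standard t-table)
T90, T95 = 1.943, 2.447

se = 1.61  # standard error of the regression above

ci90 = T90 * se  # half-width of the 90% interval
ci95 = T95 * se  # half-width of the 95% interval
print(f"90%: +/- {ci90:.2f}%, 95%: +/- {ci95:.2f}%")
```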

raddr wrote: The data overlap I've alluded to above invalidates the regressions performed on the other board and linked to by Ben in the first post. From the link I provided in the thread it appears that by year 10 most of the autocorrelation has resolved between consecutive SWR data points. With this in mind it *might* be reasonable to carry out the regression using every tenth data point, which results in this set of pseudo-independent data points: ....

Hey raddr, :)

Thank you for looking at this stuff. I'm swimming over my head when it comes to this statistical analysis business. Did you see this post John put up in that thread where he seemed to address similar issues?

JWR1945 wrote:

The total number of degrees of freedom equals 8 (essentially) independent Historical Database Rates times 6 degrees of freedom in estimating each of these rates. This totals 8*6 = 48 degrees of freedom total instead of 58.

...

An alternative way of looking at this is to look at how big a price swing it takes to make a discernible change on the curve. ... This approach would widen the confidence limits by something less than 40% (i.e., a multiplier of less than the square root of two).

I am pretty much lost here. Any thoughts? I thought the idea of using averages to get our 'pseudo-independent data points' (or should I say pseudo-independent pseudo-data points) was interesting. But I'm not sure if it makes sense or not. His multiplying of degrees of freedom looks kind of iffy. It doesn't make sense to me that we can have 8 (pseudo?)independent points with 48 degrees of freedom.

On the other hand, the data series you used doesn't seem to reflect the data particularly well, with every point being a below-average PE10.

I recall reading Shiller's papers on the subject of using PE10 and other valuation metrics to forecast long-term return. On the subject of data dependency, he said that it was his and Campbell's opinion that there was enough 'independancy' (not a word) in the data (going back to 1871) for the regressions to have value for forecasting. I don't recall his giving any statistical rationale for that opinion, though. He used each year as a separate point - not a subset that would have been more independent.

Regards,

"Do not spoil what you have by desiring what you have not; remember that what you now have was once among the things only hoped for." - Epicurus

BenSolar wrote: Thank you for looking at this stuff. I'm swimming over my head when it comes to this statistical analysis business. Did you see this post John put up in that thread where he seemed to address similar issues?

JWR1945 wrote:

The total number of degrees of freedom equals 8 (essentially) independent Historical Database Rates times 6 degrees of freedom in estimating each of these rates. This totals 8*6 = 48 degrees of freedom total instead of 58.

...

An alternative way of looking at this is to look at how big a price swing it takes to make a discernible change on the curve. ... This approach would widen the confidence limits by something less than 40% (i.e., a multiplier of less than the square root of two).

I am pretty much lost here. Any thoughts? I thought the idea of using averages to get our 'pseudo-independent data points' (or should I say pseudo-independent pseudo-data points) was interesting. But I'm not sure if it makes sense or not. His multiplying of degrees of freedom looks kind of iffy. It doesn't make sense to me that we can have 8 (pseudo?)independent points with 48 degrees of freedom.

I agree. There are only 7 or 8 pseudo-independent data points, thus 6 or 7 pseudo-degrees of freedom (n-1), if you do it this way.

On the other hand, the data series you used doesn't seem to reflect the data particularly well, with every point being a below-average PE10.

Actually, I used 100/PE10 to stay consistent with what John was doing. If you use PE10, the numbers range from 8.9 to 22.3 - pretty representative of the data.

I recall reading Shiller's papers on the subject of using PE10 and other valuation metrics to forecast long-term return. On the subject of data dependency, he said that it was his and Campbell's opinion that there was enough 'independancy' (not a word) in the data (going back to 1871) for the regressions to have value for forecasting. I don't recall his giving any statistical rationale for that opinion, though. He used each year as a separate point - not a subset that would have been more independent.

Well, I think that this is wrong. In fact, I'm sure it is wrong. If you look back at my post above, it is clear that adjacent data points are basically the same for a given PE10, thus they are not independent. You are essentially using the same data twice when you use two consecutive rolling 30-year periods. However, once you space them out by at least 8-10 years, most of the autocorrelation is gone.
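A toy simulation illustrates the overlap effect (a sketch with made-up i.i.d. annual returns, not the actual SWR series): rolling 30-year sums that share 29 of 30 years are almost perfectly correlated, and the correlation falls off as the windows are spaced apart. In this oversimplified model the decay is linear in the offset; the faster decay raddr observed reflects structure in the real data.

```python
import numpy as np

rng = np.random.default_rng(0)
window = 30
returns = rng.normal(0.07, 0.18, size=200_000)  # i.i.d. annual returns (toy numbers)

# Rolling 30-year sums stand in for overlapping retirement-period outcomes
rolling = np.convolve(returns, np.ones(window), mode="valid")

def lag_corr(series, lag):
    """Correlation between the series and itself shifted by `lag` years."""
    return np.corrcoef(series[:-lag], series[lag:])[0, 1]

print(f"lag 1:  {lag_corr(rolling, 1):.2f}")   # shares 29 of 30 years: ~29/30
print(f"lag 10: {lag_corr(rolling, 10):.2f}")  # shares 20 of 30 years: ~20/30
print(f"lag 30: {lag_corr(rolling, 30):.2f}")  # no shared years: ~0
```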

BTW, I'm not trying to say that PE10 is useless for forecasting. I think it is very helpful, but not nearly to the degree of precision that was put forth in the post you referenced at the top of the thread. I'm not sure that linear regression is the way to go, particularly since SWRs have a lognormal distribution (i.e., they can't go below zero and the distribution has a positive skew).

I looked at Shiller's paper and, as I suspected, he acknowledges the pitfalls of using nonindependent data. He shows through Monte Carlo simulation that the predictive ability of the PE10 ratio is for real but that you can't rely on conventional descriptive statistics for confidence intervals such as those cited in the study you posted at the top of the thread. In fact, he does not even attempt to place confidence levels on his data.

This is pretty much in agreement with my findings. There is no doubt that some valuation indicators such as PE10 have predictive value. You just can't construct confidence intervals unless you remove the data dependency such as I attempted to do by using every 10th data point. When you do this you find that the confidence intervals are nowhere near what they would be if the data was truly independent. In the case of PE10, the future SWR confidence intervals are so wide (e.g. 0-8% for an 80% stock portfolio) that you cannot make anything other than a very broad prediction about future market returns.

raddr wrote: I looked at Shiller's paper and, as I suspected, he acknowledges the pitfalls of using nonindependent data. He shows through Monte Carlo simulation that the predictive ability of the PE10 ratio is for real but that you can't rely on conventional descriptive statistics for confidence intervals such as those cited in the study you posted at the top of the thread. In fact, he does not even attempt to place confidence levels on his data.

Thanks for looking into it and reporting back.

"Do not spoil what you have by desiring what you have not; remember that what you now have was once among the things only hoped for." - Epicurus

...you cannot make anything other than a very broad prediction about future market returns.

I tend to agree with this. Nonindependent data is a factor to consider, although I don't currently know how to calculate its effect. I believe that our present models need to incorporate more factors than they currently do. The present S&P overvaluation may be partially related to the relative size of the population group approaching retirement, as well as the limited choices they have for saving in their 401k plans. For many people, it is mutual funds that more or less track the S&P, or cash. They cannot simply buy a rental house or commodities in their 401k if the S&P becomes overvalued; they are stuck. I would like to see demographics added to the model at a minimum, but finding the relevant data is not easy. I hope that we can improve the model over time. Until then, earnings yield adds another piece to the puzzle. It points to the potential danger of investing in overvalued asset classes. Reversion to the mean is not guaranteed to take place in 5 years, or even 10, but reversion to the historic norms is one possibility that should be considered.

I would like to see demographics added to the model at a minimum, but finding the relevant data is not easy.

wrt unprecedented current and future demographic shifts I think it is going to be hard to find data

Seriously, I favor lower long-term returns, but I think it is dangerous to blindly apply past results to the future (and throwing out data to make your r-squared higher is just bizarre).

Have fun.

Ataloss

**salaryguru** (Rookie; Posts: 26; Joined: Wed Nov 26, 2003 7:14 am; Location: Mesa, AZ)

There is another fundamental problem here. Using an empirical curve fit outside of the data range that was used to establish the curve is of extremely questionable value. For example, data that is well fit to a straight line can be fit equally well to an appropriately defined tanh (hyperbolic tangent) function. But the behavior of the two curve fits is dramatically different outside the range of the data. Unless there are causal principles that can be applied to indicate one curve is more accurate than another, then extrapolation outside the data range is questionable.
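A quick numerical sketch of this point (hypothetical numbers, not the actual HDBR data): a tanh curve can track a straight line closely inside a data range and still diverge wildly outside it.

```python
import math

# A line y = 0.5*x, and a tanh curve chosen to nearly coincide with it on [0, 10]
line = lambda x: 0.5 * x
K = 40.0
curve = lambda x: 0.5 * K * math.tanh(x / K)  # ~0.5*x while x << K, saturates later

for x in (5, 10, 100):
    print(x, line(x), round(curve(x), 2))
```

Inside the "data range" [0, 10] the two functions differ by a tenth of a unit or less; at x = 100 the line keeps climbing while the tanh curve has flattened out, so the extrapolations tell completely different stories.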

Looking specifically at the PE10 vs. HDBR50 data, I find that the data is better fit by the curve

HDBR50 = A * exp(-B * PE10) + C

where A = 10.396, B = 0.13707, and C = 4.152

than it is by a straight line.

But note that this curve predicts a minimum SWR of 4.1% for all valuations.

Neither the straight line nor the function above is justified for current PE values that are outside of the historical range.

*edit: I had left out a negative sign in the equation in my original post. Sorry about that for anyone who tried to fit the equation to the data. The fit is really much better than the line fit; it supports a totally different conclusion, and without causal justification it has no more basis in fact than the line fit outside of the data range.
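To see the 4.1% floor numerically, one can evaluate the curve directly (a sketch using salaryguru's published coefficients):

```python
import math

A, B, C = 10.396, 0.13707, 4.152

def hdbr50(pe10):
    """salaryguru's exponential fit for the 50% stock withdrawal rate."""
    return A * math.exp(-B * pe10) + C

for pe10 in (10, 20, 27, 50, 100):
    print(pe10, round(hdbr50(pe10), 2))
# As PE10 grows, the exponential term vanishes and the curve flattens at C ~ 4.15%
```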

Last edited by salaryguru on Fri Apr 23, 2004 10:51 am, edited 1 time in total.

-SG-