
The Statistical Support For Long-Term Return Regimes Is Compelling

By Rob Bennett

The last three columns examined a recent article by Michael Kitces (“Should Equity Return Assumptions in Retirement Projections Be Reduced for Today’s High Shiller CAPE Valuation?”) that advances the highly counter-intuitive and yet entirely accurate claim that “the ideal way to adjust return assumptions… [may be] to do projections with a ‘regime-based’ approach to return assumptions. This would entail projecting a period of much lower returns, followed by a subsequent period of higher returns.”

This changes everything we once thought we knew about how the stock market works. The old (and still dominant) belief was that stock prices follow a random walk because price changes are caused by economic developments. If what Kitces is saying is so (I strongly believe that it is), prices do not follow a random walk at all. They play out according to a highly predictable long-term pattern: for about 20 years, valuations rise (with short-term drops mixed in); then, for about 15 years, valuations drop (with short-term rises mixed in). Investor emotion is the primary determinant of stock price changes. Investors can reduce risk dramatically, while also increasing return dramatically, by adjusting their stock allocations in response to big valuation shifts, thereby keeping their risk profile roughly constant as one “regime” is replaced with another.

This is hard to accept. We are always living through either a high-return regime or a low-return regime. The regimes continue long enough to convince us that they are rooted in something solid and real and permanent, not in something as loosey-goosey and vague and seemingly ephemeral as investor psychology. When sky-high returns appeared on our portfolio statements in the late 1990s, we adjusted our understanding of our net worth. But improperly so! A large portion of those oversized returns were the result of the regime we were living through.
Those returns were fated to disappear in the following regime. And the poor returns of today’s regime (which began in 2000) will also disappear when we enter the next return-boosting regime.

The strategic implications are far-reaching. If there really are high-return regimes and low-return regimes, it makes no sense to stick with the same stock allocation at all times. If there are two types of return regimes that last for 15 or 20 years, there are two types of stock markets that last for 15 or 20 years. Decisions that make sense for one type of regime cannot possibly make sense for the other. Buy-and-hold is a mistake. We should be going with higher stock allocations in high-return regimes and with lower stock allocations in low-return regimes.

There’s a rub. What if the data that Kitces is taking into consideration in forming his conclusions is the product of coincidence? Can we really be sure that the two-regime world will remain in place? If it doesn’t, and if we invest on the belief that it will, we will be underinvested in stocks while waiting for today’s low-return regime to play out (the historical reality is that no low-return regime has ever ended until the P/E10 level dropped to 8 or lower, a big drop from where it stands today). Negative consequences follow for an investor who abandons buy-and-hold for valuation-informed indexing in the event that Kitces’ regime concept turns out to be an illusion. The most convincing case I have seen that it is not an illusion is the one put forward in Michael Alexander’s book, Stock Cycles: Why Stocks Won’t Beat Money Markets Over the Next Twenty Years.
Please note that the claim made in the subtitle was widely perceived as crazy at the time it was made (the book was published in 2000), and yet it has proven prophetic: stock returns over the past 16 years have been far smaller than the returns that were available in 2000 through the purchase of super-safe asset classes like Treasury Inflation-Protected Securities and I Bonds. Buy-and-holders would have said at the time that a prediction of 16 years of poor returns was exceedingly unlikely to prove valid. And yet Alexander knew something (or at least thought he knew something) compelling enough to persuade him to put his name to that claim in a very public way.

Alexander engaged in extensive statistical analysis to determine whether stock price changes really do play out differently in different long-term regimes. He concluded: “The effect of holding time on stock returns in overvalued markets is the opposite of what it is for all markets. Normally, holding stocks for longer amounts of time increases the probability that they will beat other types of investments such as money markets… In the case of overvalued markets (like today), holding for longer times, up to twenty years, does not increase your odds of success.”

We don’t today know everything there is to know about how stock investing works. We are in the early years of coming to a sound understanding of even the fundamentals. We need to be careful not to jump to hasty conclusions based on limited research. That’s what I believe the buy-and-holders did. Many of their insights were genuine and important, and have stood the test of time. But the claim that it is safe for investors to ignore price when buying stocks has not. The Kitces article is pointing us in a new direction. I hope it generates lots of debate.
My guess is that we will not see that debate immediately, but that many will be giving the Kitces article a second look following the next price crash, when we will all be seeking to come to terms with what we have done to ourselves by too easily buying into the idea that the stock market is the one exception to the general rule that price discipline is what makes markets work. Disclosure: None.

Low Interest Rates Alone Cannot Prevent A Bear Market In Stocks

The most common definition of a bear market in stocks? A major index needs to fall 20% from a high-water mark. And while that is precisely what has happened for most gauges of stock health – MSCI All-Country World Index, Nikkei 225, STOXX Europe 600, Shanghai Composite, U.S. Russell 2000, U.S. Value Line Composite – the Dow and the S&P 500 remain defiant.

Yet there’s another way to view bulls and bears. In particular, chart-watchers often use the slope of a benchmark’s long-term moving average. It is a bull market when the 200-day moving average is rising. During these times, investors often benefit when they buy the dips. In contrast, when the 200-day is sloping downward, it may be a “Grizzly.” During these days, investors successfully preserve capital when they raise cash by selling into rallies. There’s more. During stock bears, stocks frequently hit “lower highs” and “lower lows.” That’s exactly what investors have experienced since May of 2015. There’s little doubt that – at the moment – we are witnessing the “rolling over” of the 200-day moving average. The exceptionally popular measure of market direction is sloping downward, giving support to the notion that a bearish downtrend is in command.

Technical analysis notwithstanding, there are other reasons to believe that the stock bear will maul and mangle. Fundamental analysts note that Q1 2016 S&P 500 earnings are set to record a decline of 8.0%. That would mark a fourth consecutive quarter of year-over-year declines in corporate earnings per share – the first such sequence since 2008 (Q1, Q2, Q3, Q4). “But Gary,” you protest. “It’s only the energy companies. You should just exclude them from consideration.” (Like technology in 2000? Financials in 2008?) Actually, it’s not just the energy sector. Seven of the 10 key economic sectors will serve up profits-per-share disappointments.
Telecom, healthcare and consumer discretionary companies may be the only sectors to provide a positive boost in the upcoming earnings season. Still, get a gander at the earnings expectations at the start of the year vs. the earnings expectations at the beginning of March. It only took two months for analysts to lower their expectations for every single stock segment – percentage revisions that have not dropped this fast since the Great Recession.

Keep in mind, reported earnings for the S&P 500 peaked at $105.96 on 9/30/2014. At that time, the S&P 500 closed at 1,972 and traded at a P/E of 18.6. With the most recent 12/30/2015 S&P 500 earnings at $86.46, and the 3/8/2016 close of 1,979, the market trades at a P/E of 22.9. That’s correct. The market is essentially flat since September of 2014, but it is far more expensive in March of 2016 – more than 20% more expensive since profits peaked.

It is exceptionally difficult to make a case for the overall market being “attractive” or “fairly valued.” Not that perma-bulls haven’t tried. The most common argument is the attractiveness of stocks relative to the alternatives in fixed income. Ultra-low interest rates not only force savers into equities, they argue, but also prime the pump for companies to buy back shares of their own stock through the issuance of corporate debt. However, history offers a similar circumstance, when the U.S. had a low-rate environment for nearly 20 years (i.e., 1935-1954). In that period, valuations were about HALF of what they are today. If low rates alone weren’t enough to DOUBLE the “P” relative to the “E” then, why are low rates enough to justify higher stock prices regardless of valuations in 2016? When top-line sales and bottom-line earnings are contracting? It is also worth noting that low rates alone did not prevent the bear markets of 1937-1938 (-49.1%), 1938-1939 (-23.3%), 1939-1942 (-40.4%), or 1946-1947 (-23.2%).

By way of review, the technical picture is inhospitable.
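The valuation arithmetic above can be checked directly. A minimal sketch, using only the earnings and index figures quoted in the text:

```python
# Trailing S&P 500 reported earnings and index levels cited above
eps_2014, px_2014 = 105.96, 1972   # earnings peak, 9/30/2014 close
eps_2016, px_2016 = 86.46, 1979    # earnings through 12/30/2015, 3/8/2016 close

pe_2014 = px_2014 / eps_2014       # trailing P/E at the earnings peak
pe_2016 = px_2016 / eps_2016       # trailing P/E in March 2016

# How much the multiple expanded while the index stayed essentially flat
expansion = pe_2016 / pe_2014 - 1

print(f"P/E 9/30/2014: {pe_2014:.1f}")   # ~18.6
print(f"P/E 3/8/2016:  {pe_2016:.1f}")   # ~22.9
print(f"Multiple expansion: {expansion:.1%}")
```

The point the arithmetic makes is that a flat price against falling earnings is itself a valuation increase of more than 20%.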
The fundamental backdrop is unsavory. And even the perma-bull panacea of low interest rates cannot obliterate historical comparisons entirely.

“Well, Gary,” you counter. “Back then, we were still coming out of the Great Depression. We don’t have anything like that right now… and we are not going into recession.”

I’m glad that you brought up the Great Depression. For starters, Federal Reserve policy error in 1937-1938 went a long way toward reigniting recessionary forces – dynamics not unlike the depression-like disaster that plagued America from 1929 to 1932. Today, six-and-a-half years removed from the Great Recession, Fed policy error (December 2015) remains a distinct possibility. Members of the Fed’s Open Market Committee currently believe that they can raise borrowing costs in 2016 without reversing the Fed’s wealth-effect ambitions. They may learn, however, that they will be returning the country to zero-percent rate policy (ZIRP) and quantitative easing (QE) to squelch a 20%-plus decline in key barometers like the S&P 500.

What’s more, the stock bears of 1939-1942 (-40.4%) and 1946-1947 (-23.2%) are not attributable to recessions or the Great Depression. Those stock bears occurred alongside low interest rates and booming economies. Is the U.S. economy booming right now? The surprising popularity of anti-incumbent candidates like Sanders and Trump suggests that the real economy – jobs included – is shaky at best. A simple chart plotting both manufacturing and non-manufacturing (services) activity demonstrates that economic weakness is an actuality, not a doom-n-gloom delusion. One might even choose to consider the most recent business headlines. Chinese exports plummeted by their largest amount since 2009. The International Monetary Fund (IMF) warned today that the world is looking at an increasing “risk of economic derailment.” And stateside, the NFIB’s Small Business Optimism Index fell for the fourth time in five months.
It now sits at its lowest level in two years while demonstrating its steepest peak-to-trough drop since 2009. Instead of getting more stimulus from the U.S. Federal Reserve, like “Twist” and “QE3,” the Fed is pushing “gradual stimulus removal.”

In sum, the rallies of September-October (2015) and February-March (2016) share more in common with bear-market bounces than buy-the-dip opportunities. Technicals, fundamentals, economics and Federal Reserve policy collectively favor a lower-than-usual allocation to risk. For my moderate growth-and-income clients, our 45-50% allocation to domestic large caps exists in stark contrast to 65-70% in a broadly diversified equity mix (e.g., large, small, foreign, emerging, etc.). Some of our core positions? The Vanguard High Dividend Yield ETF (NYSEARCA: VYM), the iShares S&P 100 ETF (NYSEARCA: OEF) and the iShares MSCI USA Quality Factor ETF (NYSEARCA: QUAL). We have pure beta exposure to the SPDR S&P 500 Trust ETF (NYSEARCA: SPY) as well. On the income side of the ledger? We have benefited immensely from a commitment to investment-grade holdings, including the Vanguard Long-Term Corporate Bond Index ETF (NASDAQ: VCLT), the iShares 3-7 Year Treasury Bond ETF (NYSEARCA: IEI) and munis via the SPDR Nuveen Barclays Muni Bond ETF (NYSEARCA: TFI).

Disclosure: Gary Gordon, MS, CFP is the president of Pacific Park Financial, Inc., a Registered Investment Adviser with the SEC. Gary Gordon, Pacific Park Financial, Inc., and/or its clients may hold positions in the ETFs, mutual funds, and/or any investment asset mentioned above. The commentary does not constitute individualized investment advice. The opinions offered herein are not personalized recommendations to buy, sell or hold securities. At times, issuers of exchange-traded products compensate Pacific Park Financial, Inc. or its subsidiaries for advertising at the ETF Expert web site.
ETF Expert content is created independently of any advertising relationships.

Estimating Return-Shortfall Risk For Portfolios

Failure isn’t an option, but it happens. Modeling the possibility that a portfolio strategy will stumble isn’t exactly cheery work, but it’s a productive and necessary exercise for stress testing what the future can do to the best-laid plans for investing. The good news is that there’s a rainbow of options for estimating the potential for trouble. But it’s usually best to start with a basic framework before venturing into more exotic realms. A solid way to begin is by calculating the probability that a portfolio’s return will fall short of a particular benchmark or target return.

Larry Swedroe, Director of Research for the BAM Alliance, last month wrote about the probability of underperformance from the perspective of four factor premiums. The technique is to assume a normal distribution of returns and model the outcome under a variety of scenarios. Normal distributions are problematic, of course, due to fat-tail risk. But as Swedroe correctly points out, a normal distribution is “reasonable for multi-annual returns data because annual returns data is approximately normally distributed for diversified portfolio[s].”

The details for the number crunching are straightforward. Several years ago, The Calculating Investor outlined the procedure with an Excel spreadsheet. Let’s expand the concept a bit by applying the normal distribution function in R via the pnorm() command. Assume we’ve designed a portfolio with a 10-year time horizon and expected annualized volatility (standard deviation) of 15%. Holding those variables constant, here’s the probability of generating a below-zero return over that span based on a range of expected returns for the portfolio: Not surprisingly, the risk of suffering a negative result is substantial if we’re assuming a low return. A 1% annualized return carries a 40%-plus risk of sub-zero performance over a 10-year stretch. But as the expected return rises, the risk of below-zero performance falls.
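The calculation described above can be sketched outside of R as well. The following Python snippet is a minimal, hypothetical reconstruction: it uses the standard library’s math.erf to build a normal CDF (playing the role of R’s pnorm), and assumes the usual scaling in which a T-year horizon multiplies the expected return by T and the standard deviation by the square root of T:

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF, the equivalent of R's pnorm(x)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def shortfall_prob(mu, sigma, years, target=0.0):
    """Probability that cumulative return falls below `target`,
    assuming i.i.d. normal annual returns with mean `mu` and
    standard deviation `sigma`: mean scales by T, sd by sqrt(T)."""
    z = (target - mu * years) / (sigma * sqrt(years))
    return norm_cdf(z)

# 10-year horizon, 15% annualized volatility, a range of expected returns
for mu in (0.01, 0.03, 0.05, 0.07, 0.10):
    print(f"mu={mu:.0%}  P(below zero)={shortfall_prob(mu, 0.15, 10):.1%}")
```

Under these assumptions, a 1% expected return does indeed carry a roughly 40%-plus probability of a sub-zero 10-year result, and the probability shrinks toward nil as the expected return approaches 10%.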
As the portfolio’s projected return approaches 10%, the risk of losing money fades to a virtually nil possibility, given the assumptions about volatility and time horizon.

For another perspective, let’s vary the time horizon while holding the expected return and volatility constant by assuming the portfolio will earn 5% annualized with a 15% standard deviation. As the next chart below shows, running the numbers through a normal distribution model tells us that the risk of sub-zero performance is considerable at short time horizons. Starting at around 15 years, shortfall-return risk falls below a 10% probability. In other words, the longer the time horizon, the lower the probability of losing money.

Finally, let’s model various levels of expected volatility while holding constant the time horizon (10 years) and projected return (5%). The third chart below quantifies what intuition implies: higher portfolio volatility increases the probability of suffering a loss.

There are many variations on the simple examples above. For example, we can easily model the risk of falling short of the risk-free rate, an inflation-adjusted benchmark, or any other yardstick that’s considered relevant. We can also crunch the data by factoring in a fat-tails assumption for added realism. Ultimately, the goal is to design a modeling framework that’s customized for a specific portfolio. The point is that a basic quantitative application is useful for deciding how a given portfolio might fare under extreme conditions. For instance, the procedure outlined above may reveal that a given set of assumptions is highly sensitive to small changes – a sensitivity that may not be obvious without a formal modeling effort. In that case, it may be time to go back to the drawing board for designing an asset allocation. After all, the price tag is always lower for discovering problems in the design stage as opposed to finding enlightenment when real money is at stake.
The future’s still uncertain, of course, but the first priority for the art/science of risk modeling is minimizing the potential for surprises. Our capacity for insight is limited, and so deploying diagnostic tests about what could happen falls well short of providing definitive clarity for the morrow. Estimating shortfall risk is no panacea, but it’s still useful. In fact, the only thing worse than running this modeling procedure is not doing it at all.