Tag Archives: seeking-alpha

Algorithm Aversion – Why People Don’t Follow The Model!

By Jack Vogel, Ph.D.

There are many studies showing that models beat experts, including the meta-study "Clinical versus mechanical prediction: A meta-analysis" by Grove et al. (2000). Yet even knowing that models beat experts, forecasters still prefer the human (expert) prediction over the model's. Why is this? A recent paper by Dietvorst et al. (2014), titled "Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err," examines this phenomenon. Here is the abstract of the paper:

Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

Here is an interesting example (from the paper) that describes why this may occur:

Imagine that you are driving to work via your normal route. You run into traffic and you predict that a different route will be faster. You get to work 20 minutes later than usual, and you learn from a coworker that your decision to abandon your route was costly; the traffic was not as bad as it seemed.
Many of us have made mistakes like this one, and most would shrug it off. Very few people would decide to never again trust their own judgment in such situations. Now imagine the same scenario, but instead of you having wrongly decided to abandon your route, your traffic-sensitive GPS made the error. Upon learning that the GPS made a mistake, many of us would lose confidence in the machine, becoming reluctant to use it again in a similar situation. It seems that the errors that we tolerate in humans become less tolerable when machines make them.

We believe that this example highlights a general tendency for people to more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. We propose that this tendency plays an important role in algorithm aversion. If this is true, then algorithm aversion should (partially) hinge on people's experience with the algorithm. Although people may be willing to trust an algorithm in the absence of experience with it, seeing it perform (and almost inevitably err) will cause them to abandon it in favor of a human judge. This may occur even when people see the algorithm outperform the human.

The paper goes on to show that as human confidence in the model increases, people are more likely to use the model, even if they have watched the model fail. This is shown in the figure below.

Source: "Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err" by Dietvorst et al. (2014)

However, note that even when people have "much more" confidence in the models, around 18% of people still use the human prediction! We are a firm that believes in evidence-based investing and understands that, in general, models beat experts. However, most people prefer the human option after seeing a model (inevitably) fail at some point in time. How does this tie to investing?
If we are trying to beat the market through security selection, an investor has three options: use a model to pick stocks, use a human, or combine the two. Inevitably, the model will underperform at some point, since no strategy wins all the time (if a strategy never failed, everyone would invest in it and the edge would cease to exist). When a model underperforms for a certain time period, that does not mean the model is inherently broken. The model may simply have failed over that stretch while its long-term statistical "strength" remains intact. Steve, the human stock-picker, will also underperform at some point; however, Steve can probably tell a better story over a beer as to why he missed the mark on last quarter's earnings, that pesky SEC investigation, etc. And since drinking a beer with stock-picker Steve is a lot more fun than drinking a beer with an HP desktop, we will probably give Steve the benefit of the doubt. Successful investors understand that models will fail at times; being able to stick with the model through thick and thin is a good strategy for long-term wealth creation. For the rest of us, there's always stock-picker Steve. Cheers.

July 2015 Asset Class Performance

July is now complete, and the S&P 500 ETF (NYSEARCA: SPY) finished the month up 2.23%. Below is a look at the performance of all asset classes during the month of July using key ETFs traded on US exchanges.

While SPY was up 2.23% in July, the Nasdaq 100 (NASDAQ: QQQ) doubled that with a gain of 4.56%. The Dow 30 (NYSEARCA: DIA), however, was up just 0.52% during the month. And small-caps actually fell: the Russell 2000 (NYSEARCA: IWM) fell 1.56%, while the S&P 600 (NYSEARCA: IJR) fell 0.83%. Mid-caps (NYSEARCA: IJH) ended the month flat.

Looking at sectors, we saw a wide range of performance, with Consumer Discretionary (NYSEARCA: XLY), Consumer Staples (NYSEARCA: XLP) and Utilities (NYSEARCA: XLU) gaining 5%+, and Energy (NYSEARCA: XLE) and Materials (NYSEARCA: XLB) falling 5%+.

Outside of the U.S., Brazil (NYSEARCA: EWZ) and China (NYSEARCA: FXI) both got slaughtered in July with declines of 12%. But the rest of the world did relatively well, with France (NYSEARCA: EWQ), Germany (NYSEARCA: EWG), India (NYSEARCA: INP) and Italy (NYSEARCA: EWI) all posting decent gains.

Along with Brazilian and Chinese equities, commodity ETFs also got smoked. The DBC commodities ETF fell 12.6%, while oil (NYSEARCA: USO) fell more than 20%. Both gold (NYSEARCA: GLD) and silver (NYSEARCA: SLV) fell 6%. Brutal action for the commodities asset class.

Finally, Treasuries rallied back in July, with the 20+ Year Treasury ETF (NYSEARCA: TLT) posting a 4% gain. For the year, though, TLT remains down 2%.

Ivy Portfolio August Update

The Ivy Portfolio spreadsheet tracks the 10-month moving average signals for two portfolios listed in Mebane Faber's book, The Ivy Portfolio: How to Invest Like the Top Endowments and Avoid Bear Markets. Faber discusses 5-, 10-, and 20-security portfolios that have trading signals based on long-term moving averages. The Ivy Portfolio spreadsheet tracks both the 5- and 10-ETF portfolios listed in Faber's book. When a security is trading below its 10-month simple moving average, the position is listed as "Cash". When the security is trading above its 10-month simple moving average, the position is listed as "Invested".

The spreadsheet's signals update once daily (typically in the late evening) using the dividend/split-adjusted closing price from Yahoo Finance. The 10-month simple moving average is based on the most recent 10 months, including the current month's most recent daily closing price. Even though the signals update daily, this is not an endorsement to check signals daily or trade on daily updates; it simply gives the spreadsheet more versatility, so users can check it at their leisure.

The page also displays the percentage each ETF within the Ivy 10 and Ivy 5 Portfolios is above or below its current 10-month simple moving average, using both adjusted and unadjusted data. If an ETF has paid a dividend or split within the past 10 months, the adjusted and unadjusted data will show different percentages above/below the 10-month SMA. This could also potentially flip whether an ETF is above or below its 10-month SMA. Regardless of whether you prefer the adjusted or unadjusted data, it is important to remain consistent in your approach. My preference is to use adjusted data when evaluating signals.

The current signals based on July 31st's adjusted closing prices are below.
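The signal rule described above (10-month simple moving average of adjusted closes; "Invested" above the average, "Cash" below) can be sketched in a few lines of Python. The prices here are made-up illustrative values, not actual ETF data, and the function names are hypothetical:

```python
def ten_month_sma(monthly_closes):
    """Average of the most recent 10 monthly closes, where the last
    entry is the current month's most recent daily closing price."""
    if len(monthly_closes) < 10:
        raise ValueError("need at least 10 months of data")
    return sum(monthly_closes[-10:]) / 10.0

def signal(monthly_closes):
    """Return the Invested/Cash signal and the percent the latest
    price sits above (+) or below (-) its 10-month SMA."""
    price = monthly_closes[-1]
    sma = ten_month_sma(monthly_closes)
    pct_vs_sma = round((price - sma) / sma * 100, 2)
    return ("Invested" if price > sma else "Cash"), pct_vs_sma

# Example with ten months of made-up adjusted closes for one ETF:
closes = [100, 102, 101, 103, 105, 104, 106, 108, 107, 110]
print(signal(closes))  # SMA is 104.6, price is 110 -> ('Invested', 5.16)
```

Running the same prices through unadjusted data would give a different percentage whenever a dividend or split occurred in the lookback window, which is why the spreadsheet stresses picking one data type and staying consistent.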
This month the Vanguard Total Bond Market ETF (NYSEARCA: BND), the SPDR Dow Jones International Real Estate ETF (NYSEARCA: RWX), the Vanguard FTSE Emerging Markets ETF (NYSEARCA: VWO), the PowerShares DB Commodity Index Tracking ETF (NYSEARCA: DBC), the iShares S&P GSCI Commodity-Indexed Trust ETF (NYSEARCA: GSG), the Vanguard REIT Index ETF (NYSEARCA: VNQ) and the iShares TIPS Bond ETF (NYSEARCA: TIP) are below their 10-month moving averages.

The spreadsheet also provides quarterly, half-yearly, and yearly return data courtesy of Finviz. The return data is useful for those interested in overlaying a momentum strategy on the 10-month SMA strategy.

I also provide a "Commission-Free" Ivy Portfolio spreadsheet as an added bonus. This document tracks the 10-month moving averages for four different portfolios designed around the TD Ameritrade, Fidelity, Charles Schwab, and Vanguard commission-free ETF offerings. Not all ETFs in each portfolio are commission free, as each broker limits its selection of commission-free ETFs and viable ETFs may not exist in each asset class. Other restrictions and limitations may apply depending on the broker. Below are the 10-month moving average signals (using adjusted price data) for the commission-free portfolios.

Disclosure: None