Tue, May 5, 2009 | Jared Woodard
Bloomberg has this shocking bit of news:
Ever since the Standard & Poor’s 500 Index peaked in October 2007, six of eight strategies — which are supposed to make money whether stocks rise or fall — failed, according to back-testing data compiled by Bloomberg. As the bear market erased $11 trillion from the value of U.S. equities, buy and sell signals from those six technical indicators produced losses of as much as 49 percent, the data show.
“Technical analysis on its own as a discipline does not work,” said Diane Garnick, the New York-based investment strategist at Invesco Ltd., which oversees $348 billion. Using it in isolation is “the fastest way to lose money,” she said.
Of the eight strategies, stochastics, Bollinger bands, relative strength, commodity channels, parabolic systems and the Williams %R indicator generated buy and sell signals that resulted in losses between the S&P 500’s peak of 1,565.15 on Oct. 9, 2007, and its March 9 trough, the data show. They did worse as the index then rallied 30 percent through last week. [link]
It’s actually kind of difficult to respond to this. In the first place, it’s hardly remarkable that a set of widely known, off-the-shelf vanilla indicators didn’t outperform in one of the strangest and most volatile markets in recent history. Neither did buy-and-hold, nor value “investing,” nor most long/short hedge funds, nor etc. And I’m actually inclined to cheer coverage like this, in spite of all its flaws, if it causes even one individual to scorn untestable and subjective forms of technical (or any other) analysis: verificationism may have failed as a standard for academic metaphysics, but it’s an absolute necessity for the analysis of any financial time series. A prediction that cannot be tested empirically is neither technical nor analytic: it is faith-based finance.
Still, Bloomberg reporters Tsang and Martin have created and destroyed a particularly combustible straw man. Why did they choose these, out of all the indicators around? Why opt for the plainest of the plain, rather than indicators with a little more finesse? Why should anyone – even its most vociferous critics – believe that the set of indicators examined there is representative of “technical analysis” as such? I count myself on the side of the critics, and yet I don’t believe for a moment that any of the sources or indicators referenced in the article represent the gold standard for what it is possible to do with a time series and some basic software.
Moreover, why choose such an odd standard of success, namely, whether or not an indicator anticipated this particular bear market rally? The standard might just as easily have been whether or not an indicator avoided major losses last fall, or whether it has been profitable over a 1-, 5-, or 10-year period. One reason the authors might have chosen to focus on the current rally is that the supposedly “failing” technical indicators still managed to beat the S&P 500 over the full October 2007 – May 2009 period examined. The worst performer, Williams %R, beat the index by 0.8%, while the Directional Movement Indicator gained 9% versus the index decline of 43.9%. “Quotidian technical indicators still manage to beat S&P 500” clearly isn’t as exciting a headline, and would give the lie to the Church of Efficient Markets. Note that the authors insisted on a quote from Pope Malkiel himself, who imputed impure motives to any proponent of price-based predictions.
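To make the point about testability concrete: the kind of evaluation Bloomberg ran is trivial to reproduce on any price series. Here is a minimal sketch, not the article’s methodology, of computing Williams %R over the textbook 14-bar lookback and backtesting a naive long/flat rule (buy below −80, exit above −20) against buy-and-hold. The prices are synthetic and the thresholds are the conventional defaults, chosen purely for illustration.

```python
import math

def williams_r(highs, lows, closes, period=14):
    """Williams %R: -100 * (highest high - close) / (highest high - lowest low)."""
    out = []
    for i in range(len(closes)):
        if i + 1 < period:
            out.append(None)  # not enough history yet
            continue
        hh = max(highs[i + 1 - period:i + 1])
        ll = min(lows[i + 1 - period:i + 1])
        out.append(-100.0 * (hh - closes[i]) / (hh - ll) if hh != ll else -50.0)
    return out

def backtest_long_flat(closes, signal, buy_below=-80, sell_above=-20):
    """Go long when the indicator drops below buy_below; exit above sell_above.

    Returns total return of the strategy (e.g. 0.05 = +5%).
    """
    equity, in_market = 1.0, False
    for i in range(1, len(closes)):
        if in_market:
            equity *= closes[i] / closes[i - 1]
        s = signal[i]
        if s is None:
            continue
        if not in_market and s < buy_below:
            in_market = True
        elif in_market and s > sell_above:
            in_market = False
    return equity - 1.0

if __name__ == "__main__":
    # Synthetic price path: a noisy-free sine wave around 100, for illustration only.
    closes = [100 + 10 * math.sin(i / 5.0) for i in range(120)]
    highs = [c * 1.01 for c in closes]
    lows = [c * 0.99 for c in closes]
    wr = williams_r(highs, lows, closes)
    strat = backtest_long_flat(closes, wr)
    hold = closes[-1] / closes[0] - 1.0
    print(f"strategy return: {strat:.2%}, buy-and-hold: {hold:.2%}")
```

The point is not that this rule works, but that any claim like “%R failed over this window” is an empirical statement a reader can check in a dozen lines, which is exactly what separates testable technical analysis from the faith-based kind.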
I eschew both fundamentals-based fundamentalism and the zealotry of credulous chart painting. Perhaps financial journalists leave those methods alone because it is so much harder to hit a perpetually moving target, and easier to aim instead at pointy-headed evidence-based quantitative practitioners.
[This is part 2 in our unintentional n-part series, "X Fails to Give You a Pony," in which we help mainstream financial journalists do their jobs better. Part 1 is here.]