As I have mentioned in previous articles, there is much to be gained by quantifying the fundamental information we use in the analysis of companies. I have begun the long and painstaking work of deriving a quantitative investment formula from theory. However, I am no rare genius; others have had the same idea before me. One of them, Joseph Piotroski, a professor at the University of Chicago, came up with an investment formula that has been quite successful. His paper, “Value Investing: The Use of Historical Financial Statement Information to Separate Winners from Losers”, available as a PDF here, was published in 2000. In it, Piotroski showed that by using a set of nine fundamental signals to screen among low P/B stocks, an investor could separate the winners from the losers. By buying only those stocks with the highest scores, an investor could have outperformed the market by an average of 10% per year from 1976 to 1996.
Piotroski started by screening for the stocks whose P/B ratios were in the lowest 20%, excluding negative ratios. This limits the strategy to true value companies. Beyond the price-to-book ratio, nine fundamental signals are used, as follows:
- positive earnings
- positive cash flow from operations
- increasing ROA
- cash flow from operations exceeding net income, a sign of earnings quality
- decreasing long-term debt as a proportion of total assets
- increasing current ratio, indicating an increasing ability to pay off short-term debts
- decreasing or stable number of shares outstanding
- increasing asset turnover ratio, indicating increasing sales as a proportion of total assets
- increasing gross margin
Each company receives either a one or a zero on each variable. The strategy calls for buying every company with the requisite low P/B ratio and a score of eight or nine. As you can see from table 3 in Piotroski’s paper, the composite score does a very good job of discriminating between the stocks that perform well and those that do not. On average, companies scoring zero or one saw their stock rise by only about 8% annually, while companies scoring eight or nine saw their stocks rise by an average of 32% per year. For comparison, over the study period (1976 – 1996) the stock market as a whole gained about 20% annually.
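To make the scoring concrete, here is a minimal sketch in Python of how the nine signals might be computed from two consecutive years of fundamentals. The Fundamentals container and its field names are assumptions for illustration, not any particular data vendor's schema; the tests simply follow the list above.

```python
from dataclasses import dataclass

@dataclass
class Fundamentals:
    # Hypothetical field names for one fiscal year of data
    net_income: float
    cfo: float                  # cash flow from operations
    total_assets: float
    long_term_debt: float
    current_assets: float
    current_liabilities: float
    shares_outstanding: float
    gross_margin: float         # gross profit / sales
    sales: float

def f_score(cur: Fundamentals, prev: Fundamentals) -> int:
    """Rough Piotroski-style composite: one point per signal that passes."""
    roa_cur = cur.net_income / cur.total_assets
    roa_prev = prev.net_income / prev.total_assets
    signals = [
        cur.net_income > 0,                                              # positive earnings
        cur.cfo > 0,                                                     # positive operating cash flow
        roa_cur > roa_prev,                                              # improving ROA
        cur.cfo > cur.net_income,                                        # cash flow exceeds earnings
        cur.long_term_debt / cur.total_assets
            < prev.long_term_debt / prev.total_assets,                   # falling leverage
        cur.current_assets / cur.current_liabilities
            > prev.current_assets / prev.current_liabilities,            # improving current ratio
        cur.shares_outstanding <= prev.shares_outstanding,               # no dilution
        cur.sales / cur.total_assets > prev.sales / prev.total_assets,   # improving asset turnover
        cur.gross_margin > prev.gross_margin,                            # improving gross margin
    ]
    return sum(signals)

# A stock would be a candidate if it sits in the cheapest P/B quintile
# and f_score(...) returns 8 or 9.
```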
These results are incredible, but Piotroski does a good job of making them more credible. A priori, it would be reasonable to believe that fundamental analysis would be most beneficial for those stocks that were not well followed (e.g., stocks with small market caps). That is exactly what Piotroski shows in table 4 in his paper. While the strategy does outperform the market for all sizes of companies, it is much more effective with small companies. As table 5 shows, companies with no analyst following and high rank scores outperform the market by 18% per year.
What is perhaps most exciting about this paper is that the strategy of using fundamental analysis to find the strongest value stocks works with many different measures of fundamental strength. As shown by table 8 in the paper, a measure of financial distress or decreasing earnings or decreasing profitability can also discriminate between better and worse investments among low P/B stocks.
I cannot say with certainty why this strategy works (i.e., why investors do not already take this information into account). However, I think the reason does not matter. Consider this analogy: you are the manager of a professional baseball team. Your goal is to find and hire the best baseball players you can for the least amount of money. You scout all the lowest-earning free agents in professional baseball and then hire four or five of the most talented of them. In the long run, you’ll be able to build a fairly solid baseball team for relatively little money. Assuming you have a good eye for talent, you’ll be able to pick up many players for much less than they are worth. Why are they available for so little? It doesn’t really matter to you. Perhaps some of them are known to have a temper; perhaps others are perceived in the league as injury prone and you judge that perception to be wrong. Perhaps other teams just overlooked certain players. It doesn’t matter much to you as long as you can get good players without paying too much.
It is much the same with finding value stocks: maybe some good companies are in boring industries, maybe they have suffered from bad publicity, or maybe they operate in a difficult industry. As long as you look for the best of the cheap stocks you’ll likely do well. Piotroski’s strategy ensures that this is what you are doing, by screening on profitability, asset turnover, cash flow from operations, and other quantitative variables that are correlated with future profitability and future earnings.
This is probably a good point to remind you of the unpredictability of future earnings. David Dreman, among others, has shown repeatedly that it is very hard to predict future earnings. I have previously mentioned the danger of regression to the mean when buying stocks based on unreliable projections of future growth. So, by buying cheap stocks (stocks with low price-to-book ratios) we ensure that we are not paying too much for growth that may never occur. Just as importantly, we reduce our risk by buying stocks with improving fundamentals. As Piotroski points out, low P/B stocks with high rankings are less likely to go bankrupt or to fall drastically in price than those with low rankings.
In a world where mutual fund managers can only rarely beat the market by one or two percentage points over 10 or 15 years, such performance is astounding. However, one problem with this or any similar research is that these returns were not actually obtained. Any time an investment strategy is tested on past data, there is the risk of optimizing the strategy for that past data and, by doing so, changing the strategy in such a way that it will be worthless in the future. An example is certainly in order: let’s say I have developed a technical trading system that simply buys stocks that are at their 52-week lows. I test this system over market data from the past 10 years and find that it leads to an annualized 5% return. This is not good, so I try to improve the system. I add information on stocks’ P/E ratios, their market caps, and analyst ratings. I retest the system and find that over the past 10 years it would have given me an annual return of 10%. That is better but still not great. So I try to improve the system even more. At this point I have run out of ideas, so I try plugging in random things to see if they help. I find that only buying on certain days of the week and selling on other days improves the system. I also find that buying stocks only after a Chicago sports team has won a game almost doubles returns.
After all this work, my system yields theoretical annual returns of 30%. Confident, I try out my system and lose half of my money in one year. What went wrong? By testing factors that almost certainly have no relation to stock returns, I over-optimized the trading system. The system worked well in the past because it was fit to that past, so there is no reason it should work well in the future.
Therefore, one of the keys to developing a trading or investment system is to ensure that it is both relatively simple and theoretically derived. A system derived from theory is far less vulnerable to the problems of data mining. One way to check that a strategy has not been over-optimized is to test it on data that was not used to form the strategy. This has been done with Piotroski’s strategy, and the results are impressive. Paul Sturm of SmartMoney.com wrote about the strategy three times, in 2001, 2002, and 2004. Over that period, the stocks he chose using Piotroski’s strategy rose 50%, whereas the S&P 500 lost over 10%.
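To make the out-of-sample idea concrete, here is a small sketch in Python, using simulated data, of developing a rule on one stretch of history and judging it only on a later stretch it never saw. The signal, the returns, and the threshold search are all made up for illustration; they are not Piotroski's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated monthly data: a "signal" and the returns of a strategy that trades on it.
signal = rng.normal(0.0, 1.0, 240)                          # 20 years of monthly observations
returns = 0.005 + 0.002 * signal + rng.normal(0.0, 0.04, 240)

# First 15 years are used to develop the rule; the last 5 years are held out.
in_sample, out_of_sample = slice(0, 180), slice(180, 240)

def avg_in_sample_return(threshold):
    """Average in-sample return of the months where the signal clears the threshold."""
    picked = returns[in_sample][signal[in_sample] > threshold]
    return picked.mean() if picked.size else float("-inf")

# "Optimizing" on the past: pick the threshold with the best in-sample average.
best_threshold = max(np.linspace(-1.0, 1.0, 41), key=avg_in_sample_return)

# The honest test: apply the frozen rule to the held-out years.
held_out = returns[out_of_sample][signal[out_of_sample] > best_threshold]
print(f"chosen threshold: {best_threshold:.2f}")
print(f"out-of-sample mean monthly return: {held_out.mean():.4%}")
```

If the out-of-sample result collapses relative to the in-sample one, that is a warning that the rule was fit to noise rather than to anything real.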
Another concern in developing a quantitative investment strategy is that it should be robust. A robust system will tend to work even if your data are unreliable or circumstances change. An example of a non-robust system is DCF analysis: if your estimate of future growth is even slightly off, your estimate of a company’s true value will be way off.
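Here is a simple two-stage DCF sketch showing that sensitivity: nudging the assumed growth rate by a couple of percentage points moves the estimated value substantially. The cash flow figure, discount rate, and terminal growth rate are all assumptions chosen for illustration.

```python
def dcf_value(fcf, growth, years=10, terminal_growth=0.03, discount=0.10):
    """Present value of `years` of growing free cash flow plus a terminal value."""
    value = 0.0
    for year in range(1, years + 1):
        fcf *= 1 + growth                       # grow this year's free cash flow
        value += fcf / (1 + discount) ** year   # discount it back to today
    # Gordon-growth terminal value on the final year's cash flow, discounted back.
    terminal = fcf * (1 + terminal_growth) / (discount - terminal_growth)
    return value + terminal / (1 + discount) ** years

# A made-up company generating $100M of free cash flow today.
for g in (0.06, 0.08, 0.10):
    print(f"assumed growth {g:.0%}: estimated value ${dcf_value(100, g):,.0f}M")
```

With these inputs the valuation swings from roughly $1.8 billion at 6% growth to roughly $2.5 billion at 10% growth, even though nothing about the underlying business changed.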
Piotroski’s strategy is quite robust because it weights each of the nine variables equally. If two of those nine variables turn out not to be related to a stock’s performance, the strategy will perform worse but will still likely outperform the market. Likewise, if there is an error in our database and a number is off by an order of magnitude, it will only slightly reduce the performance of the strategy.
Harry Domash has written an article at MSN Money about implementing this strategy. While he describes how you can implement a similar strategy using MSN’s stock screener, that stock screener does not contain all the necessary information to fully implement Piotroski’s strategy. (Also, MSN’s advanced stock screener only works with Internet Explorer. Other than that, however, it is one of the most powerful free stock screeners available).
In the Domash article employing the Piotroski screens, Domash indicated that Computer Sciences was a value pick. Computer Sciences botched an IRS web tax fraud software program several years ago, which cost the IRS and the American public over $300 million in fraudulent refunds that the IRS said would be too costly to attempt to recover.
Also, Domash should consider a RiskGrades.com portfolio comparison of the Piotroski value stock selections vs. a current low-PEG value stock selection portfolio vs. a current IBD high relative strength stock selection portfolio, if he wants a really strong article. The RiskGrades.com website, considered by Forbes to be the most user friendly, is free and provides users with individual ROI and risk measures for stocks, bonds, or mutual funds (or a combination of these), and then provides overall portfolio ROI and risk values against the current ROI and risk values for the S&P 500 benchmark. Users can also store several different portfolios on the RiskGrades.com server for free, and can stress the portfolios for different events, such as the Asian meltdown a few years ago and the impact of 9/11.
The problem with RiskGrades is that all it does is measure a type of beta, and Fama and French showed 15 years ago that beta is useless as a measure of risk.
Do you have a suggestion for a tool to implement this screen? I agree that MSN Money’s screener doesn’t do it, but I am at a loss for an alternative.
The companies that come up on the screen are so small that you can lose 20% to the bid-ask spread.
Mark, since before Fama/French we have known that the best returns come from small value companies. These are very illiquid; the return is partly compensation for that. It can take a long time to build up a position … just make sure to put in a buy right at or above the best bid, so that someone more in a hurry pays the bid/ask spread. I did this successfully with TSRI, for example (I am no longer in it).
True – I have been buying small caps (trading at huge discounts to book) quite successfully over the years. Patience is a big issue here, as you can expect to be underwater at times while waiting for the market to get more rational. Today I picked up some Starrett (SCX), which is now trading at 1/3 of book value. My most profitable trades this year have been in BOFI Holdings, which has been trading below 1/2 of book for the past couple of months…