©2005 All content on site protected by copyright
Briefly Speaking, by Victor Niederhoffer
As the Palindrome is reported to have said early in his career, "I've had such a good year this year, why not make it even better -- you pay the bill." I predict that these markets will all end at their round numbers at the end of the year (or at least inordinately close).
[Table: S&P moves by year (columns Year / SP Move, repeated three across); data rows not preserved.]
Considering the average variability of months during the period, and the drift from an S&P level of 91 in 1975 to 1280 today, it has always seemed to me that December is an unusually balmy month. I tested the joint distributions of good and bad months for randomness with a reasonably robust simulation procedure in EdSpec over the period 1870 to 1996, and concluded, "January and August have been the most bullish months, and September and October have been the most bearish months." The hypothesis needs to be updated and refocused to modern times, but I would tentatively propose that December is an inordinately mild month, with a non-random tendency for stock prices to close near the highs of the year and a general tendency for markets to close near round numbers. The artful simulator Mr. Tom Downing, who is sorely missed at his previous employer, will be busy when he arrives today.
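The kind of simulation test described above can be sketched as a simple permutation test: shuffle the month labels and ask how often a randomly drawn "month" of returns looks as strong as the month in question. The figures below are invented for illustration (the actual EdSpec data and procedure are not reproduced here), and the December bonus is injected deliberately so the demo has something to detect.

```python
import random

# Hypothetical monthly returns in percent, 30 observations per calendar
# month (1 = January .. 12 = December). These numbers are made up for
# illustration -- they are NOT actual S&P data.
random.seed(42)
monthly_returns = {m: [random.gauss(0.5, 4.0) for _ in range(30)]
                   for m in range(1, 13)}
# Inject a mild December effect so the sketch has a signal to find.
monthly_returns[12] = [r + 1.0 for r in monthly_returns[12]]

def permutation_test(returns_by_month, month, n_perm=5000):
    """Estimate the chance that `month`'s mean return would look this
    high if month labels were random, by resampling from all returns."""
    all_rets = [r for rets in returns_by_month.values() for r in rets]
    k = len(returns_by_month[month])
    observed = sum(returns_by_month[month]) / k
    count = 0
    for _ in range(n_perm):
        sample = random.sample(all_rets, k)      # a fake "month" of returns
        if sum(sample) / k >= observed:
            count += 1
    return count / n_perm                        # one-sided p-value

p = permutation_test(monthly_returns, 12)
print("December p-value:", round(p, 3))
```

A small p-value here would say December's strength is hard to attribute to chance; but, as the exchange below notes, running this for all twelve months multiplies the opportunities for a spurious result.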
I was completely dissatisfied with your reaction to my results. This may be the only firm in the world where, if you work from 6 am to 8 pm every day the way I do, including Sundays, you get told, 'That adds to the randomness of your results, because it means you tested more hypotheses and thus are more prone to multiple-comparison Type 1 errors.' I don't wish to talk to you for a few days, in this world and never in the next. Strong letter to follow.
Yes, if you develop 50 hypotheses during the course of your work, and you then exult over the best one because it had less than a 5-in-100 chance of occurring randomly, you in fact have a 0.92 chance of falsely rejecting the randomness hypothesis at least once across the 50 tests. I find the book Multiple Comparison Procedures by Hochberg and Tamhane helpful in this regard. They review and systematize a plethora of methods for adjusting for multiple comparisons, starting with the very conservative Bonferroni procedure: if you have five different means to consider, and thus ten different pairwise comparisons to make, you adjust your probability of rejection down from the usual 5% to 1/10 of 5%. A much more interesting and informative procedure is outlined in Rayner and Best's "A Contingency Table Approach to Non-Parametric Testing," which I have been reviewing in conjunction with my continuing study of clustering, and which I recommend highly.
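The arithmetic behind both figures above is short enough to check directly. Assuming the 50 tests are independent, the chance of at least one false rejection at the 5% level is 1 - 0.95^50, and the alpha-splitting rule described (commonly known as the Bonferroni correction) simply divides the 5% threshold by the number of comparisons:

```python
def familywise_error(alpha: float, m: int) -> float:
    """P(at least one false rejection) among m independent tests,
    each run at significance level alpha."""
    return 1 - (1 - alpha) ** m

# 50 hypotheses tested at the 5% level: chance of at least one
# spurious "discovery" is 1 - 0.95**50, about 0.92 as in the text.
print(round(familywise_error(0.05, 50), 2))

# Five means give C(5,2) = 10 pairwise comparisons; the Bonferroni
# threshold is 0.05 / 10 = 0.005, i.e. 1/10 of 5%.
print(0.05 / 10)
```

The same formula shows why heavy hypothesis-mining needs an adjustment: even at 10 tests, the family-wise error rate is already about 40%.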