Daily Speculations

The Web Site of Victor Niederhoffer & Laurel Kenner

Dedicated to the scientific method, free markets, ballyhoo deflation, value creation, and laughter; a forum for us to use our meager abilities to make the world of specinvestments a better place.

 


04/26/2004

Seasonality Redux

 

A seasonatarian suggests that one should break up day-of-week effects into before the bear market, during the bear market, and after. A reformed seasonatarian, however, has found that the day-of-week effects, when measured by the percentage of rises, have been relatively constant. Neither used futures. We bit our tongue but finally ... we cannot refrain from reprinting an excerpt from our March 20, 2003 article on MSN Money (thanks to Dr. Alex Castaldo for reminding us):

Seasonality and changing cycles, by Victor Niederhoffer and Laurel Kenner

A good part of the anomaly literature is devoted to studies of seasonality. A basic problem with these studies is that merely picking a season to study involves making guesses as to when and where the seasonality is. For example, is it in January or December, on Monday or Friday, in the United States or the Ukraine? (Yes, our Google search turned up a study of anomalies in the Ukraine.) Thus, the very choice of a subject might involve random luck.

Another aspect of seasonality studies that must be considered is whether the effects noted are sufficient to cover transaction costs. A retrospective study showing that you can make 2 cents more on Friday trades than Monday trades in your typical $50 stock would not be sufficient in practice to leave anyone but the broker and the market-maker richer.

Thus, it's essential to temper the conclusions of such studies with out-of-sample testing -- in other words, with real trading.

With that in mind, let's consider a representative seasonality study, "The Day of the Week Effect and Stock Market Volatility: Evidence From Developed Countries," by Halil Kiymaz and Hakan Berument, from the Summer 2001 edition of the Journal of Economics and Finance. Their research examines whether returns and variability differ on different days of the week in the five major international markets from 1989 to October 1997. They conclude that day-of-week anomalies exist for returns and volatility in all five countries.

For example, they find that in the United States, stocks fall an average of 0.08% on Mondays while rising an average of 0.1% on Fridays. But hold all tickets -- 0.1% on a $50 stock is just 5 cents, hardly enough to cover transaction costs even if the results hold up.

SpecDuo Rating:
Practicality: 0
Transparency: 10
Testing: 10
Regime shift possibility noted: 0

Spec update

Fortunately, the Spec Duo has the pencils and envelopes necessary to update the study. Here is what we found for the daily changes and volatility for the S&P 500 Index on the different days of the week from the beginning of 1997 to March 15, 2003:

Returns and volatility by day of the week, in percent:

                 Monday   Tuesday   Wednesday   Thursday   Friday
Av. return        0.04     -0.02      -0.06       0.05      -0.01
Av. volatility    0.12      0.13       0.12       0.12       0.12

Regrettably, the returns are so close to zero and the variabilities so close to equal that the results are statistically indistinguishable from randomness. The day-of-week effect has been totally useless to speculators over the last six years, despite what the literature says.
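
For readers who would like to redo the tally themselves, here is a minimal sketch in Python. It assumes a CSV of S&P 500 daily closes with 'Date' and 'Close' columns; the file name, and the use of the standard deviation as the volatility measure, are our assumptions rather than part of the original table.

    # Sketch of the day-of-week tally.  Assumes a CSV of S&P 500 daily closes
    # with 'Date' and 'Close' columns; the file name is hypothetical.
    import pandas as pd

    prices = pd.read_csv("sp500_daily.csv", parse_dates=["Date"], index_col="Date")
    returns = prices["Close"].pct_change().dropna() * 100   # daily % changes

    weekday = returns.index.day_name()
    summary = pd.DataFrame({
        "avg_return_pct": returns.groupby(weekday).mean(),
        "volatility_pct": returns.groupby(weekday).std(),   # std dev used as the volatility measure
        "pct_up_days": (returns > 0).groupby(weekday).mean() * 100,
    })

    order = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]
    print(summary.reindex(order).round(2))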

Comment by Philip J. McDonnell, a former student of the Chairman at UC Berkeley: Dr. Niederhoffer points out that there is no a priori reason to believe that any one day of the week is stronger than any other. Thus when Y--- collected the data (thank you!), presumably the reason was to find out if any days of the week behaved differently. Only after peeking at the data was it possible to say that Monday was the best and Tuesday the worst.

There are 10 such pairwise comparisons:

  Mon vs. each of the other 4 days    4
  Tue vs. each of the 3 later days    3
  Wed vs. Thu and Fri                 2
  Thu vs. Fri                         1
  Total                              10

In other words, it is also possible by chance alone that Tuesday could have been the best day and Monday the worst, or any other pairwise comparison. So when the one best and the one worst day shown by the data are compared and found to have, say, 5% significance, we need to remember that we implicitly ruled out the other nine pairs that weren't the best or worst. We therefore need to take our 5% number and multiply it by 10, giving a corrected significance of 50% -- exactly consistent with randomness.

The problem is that multiple comparisons are often subtle and go unrecognized. They are insidious because, once they are accounted for, they dramatically reduce the power of the statistical tests we employ.
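
A back-of-the-envelope version of this correction, in Python for concreteness; the 5% figure is the one quoted above, and the Bonferroni-style multiplication is our illustration of Mr. McDonnell's point, not his own code.

    # Count the pairwise day-of-week comparisons and apply a Bonferroni-style
    # correction to the nominal 5% significance of the best-vs-worst pair.
    from itertools import combinations

    days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
    pairs = list(combinations(days, 2))      # every pair we implicitly peeked at
    n_comparisons = len(pairs)               # 10, as tallied above

    nominal_p = 0.05                         # best-vs-worst pair, taken at face value
    corrected_p = min(1.0, nominal_p * n_comparisons)

    print(n_comparisons, corrected_p)        # 10 0.5 -- consistent with chance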

The right way to do this type of thing is to form a specific hypothesis based on a single comparison and then to test it on the data. It is even possible to use data from a prior period to formulate the hypothesis and then test it on the subsequent period, excluding the period where the hypothesis was formed. For the sake of illustration, suppose that we never looked at the data and chose 1991 as our old period for forming the hypothesis. We would have concluded that Wednesday (58.82%) is the best day of the week and Friday (43.40%) the worst. Then we would have thrown out the 1991 data used to formulate the hypothesis and used only the 1992-2003 data to test it.

 

           Monday   Tuesday   Wednesday   Thursday   Friday
1991       48.08%   51.02%    58.82%      50.00%     43.40%
Average    58.46%   49.58%    50.96%      50.62%     52.84%

A quick review of Y---'s average data shows that over the entire period Friday is now better than Wednesday, which means the difference in proportions runs in the wrong direction. Removing the 1991 data wouldn't change the result much. We would be forced to reject the hypothesis.
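
For the record, here is one way the out-of-sample step might be coded, reusing the `returns` series from the earlier sketch. The two-proportion z-test is our choice of test and the names are illustrative; it is a sketch of the procedure, not Mr. McDonnell's calculation.

    # Out-of-sample check of the "Wednesday beats Friday" hypothesis formed on
    # 1991 data: compare up-day proportions on 1992-2003 only, using a
    # two-proportion z-test.  `returns` is the daily % change series built above.
    from math import sqrt

    def up_day_counts(r, day):
        """Up days and total days for one weekday in a return series."""
        d = r[r.index.day_name() == day]
        return int((d > 0).sum()), len(d)

    test = returns.loc["1992":"2003"]        # the 1991 formation period is excluded
    up_wed, n_wed = up_day_counts(test, "Wednesday")
    up_fri, n_fri = up_day_counts(test, "Friday")

    p_wed, p_fri = up_wed / n_wed, up_fri / n_fri
    p_pool = (up_wed + up_fri) / (n_wed + n_fri)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_wed + 1 / n_fri))
    z = (p_wed - p_fri) / se                 # the hypothesis predicts z > 0

    print(f"Wed {p_wed:.2%}  Fri {p_fri:.2%}  z = {z:.2f}")
    # A z near zero or negative (Friday ahead), as the table suggests, rejects it.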