The Web Site of Victor Niederhoffer & Laurel Kenner
Dedicated to the scientific method, free markets, deflating ballyhoo, creating value, and laughter; a forum for us to use our meager abilities to make the world of specinvestments a better place.
Olympic Judges by Phil McDonnell
Many Olympic sports are judged by arcane stylistic measures that sometimes seem known only to the judges. Certainly the diving and gymnastics competitions come to mind. By comparison, running and speed swimming seem quite objective. The judges merely operate a stopwatch to clock the start and finish. Often the judge is a computer, providing the ultimate objective viewpoint, free of national bias.

When market gurus talk about their books, systems, chart patterns or whatever, it often sounds like mumbo jumbo and voodooism of the worst kind. The definitions of things like head and shoulders or relative strength are usually vague and unscientific. To be scientific, something has to be reproducible: another scientist should be able to replicate the results. Instead, many technical analysis proponents propound the idea that some market theory is true, but only the guru can interpret all of the details necessary to decide the matter. To a scientist that does not hold water. The scientist requires a precise definition, supported by empirical evidence, and the ability of other like-minded scientists to replicate the results. Clearly the gurus who claim 'I know it when I see it' status don't qualify.

But what about the case of those who say some rather precisely defined pattern has predictive value? I say submit it to the modern-day Olympian judges. If you can define your proposition to a computer, then it passes the first test. The computer is the modern-day Olympian arbiter. As an example of such a computer definition I would offer a recent post regarding pattern recognition. Our friend, the Idaho spec, posted an excellent study on pattern recognition. The reason it was a good study is that he could clearly and objectively explain the patterns to a computer. His results could be replicated and tested by someone else, and further, the study was backed by a good statistical analysis. The problem with patterns is that everyone can see them, but it is quite difficult to define one.
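To make the "define it to a computer" test concrete, here is a minimal sketch of what an objective pattern definition looks like. The pattern chosen, an "inside day" (today's range contained entirely within yesterday's), is my own illustrative example, not one from the Idaho spec's study; the point is that any two researchers running this code on the same data must get the same signals.

```python
def is_inside_day(prev_high, prev_low, high, low):
    """True when today's range lies entirely within yesterday's range.

    This is an unambiguous, reproducible definition: no judgment call,
    no 'I know it when I see it'.
    """
    return high < prev_high and low > prev_low

def find_inside_days(bars):
    """Return the indices of all inside days.

    bars: list of (high, low) tuples in chronological order.
    """
    hits = []
    for i in range(1, len(bars)):
        prev_high, prev_low = bars[i - 1]
        high, low = bars[i]
        if is_inside_day(prev_high, prev_low, high, low):
            hits.append(i)
    return hits
```

Anyone handed this function and the same price series will flag exactly the same bars, which is what makes the claim testable in the first place.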
If we think about it, the reason is quite simple. The human race has adapted to see patterns. The ability to see the line of a camouflaged leopard's back has obvious survival advantages. Those who lacked this gene were weeded out of the human gene pool millions of years ago. It is also the case that evolution would allow us to recognize patterns that aren't there. Again the reason is clear. If we think we spot the tiger in the grass and it turns out to be just some funny pattern in the tall grass, there is little harm done. Perhaps our adrenaline went up a bit, but otherwise there is no evolutionary disadvantage. Think of it as a practice drill. So all the many false recognitions may serve to keep us sharper and more alert - at least on the African savannah. Contrast the minimal cost of all those false recognitions with even ONE failure to recognize a real tiger.

So when people look at charts, think they see things that have predictive power, and offer a few charts as proof, one should be very wary. By contrast, when someone masters the very difficult job of defining patterns and offers a computer-tested pattern technique with statistical testing, it is commendable. To understand the challenge, consider that just one of the spreadsheet cells had, by my count, 31 nested IF statements with 32 possible outcomes. The fact is that defining all but the simplest pattern to a computer is quite difficult. In my opinion that is why so little statistical testing has ever been done with patterns. It's just too hard to do.
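Once a pattern is defined objectively, the statistical testing the Idaho spec performed becomes straightforward. As a hedged sketch (the counts below are hypothetical, not figures from his study), one can ask: if a pattern fired n times and the next close was up k of those times, how likely is a result that lopsided under a fair-coin null? An exact one-sided binomial test answers this with nothing beyond the standard library.

```python
import math

def binom_sf(k, n, p=0.5):
    """Exact one-sided binomial test: P(X >= k) for X ~ Binomial(n, p).

    Under the null hypothesis that an up close follows the pattern
    with probability p, this is the chance of seeing k or more
    up closes in n occurrences by luck alone.
    """
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# Hypothetical example: the pattern fired 20 times, 15 followed by an
# up close. P(X >= 15 | n=20, p=0.5) is about 0.021 - unlikely to be
# luck at the conventional 5% level, though 20 observations is a small
# sample and multiple-comparison bias still lurks.
p_value = binom_sf(15, 20)
```

This handful of lines replaces the kind of 31-deep nested-IF spreadsheet formula mentioned above with something another researcher can read, rerun, and check.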