sfbp, on May 1 2006, 04:36 PM, said:
Welcome to the WonderWorld of Whatif
However, your simulation misses some important points: what about hands with a six-card suit and 5-6 points that no one would open in first or second seat (honours in all the wrong places)? I don't think this is a very good test of those hands, which are clearly in the real mix when the player in last seat opens the bidding in real life.
It also misses that:
- a Precision player will open a major with 11 HCP
- a SAYC player doesn't have a weak two in ♣
- lots of online players open any 54xy distribution with 10 or 11 HCP
I need to compensate for that; a rough sketch of how these styles might be encoded follows below.
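For the record, this is roughly what I mean by compensating, as a Python-style sketch. The Hand class, the attribute names and every HCP/shape threshold here are my own illustrative assumptions, not anybody's actual system notes:

CODE
from dataclasses import dataclass

@dataclass
class Hand:
    hcp: int      # high card points
    spades: int   # suit lengths
    hearts: int
    diamonds: int
    clubs: int

def would_open(hand, style="online"):
    """Rough guess at whether a player of the given style opens this hand.
    All thresholds are illustrative only."""
    lengths = sorted([hand.spades, hand.hearts, hand.diamonds, hand.clubs], reverse=True)
    if style == "precision":
        # Precision players open a five-card major on 11 HCP
        return hand.hcp >= 12 or (hand.hcp >= 11 and max(hand.spades, hand.hearts) >= 5)
    if style == "sayc":
        # SAYC has no weak two in clubs, so a weak hand with long clubs still passes
        weak_two = 5 <= hand.hcp <= 10 and max(hand.spades, hand.hearts, hand.diamonds) >= 6
        return hand.hcp >= 12 or weak_two
    if style == "online":
        # Many online players open any 54xy shape with 10 or 11 HCP
        return hand.hcp >= 12 or (hand.hcp >= 10 and lengths[0] >= 5 and lengths[1] >= 4)
    return hand.hcp >= 12

def passed_out(hands, styles=("online", "sayc", "precision", "online")):
    """A simulated deal only counts as passed out if no seat would open it
    under the style assumed for that seat."""
    return not any(would_open(h, s) for h, s in zip(hands, styles))

# Example: 6 HCP with six clubs still passes in SAYC (no weak two in clubs),
# while a 54xy 11-count gets opened by the typical online player.
print(would_open(Hand(6, 2, 2, 3, 6), "sayc"))    # False
print(would_open(Hand(11, 5, 4, 2, 2), "online")) # True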
Quote
I stand by my assessment that the only fair test (notwithstanding strong pass and misclick) is to pick deals that were passed (by all four players) at least once. The fact that Ben's data more or less coincided with this is good, but not conclusive evidence that his criteria approximate to mine.
But you have to think about the effect of arbitrarily ruling out certain patterns.
I would guess that every deal relevant to this problem should end up in your subset.
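That selection is simple enough to express; a minimal sketch, assuming each hand record carries its auctions as lists of calls (the field names here are hypothetical, adapt them to the real record layout):

CODE
def passed_out_at_least_once(deals):
    """Keep only deals for which at least one recorded auction was four passes.
    Each deal is assumed to be a dict with an 'auctions' list of call lists,
    e.g. ['P', 'P', 'P', 'P'] or ['1S', 'P', '2S', ...]."""
    return [d for d in deals if any(a == ['P', 'P', 'P', 'P'] for a in d['auctions'])]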
But hand records from online play suffer from:
- pickup partnerships
- aggressive strategies (necessary to win tourneys with a small number of boards)
- people experimenting with psyches etc.
- enormous swings caused by the heterogeneous field
- averages shifted by a few extreme scores
You need to compensate for that (one small example for the last point is sketched below).
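For that last point, one simple compensation is to trim the extreme results before averaging; a minimal sketch, with the trim percentage chosen purely for illustration:

CODE
def trimmed_mean(scores, trim_frac=0.05):
    """Average after dropping the top and bottom trim_frac of the results,
    so a handful of extreme scores no longer shifts the average.
    trim_frac=0.05 is an illustrative choice, not a recommendation."""
    if not scores:
        raise ValueError("no scores to average")
    s = sorted(scores)
    k = int(len(s) * trim_frac)
    trimmed = s[k:len(s) - k] if len(s) > 2 * k else s
    return sum(trimmed) / len(trimmed)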
I think both methods are valid, if used carefully.
Quote
Perhaps other BRBR users will agree with me that only when you actually see the data flowing past you, and start to look at what real people did as opposed to arbitrarily restricting a deal in multiple ways, is it possible to make action judgements based on percentages?
The question is:
Are your real people a good representation of the field I'm going to play in?
At club level or regionals for sure, but what about (inter)national championships?
One gets no help from data flowing by unless it leads to information that can be transformed into knowledge.