Bidding laboratory - thought about functionality

#1 User is offline   helene_t 

  • The Abbess
  • Group: Advanced Members
  • Posts: 17,068
  • Joined: 2004-April-22
  • Gender:Female
  • Location:UK

Posted 2022-January-18, 15:44

I have thought a bit about a project which I might never get the time for, but who knows ... or maybe somebody else will

To test if Precision is better than 2/1, or if multi-Landy is better than Lionel, one could
- specify the two systems to be compared, say BWS with ML and BWS with Lionel
- deal a bunch of random hands.
- bid the hands in a team match between two teams, one that plays system A at both tables and one that plays B at both tables
- DD solve those boards where the contract differs at the two tables
- score the match
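
In rough Python, the whole loop might hang together as in the sketch below. All the helper names (deal_random_board, bid_board, dd_score, imps) are placeholders for things that would have to be written, not existing functions:

def compare_systems(system_a, system_b, n_boards=1000):
    # Hypothetical sketch: deal_random_board, bid_board, dd_score and imps
    # are assumed helpers, not an existing library.
    total_imps = 0
    for _ in range(n_boards):
        board = deal_random_board()
        contract_a = bid_board(system_a, board)   # system A at both tables
        contract_b = bid_board(system_b, board)   # system B at both tables
        if contract_a == contract_b:
            continue                              # same contract: flat board, no DD solving needed
        score_a = dd_score(contract_a, board)     # double-dummy score for A's contract
        score_b = dd_score(contract_b, board)
        total_imps += imps(score_a - score_b)     # IMP the difference, seen from A's side
    return total_imps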

The biggest challenge is to implement the bidding systems. I envision systems being built from components: a component defines the meaning of calls in a situation. A situation would be described by what information has already been provided by the four players, the last bid, whether it was undoubled/doubled/redoubled, our side or their side, and how many subsequent passes can take place before the auction dies out. Also: which calls are already defined by high-priority components, and what hands are ruled out because a higher-priority component caters to them.
class BiddingSystemComponent {
  biddingTree makeTree(Situation)
}

For example:
class Stayman implements BiddingSystemComponent {
  biddingTree makeTree(Situation) {
    if(lastbid != NT | isdoubled | isTheirContract | level>2) return "componentDoesNotApply"
    return biddingTree (
      strength in constructive_range:
        FALSE: hearts in 3:4 and spades in 3:4 and diamonds in 4:5:   // weak hand: garbage Stayman shape
          FALSE: return "doesNotCoverThisHand"
          TRUE: return (cheapestBid(clubs), "Stayman")
        TRUE: hearts==4 | spades==4:                                  // constructive hand: needs a 4-card major
          TRUE: return (cheapestBid(clubs), "Stayman")
          FALSE: return "doesNotCoverThisHand"
      )
  }
}

class StaymanResponse implements BiddingSystemComponent {
  biddingTree makeTree(Situation) {
    if(lastbid.annotation != "Stayman" | isdoubled | isTheirContract) return "componentDoesNotApply"
    return biddingTree (
     hearts >= 4:
       FALSE: spades >= 4:
         FALSE: return ( cheapestBid(diamonds), "StaymanRsp:NoMajor" )
         TRUE: return ( cheapestBid(spades), "StaymanRsp:Spades" )
       TRUE: return ( cheapestBid(hearts), "StaymanRsp:Hearts" )
     )
  }
}
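
To make the "situation" and priority ideas a bit more concrete, here is a minimal Python sketch. Every field and helper name in it (Situation, make_tree, lookup, choose_call) is my own guess at what such a framework might look like, not an existing API, and it assumes the components are listed highest-priority first:

from dataclasses import dataclass, field

@dataclass
class Situation:
    # What the next player to call knows, as described above.
    last_bid: str | None                  # e.g. "1NT", or None if nobody has bid yet
    doubled: bool = False
    redoubled: bool = False
    their_contract: bool = False
    passes_before_auction_ends: int = 3
    info_shown: dict = field(default_factory=dict)       # what the four hands have shown so far
    calls_already_defined: set = field(default_factory=set)
    hands_already_covered: list = field(default_factory=list)

def choose_call(components, situation, hand):
    # Walk the components from highest to lowest priority; the first one that
    # applies to the situation and covers this hand produces the call.
    for component in components:
        tree = component.make_tree(situation)
        if tree == "componentDoesNotApply":
            continue
        result = tree.lookup(hand)         # evaluate the bidding tree on the actual hand
        if result != "doesNotCoverThisHand":
            return result                  # a (call, annotation) pair
    return ("Pass", "undefined")           # fall-back when no component covers the hand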


Does this sound like a reasonable approach? Or should it be done in a different way? Is it doable?
The world would be such a happy place, if only everyone played Acol :) --- TramTicket

#2 User is offline   DinDIP 

  • Group: Full Members
  • Posts: 117
  • Joined: 2008-December-13
  • Gender:Male
  • Location:Melbourne (the one in Australia not Florida)

Posted 2022-January-19, 01:27

It's a worthwhile objective but very hard to implement effectively. If you want to look at how not to do it, read Jan Eric Larsson's 2021 book Good, Better, Best. He presents a wide range of challenging conclusions from his system and convention comparisons.
He has done a mountain of work and the methodology is sound, though I think the sample size he uses is too small.
However, the biggest problem is the files he uses to specify the bidding systems he tests. IMO they are seriously flawed: for example, how meaningful can the competitive bidding module be when xxx Qx AQxxx Qxx is a vul overcall opposite a passed partner after RHO opens 1?

As both critics and some proponents of economic modelling say: "garbage in = garbage out".

David

#3 User is offline   DavidKok 

  • Group: Advanced Members
  • Posts: 2,235
  • Joined: 2020-March-30
  • Gender:Male
  • Location:Netherlands

Posted 2022-January-19, 02:12

I think this is exactly how you'd go about it. It is a very difficult challenge.

One minor remark: the downside of using DD solvers is that any 'information leakage', a significant concern for modern bidding systems, is made irrelevant. Treatment A might beat Treatment B even if the final contract is the same, or even if the entire auction (i.e. the set of bids) is the same. But that's probably a challenge for some other time.

#4 User is offline   helene_t 

  • The Abbess
  • Group: Advanced Members
  • Posts: 17,068
  • Joined: 2004-April-22
  • Gender:Female
  • Location:UK

Posted 2022-January-19, 05:47

DinDIP, on 2022-January-19, 01:27, said:

how meaningful can the competitive bidding module be when xxx Qx AQxxx Qxx is a vul overcall opposite a passed partner after RHO opens 1?

I thought that, as a first attempt, one could maybe just describe hands in terms of HCP and suit length.

One could add suit quality, stoppers, controls in specific suits, number of A=2/K=1 points and number of keycards, but obviously describing a bidding system is quite a big task even if the hand description is reduced to just HCP and suit lengths.

Alternatively, what about applying some dimension reduction (maybe just linear regression) to get to five parameters that are more relevant than HCP and suit lengths, say the hand's playing strength in each of the five denominations? Still not ideal, as synergy with partner's hand is also important (the 8th and 9th trumps are more valuable than the 10th; ruffing value opposite Axxx or xxxx is better than opposite KJTx). But maybe still better than just using HCP and suit lengths?
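
As a very rough Python sketch of that minimal hand description (the card encoding and the example hand are made up for illustration):

HCP = {"A": 4, "K": 3, "Q": 2, "J": 1}

def hand_features(hand):
    # hand is a list of 13 cards encoded as suit letter + rank, e.g. "SA" = spade ace
    hcp = sum(HCP.get(card[1], 0) for card in hand)
    lengths = {suit: sum(1 for card in hand if card[0] == suit) for suit in "SHDC"}
    return hcp, lengths

example = ["SA", "SK", "S7", "S4", "HQ", "H8", "H2",
           "DJ", "D9", "D5", "C6", "C4", "C3"]
print(hand_features(example))    # (10, {'S': 4, 'H': 3, 'D': 3, 'C': 3})

A regression from these raw features to estimated playing strength in each denomination would then be a separate, second step.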

#5 User is offline   helene_t 

  • The Abbess
  • Group: Advanced Members
  • Posts: 17,068
  • Joined: 2004-April-22
  • Gender:Female
  • Location:UK

Posted 2022-January-19, 05:55

DavidKok, on 2022-January-19, 02:12, said:

One minor remark: the downside of using DD solvers is that any 'information leakage', a significant concern for modern bidding systems, is made irrelevant. Treatment A might beat Treatment B even if the final contract is the same, or even if the entire auction (i.e. the set of bids) is the same. But that's probably a challenge for some other time.

Yes, instead of DD solving one could let the opening lead be chosen based on sims of a dozen hands consistent with the leader's hand and what is known from the auction. That wouldn't be too difficult to implement once the rest of the system is in place, and I think there's some research showing that it is a reasonable surrogate for SD.
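
A rough Python sketch of that lead-selection idea; sample_consistent_deal and dd_tricks_for_declarer are assumed helpers (a constrained dealer and a DD solver), not existing functions:

def choose_opening_lead(leader_hand, auction_info, contract, n_samples=12):
    # For each candidate lead, average the double-dummy result over a handful
    # of deals consistent with the leader's hand and the auction, and pick the
    # lead that holds declarer to the fewest tricks on average.
    best_lead, best_avg = None, None
    for lead in leader_hand:
        total = 0
        for _ in range(n_samples):
            deal = sample_consistent_deal(leader_hand, auction_info)
            total += dd_tricks_for_declarer(deal, contract, lead)
        avg = total / n_samples
        if best_avg is None or avg < best_avg:
            best_lead, best_avg = lead, avg
    return best_lead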

#6 User is offline   mikeh 

  • Group: Advanced Members
  • Posts: 12,836
  • Joined: 2005-June-15
  • Gender:Male
  • Location:Canada
  • Interests:Bridge, golf, wine (red), cooking, reading eclectically but insatiably, travelling, making bad posts.

Posted 2022-January-19, 23:15

Hi Helene

I commend your dedication if you do try to implement this, but I’m not convinced it can be done in any useful fashion.

I’m currently working on a system with a partner for the upcoming WC in Italy. We have been spending about 4-6 hours a week in video chat for 6 weeks now, refining a method that we agreed upon in broad terms a year ago. We have many pages of very detailed notes about many sequences.

Part of what we’ve been doing, in addition to discussions and exchange of annotations to the notes, is playing in practice matches and bidding hands from the BW and other sources. We have old coaching material from Kokish in which we are given hands and auctions and asked to say what we’d do now… as when we first did that 20+ years ago, getting coaching for the BB, we do not always agree.

What I’m getting at, in a typically long-winded way, is that no system is ever so well defined that it prescribes every possible call in every possible situation. There are always going to be hands and auctions where what to do next is at least as much about the judgment or temperament of the player as about the system.

In my partnership, I tend to be more conservative than partner, so in some situations I will take different action even though we have covered a very large number of auctions.

So when you have real people playing matches as you set out in your OP, I think it impossible for you to have so tightly defined the methods that all of the calls will be systemically mandated, or that any given player’s views would be agreed as ‘correct’ (as opposed to ‘reasonable’) so that, should another player using the same system bid otherwise, such would be ‘wrong’ (as opposed to ‘also reasonable’).

Another problem is how you’d even decide what the systemic action should be. One can say ‘Precision’ but there is no universally agreed ‘Precision’. Take one text claiming to describe ‘Precision’ and it will be different from any other text claiming to describe ‘Precision’…the broad outlines will largely be the same but there will be differences. Even more importantly, I’ve never seen a text on a bidding method that went into the level of detail that a serious expert partnership gets into.

And Precision is a relatively easy topic. Trying to specify what ‘2/1’ means is impossible. I play in two serious 2/1 partnerships. If my two partners sat down to play with each other, without many hours of discussion, they’d have misunderstandings on many, many boards even though I play many sequences in common between the two partnerships.

In short (I know, this isn’t short) I don’t know how you deal with system on a granular level nor how you neutralize or account for human factors such as judgment, aggression, etc.

Also, I think you’d need a huge number of hands to generate any meaningful data even if you were able to overcome these issues.

Consider this…high-level bridge is a crucible in which methods are repeatedly tested by being played by players generally of comparable skills (at least in terms of all being far better than the average), but if you read reports on important matches what you see is that no pair ever plays precisely the same methods as any other pair. If one method was clearly superior, it would dominate and most top pairs would play it. Yet it’s a rare team on which the three pairs even play the same basic method, and even rarer that there aren’t significant differences in detail when they do.

Now, if you narrowed the focus to methods over an opponent’s 1N or multi v weak twos, I think there’s much more likelihood of generating useful data.
'one of the great markers of the advance of human kindness is the howls you will hear from the Men of God' Johann Hari

#7 User is offline   DinDIP 

  • Group: Full Members
  • Posts: 117
  • Joined: 2008-December-13
  • Gender:Male
  • Location:Melbourne (the one in Australia not Florida)

Posted 2022-January-19, 23:48

Maybe an easier way to undertake this project is to use an existing program like Jack or WBridge and, either by obtaining permission from the developers or by reverse engineering, add additional systems and/or conventions. The best of these programs have reasonably good or better hand evaluation modules, defensive bidding modules, etc., so that would save you many hours of work.

Just as an example, for simulations I often specify that second hand will pass opener's bid. My code for that runs to hundreds of lines and (a) there are still hands that do or don't get excluded where I would do the opposite and (b) the specs reflect my unique style which is different from those of all my partners, to say nothing of opponents.
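
Just to give a flavour of why such specs balloon, a toy Python fragment (entirely my own invention, not the code described above) for "second hand passes over a one-of-a-suit opening" might start like this:

def second_hand_passes(hcp, suit_lengths, best_suit_quality, vulnerable):
    # Every threshold below is a stylistic judgement; a realistic spec
    # runs to hundreds of lines of such conditions.
    longest = max(suit_lengths.values())
    if hcp >= 12:
        return False                      # strong enough to act (double or overcall)
    if longest >= 6 and not vulnerable:
        return False                      # weak jump overcall
    if longest >= 5 and (hcp >= 8 or best_suit_quality >= 2):
        return False                      # simple overcall
    # ...a realistic spec continues: two-suiters, 1NT overcalls,
    # suit-quality thresholds that shift with vulnerability, etc.
    return True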

#8 User is offline   nullve 

  • Group: Advanced Members
  • Posts: 2,228
  • Joined: 2014-April-08
  • Gender:Male
  • Location:Norway
  • Interests:partscores

Posted 2022-January-20, 05:01

helene_t, on 2022-January-18, 15:44, said:

I have thought a bit about a project which I might never get the time for, but who knows ... or maybe somebody else will

To test if Precision is better than 2/1, or if multi-Landy is better than Lionel, one could
- specify the two systems to be compared, say BWS with ML and BWS with Lionel
- deal a bunch of random hands.
- bid the hands in a team match between two teams, one that plays system A at both tables and one that plays B at both tables
- DD solve those boards where the contract differs at the two tables
- score the match

So the bidding could in principle be done by two teams of four humans?

Or maybe not:

helene_t, on 2022-January-18, 15:44, said:

The biggest challenge is to implement the bidding systems.


:unsure:
