BBO Discussion Forums: Google has beaten Go, what could it do to bridge?


Google has beaten Go, what could it do to bridge?

#101 Mbodell

Posted 2016-April-14, 03:19

1eyedjack, on 2016-April-11, 06:16, said:

Thanks to Mbodell for that insight. I would have thought that the same factors that make it hard for a computer would also make it hard for a human (to do well). But I have not given it a lot of thought.


The increase in branching does make the game harder for humans, but not equally so, because humans don't exhaustively consider every possibility. We are very good at patterns and strategies, and we focus right away on only a few (2-4) candidate lines of play. If you increase the branching factor, our selection of those lines may become slightly less optimal, but we still basically do the same thing. A computer that naively searches everything possible is hit much harder, because it doesn't know how to narrow the search to the few right lines. A lot of the "trick" of traditional game AI is figuring out how to do this pruning and evaluation so that only the fewer, better lines are considered, normally with explicit rules and heuristics. That is what makes AlphaGo so interesting: it mostly achieved this not through explicit engineering but through deep-neural-net pattern matching that is more of a black box to its designers.
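The contrast between exhaustive search and human-style pruning can be sketched in a few lines. Everything below is invented for illustration: a toy scoring function stands in for bridge judgment, and "pruning" simply keeps the k best-scoring continuations at each step instead of expanding them all.

```python
import itertools
import random

random.seed(0)

BRANCHING = 8   # moves available at each decision point
DEPTH = 4       # how far ahead we look

def evaluate(line):
    """Stand-in heuristic score for a line of play (higher is better)."""
    return sum(line) + random.random()

def exhaustive_best():
    # Searching everything examines BRANCHING**DEPTH = 4096 lines.
    return max(itertools.product(range(BRANCHING), repeat=DEPTH), key=evaluate)

def pruned_best(k=3):
    # Human-style search: keep only the k most promising continuations
    # at each level, examining at most k * BRANCHING lines per step.
    lines = [()]
    for _ in range(DEPTH):
        candidates = [line + (m,) for line in lines for m in range(BRANCHING)]
        candidates.sort(key=evaluate, reverse=True)
        lines = candidates[:k]
    return lines[0]
```

Doubling BRANCHING squares nothing for the pruned searcher (its work grows linearly per level), while the exhaustive searcher's work grows as BRANCHING**DEPTH, which is the asymmetry described above.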

awm, on 2016-April-12, 09:19, said:

System restrictions rarely impose any limits beyond openings and initial overcalls. Even the ACBL mid-chart has few restrictions beyond this; WBF has virtually none.

An advantage computers have is ability to memorize many long sequences without error. This will matter more in long and rare auctions, not the openings. So I don't see much difference made by system restrictions here.


In the world computer championships the system restrictions are draconian, even by ACBL standards. See link.
No precision, no polish club, no mini-nt, no transfers over 1M(X), no romex, no keri over nt, etc.

And I imagine that the inability to accurately program all the long, rare auctions, particularly when playing an "individual" or when competitive bidding is in play, has a similar effect on the computers. One can see this in all the threads about GIB "bugs" in bidding sequences.

#102 helene_t

Posted 2016-April-14, 03:29

Mbodell, on 2016-April-14, 03:19, said:

And I imagine that the inability to actually accurately program all the long rare auctions, particularly when playing an "individual" or when competitive bidding is in play is a similar effect on the computers. One can see this with all the threads about GIB "bugs" in bidding sequences.

Yes, but the conventions targeted by system restrictions are probably not that difficult to program. GIB messes up natural common-sense stuff, like the infamous "28+ TP" definitions for free bids in convoluted auctions. It doesn't mess up the transfer rebids it plays after 1M-1N-2N. Computer programmers would probably find it a lot easier to implement Todd's "Dejeuner" system than Acol.
The world would be such a happy place, if only everyone played Acol :) --- TramTicket

#103 helene_t

Posted 2016-April-14, 03:42

barmar, on 2016-April-12, 09:32, said:

That's a horrible method of disclosure. Seeing a huge sample of hands doesn't tell you which features of those hands are relevant to the situation at hand. It would be like the encyclopedia entry for "dog" just having pictures of dozens of dogs, without any words explaining how they differ from wolves.

I think it would be a good method of disclosure. Computers often need to base their actions on simulations, so they need to know which hands to include. Defining a SAYC 1♣ opening as "3+ clubs, 5- (other suits), 10-21 HCP" would not be helpful. Providing a sample from the a priori distribution of 1♣ hands would be very helpful.

Suppose opps open a Dutch 2♦, defined as "(6 diamonds and 6-10 HCPs) OR (20+ HCPs blah blah blah)". You have not taken that opening into account when programming your bidding engine, so you need to design a defense on the fly. Maybe it is easy to decide that any action must be constructive if the lower bound of opps' opening is less than 9 HCPs. But what should our 2- and 3-level bids mean, given that opps have not promised diamonds? You may decide to parse the definition tree and realize that all weak variants contain diamonds. But the disclosure regulations should not force you to take that approach. It would also be reasonable to base the definitions on the likelihood that we belong in a diamond contract, or the likelihood that we can make game in diamonds, and for that purpose a sample would be more useful than a definition. Especially since a complete definition, including all negative inferences and suit-quality restrictions, could be daunting.

And once you are captain and need to make a decision (not to mention cardplay), you certainly need a sample of the hands opps can have. You could take the view that they need to give you a complete definition so that you can make the sims yourself, but it is much easier for them to give you the sim results. Especially if they apply some idiosyncratic suit quality metrics and hand evaluation methods which you haven't programmed yourself.
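The sampling approach described above is easy to sketch. The code below is a hypothetical illustration only: it rejection-samples random 13-card hands against an assumed weak-variant-only reading of the Dutch 2♦ (exactly 6 diamonds, 6-10 HCP), then answers one example query against the sample. The hand encoding and the definition are inventions for this sketch, not anyone's actual engine.

```python
import random

RANKS = "AKQJT98765432"
SUITS = "SHDC"
HCP = {"A": 4, "K": 3, "Q": 2, "J": 1}

def random_hand(rng):
    deck = [rank + suit for rank in RANKS for suit in SUITS]
    return rng.sample(deck, 13)

def hcp(hand):
    return sum(HCP.get(card[0], 0) for card in hand)

def suit_length(hand, suit):
    return sum(1 for card in hand if card[1] == suit)

def weak_dutch_2d(hand):
    # Assumed weak variant only: exactly 6 diamonds and 6-10 HCP.
    return suit_length(hand, "D") == 6 and 6 <= hcp(hand) <= 10

# Rejection-sample until we have 200 hands consistent with the opening.
rng = random.Random(42)
sample = []
while len(sample) < 200:
    hand = random_hand(rng)
    if weak_dutch_2d(hand):
        sample.append(hand)

# An opponent's query against the sample: how often does opener hold
# the ace of diamonds?
with_diamond_ace = sum(1 for hand in sample if "AD" in hand) / len(sample)
```

A fuller sketch would mix in the strong variant at its relative frequency; the point is that any question about opener's hand reduces to counting over the sample.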

#104 WesleyC

Posted 2016-April-14, 05:51

barmar, on 2016-April-12, 09:32, said:

That's a horrible method of disclosure. Seeing a huge sample of hands doesn't tell you which features of those hands are relevant to the situation at hand. It would be like the encyclopedia entry for "dog" just having pictures of dozens of dogs, without any words explaining how they differ from wolves.


I don't think I explained my point very well. After the computer has simulated some large number of hands consistent with the auction, it wouldn't just dump them in a pile. They would be completely filterable and searchable, so that opponents would be able to ask questions like "how many HCP is your partner showing" and receive a completely accurate answer, down to the % chance of each HCP count.

If you asked about the program's style for opening 1NT with a 5-card major, the computer would be able to confidently assert that in this position it opens 1NT with 85% of balanced hands containing a 5-card major, and then provide examples of some hands that would be included and some that wouldn't.
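A "filterable and searchable" pile of hands is just a list you aggregate over. This sketch is illustrative: the simulated hands here are plain random deals standing in for hands consistent with some auction, and the query returns the exact HCP distribution over the sample, as described above.

```python
import random
from collections import Counter

RANKS = "AKQJT98765432"
SUITS = "SHDC"
HCP = {"A": 4, "K": 3, "Q": 2, "J": 1}

def hcp(hand):
    return sum(HCP.get(card[0], 0) for card in hand)

# Placeholder for "hands consistent with the auction": random deals.
rng = random.Random(1)
deck = [rank + suit for rank in RANKS for suit in SUITS]
simulated = [rng.sample(deck, 13) for _ in range(1000)]

# Query: the % chance of each HCP count across the sample.
counts = Counter(hcp(hand) for hand in simulated)
distribution = {pts: 100.0 * n / len(simulated)
                for pts, n in sorted(counts.items())}
```

Any other question ("how often is partner 4-4 in the majors?") is the same pattern with a different key function.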

#105 barmar

Posted 2016-April-14, 09:04

WesleyC, on 2016-April-14, 05:51, said:

I don't think I explained my point very well. After the computer has simulated some large number of hands consistent with the auction, it wouldn't just dump them in a pile. They would be completely filterable and searchable, so that opponents would be able to ask questions like "how many HCP is your partner showing" and receive a completely accurate answer, down to the % chance of each HCP count.

At the very least, it would also have to disclose a bunch of hands that wouldn't make the bid. To continue my earlier analogy, the encyclopedia entry for "dog" would need a bunch of pictures of dogs, and also a bunch of pictures of wolves, foxes, cats, etc. Then you can try to analyze these to figure out what distinguishes dogs from other small, furry quadrupeds.

But this is really unnecessary. If you disclose in a way similar to how GIB does, the recipient of the information can turn it into something like a Dealer script to generate the hands itself (in fact, that's what GIB does internally). If percentages are involved, they could be included in the disclosure, and then used during the hand generation.
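A percentage-aware, machine-readable disclosure driving hand generation might look something like this. The disclosure format, the 15-17 1NT numbers, and the 85% frequency are all invented for illustration; a real Dealer script or GIB's internal format would differ.

```python
import random

RANKS = "AKQJT98765432"
SUITS = "SHDC"
HCP = {"A": 4, "K": 3, "Q": 2, "J": 1}

def deal(rng):
    deck = [rank + suit for rank in RANKS for suit in SUITS]
    return rng.sample(deck, 13)

def hcp(hand):
    return sum(HCP.get(card[0], 0) for card in hand)

def balanced(hand):
    lengths = sorted((sum(1 for card in hand if card[1] == s) for s in SUITS),
                     reverse=True)
    return lengths in ([4, 3, 3, 3], [4, 4, 3, 2], [5, 3, 3, 2])

# Invented machine-readable disclosure: 15-17 balanced, opened 85% of the time.
disclosure = {"hcp": (15, 17), "balanced": True, "freq": 0.85}

def consistent(hand, rng, d=disclosure):
    lo, hi = d["hcp"]
    if not (lo <= hcp(hand) <= hi):
        return False
    if d["balanced"] and not balanced(hand):
        return False
    return rng.random() < d["freq"]   # disclosed percentage drives generation

rng = random.Random(7)
sample = []
while len(sample) < 50:
    hand = deal(rng)
    if consistent(hand, rng):
        sample.append(hand)
```

The recipient runs the same generator, so no list of example hands ever needs to change hands.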

Deduction from examples is very tricky, and difficult to program. Humans do it intuitively in contexts where they assume that simple, consistent rules are in force. The scientific method is based on this, with an extra step: performing experiments. After you make a deduction from the examples you have, you form a theory, then test it to see whether the rule holds. The analogy for the above disclosure method would be giving a list of hands to the opponent and asking whether he'd make the same bid with all of them. If you've correctly discerned the bidding rule from the original list, you should get mostly affirmative responses.
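The deduce-then-experiment loop can be sketched with a toy rule. The hidden "opens with 12+ HCP" rule, the uniform point counts, and the threshold guess below are invented stand-ins for a real bidding rule and real hands.

```python
import random

rng = random.Random(3)

def true_rule(points):
    # The opponent's hidden bidding rule: opens with 12+ HCP.
    return points >= 12

# Step 1: deduce a candidate rule from the hands that made the bid.
examples = [rng.randint(0, 20) for _ in range(200)]
openers = [p for p in examples if true_rule(p)]
guessed_threshold = min(openers)

# Step 2: the "experiment" - submit fresh hands and check whether the
# opponent's answers match what our guessed rule predicts.
fresh = [rng.randint(0, 20) for _ in range(100)]
agreement = sum((p >= guessed_threshold) == true_rule(p)
                for p in fresh) / len(fresh)
```

With a one-parameter rule the guess converges quickly; real bidding rules have many interacting parameters, which is exactly why this kind of induction is hard to program.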
