BBO Discussion Forums: Team match chatGPT versus Bart



Team match chatGPT versus Bart

#1   helene_t

  • The Abbess
  • Group: Advanced Members
  • Posts: 16,937
  • Joined: 2004-April-22
  • Gender: Female
  • Location: UK

Posted 2023-May-25, 15:48

thepossum, on 2023-May-13, 20:05, said:

[quoted deal diagrams not shown]

I was curious whether our future overlords could bid those hands better than we mortals, so I let Bard sit South and chatGPT West. I took the North and East seats myself. The auctions proceeded:
pass-1-pass-1
2-3-dbl-a.p.

Note that it was West who opened 1 - the auction would have made a bit more sense if we rotated it 90 degrees so that it was South who opened 1.

At the other table, same setup but now chatGPT South and Bard West, it went

1-2-2-4
dbl-a.p.

Again, note that it was South who opened 1.
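
For anyone who wants to repeat the experiment: I am not reproducing my exact prompts here, but a minimal sketch of the setup might look like the following (assuming the openai Python client; the hand, the model name and the prompt wording are placeholders, not what was actually used):

```python
# Minimal sketch of asking a chat model for a single bridge call.
# Everything below (hand, auction, model name, prompt wording) is an
# illustrative placeholder, not the actual prompts used in this match.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

hand = "S: K Q 7 2   H: A 5   D: J 10 8 4   C: 9 6 3"   # example hand
auction_so_far = "pass - 1H - pass"                      # example auction

prompt = (
    "You are South in a game of bridge, playing SAYC.\n"
    f"Your hand: {hand}\n"
    f"The auction so far: {auction_so_far}\n"
    "What is your next call? Answer with the call only, then a one-line reason."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # whichever model is being tested
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```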

Their reasoning for their bids was absolute nonsense: they contradicted themselves, thought that 12 points is more than 14 points, and so on. chatGPT, in a single paragraph, first recited his hand correctly but then corrected himself by noting that he had been dealt 20 cards, which he apparently found normal, as his version of SAYC apparently has systematic bids for 20-card hands. Psychedelic stuff like "I have a balanced 15-17 so I open 1, which is my longest suit as I have six clubs" (chatGPT on the hand with seven hearts). But at least they didn't make any insufficient bids.

I am disappointed with Bard, who has demonstrated reasonable knowledge of the SAYC bidding system on other deals. Not this time, although at least Bard bids diamonds when he has diamonds and hearts when he has hearts, while chatGPT bids fairly randomly. chatGPT has yet to show any sign of life bridge-wise, as far as I have observed.
It looks like there was a system misunderstanding regarding splinters. These AIs get more human every year --- Gilithin

#2   pescetom

  • Group: Advanced Members
  • Posts: 6,291
  • Joined: 2014-February-18
  • Gender: Male
  • Location: Italy

Posted 2023-May-26, 06:08

Yesterday one of our beginners (after 6 months of teaching and practice) was surprised that spades did not beat hearts in a trick at notrump. So there is some hope for Bard.

#3   helene_t

  • The Abbess
  • Group: Advanced Members
  • Posts: 16,937
  • Joined: 2004-April-22
  • Gender: Female
  • Location: UK

Posted 2023-May-26, 06:44

When comparing Bard and chatGPT, my impression is that Bard has better reasoning skills but poorer language skills. For example, Bard will get basic VAT calculations right (chatGPT doesn't). But Bard is sensitive to the exact way the question is phrased and is more likely to misunderstand questions than chatGPT is.

So whether you write the ten as "10" or as "T" makes no difference to chatGPT, while Bard gives different answers. I think it is the "10" that it interprets correctly, but I am not 100% sure.
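
If anyone wants to test this systematically, here is the kind of thing I mean: a small sketch (purely illustrative, the holding is made up) that renders the same cards both ways, so the two phrasings of an otherwise identical question can be fed to each bot and the answers compared:

```python
# Sketch: render the same holding with the ten written as "T" and as "10",
# so two phrasings of an otherwise identical question can be compared.
def render(holding, ten_as="10"):
    """holding is a string like 'AQT4'; write the ten in the chosen form."""
    return " ".join(ten_as if card == "T" else card for card in holding)

holding = "AQT4"                       # example holding
print(render(holding, ten_as="T"))     # A Q T 4
print(render(holding, ten_as="10"))    # A Q 10 4
```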

It is possible that Bard would bid better if I took more care, maybe referring to the other players as N, S, and E instead of LHO, RHO and partner. Or some such.

Bard can generally determine opening bids for SAYC, but when I ask it to bid using Acol it confuses the notrump range. Maybe it uses a mixture of Dutch Acol and English Acol sources.

When you ask Bard to tell you a joke about, for example, how many bridge players are needed to change a lightbulb, it will tell you a boring joke and then elaborate on why the joke is funny.
It looks like there was a system misunderstanding regarding splinters. These AIs get more human every year --- Gilithin

#4   LBengtsson

  • Group: Full Members
  • Posts: 955
  • Joined: 2017-August-10
  • Gender: Male

Posted 2023-May-26, 14:21

Artificial Intelligence will never beat Common Sense. Period.

#5   barmar

  • Group: Admin
  • Posts: 21,211
  • Joined: 2004-August-21
  • Gender: Male

Posted 2023-May-26, 16:33

What could possibly make you think that a predictive language model would know anything about bridge bidding? All it knows how to do is find patterns of words in its training data and put those patterns together to produce text.

So unless there's a web page that includes your hand diagrams and then a bidding diagram, where would it come up with bidding for this?

Repeat after me: ChatGPT doesn't actually know anything. It's just repeating things it has heard.

#6   pilowsky

  • Group: Advanced Members
  • Posts: 3,404
  • Joined: 2019-October-04
  • Gender: Male
  • Location: Australia

Posted 2023-May-28, 03:19

barmar, on 2023-May-26, 16:33, said:

Repeat after me: ChatGPT doesn't actually know anything. It's just repeating things it has heard.


So quite similar to most bridge players in a way.

#7   helene_t

  • The Abbess
  • Group: Advanced Members
  • Posts: 16,937
  • Joined: 2004-April-22
  • Gender: Female
  • Location: UK

Posted 2023-May-29, 13:11

barmar, on 2023-May-26, 16:33, said:

What would possibly make you think that a predictive language model would know anything about bridge bidding?


chatGPT can play chess and it can do sample-size calculations for clinical trials. Basically, it can predict what a popular answer on Reddit would be to a wide variety of questions.

When chatGPT plays chess, it will occasionally move pieces that don't exist, which confirms that the rules and objectives of the game haven't been programmed explicitly. But apparently there are enough chess discussions in its inputs to allow it to play decent chess for the most part, i.e. to predict what a Reddit user would answer to just about any question about the best move in a given position.
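
If you want to keep score on those hallucinated moves, one way (not what I did, just a sketch using the python-chess library; the position and the suggested move are only examples) is to check each answer against the actual position:

```python
# Sketch: check whether a model's suggested move is legal in the current
# position, using the python-chess library.
import chess

board = chess.Board()
board.push_san("e4")   # example moves played so far
board.push_san("e5")

suggested = "Nf3"      # whatever the model answered

try:
    board.push_san(suggested)   # raises ValueError if illegal or malformed
    print(f"{suggested} is a legal move here.")
except ValueError:
    print(f"{suggested} is not a legal move in this position.")
```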

I don't see any reason why it shouldn't be able to play bridge as well. But of course bridge is more challenging: what bid a forum user might recommend in a given situation depends on partnership agreements and the opponents' system. And maybe bridge also has less standardized input formats than chess, or simply less data.
It looks like there was a system misunderstanding regarding splinters. These AIs get more human every year --- Gilithin

#8   barmar

  • Group: Admin
  • Posts: 21,211
  • Joined: 2004-August-21
  • Gender: Male

Posted 2023-June-01, 15:06

If there were microphones distributed throughout all the bridge tournament venues, recording all the "you hold...." conversations and uploading them to web sites, ChatGPT *might* get enough information to create sensible bridge auctions.

I'll bet its chess answers get progressively less sensible as the game progresses. There are only a few openings, so it will easily learn these just as human chess players do.

This is similar to what it's like to program bridge robots. The basic opening bids are easy to program, but it gets much more complicated after two or three rounds of bidding because there are so many combinations of previous bids. Bridge programs know how to merge the information implied by each bid into a general representation of the other players' hands, and they use this. But a generative language model like ChatGPT doesn't do that; it's just matching patterns of text.
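
To make that concrete, here is a toy sketch of what "merging the information implied by each bid" could look like: each call is treated as a set of (min, max) ranges on features such as high-card points and suit length, and successive calls simply intersect the ranges. This is a simplification for illustration, not how any actual bridge robot is coded, and the ranges shown are made up:

```python
# Toy sketch: each call is modelled as (min, max) ranges on features such as
# high-card points ("hcp") and suit lengths; the picture of a hand is built
# by intersecting the ranges shown by successive calls.
def merge(picture, constraint):
    """Intersect two dicts of (min, max) ranges keyed by feature name."""
    merged = dict(picture)
    for feature, (low, high) in constraint.items():
        cur_low, cur_high = merged.get(feature, (0, 40))
        merged[feature] = (max(cur_low, low), min(cur_high, high))
    return merged

picture = {}
picture = merge(picture, {"hcp": (12, 21), "spades": (5, 13)})  # e.g. a 1S opening
picture = merge(picture, {"hcp": (12, 14)})                     # e.g. a minimum rebid
print(picture)  # {'hcp': (12, 14), 'spades': (5, 13)}
```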

#9   pilowsky

  • Group: Advanced Members
  • Posts: 3,404
  • Joined: 2019-October-04
  • Gender: Male
  • Location: Australia

Posted 2023-June-01, 23:00

As it happens, it's completely useless at chess, according to world number 4 or 5 (depending on the day) Hikaru Nakamura.
