
Skill rankings on BBO: Do you find them tiresome?

#21 pescetom

  • Group: Advanced Members
  • Posts: 7,202
  • Joined: 2014-February-18
  • Gender:Male
  • Location:Italy

Posted 2019-January-01, 07:54

hrothgar, on 2019-January-01, 07:31, said:

I know that the EBU has implemented a dynamic rating system.

I have not seen much convincing evidence regarding its validity.


This document goes into detail and seems forthcoming and objective about known or potential limitations. The EBU clearly put in a lot of thought and work, although I don't see independent validation. There are several EBU players active on the forum; maybe they could comment on any known issues.

I think it would work in Italy pretty much off the shelf. No obvious problems of inaccessible data or unusual tournament types, though less mobility between clubs might mean a longer time for rankings to stabilise.

#22 IGoHomeNow

  • Group: Members
  • Posts: 25
  • Joined: 2017-February-26

Posted 2019-January-01, 08:51

ACBL masterpoints are a joke. Money is behind this.

#23 hrothgar

  • Group: Advanced Members
  • Posts: 15,372
  • Joined: 2003-February-13
  • Gender:Male
  • Location:Natick, MA
  • Interests:Travel
    Cooking
    Brewing
    Hiking

Posted 2019-January-01, 09:29

pescetom, on 2019-January-01, 07:54, said:

This document goes into detail and seems forthcoming and objective about known or potential limitations. The EBU clearly put in a lot of thought and work, although I don't see independent validation. There are several EBU players active on the forum; maybe they could comment on any known issues.


From my perspective, the proof is in the pudding, by which I mean I am less interested in exposition trying to justify the system than in its ability to be used for prediction.

  • How accurate are the predictions made by this system?
  • How do these results compare to other plausible designs?

I can't help but believe that this type of simple linear model would be significantly out-performed by a machine-learning model able to crunch results on a board-by-board basis. (This looks like another classic example where the organization is unwilling to do the basic data collection that would allow it to significantly improve the results.)
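To make the first bullet concrete, here is a toy back-test in Python (all numbers invented; this is not the EBU's methodology, just the shape of the test I'd want to see): score each candidate system by the gap between the session percentages it predicts and the ones that actually happen.

def rmse(pairs):
    """Root-mean-square error between predicted and actual session scores (%)."""
    return (sum((p - a) ** 2 for p, a in pairs) / len(pairs)) ** 0.5

# Each tuple: (predicted session %, actual session %) for one pair-session.
# Invented numbers, purely for illustration.
session_model = [(52.0, 55.1), (48.5, 47.0), (61.0, 56.4), (45.0, 49.2)]
board_model   = [(53.5, 55.1), (47.8, 47.0), (58.2, 56.4), (47.9, 49.2)]

print("session-level model RMSE:", round(rmse(session_model), 2))
print("board-level model RMSE:  ", round(rmse(board_model), 2))

Whichever system consistently shows the smaller out-of-sample error is the better rating system, regardless of how elegant its justification reads.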
Alderaan delenda est

#24 nige1

  • 5-level belongs to me
  • Group: Advanced Members
  • Posts: 9,128
  • Joined: 2004-August-30
  • Gender:Male
  • Location:Glasgow Scotland
  • Interests:Poems Computers

Posted 2019-January-01, 11:58

pescetom, on 2019-January-01, 07:54, said:

This document goes into detail and seems forthcoming and objective about known or potential limitations. The EBU clearly put in a lot of thought and work, although I don't see independent validation. There are several EBU players active on the forum; maybe they could comment on any known issues.

I think it would work in Italy pretty much off the shelf. No obvious problems of inaccessible data or unusual tournament types, though less mobility between clubs might mean a longer time for rankings to stabilise.

The EBU (English Bridge Union) NGS (National Grading System) seems to work well.


#25 pescetom

  • Group: Advanced Members
  • Posts: 7,202
  • Joined: 2014-February-18
  • Gender:Male
  • Location:Italy

Posted 2019-January-01, 12:38

hrothgar, on 2019-January-01, 09:29, said:

From my perspective, the proof is in the pudding, by which I mean I am less interested in exposition trying to justify the system than in its ability to be used for prediction.

  • How accurate are the predictions made by this system?
  • How do these results compare to other plausible designs?

I can't help but believe that this type of simple linear model would be significantly out-performed by a machine-learning model able to crunch results on a board-by-board basis. (This looks like another classic example where the organization is unwilling to do the basic data collection that would allow it to significantly improve the results.)


It would be nice to see an independent comparison and evaluation of the various designs, I agree. A review of the methods by statisticians (which the EBU say they did, to their own satisfaction) might be even more telling.

I may be missing something, but I don't see anything terribly wrong with using the final tournament results rather than board-by-board results, which would require a uniformity and integration of software that is not always there yet in traditional bridge. Yes, I might not have played against the strongest or weakest pair in the other line, and penalties might impact slightly differently, but I did play them all with the same partner, and in time it should all average out. The EBU does use club-level results rather than national-level results in simultaneous tournaments, in order to eliminate bias due to the different strengths of NS and EW pairs, something that can skew results quite significantly.
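To illustrate the club-level point, a small Python sketch (scores invented): you matchpoint your result only against the other pairs who sat your line at your club, so the strength of the field in the other line cannot distort your percentage.

def matchpoint_pct(my_score, other_scores):
    """Standard matchpointing: 2 MP per pair beaten, 1 per tie, as a percentage."""
    mp = sum(2 if my_score > s else 1 if my_score == s else 0 for s in other_scores)
    return 100.0 * mp / (2 * len(other_scores))

# NS scores on one board from the OTHER NS pairs at my club; mine was 620.
club_line_scores = [170, 620, -100]
print(matchpoint_pct(620, club_line_scores))  # 83.3: compared only within my own line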

#26 JJE 0

  • Group: Members
  • Posts: 8
  • Joined: 2019-January-01

Posted 2019-January-01, 13:26

My own opinion is that the self-ratings just aren't that reliable.

Looking back on my recent performance, I'm averaging 42.84% in ACBL matchpoint tournaments. I have about 25 ACBL masterpoints ("club master") -- whatever.

In some non-ACBL-sanctioned tournaments online I am often told (after making a single mistake) that I'm an "idiot/beginer(sic)/novice". Surely these "eminently intelligent" people themselves would understand that everyone makes mistakes from time to time.

INSTEAD

It would be a lot more useful and indicative to compare partnerships. It's a partnership game after all. Especially at the intermediate level there can be so many gaps and chasms between two partners' playing abilities and bidding abilities. So my partnership with Anne Boleyn might have a rating of 53% while my partnership with Charles Darwin might only average 43%, and with Edward Furlong we tend to get 48% on average.
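For instance, a toy Python sketch of the bookkeeping I mean (names and percentages invented, per the examples above):

from collections import defaultdict
from statistics import mean

# (player, partner, session %) records, invented for illustration
sessions = [
    ("jeff", "anne", 53.2), ("jeff", "anne", 52.8),
    ("jeff", "charles", 43.1), ("jeff", "charles", 42.9),
    ("jeff", "edward", 48.0), ("jeff", "edward", 47.6),
]

by_pair = defaultdict(list)
for p1, p2, pct in sessions:
    by_pair[frozenset((p1, p2))].append(pct)

for pair, pcts in sorted(by_pair.items(), key=lambda kv: sorted(kv[0])):
    print(f"{' + '.join(sorted(pair))}: {mean(pcts):.1f}% over {len(pcts)} sessions")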

But honestly I'd just like to see less text-screaming and "???????????????????????", and fewer people who insist on playing individual tournaments and then also insist that every single random person in the world sees what they see, knows what they know, and acts as they would act (in particular in the dummy position, and under a bit more pressure than they face).

So I'm intermediate. I'm about average at my real-life clubs, and I perform approximately as can be expected of someone with the same number of masterpoints, which is just a measure of life experience (anywhere between 0 and 99% of which is forgotten anyway).

I could demote myself to beginner, I suppose, but that would be inconsistent with a lot of play and bidding that elicits sincere congratulations from others. I'm the first person to claim that I'm just lucky, but when I'm cruising in the pocket and everything clicks, there is some achievement and skill there.

And what do I do about my real-life partner, who is certainly an advanced player but an absolutely awful bidder? Yelling at one's partner is not just the domain of online trolls. I've been shouted at for making mistakes in play I didn't know I had made. As Randle McMurphy said to a fellow inmate at the blackjack table in the mental institution: "I can't hit you because it's not your turn. You see these other people here? These are REAL PEOPLE! These are the real ones!" It's a scourge of mental illness that is endemic to the game of bridge. I particularly scoff at those who think that some others should not be permitted to play bridge at all.

Finally, sometimes I play later in the evening, when I know my head's not totally in the game and my memory is not at its best. But I don't mind paying USD 1.25 for an hour of 12 boards. Maybe I will win, more likely not; maybe I will learn something, quite possibly not, LOL. Should it really go on my permanent record if I (or my partner, for that matter, which happens often) flake out?

Nobody has defined precisely what Intermediate means, other than that it's approximately the same as "most" other people on BBO. Even if I'm 0.8-0.9 standard deviations below the mean, I'm still intermediate. Compared to a lot of people who don't exactly behave like civilized humans, in a manner of speaking, LOLOL.

Jeff

#27 hrothgar

  • Group: Advanced Members
  • Posts: 15,372
  • Joined: 2003-February-13
  • Gender:Male
  • Location:Natick, MA
  • Interests:Travel
    Cooking
    Brewing
    Hiking

Posted 2019-January-01, 14:00

pescetom, on 2019-January-01, 12:38, said:


I may be missing something, but I don't see anything terribly wrong with using the final tournament results rather than board-by-board results, which would require a uniformity and integration of software that is not always there yet in traditional bridge. Yes, I might not have played against the strongest or weakest pair in the other line, and penalties might impact slightly differently, but I did play them all with the same partner, and in time it should all average out.


Here are a couple of reasons why using board-by-board results is much better.

1. All boards are not created equal. Some boards are intrinsically flat. (Imagine a board on which all roads lead to 3NT and, no matter how good the declarer or how awful the defense, it is always going to make precisely 10 tricks.) Any time such a board gets played by two pairs whose skills are significantly different from one another, the board result will have a negative impact on the strong pair and a positive impact on the weak pair. By contrast, an algorithm that has access to more complete data will be able to compensate for these kinds of issues and converge much more quickly to an accurate set of ratings.

2. There are much better modeling techniques than hoping for a linear relationship between board results and skill levels. However, these techniques typically work best with relatively large data sets. Averaging the results of 24 or so boards into a single data point dramatically decreases the amount of data available and degrades the accuracy of some of your best options for modeling this relationship.
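For concreteness, here is a rough Python sketch of a per-board, Elo-style update (my own toy version, not a description of anyone's production system): each board is treated as a head-to-head between the two pairs at the table, scored as the matchpoint fraction (0 to 1) that NS earned on that board.

import math

K = 4.0  # step size; would need tuning against real data

def expected_ns(r_ns, r_ew):
    """Logistic expectation of NS's matchpoint fraction given the rating gap."""
    return 1.0 / (1.0 + math.exp(-(r_ns - r_ew) / 50.0))

def update(ratings, ns, ew, mp_fraction):
    delta = K * (mp_fraction - expected_ns(ratings[ns], ratings[ew]))
    ratings[ns] += delta
    ratings[ew] -= delta

ratings = {"strong_pair": 100.0, "weak_pair": 0.0}
# Flat board: everyone makes 3NT+1, so both pairs score 0.5 when the strong
# pair was expected to score ~0.88. Under this naive update the flat board
# still costs the strong pair points; a smarter per-board model could learn
# to down-weight boards the whole field scores flat.
update(ratings, "strong_pair", "weak_pair", 0.5)
print(ratings)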
Alderaan delenda est

#28 barmar

  • Group: Admin
  • Posts: 21,398
  • Joined: 2004-August-21
  • Gender:Male

Posted 2019-January-01, 14:32

JJE 0, on 2019-January-01, 13:26, said:

It would be a lot more useful and indicative to compare partnerships.

That's not very helpful when people on BBO are mostly encountered as singles: if you're asking to sit in the MBC, or using the Partnership Desk to play in a tourney, a partnership rating doesn't tell you much.

#29 barmar

  • Group: Admin
  • Posts: 21,398
  • Joined: 2004-August-21
  • Gender:Male

Posted 2019-January-01, 14:44

hrothgar, on 2018-December-31, 11:17, said:

BBO has the advantage of perfect record keeping. It sees and records every single bid that you make and every board that you play. As such, it has - by far - the best data set from which to develop a good rating system.

But trying to use all this data results in lots of apples vs. oranges comparisons. Does it really make sense to compare JEC's results in the team games he plays every day (versus mostly expert teams) to random players in MBC? And tourneys have a mix of established partnerships and last-minute pickups.

The permanent floating indy idea seems like a potential solution to this, but what do we do about people who don't play in it? Do we mark them "unknown", and declare "if you want a rating, you have to play in the PFI" (similar to our TCR policy)?

#30 hrothgar

  • Group: Advanced Members
  • Posts: 15,372
  • Joined: 2003-February-13
  • Gender:Male
  • Location:Natick, MA
  • Interests:Travel
    Cooking
    Brewing
    Hiking

Posted 2019-January-01, 15:32

barmar, on 2019-January-01, 14:44, said:

The permanent floating indy idea seems like a potential solution to this, but what do we do about people who don't play in it? Do we mark them "unknown", and declare "if you want a rating, you have to play in the PFI" (similar to our TCR policy)?


I wouldn't label the result of the Permanent Floating Indy as a rating.
I'd simply say this is your rank in the ladder.
If people choose to use this as a proxy for a rating, they are welcome to do so.

Quote

Does it really make sense to compare JEC's results in the team games he plays every day (versus mostly expert teams) to random players in MBC?
And tourneys have a mix of established partnerships and last-minute pickups.


FWIW, I've had some discussions about this topic with Glickman.
We both believe that the best way to proceed is to develop accurate ratings for partnerships.
If you are able to do a good job with this, you can consider trying to move on to individual ratings.
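Purely as a speculative sketch in Python (my own toy model, not Glickman's method): if you assume a partnership's rating is roughly the sum of two individual strengths, you can back individual numbers out of the partnership ratings by least squares.

# Invented partnership ratings, centered so 0 = average.
pair_ratings = {("ann", "bob"): 6.0, ("ann", "cat"): 2.0, ("bob", "cat"): 4.0}

players = sorted({p for pair in pair_ratings for p in pair})
r = {p: 0.0 for p in players}

# Damped Jacobi iteration solving r_a + r_b ~= R_ab in the least-squares sense.
for _ in range(200):
    new = {}
    for p in players:
        estimates = [R - r[a if p == b else b]
                     for (a, b), R in pair_ratings.items() if p in (a, b)]
        new[p] = 0.5 * r[p] + 0.5 * sum(estimates) / len(estimates)
    r = new

print({p: round(v, 2) for p, v in r.items()})  # -> approximately ann 2, bob 4, cat 0

The catch, of course, is that you need each player to appear in several different partnerships before the individual numbers are identifiable at all.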
Alderaan delenda est

#31 lorantrit

  • Group: Members
  • Posts: 3
  • Joined: 2018-October-31

Posted 2019-January-01, 16:03

How soon we forget.

The first computer bridge site failed because it "rated" players (but not very accurately). The result was maddening. It became almost impossible to sit down to a normal bridge session with three strangers because everyone was hyper-concerned with their "rating". They were gone in a second if they sensed the game would lower their "rating".

With considerable background effort it was possible to come up with a reasonable rating for just one person. I once did this for "The G-Man". No surprise as to the result.

#32 RD350LC

  • Group: Full Members
  • Posts: 154
  • Joined: 2016-April-22

Posted 2019-January-01, 16:32

GrahamJson, on 2019-January-01, 06:23, said:

I have now found the guidance, which is:

“Novice - Someone who recently learned to play bridge
Beginner - Someone who has played bridge for less than one year
Intermediate - Someone who is comparable in skill to most other members of BBO
Advanced - Someone who has been consistently successful in clubs or minor tournaments
Expert - Someone who has enjoyed success in major national tournaments
World Class - Someone who has represented their country in World Championships”

All I’m suggesting is that this, or something similar, be shown explicitly in the profile so that players can clearly see what “Expert” is supposed to mean. Most seem to think that all it requires is that you have heard of lots of conventions, not that you can actually play a decent game of bridge.

This statement is accurate. It is in the Help section, under The Rules of this Site.
I have held my own against the better players in our club, and have done reasonably well in ACBL tournaments. However, I still rank myself as Intermediate, not above that. I played against Fred Gitelman when he was still in the Toronto area, and held my own there too. I could call myself Advanced, but I choose not to overrate myself. Certainly NOT Expert or World Class, though.

#33 hrothgar

  • Group: Advanced Members
  • Posts: 15,372
  • Joined: 2003-February-13
  • Gender:Male
  • Location:Natick, MA
  • Interests:Travel
    Cooking
    Brewing
    Hiking

Posted 2019-January-01, 16:46

lorantrit, on 2019-January-01, 16:03, said:

How soon we forget.

The first computer bridge site failed because it "rated" players (but not very accurately). The result was maddening. It became almost impossible to sit down to a normal bridge session with three strangers because everyone was hyper-concerned with their "rating". They were gone in a second if they sensed the game would lower their "rating".



Don't forget the other great failure of the Lehman ratings...

Spending hour after hour trying to explain how they work to Carl Hudachek...
Alderaan delenda est

#34 johnu

  • Group: Advanced Members
  • Posts: 4,833
  • Joined: 2008-September-10
  • Gender:Male

Posted 2019-January-01, 17:30

lorantrit, on 2019-January-01, 16:03, said:

How soon we forget.

The first computer bridge site failed because it "rated" players (but not very accurately). The result was maddening. It became almost impossible to sit down to a normal bridge session with three strangers because everyone was hyper-concerned with their "rating". They were gone in a second if they sensed the game would lower their "rating".


If you are talking about OKBridge, it failed (but is still on life support) for a number of reasons, and none of the top reasons was the Lehman ratings.

First, there was a $99 annual membership fee. Most bridge players are a cheap bunch who were never going to pay that high a fee even if it was the only online option available. That greatly limited the potential customer base (and $99 was worth a lot more in the early days of OKB).

Second, BBO came along and offered free bridge. If one service costs $99 and the other is free with basically the same capabilities, which one would you choose? I know which one I would choose, and I was one of those who had originally paid to play on OKBridge.

Third, BBO started to offer GIB bots to play against. They've got many well-known deficiencies, but if you are a beginner or novice player, they are a good way to get into the game without making silly mistakes and feeling embarrassed in front of real people.

Fourth, you apparently haven't been reading these BBO forums. There are constant reports about rude players, players quitting in the middle of a hand after a mistake, players getting booted from the table for making a mistake, gratuitous insults, and plenty of cheating and trolls disrupting games by making ridiculous bids (e.g. bidding 7NT on any random hand).

#35 johnu

  • Group: Advanced Members
  • Posts: 4,833
  • Joined: 2008-September-10
  • Gender:Male

Posted 2019-January-01, 17:39

barmar, on 2019-January-01, 14:44, said:

But trying to use all this data results in lots of apples vs. oranges comparisons. Does it really make sense to compare JEC's results in the team games he plays every day (versus mostly expert teams) to random players in MBC? And tourneys have a mix of established partnerships and last-minute pickups.


To the extent that players in JEC matches also play against random players, they would have a rating based on those games. Say their rating corresponds to scoring ~70%. If you play those JEC players and break even, then you should also be rated ~70% (if you don't play anybody else).

Sure, established partnerships should score higher than last-minute pickups. You can discount results from pickup partnerships until they have played together a certain number of times.
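A back-of-envelope version in Python (numbers invented): your rating is roughly your average opponent's rating plus your margin over 50% against them.

sessions = [
    # (average opponent rating %, your score vs them %)
    (70.0, 50.0),  # break even against ~70% opposition
    (70.0, 49.2),
    (70.0, 50.8),
]

estimate = sum(opp + (score - 50.0) for opp, score in sessions) / len(sessions)
print(f"estimated rating: {estimate:.1f}%")  # -> ~70.0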

#36 johnu

  • Group: Advanced Members
  • Posts: 4,833
  • Joined: 2008-September-10
  • Gender:Male

Posted 2019-January-01, 17:55

hrothgar, on 2019-January-01, 05:52, said:

Once upon a time there was supposed to be a linkage between self-ratings and real-world accomplishments.


Only for "star" players.

"* Earned through participation in various World Championship events and/or success in major national and international tournaments. Please email support@bridgebase.com if you think you might qualify for this symbol. "

#37 croquetfan

  • Group: Full Members
  • Posts: 77
  • Joined: 2010-October-29

Posted 2019-January-01, 22:38

There are "beginners" with 5000+ logins and "experts" who can't handle a simple ruffing of losers.
In a closed population like this, it should be very easy to calculate ratings. Who cares if someone cheats to make themselves look world class? They will soon find themselves with an embarrassingly high rating without the skills to match.
Swan Bridge and others manage to produce a rating: are the programmers at BBO not up to the task?

#38 miamijd

  • Group: Full Members
  • Posts: 737
  • Joined: 2015-November-14

Posted 2019-January-01, 23:59

johnu, on 2019-January-01, 17:30, said:

If you are talking about OKBridge, it failed (but is still on life support) for a number of reasons, and none of the top reasons was the Lehman ratings.

First, there was a $99 annual membership fee. Most bridge players are a cheap bunch who were never going to pay that high a fee even if it was the only online option available. That greatly limited the potential customer base (and $99 was worth a lot more in the early days of OKB).

Second, BBO came along and offered free bridge. If one service costs $99 and the other is free with basically the same capabilities, which one would you choose? I know which one I would choose, and I was one of those who had originally paid to play on OKBridge.

Third, BBO started to offer GIB bots to play against. They've got many well-known deficiencies, but if you are a beginner or novice player, they are a good way to get into the game without making silly mistakes and feeling embarrassed in front of real people.

Fourth, you apparently haven't been reading these BBO forums. There are constant reports about rude players, players quitting in the middle of a hand after a mistake, players getting booted from the table for making a mistake, gratuitous insults, and plenty of cheating and trolls disrupting games by making ridiculous bids (e.g. bidding 7NT on any random hand).


I think the worst problem for OKB was its tourney structure. You paid $100 a year for all the ACBL tourneys you wanted (cheaper than BBO), but:

1. There were no instant tournaments.
2. The tourneys didn't start every hour like on BBO. They started every two hours or so -- much harder for busy folks like me to play.
3. The tourneys were too long -- 90 minutes.
4. There were no subs, so if someone lost their connection, you just sat there unable to finish the board unless a Director came by and subbed in for the missing player. A truly awful experience.

Plus their interface was lousy: no GIB to analyze hands, no interactive hand records, pretty much nothing at all.

#39 miamijd

  • Group: Full Members
  • Posts: 737
  • Joined: 2015-November-14

Posted 2019-January-02, 00:02

lorantrit, on 2019-January-01, 16:03, said:

How soon we forget.

The first computer bridge site failed because it "rated" players (but not very accurately). The result was maddening. It became almost impossible to sit down to a normal bridge session with three strangers because everyone was hyper-concerned with their "rating". They were gone in a second if they sensed the game would lower their "rating".

With considerable background effort it was possible to come up with a reasonable rating for just one person. I once did this for "The G-Man". No surprise as to the result.


Poor Gerard. I met him in person a couple of times and thought he was a very nice man. Not much of a bridge player, but a good guy. Didn't deserve the #$%* he got; people were just cruel.

#40 miamijd

  • Group: Full Members
  • Posts: 737
  • Joined: 2015-November-14

Posted 2019-January-02, 00:03

hrothgar, on 2019-January-01, 16:46, said:

Don't forget the other great failure of the Lehman ratings...

Spending hour after hour trying to explain how they work to Carl Hudachek...


You know that Carl is a very accomplished engineer, right? It's not that he couldn't understand Lehmans; it's that he thought they were stupid.
