As an extreme example, suppose you have a group of experts who all play with each other, and a group of novices who do the same. Within the novice group, there may be a standout who regularly wins, so he would be given a high rating. Conversely, there must be an expert who isn't as good as his peers, so he would be given a low rating. But these two never play against each other, so the worst expert might be given a lower rating than the best novice, even though he's actually far better.
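This effect is easy to reproduce with a toy Elo-style simulation (a hedged sketch, not BBO's actual algorithm; the player names, true-skill numbers, and K-factor are all made up for illustration). Because Elo updates are zero-sum within a pool, each isolated pool's average stays at the starting rating, so the weakest expert sinks below 1500 while the strongest novice rises above it:

```python
import random

def expected(ra, rb):
    """Elo expected score for player a against player b."""
    return 1 / (1 + 10 ** ((rb - ra) / 400))

def simulate(games_per_pair, seed=1):
    random.seed(seed)
    # Hypothetical true skills: every expert is far stronger than every novice.
    true_skill = {f"expert{i}": 2000 + 100 * i for i in range(4)}
    true_skill.update({f"novice{i}": 1000 + 100 * i for i in range(4)})
    rating = {p: 1500.0 for p in true_skill}  # everyone starts equal
    experts = [p for p in true_skill if p.startswith("expert")]
    novices = [p for p in true_skill if p.startswith("novice")]

    def play(a, b, k=16):
        # Winner decided by true skill via the same logistic model.
        a_wins = random.random() < expected(true_skill[a], true_skill[b])
        sa = 1.0 if a_wins else 0.0
        delta = k * (sa - expected(rating[a], rating[b]))
        rating[a] += delta
        rating[b] -= delta  # zero-sum: the pool average never moves

    # Games happen only within each isolated pool.
    for _ in range(games_per_pair):
        for group in (experts, novices):
            for i in range(len(group)):
                for j in range(i + 1, len(group)):
                    play(group[i], group[j])
    return rating

r = simulate(games_per_pair=200)
worst_expert = min(r[p] for p in r if p.startswith("expert"))
best_novice = max(r[p] for p in r if p.startswith("novice"))
print(worst_expert < best_novice)  # the worst expert rates below the best novice
```

The weakest expert ends up hundreds of points below the strongest novice despite being 700 points stronger in true skill, because each pool's ratings only encode standing *within* that pool.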
And for many players on BBO, this is a realistic situation. I don't think JEC enters many tourneys or plays with randoms in the MBC; he just plays his two JEC vs XXX team matches every day. He's a very good player, but he partners with some of the best, so the algorithm will assume his partners are contributing more to the results.
I was thinking about this too, and I believe part of the problem is that JEC plays against a lot of very good opponents who don't otherwise play much on BBO and are thus badly underrated.
I agree that isolated groups cannot be compared to each other with any confidence. I do not agree that means ratings are a bad idea. If the system is any good, mixing the groups would fix the problem, and if the players don't care to fix what is to them a non-problem, then it is a non-problem.
And obviously, that doesn't mean experts would have to play with novices; it just means experts would have to play with people who play with people who, through however many iterations, eventually play with novices.
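The "however many iterations" condition is just graph connectivity: ratings become comparable as soon as the who-has-played-whom graph is one connected component. A minimal sketch (the game records and player names are invented for illustration) showing that a single intermediate player is enough to link two otherwise isolated pools:

```python
# Hypothetical game records: each pair is "these two have played each other".
games = [
    ("expertA", "expertB"), ("expertB", "expertC"),   # expert pool
    ("noviceA", "noviceB"), ("noviceB", "noviceC"),   # novice pool
    ("expertC", "midplayer"),   # one intermediate player...
    ("midplayer", "noviceA"),   # ...connects the two pools
]

def components(edges):
    """Count connected components of the comparison graph (union-find)."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    return len({find(p) for p in list(parent)})

print(components(games[:4]))  # 2: two isolated pools, ratings incomparable
print(components(games))      # 1: every rating is (transitively) comparable
```

No expert ever played a novice directly; the chain expertC - midplayer - noviceA is what makes the two pools' ratings commensurable.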