Smart math people, help?

#41 User is offline   Trumpace 

  • Hideous Rabbit
  • Group: Advanced Members
  • Posts: 1,040
  • Joined: 2005-January-22
  • Gender:Male

Posted 2007-July-18, 13:51

Jlall, on Jul 18 2007, 11:36 AM, said:

Two people were having the following discussion:

A: Say you have 2 envelopes filled with money. The value of one envelope is half the value of the other. These values can be arbitrarily large.

B: Ok.

A: Say you are handed one envelope and open it; it contains 5000 dollars. You are offered the choice to switch envelopes. Do you?

B: Of course, the other envelope could have either 10,000 dollars or 2500 dollars. My EV is higher by switching.



Person B is obviously wrong, but I cannot find the flaw in his argument. Can you prove specifically why the last statement of person B is wrong? (I'm not looking for "there are 2 possible cases" etc.; I'm looking for the specific flaw in person B's thinking, which must be there and which I'm missing.)

I am not sure if people have already said what I am going to say. But I will say it anyway.


The answer is, "it depends".

Unless we are given the "method" with which the numbers were chosen, we cannot tell if switching is bad.

For instance, we can pick the numbers with probabilities in such a way that, if the envelope you open contains $5000, you are better off not switching.

By using a different "method" we can pick the numbers in such a way that, if the envelope you open contains $5000, you are better off switching.

(If you want concrete examples, I refer you to http://www.ocf.berkeley.edu/~wwu/cgi-bin/y...14781;start=25)

Basically, there is no _always switch_ or _never switch_ which works. It is totally dependent on the underlying "method" with which the numbers were chosen.
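
For instance, a rough simulation along these lines (the two priors below are made up purely for illustration) shows how the answer flips with the "method":

    # Monte Carlo sketch: the value of switching on seeing $5000 depends on the
    # prior used to choose the smaller amount (both priors here are hypothetical).
    import random

    def mean_of_other_envelope(prior, seen=5000, trials=200_000):
        """prior: list of (smaller_amount, weight). Returns the average value of
        the unopened envelope, conditioned on the opened one holding `seen`."""
        amounts, weights = zip(*prior)
        other_values = []
        for _ in range(trials):
            small = random.choices(amounts, weights)[0]
            pair = (small, 2 * small)
            opened = random.choice(pair)
            if opened == seen:
                other_values.append(sum(pair) - opened)
        return sum(other_values) / len(other_values)

    # Smaller amount is usually 2500: switching away from 5000 loses on average.
    print(mean_of_other_envelope([(2500, 0.9), (5000, 0.1)]))   # ~3250
    # Smaller amount is usually 5000: switching away from 5000 gains on average.
    print(mean_of_other_envelope([(2500, 0.1), (5000, 0.9)]))   # ~9250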

#42 User is offline   han 

  • Under bidder
  • Group: Advanced Members
  • Posts: 11,797
  • Joined: 2004-July-25
  • Gender:Male
  • Location:Amsterdam, the Netherlands

Posted 2007-July-18, 13:53

Blofeld, on Jul 18 2007, 02:51 PM, said:

To see that this cannot be true, imagine you open envelope A and find 1 cent in it. Envelope B can't have half a cent in, so must have 2 cents - and the probability it has more money is 100% rather than 50%.

I don't really like this argument, but I agree with the rest of your post (although you shouldn't call it "Han's argument", because I didn't come up with it myself; it's a well-known problem).
Please note: I am interested in boring, bog standard, 2/1.

- hrothgar

#43 User is offline   Trumpace 

  • Hideous Rabbit
  • Group: Advanced Members
  • Posts: 1,040
  • Joined: 2005-January-22
  • Gender:Male

Posted 2007-July-18, 13:54

Hannie, on Jul 18 2007, 02:51 PM, said:

The original problem is not possible. It starts with an impossible assumption.

I don't think the problem makes an impossible assumption. The problem never stated any uniform distribution of the numbers. It was B who thought so.


The problem is perfectly valid, and if asked should you switch or not, you should just say "incomplete information" :D

#44 User is offline   han 

  • Under bidder
  • Group: Advanced Members
  • Posts: 11,797
  • Joined: 2004-July-25
  • Gender:Male
  • Location:Amsterdam, the Netherlands

Posted 2007-July-18, 13:55

Trumpace, on Jul 18 2007, 02:54 PM, said:

Hannie, on Jul 18 2007, 02:51 PM, said:

The original problem is not possible. It starts with an impossible assumption.

I don't think the problem makes an impossible assumption. The problem never stated any uniform distribution of the numbers. It was B who thought so.


The problem is perfectly valid, and if asked should you switch or not, you should just say "incomplete information" :D

You are absolutely correct.
Please note: I am interested in boring, bog standard, 2/1.

- hrothgar

#45 User is offline   Blofeld 

  • Group: Full Members
  • Posts: 775
  • Joined: 2005-May-05
  • Location:Oxford
  • Interests:mathematics, science fiction, Tolkien, go, fencing, word games, board games, bad puns, juggling, Mornington Crescent, philosophy, Tom Lehrer, rock climbing, jootsing, drinking tea, plotting to take over the world, croquet . . . and most other things, really.

  Posted 2007-July-18, 13:58

Hannie, on Jul 18 2007, 02:53 PM, said:

Blofeld, on Jul 18 2007, 02:51 PM, said:

To see that this cannot be true, imagine you open envelope A and find 1 cent in it. Envelope B can't have half a cent in, so must have 2 cents - and the probability it has more money is 100% rather than 50%.

I don't really like this argument but I agree with the rest of your post. (although you shouldn't call it "Han's argument" because I didn't come up with it myself, it's a well-known problem).

OK, I admit that I don't really like that argument either.

#46 User is offline   bid_em_up 

  • Group: Advanced Members
  • Posts: 2,351
  • Joined: 2006-March-21
  • Location:North Carolina

Posted 2007-July-18, 14:15

Blofeld, on Jul 18 2007, 02:58 PM, said:

Hannie, on Jul 18 2007, 02:53 PM, said:

Blofeld, on Jul 18 2007, 02:51 PM, said:

To see that this cannot be true, imagine you open envelope A and find 1 cent in it. Envelope B can't have half a cent in, so must have 2 cents - and the probability it has more money is 100% rather than 50%.

I don't really like this argument but I agree with the rest of your post. (although you shouldn't call it "Han's argument" because I didn't come up with it myself, it's a well-known problem).

OK, I admit that I don't really like that argument either.

I will be happy to scan a U.S. Half Cent for you, so I don't like it either. :D

(Yes, at one time they did exist...)
Is the word "pass" not in your vocabulary?
So many experts, not enough X cards.

#47 User is offline   BebopKid 

  • Group: Full Members
  • Posts: 230
  • Joined: 2007-January-23
  • Gender:Male
  • Location:Little Rock, Arkansas, USA

Posted 2007-July-18, 14:42

Does this make any difference? What if...

You are given three envelopes: one green and two white.

You are told that you can look in the green envelope. It contains twice as much as one of the white envelopes and half as much as the other. You may select any of the three envelopes to leave with.


BebopKid (Bryan Lee Williams)

"I've practiced meditation most of my life. It's better than sitting around doing nothing."
(Tom Sims, from topfive.com)


#48 User is offline   Echognome 

  • Deipnosophist
  • Group: Advanced Members
  • Posts: 4,386
  • Joined: 2005-March-22

Posted 2007-July-18, 14:51

By the way, the problem as postulated by Barry Nalebuff is fairly well known from his book Thinking Strategically. However, the way he posed the problem there, you had a finite number of envelopes. I think the example he gave was $5, $10, $20, $40, $80, and $160. Note that here the equally-likely assumption holds without issue. He then said: suppose two people were given envelopes and told that one contained twice as much as the other. So say person A opens his envelope and finds $20. If he postulates that person B's envelope contains $40 half the time and $10 half the time, then he should switch. Then if person B opens and finds $40, he might postulate that the other envelope contains $20 half the time and $80 half the time.

This finite version of the problem is easy to solve via backward induction. You start with a person who opens an envelope with $160. He will *never* agree to switch. Thus, if you open an envelope with $80, you should reason that if the other person has $160, they will never agree to switch. So I shouldn't agree to switch. Thus, the person with $80 will not agree to switch. Then the person who opens the $40 envelope will reason that the person who opens $160 won't switch, therefore the person with $80 won't agree to switch, thus I should not agree to switch. And so on down to the person who finds the smallest amount and always agrees to switch.

The extension to an infinite number of envelopes takes away this endpoint. However, you can extend the reasoning for any finite number of envelopes.
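
A small sketch of that induction (my own paraphrase, using the $5-$160 amounts above):

    # Backward induction on the finite envelope set (illustrative paraphrase).
    amounts = [5, 10, 20, 40, 80, 160]

    willing = {}                      # amount -> holder would agree to trade
    for a in sorted(amounts, reverse=True):
        if 2 * a not in willing:
            willing[a] = False        # top amount: the other envelope can only be smaller
        else:
            # By induction the holder of 2a has already refused, so any trade that
            # actually completes hands me a/2.  Agreeing only makes sense for the
            # minimum amount, where the counterparty refuses anyway and nothing is lost.
            willing[a] = (a == min(amounts))

    print(willing)   # {160: False, 80: False, 40: False, 20: False, 10: False, 5: True}

Nobody except the holder of the smallest amount is ever willing to trade, so no trade takes place.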
"Half the people you know are below average." - Steven Wright

#49 User is offline   Blofeld 

  • Group: Full Members
  • Posts: 775
  • Joined: 2005-May-05
  • Location:Oxford
  • Interests:mathematics, science fiction, Tolkien, go, fencing, word games, board games, bad puns, juggling, Mornington Crescent, philosophy, Tom Lehrer, rock climbing, jootsing, drinking tea, plotting to take over the world, croquet . . . and most other things, really.

Posted 2007-July-18, 15:32

BebopKid, on Jul 18 2007, 03:42 PM, said:

Does this make any difference? What if...

You are given three envelopes: one green and two white.

You are told that you can look in the green envelope. It contains twice as much as one of the white envelopes and half as much as the other. You may select any of the three envelopes to leave with.

Yes, this makes a difference. Now the argument with expectations works because you know you've got 50% chance each of getting half as much and twice as much, so you should pick a white envelope (assuming your aim is to maximise expected winnings).
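
A quick check of that expectation (using $5000 for concreteness, and assuming each white envelope is equally likely to be the double or the half):

    # Expected value of a white envelope, given the green one shows M.
    M = 5000.0
    expected_white = 0.5 * (2 * M) + 0.5 * (M / 2)
    print(expected_white)   # 6250.0 = 1.25 * M, so take a white envelope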

#50 User is offline   david_c 

  • Group: Advanced Members
  • Posts: 1,178
  • Joined: 2004-November-14
  • Location:England
  • Interests:Mathematics;<br>20th century classical music;<br>Composing.

Posted 2007-July-18, 19:53

Blofeld, on Jul 18 2007, 08:51 PM, said:

Han is of course correct.

The fallacy is this. Although you have picked between the two envelopes randomly, it is in general false that the other envelope will have more money 50% of the time.

To see that this cannot be true, imagine you open envelope A and find 1 cent in it. Envelope B can't have half a cent in, so must have 2 cents - and the probability it has more money is 100% rather than 50%.

This is just an illustration of the fact that our intuition in this area is often wrong, and it relies on the accidental fact that money isn't infinitely subdivisible. If you allow arbitrarily small amounts of money, the argument doesn't hold. But the argument that Han gave rules out any distribution of possible amounts of money to the envelopes under which the other envelope always has a 50% chance of holding more. In fact, what the "paradox" shows is precisely that such a distribution cannot exist!

Hmm, I believe that last sentence is incorrect, and I think people are missing the point of what's really paradoxical about this.

Owen here makes the good point that the conditional probability

P ( our envelope contains the smaller amount | our envelope contains $M )

is not necessarily the same as the a priori probability (which is 1/2 of course). We're trying to work out the expectation from switching given that our envelope contains $M, and in order to do this we need to know the probability that our envelope contains the smaller amount given that it contains $M. That is, the a priori probabilities are useless, what we need to know are the conditional probabilities.

And Han's very first post proved that it is not possible for all the conditional probabilities to be 1/2. (Or to put it another way, we can't arrange things so that all possible amounts of money are equally likely.)

BUT

Even though the conditional probabilities can't all be 1/2, we can make them as close to 1/2 as we like. For example, we can put amounts into the envelopes according to a probability distribution in such a way that all the conditional probabilities, for every possible amount $M, are between 0.49 and 0.51.

[For the smart math people: just take a very slowly decaying distribution. For example, let the probability that the envelopes contain $(2^n) and $(2*2^n) be k*(1-epsilon)^|n|.]

In this case, when analysing whether it's right to switch for a particular amount M, the calculations are so close to what they would be if the probabilities were 1/2 that it makes no difference. The conclusion is

For every amount M you see in the envelope, your expectation if you switch is greater than M.

(In fact, greater than 1.2M, say. We can get any multiple less than 1.25.)

Is this a paradox? It shouldn't be - it's true. Do you believe me?
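
If you don't, here is a quick numerical check of the construction (epsilon = 0.02 and the truncation of n are arbitrary choices, purely for the sketch):

    # Numeric sketch of the construction above: pairs ($2^n, $2*2^n) with weight
    # proportional to (1-epsilon)^|n|.  Condition on the opened envelope's value.
    eps = 0.02
    weight = {n: (1 - eps) ** abs(n) for n in range(-200, 201)}   # truncated range of n

    def expected_other(m):
        """Expected value of the other envelope, given the opened one holds 2**m."""
        w_lower = weight.get(m, 0.0)        # pair (2^m, 2^(m+1)): we hold the smaller amount
        w_upper = weight.get(m - 1, 0.0)    # pair (2^(m-1), 2^m): we hold the larger amount
        p_smaller = w_lower / (w_lower + w_upper)
        M = 2.0 ** m
        return p_smaller * (2 * M) + (1 - p_smaller) * (M / 2)

    for m in (2, 10, 50):
        print(m, expected_other(m) / 2.0 ** m)   # ~1.24 each time

Every amount checked gives a conditional expectation of about 1.24 times what you hold, consistent with "any multiple less than 1.25".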

#51 User is offline   Echognome 

  • Deipnosophist
  • Group: Advanced Members
  • Posts: 4,386
  • Joined: 2005-March-22

Posted 2007-July-18, 19:59

david_c, on Jul 18 2007, 05:53 PM, said:

P ( our envelope contains the smaller amount | our envelope contains $M )

is not necessarily the same as the a priori probability (which is 1/2 of course). We're trying to work out the expectation from switching given that our envelope contains $M, and in order to do this we need to know the probability that our envelope contains the smaller amount given that it contains $M. That is, the a priori probabilities are useless, what we need to know are the conditional probabilities.

Why is the a priori probability 1/2?

Why are the a priori probabilities useless?

The last time I checked, P(A|B) = P(A,B)/P(B). Without the a priori probability of B, it seems rather hard to calculate.

Here P(B) = P(our envelope contains $M). We don't have that defined in our problem in any way. Note that the conditional probability does not exist if that probability is 0. I've already stated above why it can't be uniform.
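
Writing out the Bayes step for the envelopes (assuming only that the opened envelope is chosen from the pair at random) makes that dependence explicit:

    P(\text{our envelope is the smaller} \mid \text{it contains } M)
      = \frac{P(\text{pair} = (M, 2M))}{P(\text{pair} = (M, 2M)) + P(\text{pair} = (M/2, M))}

Without a prior over the pairs, neither the numerator nor the denominator is defined, which is exactly the problem.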
"Half the people you know are below average." - Steven Wright

#52 User is offline   david_c 

  • Group: Advanced Members
  • Posts: 1,178
  • Joined: 2004-November-14
  • Location:England
  • Interests:Mathematics;<br>20th century classical music;<br>Composing.

Posted 2007-July-18, 20:09

Echognome, on Jul 19 2007, 02:59 AM, said:

Here P(B) = P(our envelope contains $M). We don't have that defined in our problem in any way.

Indeed - and so we can't calculate the expectation in the original problem. The fallacy in many people's approaches is to assume that using 1/2 must work, when in fact the probabilities are completely undefined. (Though one thing we do know, as you say, is that the probabilities can't all be 1/2.)

But you can resolve this problem by specifying a distribution used to choose the amounts in the envelopes, and still be in the situation where switching is best no matter what amount you find in the envelope.

Quote

Why is the a priori probability 1/2?

I was making the assumption that, after the amounts in the two envelopes had been decided, you would pick which one you were going to open at random.

#53 User is offline   MikeRJ 

  • Group: Members
  • Posts: 43
  • Joined: 2006-November-06
  • Gender:Male

Posted 2007-July-18, 21:12

Someone may already have said this in a different way, but here goes...

Suppose I have not opened an envelope. What do I know?
The lower-value envelope contains X dollars.
The higher-value envelope contains 2X dollars.
My average expectation, selecting an envelope randomly, is 1.5X dollars.
If I change and get X dollars, I will lose 0.5X dollars against my average expectation.
If I change and get 2X dollars, I will gain 0.5X against my average expectation.
On average changing will gain as much as it loses – as one would expect.

So why does this change if I know the sum of money in the envelope I have selected? Now if I do not change I have an ACTUAL expectation, not an AVERAGE expectation - I have more information.

If I open an envelope containing $5000 and I change, I will end up with either $2500 or $10000 (an average of $6250); therefore I should change.

The reason this works is that my envelope can never contain 1.5X dollars – only X or 2X.

Of course in the situation where you know what you have in the envelope, you will never have the chance to change again.

Mike

#54 User is offline   P_Marlowe 

  • Group: Advanced Members
  • Posts: 10,053
  • Joined: 2005-March-18
  • Gender:Male

Posted 2007-July-19, 00:33

Jlall, on Jul 18 2007, 12:43 PM, said:

Hannie, on Jul 18 2007, 12:00 PM, said:

B is indeed wrong, or rather, the set-up of the problem isn't possible.

Before you can answer the problem, you would have to describe how likely each combination is. The problem assumes that every combination is equally likely, i.e. $2,500+$5,000 is just as likely as $5,000+$10,000, and likewise for every other combination. But this isn't possible.

To see why this isn't possible, let's assume that we do have such a distribution of probabilities. The chance that the lowest amount lies between 1 and 2 must be some positive number, let's say "x". But then the chance that it lies between 2 and 4 must also be "x". And the chance that it lies between 4 and 8 must also be x. But then the chance that the lowest amount lies between 1 and 2^n is x + x + ... + x = n*x. And for large n this is larger than 1! This isn't possible.

I'm playing bridge atm so I'll try to do a better job later.

Hi, I'm talking in a pure math sense, I don't care about the applied form. It is completely hypothetical in my mind in a world where infinity exists. There is no upper limit. This is also the solution Taleb offers but it seems flawed to me but obviously you know more than me. I am now reading the wiki article on this.

Hi,

Having no upper limit is not the same as saying "infinity" exists.

In a world where infinity exists, you can't compare infinity with infinity / 2; more precisely, infinity is in a sense the same as infinity / 2.

An example (it has nothing to do with the original problem, but it may help explain the above statement):

Imagine two hotels with infinitely many rooms, belonging to the same hotel chain. Both hotels are completely booked up, i.e. there is no free room.
Now the 2nd hotel needed to repair its electrics, so every guest had to move out.
The porter of the first hotel said: "No problem, they can move into our hotel. The guests in our hotel move to the room whose number they get by multiplying their original room number by two (2n, n being the original room number). The guests from the other hotel move to the room whose number they get by multiplying their original room number by two and subtracting 1 (2n-1)."
Everybody followed orders, and everybody found his own room.

And the original hotel was still completely booked up.
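
A tiny finite check of the two room maps, truncated to the first 1000 guests just to see that nobody is sent to an occupied room:

    # Finite toy check of the two room maps (the real hotels are infinite, of course).
    N = 1000
    rooms_for_old_guests = [2 * n for n in range(1, N + 1)]       # original guests: n -> 2n
    rooms_for_new_guests = [2 * n - 1 for n in range(1, N + 1)]   # guests from hotel 2: n -> 2n-1

    all_rooms = rooms_for_old_guests + rooms_for_new_guests
    assert len(all_rooms) == len(set(all_rooms))   # no two guests share a room
    print(sorted(all_rooms)[:6])                   # [1, 2, 3, 4, 5, 6] - every room still in use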

With kind regards
Marlowe
With kind regards
Uwe Gebhardt (P_Marlowe)

#55 User is offline   helene_t 

  • The Abbess
  • Group: Advanced Members
  • Posts: 17,087
  • Joined: 2004-April-22
  • Gender:Female
  • Location:UK

Posted 2007-July-19, 00:35

MikeRJ, on Jul 19 2007, 05:12 AM, said:

If I open an envelope containing $5000 and I change  I will end up with either 2500 or 10000 (average with $6250), therefore I should change.

Not necessarily, since the probability that the other envelope contains $10000 may be less than 50%. Or more. That's the whole point.

I still think my example with the weights of the males and females must be easier to reconcile with intuition.

After you observed that the first person weighed 100 kg, the probability that he's the male is more than 50%.

Before you made the observation, the probability is of course 50%. This means that your average gain in percentage terms, if you switch, is positive. The fallacy is to consider this average percentage gain to be something attractive. It's not: you're not interested in percentages, you're interested in dollars. The average probability of 50% is composed of some high male-probabilities in case it turns out to be a heavy person, and some low ones in case it turns out to be a light person. If it's a 40-kg person whom you correctly assess to be the female, you can win a lot in percentage terms by switching, but those are percentages of a small number, so the dollars you gain are no more than the smaller percentage of the higher amount you lose when the first person turns out to be a 100-kg male. So although your expected percentage gain is +25%, your expected dollar gain is $0.

Taking averages of percentages easily leads to nonsense. Suppose we both have $100. Now I give you $10 and ten minutes afterwards you give me $10 back. I first lost 10% of my $100 and then gained 11.1% of my $90. You first won 10% of your $100 and then lost 9.1% of your $110. My net gain was 1.1% and yours 0.9%. Hallelujah, an easy solution to the treasury deficit!
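
A quick check of the envelope version of that point, with an arbitrary pair X and 2X:

    # With a pair (X, 2X) and a random first pick, switching gains $0 on average,
    # yet the average *percentage* change is +25%.
    X = 100.0
    cases = [(X, 2 * X), (2 * X, X)]   # (what you hold, what you'd get), equally likely
    avg_dollar_gain  = sum(b - a for a, b in cases) / 2
    avg_percent_gain = sum((b - a) / a for a, b in cases) / 2
    print(avg_dollar_gain)    # 0.0
    print(avg_percent_gain)   # 0.25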
The world would be such a happy place, if only everyone played Acol :) --- TramTicket

#56 User is offline   P_Marlowe 

  • Group: Advanced Members
  • Posts: 10,053
  • Joined: 2005-March-18
  • Gender:Male

Posted 2007-July-19, 00:44

helene_t, on Jul 18 2007, 01:49 PM, said:

<snip>

Quote

think Hannie is saying that because you cannot assign a number to infinity, you cannot define 1/2 of infinity or two times infinity, the problem is not possible to solve mathematically. Is this correct, Han?
No, 2*Inf=Inf and Inf/2=Inf. That's not a problem. The problem is that if there's an infinite number of boxes, some must have higher probabilities than others, since otherwise each particular box would have probability 1/Inf = 0 and it would therefore be impossible to open any box at all (events with zero probability never happen).

I know this sounds very counter-intuitive; it took me a long time to fully understand it myself.

Events with zero probability can happen, but you should not wait for them.

A simple example: it is possible that you may hit the precise center of a circular target with an arrow, but the probability is zero.

With kind regards
Marlowe
With kind regards
Uwe Gebhardt (P_Marlowe)

#57 User is offline   helene_t 

  • The Abbess
  • Group: Advanced Members
  • Posts: 17,087
  • Joined: 2004-April-22
  • Gender:Female
  • Location:UK

Posted 2007-July-19, 00:54

P_Marlowe, on Jul 19 2007, 08:44 AM, said:

Events with zero probability can happen, but you should
not wait for them.

An simple example: It is possible that you may hit the precise
center of a circular target with an arrow, but the probability
is zero.

No! Emphatically no! Suppose your digital ArrowHitMeter ™ reports the distance from the center to your hit as 0.0000. Then all you know is that the distance was rounded off to zero at four digits after the decimal point, i.e. it is between 0 and 0.00005. That range has positive width and presumably a positive probability. If the probability were zero, it would not have happened.

This may sound like a silly measurement-technology problem, but it's not. It's not even a physical problem, related to Heisenberg's uncertainty principle or some such. It's a fundamental principle in probability theory: if you have a random variable on a continuous scale (such as distance from the center), the observations of that random variable are always ranges with positive width. You cannot pick a real number. You can pick an integer, or you can pick a range of real numbers.

Of course you can ask me to pick a "real" number and I'll be happy to think of sqrt(2) or pi or some such. But that's an illusion. I can only pick from the countable subset of the real numbers that can be expressed by mathematical formalism (or whatever language I think in). So effectively I'm picking an integer and then thinking of some transform of that integer which happens to be a non-integer number.

One could argue that even though the exact value of a real-valued random variable cannot be observed, the exact value could still be "out there" in some sense. That is a deep philosophical issue which we have had in a couple of other threads. For the purpose of this thread I think it suffices to say that probability theory is concerned with events that can be observed, at least in principle.
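
To put numbers on the ArrowHitMeter example (assuming, purely for illustration, a uniform hit distribution over a disc of radius 1):

    # Exact-point probability versus rounding-range probability, under an assumed
    # uniform hit distribution over a disc of radius R (illustrative model only).
    R = 1.0

    def prob_distance_below(d):
        # For a uniform hit on the disc, P(distance from centre < d) = (d/R)^2.
        return (d / R) ** 2

    print(prob_distance_below(0.0))       # 0.0      -> hitting the exact centre
    print(prob_distance_below(0.00005))   # 2.5e-09  -> any reading that displays as 0.0000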
The world would be such a happy place, if only everyone played Acol :) --- TramTicket

#58 User is offline   frouu 

  • Group: Full Members
  • Posts: 90
  • Joined: 2007-January-13

Posted 2007-July-19, 01:03

david_c, on Jul 18 2007, 08:53 PM, said:


[For the smart math people: just take a very slowly decaying distribution. For example, let the probability that the envelopes contain $(2^n) and $(2*2^n) be k*(1-epsilon)^|n|.]

In this case, when analysing whether it's right to switch for a particular amount M, the calculations are so close to what they would be if the probabilities were 1/2 that it makes no difference. The conclusion is

For every amount M you see in the envelope, your expectation if you switch is greater than M.

(In fact, greater than 1.2M, say. We can get any multiple less than 1.25.)

Is this a paradox? It shouldn't be - it's true. Do you believe me?

I don't believe you, because you're restricting the outcomes of the experiment to the sequence {k*2^n} and you don't a priori know what "k" is.

Say in one experiment you see 200 in the first envelope; you're assuming your outcomes are {25, 50, 100, 200, 400, 800, ...}, so your k was 25. But you didn't know this beforehand. You'd be surprised in another experiment if you observed 300, which is not in your outcome set and to which you didn't assign a probability.

#59 User is offline   Gerben42 

  • Group: Advanced Members
  • Posts: 5,577
  • Joined: 2005-March-01
  • Gender:Male
  • Location:Erlangen, Germany
  • Interests:Astronomy, Mathematics
    Nuclear power

Posted 2007-July-19, 01:04

The main problem with the whole dilemma is that people use a different metric for the worth of money. For someone like me, winning $1M or winning $2M is roughly the same, and winning $500k is probably much the same as well. Here I would switch, as the worst that can happen is already great!

Similarly, if my first envelope contains $10, I will also switch. $20 or $5 is neither so much nor so little that I would think very differently about it.

But in between, for most people, there is an amount at which they will NOT switch, because losing half of it is worse than winning twice the amount is good.
Two wrongs don't make a right, but three lefts do!
My Bridge Systems Page

BC Kultcamp Rieneck

#60 User is offline   helene_t 

  • The Abbess
  • Group: Advanced Members
  • Posts: 17,087
  • Joined: 2004-April-22
  • Gender:Female
  • Location:UK

Posted 2007-July-19, 01:17

Gerben42, on Jul 19 2007, 09:04 AM, said:

The main problem with the whole dilemma is that people use a different metric for the worth of money.

No, that's not the issue. The assumption is that you maximize expected gain. Whether that assumption is realistic or not is irrelevant. The problem under discussion here is the mathematical one.
The world would be such a happy place, if only everyone played Acol :) --- TramTicket
