I have found it challenging to answer this question:

Three players A, B, C play the following game. First, A picks a real number between 0 and 1 (both inclusive), then B picks a number in the same range (different from A's choice) and finally C picks a number, also in the same range, (different from the two chosen numbers). We then pick a number in the range uniformly randomly. Whoever's number is closest to this random number wins the game. Assume that A, B and C all play optimally and their sole goal is to maximise their chances of winning. Also assume that if one of them has several optimal choices, then that player will randomly pick one of the optimal choices.
Questions:
a) If A chooses 0, then what is the best choice for B?
b) What is the best choice for A?
c) What is the best choice for the first player when the game is played among four players?

Without having looked at the solution, it seems that the equilibrium of the game is for A to choose 0.25, B to choose 0.75, and C to choose 0.5. You could swap the first two, but the expected result is that A and B get 3/8 each and C gets 1/4. The approach would be to fix the choices of A and B and find C's optimal choice; given that, consider B's choice, and then, given that, A's.
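A quick sketch (my own, not from the thread) to check those numbers: since the draw is uniform, a player wins exactly on the interval between the midpoints to their left and right neighbours, so the shares can be computed directly.

```python
def win_shares(positions):
    """Map each chosen point to the probability that it is closest to a
    uniform draw on [0, 1] (ties have measure zero, so they are ignored)."""
    pts = sorted(positions)
    shares = {}
    for i, p in enumerate(pts):
        lo = 0.0 if i == 0 else (pts[i - 1] + p) / 2
        hi = 1.0 if i == len(pts) - 1 else (p + pts[i + 1]) / 2
        shares[p] = hi - lo
    return shares

print(win_shares([0.25, 0.75, 0.5]))
# A at 0.25 -> 0.375, C at 0.5 -> 0.25, B at 0.75 -> 0.375
```

This reproduces the 3/8, 3/8, 1/4 split claimed above.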

I probably agree. But you say "it's possible that B gets less." That's not necessarily relevant, because "their sole goal is to maximise their chances of winning." And there are times when C's placement is random, because several placements all give the same expectation. It's a probabilistic game. If the rules said that they all play for the best chance of winning, but that, given the choice, C would rather beat B than A, that would change things.

The answer you found on that webpage is "incorrect". I will try to explain simply why, and why the quotes. BTW, the question is imho more about game theory than statistics: you need to find the players' strategies.

Bearish's answer is probably what your interviewer (some IB or HF person, I presume) expected. It also nicely shows how a rational mathematical brain comes up with a procedure known as iterated elimination of dominated strategies.

All players want to have strictly higher chances than their competitors.
A starts building his strategy from the nice symmetric point 0.5. He sees that if he chose 0.5, then B and C would surround him to maximise their chances (they would each win with probability of almost 1/2) and simultaneously reduce A's chances to an arbitrarily small value:
|- - - BAC - - -|
Thus, he decides to consider some slightly lower value than 0.5. Then B could select 0.5, and C would "surround" B together with A:
|- - - ABC - - -|
But B is "rational" and to avoid this, B would decide to shift even lower than A. Then again A can be surrounded:
|- - BAC - - - -|
What position is safe for A? If A selected a number even lower, again B would rationally prefer to go below A rather than risking having C above:
|- BAC - - - - -|
Now you see where it's going: A drops to 0.
B reasons as follows (and A could predict it): "if I sit close to A (stealing all his chances), C will sit on my right and I will be surrounded"
|ABC - - - - - -|
"If I sit at 1, C will sit in the middle to take the lion's share of the chances (1/2, while A and I will have only 1/4 each)."
|A - - - C - - - B|
"If I sit at 0.5, C will sit on my right and have a higher chance to beat me. The best I can do is sit at such a distance from A that C's best choice gives C the same chance as mine."
Eventually B arrives at:
|A - - - - CB - -| which is A at 0, B at 0.3(3) and C just to B's left.
This is one of the solutions / Nash equilibria you can get.

Mathematically, you can calculate it from the players' utility functions:
[$]u_A = P(|y-x_A| < \min(|y-x_B|, |y-x_C|))[$]
[$]u_B = P(|y-x_B| < \min(|y-x_A|, |y-x_C|))[$]
[$]u_C = P(|y-x_C| < \min(|y-x_A|, |y-x_B|))[$]
where the [$]x[$]'s are the players' positions and [$]y[$] is the drawn point.
The Nash equilibrium with players in positions [$]x^*_A[$], [$]x^*_B[$] and [$]x^*_C[$] can be found by solving [$]\left.\partial u_A / \partial x_A\right|_{x^*_A} = 0[$], [$]\left.\partial u_B / \partial x_B\right|_{x^*_B} = 0[$] and [$]\left.\partial u_C / \partial x_C\right|_{x^*_C} = 0[$].
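Instead of differentiating (the utilities here are actually only piecewise smooth), one can check best responses numerically. A small sketch of my own, not from the post: fix two positions and scan a fine grid for the remaining player's best reply, using the midpoint win shares of the uniform draw.

```python
def win_share(me, others):
    """P(a uniform draw on [0, 1] is closest to `me`); ties ignored."""
    pts = sorted([me] + list(others))
    i = pts.index(me)
    lo = 0.0 if i == 0 else (pts[i - 1] + me) / 2
    hi = 1.0 if i == len(pts) - 1 else (me + pts[i + 1]) / 2
    return hi - lo

def best_reply(others, grid=10001):
    """Grid-search best reply of the remaining player to `others`."""
    cands = [k / (grid - 1) for k in range(grid)]
    cands = [c for c in cands if c not in others]
    return max(cands, key=lambda c: win_share(c, others))

# e.g. C's best reply to A = 0.25, B = 0.75: any interior point ties
# at share 1/4, while squeezing into an outer segment gives less.
print(best_reply([0.25, 0.75]))
```

This also illustrates the remark above that C can have many equally good choices, so C's placement can be effectively random.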

Obviously there are many such Nash equilibria, but it's certainly not the answer from that webpage. OTOH, the confident person posting there might have some insider knowledge about e.g. the players' mathematical skills, etc. Then the rational behaviour of the players will change. The real-world statistics look more like this:
Nash strategies often don't work in practice (vide e.g. the prisoner's dilemma, tactical voting in elections, focal points, ...), and yet everybody - from economists to AI boys - tries to use them.

A version of this puzzle on a circle is the lemonade stand game. Another related problem popular in your field is Keynes's beauty contest. Or if you don't like backwards sexist rubbish, see https://theincidentaleconomist.com/word ... e-average/ It intuitively describes the difference between the strategies I tried to sketch above: the theoretically rational strategy vs. common knowledge, which may be more rational in practice - in your face, John Nash.

There is actually a brainteaser forum here where people are more likely to look for this kind of post.

for a) If A chooses 0, then what is the best choice for B?

I'd say 2/3 because B is then guaranteed to get at least 1/3, and for any other value, it's possible that B gets less
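The "guaranteed at least 1/3" claim can be checked numerically. A sketch of my own (grid approximation, not from the thread): with A fixed at 0, compute B's winning chance under the worst possible C for a given B.

```python
def win_share(me, others):
    """P(a uniform draw on [0, 1] is closest to `me`); ties ignored."""
    pts = sorted([me] + list(others))
    i = pts.index(me)
    lo = 0.0 if i == 0 else (pts[i - 1] + me) / 2
    hi = 1.0 if i == len(pts) - 1 else (me + pts[i + 1]) / 2
    return hi - lo

def worst_case_for_B(b, grid=2001):
    """B's winning chance under the worst C, with A fixed at 0."""
    cs = [k / (grid - 1) for k in range(grid)]
    cs = [c for c in cs if c not in (0.0, b)]
    return min(win_share(b, [0.0, c]) for c in cs)

print(worst_case_for_B(2 / 3))  # stays (just) above 1/3
print(worst_case_for_B(0.5))    # C undercuts from just above 0.5: near 1/4
```

At B = 2/3, C squeezing in on either side of B still leaves B with at least 1/3; at B = 0.5, C at 0.5 + eps pushes B down towards 1/4.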

then what is the correct answer if A chooses 0?
The link says "0.5"

(a) If A chooses 0, then the best choice for B is to choose 1/2 = 0.5.

but if B chooses 0.5, C would pick [$]0.5+\epsilon[$], like guessing a dollar more than the guy before you on The Price is Right, so that C gets [$]0.5-\epsilon/2[$] and B gets [$]0.25+\epsilon/2[$]
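Putting that epsilon argument into numbers (my own check, using the midpoint win shares of the uniform draw):

```python
def win_shares(positions):
    """Map each chosen point to its probability of being closest to a
    uniform draw on [0, 1]; ties have measure zero."""
    pts = sorted(positions)
    shares = {}
    for i, p in enumerate(pts):
        lo = 0.0 if i == 0 else (pts[i - 1] + p) / 2
        hi = 1.0 if i == len(pts) - 1 else (p + pts[i + 1]) / 2
        shares[p] = hi - lo
    return shares

eps = 1e-3
print(win_shares([0.0, 0.5, 0.5 + eps]))
# A at 0 gets 0.25, B at 0.5 gets 0.25 + eps/2, C at 0.5+eps gets 0.5 - eps/2
```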


It’s game theory plus probability. You can find many situations where C’s placement does not affect C’s chances but does affect A and B. So I’m not yet convinced by some of this.

“All players want to have strictly higher chances than their competitors.”

Why? That’s not the question. Or is it equivalent?

Do the A=0 case. Then add space to the left of A little by little. It won't affect the results until the extra space reaches a certain length; I reckon an extra 1/3. Now rescale: A's position 0 inside [-1/3, 1] maps back to 1/4 of the unit interval. So I get Bearish's answer. But I'm doing it in my head!

I forgot to remove that sentence. Sorry, I'm a bit absent-minded these days. It's equivalent to the players' rationality assumption in the Nash sense, but it's confusing - the players simply maximise their chances given the options they have. When I began to solve it, I thought players B and C could form a coalition, but it clearly doesn't give them any advantage. Bearish's answer is good too.

In what do you both think I did it? My ovaries? (If I were an octopus, I could do it in one of my arms - "I have all the answers at my tentacle-tips.")
