Code Jam 2013 Round 1A

Posted on 22nd April 2015


As ever, links to: Official Contest Analysis and my code on GitHub.

I did this under timed conditions: a mild disaster, but then it was for everyone back in 2013. If I had been clinical, I could have solved A, B-small and C-small, which would have been enough.

Problem A, Bullseye: Draw some concentric rings, with a fixed amount of paint. A bit of maths shows that ring \( n \in \{1,2,3,\cdots\} \) uses \( 2r+4n-3 \) units of paint, so we want the maximal \( N \) with \[ t \geq \sum_{n=1}^N 2r+4n-3 = 2rN + 2N(N+1) - 3N. \]
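
Rearranging, the condition is \( t \geq 2N^2 + (2r-1)N \), a quadratic inequality in \( N \), so the exact answer is \[ N = \left\lfloor \frac{-(2r-1) + \sqrt{(2r-1)^2 + 8t}}{4} \right\rfloor, \] which is where the square root mentioned below comes in.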

You can solve this exactly, but with doubles, the needed square-root loses accuracy (they were nice enough to give an example which shows this). So I ran a couple of iterations of Newton-Raphson using integer arithmetic in Python. An alternative would have been binary search with 64-bit integers.
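
A minimal sketch of the binary-search alternative (my own illustration, not the contest code), using Python's arbitrary-precision integers so nothing overflows:

```python
def max_rings(r, t):
    """Largest N with 2*N**2 + (2*r - 1)*N <= t, in exact integer arithmetic."""
    def paint(n):
        return 2 * n * n + (2 * r - 1) * n

    lo, hi = 0, 1
    while paint(hi) <= t:        # grow an upper bound by doubling
        hi *= 2
    while hi - lo > 1:           # invariant: paint(lo) <= t < paint(hi)
        mid = (lo + hi) // 2
        if paint(mid) <= t:
            lo = mid
        else:
            hi = mid
    return lo
```

For example, \( r=1, t=9 \) gives 1 ring while \( r=1, t=10 \) gives 2.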

Problem B, Manage Your Energy: Start with E energy; each day decide how much energy to use, say \( e_i \), to "gain" \( e_i v_i \). Your energy decreases by this amount (and cannot go below 0), is then recharged by R, but is capped at E. So we have \( E_0 = E \), \( 0 \leq e_i \leq E_{i-1} \), \( E_i = \min(E, E_{i-1} - e_i + R) \), and we want to maximise the gain \( \sum_{i=1}^N e_i v_i \).
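
To make the recurrence concrete, a direct memoised search over integer amounts of energy might look like this (a sketch of my own, assuming integer \( e_i \) suffice, and only workable for tiny inputs):

```python
from functools import lru_cache

def brute_force(E, R, values):
    """Try every integer e_i on every day; feasible only for small E and N."""
    @lru_cache(maxsize=None)
    def best(day, energy):
        if day == len(values):
            return 0
        # spend e today, recharge by R, cap at E, then solve the rest
        return max(e * values[day] + best(day + 1, min(E, energy - e + R))
                   for e in range(energy + 1))
    return best(0, E)
```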

The small case you can brute-force (in Python a little optimisation is needed, noting that you can take \( e_1 \geq R \) and \( e_N = E_{N-1} \)). For the large case, I eventually came up with the following idea (which seems different from, though slower than, the official answer). For later recursion, suppose we also want to ensure that we finish with at least F energy, so \( E_N \geq F \); the initial problem has \( F=0 \). Let \( v_k \) be maximal, and imagine trying to increase \( e_k \):

  • If \( e_k = E \) we can't.
  • If \( e_k = E_{k-1} < E \) then by decreasing \( e_{k-1} \), or if that's not possible then \( e_{k-2} \) and so on, we can increase \( E_{k-1} \). As \( v_k \) is maximal, this cannot decrease the overall "gain". In this way, we can increase \( E_{k-1} \) to the maximum it can be, which is \( \min(E, E_0 + (k-1)R) \).
  • If \( e_k < E_{k-1} \) then increasing \( e_k \) will decrease \( E_k \), and so maybe we'll be forced to decrease \( e_{k+1} \) and so on. Again this is okay as far as the overall "gain" is concerned. So the only constraint is to ensure \( E_N \geq F \) remains achievable, i.e. \( E_{k-1} - e_k + (N-k+1)R \geq F \).

In this way, we can maximise \( e_k \) and then solve the intervals \( [1,k-1], [k+1,N] \) recursively. We must do these in order though, as for \( [1,k-1] \) we know the values of \( E_0, F \) but for \( [k+1,N]\) we don't know the eventual starting energy value yet. So just push onto a LIFO stack in the correct order. This gives an \( O(N^2) \) algorithm, because of having to find the maximal \( v_k \) for each sub-interval.
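
A recursive sketch of that idea, as I read it (illustration only, with recursion standing in for the explicit stack, and the cap \( E \) handled explicitly):

```python
def manage_energy(E, R, values):
    """Greedy on the maximal v_k, then recurse on the two sub-intervals."""
    def gain(lo, hi, start, final):
        # days lo..hi, with `start` energy before day lo and at least
        # `final` energy required after day hi
        if lo > hi:
            return 0
        k = max(range(lo, hi + 1), key=lambda i: values[i])
        top = min(E, start + (k - lo) * R)             # best possible E_{k-1}
        e_k = max(0, min(top, top + (hi - k + 1) * R - final))
        after = min(E, top - e_k + R)                  # energy going into day k+1
        return (e_k * values[k]
                + gain(lo, k - 1, start, top)          # must finish on `top` energy
                + gain(k + 1, hi, after, final))
    return gain(0, len(values) - 1, E, 0)
```

In this sketch the right-hand sub-interval's starting energy is computed directly; the post instead pushes the intervals onto a LIFO stack and solves them in order.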

Problem C, Good Luck: More probability. Given chosen numbers \( (A_i)_{i=1}^N \) uniformly at random in \( \{ 2,3,\cdots,M \} \) and random subsets \( B_1,\cdots,B_K \) we're told the products \( p_i = \prod_{j\in B_i} A_j \). Try to guess the \( A_i \). Well, by Bayes, we're trying to maximise \[ \mathbb P(A|p) = \frac{\mathbb P(p|A) \mathbb P(A)}{\mathbb P(p)} \propto \mathbb P(p|A) \] which as ever makes sense: try to choose the \( A=(A_i) \) which make the products most likely to be seen.

For the small case, we can compute all choices, because the constraints are small. For the large case, we need a better approach. This is actually easy once you realise the trick: there are seemingly \( (M-1)^N = 7^{12} \) choices for A, but the order of the \( A_i \) doesn't matter! Let's count how many unordered choices there are:

  • We need to choose how many 2s, 3s, ..., 8s there are, say \( 0 \leq x_2, \cdots, x_8 \) with \( \sum x_i = N = 12 \).
  • I'll use the Stars and Bars method.
  • Let \( y_i = x_i+1 \), so we want to find choices with \( 1 \leq y_i \) and \( \sum y_i = N+(M-1) \).
  • Write down \( N+M-1 \) stars and then put \( M-2 \) bars between the stars, and read off the \( y_i \) as the number of stars in each segment, e.g. **|*|**|*** gives \( y_2=2, y_3=1, y_4=2, y_5=3 \).
  • There are \( N+M-2 \) gaps between the stars so we have \( N+M-2 \) choose \( M-2 \) choices.
  • For the large case, this is 18 choose 6 or just 18564 choices.
  • Don't forget that now \( \mathbb P(A) \) varies, if we consider \(A\) as an unordered sequence. The number of ordered sequences giving each unordered choice is a multinomial coefficient, in our case \( N! / (x_2! \cdots x_M!) \) (checked in the sketch below).
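
These counts are easy to sanity-check in Python with itertools; a quick sketch (all names are my own):

```python
from collections import Counter
from itertools import combinations_with_replacement
from math import factorial

M, N = 8, 12   # the large case

# Every multiset of N values from {2,...,M}, with its multinomial weight
# (the number of ordered sequences it represents, i.e. P(A) up to a constant).
choices = []
for A in combinations_with_replacement(range(2, M + 1), N):
    weight = factorial(N)
    for x in Counter(A).values():
        weight //= factorial(x)
    choices.append((A, weight))

print(len(choices))                    # 18564 == (18 choose 6)
print(sum(w for _, w in choices))      # 7**12, each ordered sequence counted once
```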

Then we generate all choices for \(A\), find all possible products, and the relative chance of each product occurring (the number of choices of subset \(B\) which give that product). As each choice of subset (and hence of product) is independent, \( \mathbb P(p|A) = \prod \mathbb P(p_i|A) \). So we search for which choices of \(A\) are consistent with all the products, and then select the most likely.

To speed this up, my Python solution uses some dictionaries: one mapping products to possible choices for A, and one mapping pairs (product, A) to the relative chance. The IPython notebook shows that out of 8000 goes, even this "perfect" solution can only expect to be right around 1300 times.
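
A rough sketch of those dictionaries (illustration only; `choices` comes from the sketch above, and the names are mine, not those in the actual solution):

```python
from collections import Counter
from math import prod

def subset_product_counts(A):
    """Map each achievable product to the number of subsets of A giving it."""
    counts = Counter({1: 1})                 # the empty subset has product 1
    for a in A:
        new = Counter(counts)                # subsets that skip a
        for p, c in counts.items():
            new[p * a] += c                  # subsets that include a
        counts = new
    return counts

weight_of = dict(choices)                    # A -> multinomial weight
chance = {}                                  # (product, A) -> number of subsets giving product
producers = {}                               # product -> the As that can produce it
for A in weight_of:                          # the slow precomputation step
    for p, c in subset_product_counts(A).items():
        chance[(p, A)] = c
        producers.setdefault(p, set()).add(A)

def guess(products):
    """MAP estimate: the consistent A maximising P(A) * prod_i P(p_i | A)."""
    consistent = set.intersection(*(producers[p] for p in products))
    return max(consistent,
               key=lambda A: weight_of[A] * prod(chance[(p, A)] for p in products))
```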

